Abstract

The neural network with two weights is constructed and its ability to approximate arbitrary continuous functions is proved. For this neural network, the activation function is not confined to odd functions. We prove that it can approximate, with arbitrary accuracy, any continuous function defined on a bounded closed subset of ℝ, as well as any continuous function that has a limit at infinity and is defined on an unbounded closed subset of ℝ. This extends the nonlinear approximation ability of the traditional BP neural network and RBF neural network.

1. Introduction

So far, many neural network models have been presented and applied in pattern recognition, automatic control, signal processing, and decision support. Among these models, BP (feedforward) neural networks and RBF (radial basis function) neural networks are widely used because of their ability to approximate any nonlinear continuous function. Up to now, these two classes of neural networks have successfully been applied to approximate any nonlinear continuous function defined on a bounded closed subset [17]. However, approximation results for continuous functions defined on unbounded sets have been very rare [8]. This inspires us to look for a new network that has a wider approximation ability and can approximate any continuous function both on bounded sets and on unbounded sets.

Wang et al. [9] presented a new neural network, called the neural network with two weights, by combining the advantages of the BP neural network with those of the RBF neural network. This model can not only simulate the BP neural network and the RBF neural network but also simulate higher-order neural networks. It contains both a direction weight, as in the BP network, and a core weight, as in the RBF network. The function of its neurons has the following form:

Y = f[(Σ_{j=1}^{m} (W_j (X_j − W′_j))^p)^{s/p} − θ],    (1)

where Y is the output of the neuron, f is the activation function, θ is the threshold value, W_j is the direction weight value, W′_j is the core weight value, X_j is the input, and p and s are two parameters.

In (1), when p = 1, s = 1, and W′_j = 0, (1) reduces to the mathematical model of BP network neurons; when W_j takes a fixed value and p = 2, s = 1, (1) becomes the mathematical model of RBF network neurons.
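To make these two reductions concrete, the following short numerical sketch evaluates a single two-weight neuron and checks both special cases. It assumes the general neuron form (1) as reconstructed above; the activation tanh and all weight values are illustrative choices, not taken from the paper.

import numpy as np

def two_weight_neuron(x, W, Wc, theta, p, s, f=np.tanh):
    """General two-weight neuron: direction weights W, core weights Wc,
    threshold theta, parameters p and s (form (1), as reconstructed)."""
    u = np.sum((W * (x - Wc)) ** p) ** (s / p) - theta
    return f(u)

x = np.array([0.5, -1.2, 2.0])          # illustrative input
W = np.array([0.3, 0.8, -0.5])          # direction weights
Wc = np.array([0.1, -0.4, 1.0])         # core weights
theta = 0.2

# BP special case: p = s = 1, core weights zero -> f(W.x - theta)
bp = two_weight_neuron(x, W, np.zeros_like(Wc), theta, p=1, s=1)
print(np.isclose(bp, np.tanh(np.dot(W, x) - theta)))             # True

# RBF special case: W fixed to 1, p = 2, s = 1 -> f(||x - Wc|| - theta)
rbf = two_weight_neuron(x, np.ones_like(W), Wc, theta, p=2, s=1)
print(np.isclose(rbf, np.tanh(np.linalg.norm(x - Wc) - theta)))  # True

Here f is taken as tanh purely for illustration; for an actual RBF unit one would normally also take θ = 0 and a radially decreasing activation, but the point being checked is the algebraic reduction of the neuron's argument.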

In [10], approximation operators with a logarithmic sigmoidal function for a class of neural networks with two weights, together with a class of quasi-interpolation operators, were investigated. Using these operators as approximation tools, upper bounds on the approximation errors were established.

In [2], the authors considered single hidden layer neural networks of the form

N_n(x) = Σ_{i=0}^{n} c_i σ(w_i x + θ_i),

where the c_i represent the output weights, σ is the activation function, θ_i is the threshold value, and w_i is the direction weight value.

The main result in [2] is as follows.

Theorem 1. Suppose σ is a bounded, strictly monotonically increasing, and odd function defined on ℝ, and f ∈ C[a, b]. Then there exists a feedforward neural network (BP) with one hidden layer, of the form above, such that, for every x ∈ [a, b], |f(x) − N_n(x)| ≤ C ω(f, (b − a)/n), where C is a positive constant independent of f, n, and x, and where, for i = 0, 1, …, n, the output weights, direction weights, and thresholds are chosen explicitly in [2].

In this paper, we are concerned with neural networks with two weights and a single hidden layer, that is, networks whose hidden units are neurons of the form (1) with a one-dimensional input.

One objective is to construct a new BP neural network which is different from the BP neural network in [2] and to prove that this network can approximate any nonlinear continuous function. Another objective is to prove that, by adjusting the values of the two parameters p and s, the neural network with two weights can not only approximate any continuous function defined on a bounded closed subset (that is, it has the same approximation ability as the BP neural network in [2]) but can also approximate any continuous function defined on an unbounded set. In other words, the neural network with two weights has a better approximation ability than the BP neural network and the RBF neural network.

2. Theoretical Result

We use the following notations: the symbols ℕ, ℕ₀, and ℝ denote the set of positive integers, the set of nonnegative integers, and the set of real numbers, respectively. C[a, b] denotes the space of continuous real-valued functions defined on [a, b], equipped with the supremum norm ‖f‖ = max_{x ∈ [a, b]} |f(x)|. Let f be a real-valued function defined on [a, b]. The modulus of continuity of f is defined as ω(f, δ) = sup{ |f(x) − f(y)| : x, y ∈ [a, b], |x − y| ≤ δ }. Then ω(f, δ) is nondecreasing in δ, and ω(f, λδ) ≤ (1 + λ) ω(f, δ) for any real number λ > 0.

The modulus of continuity is usually used as a measure of the smoothness of a function and of the approximation error in approximation theory. The function f is called Lipschitz continuous, written f ∈ Lip M, if there exists a constant M > 0 such that |f(x) − f(y)| ≤ M |x − y| for all x, y ∈ [a, b]. C denotes a positive constant depending only on σ, whose value may be different at each occurrence.
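As a quick numerical illustration of these definitions, the sketch below estimates ω(f, δ) on a grid and checks the inequality ω(f, λδ) ≤ (1 + λ) ω(f, δ) as well as the Lipschitz bound; the test function sin and all parameter values are our own illustrative choices.

import numpy as np

def modulus_of_continuity(f, a, b, delta, num=2001):
    """Grid estimate of w(f, delta) = sup{|f(x) - f(y)| : x, y in [a, b], |x - y| <= delta}."""
    xs = np.linspace(a, b, num)
    fs = f(xs)
    h = (b - a) / (num - 1)
    k = int(np.floor(delta / h))       # largest grid shift with |x - y| <= delta
    best = 0.0
    for j in range(1, k + 1):
        best = max(best, float(np.max(np.abs(fs[j:] - fs[:-j]))))
    return best

f = np.sin                              # continuous (in fact 1-Lipschitz) test function
a, b, delta, lam = 0.0, np.pi, 0.1, 3.0
w1 = modulus_of_continuity(f, a, b, delta)
w2 = modulus_of_continuity(f, a, b, lam * delta)
print(w2 <= (1 + lam) * w1)             # w(f, lam*delta) <= (1 + lam) * w(f, delta)
print(w1 <= delta)                      # sin is Lipschitz with M = 1, so w(f, delta) <= delta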

Our main result is as follows.

Theorem 2. Suppose σ is a bounded and strictly monotonically increasing function defined on ℝ, and f ∈ C[a, b]. Then there exists a feedforward neural network (BP) with one hidden layer, N_n(f, x) = Σ_{i=0}^{n} c_i σ(w_i x + θ_i), such that, for every x ∈ [a, b], |f(x) − N_n(f, x)| ≤ C ω(f, (b − a)/n), where, for i = 0, 1, …, n, the coefficients c_i, the direction weights w_i, and the thresholds θ_i are chosen as in the proof, and C is a positive constant independent of f, n, and x.

Proof. Divide [a, b] into n equal subintervals, each of length (b − a)/n, and let x_i = a + i(b − a)/n for i = 0, 1, …, n. For i = 1, …, n, we choose the coefficients c_i, the direction weights w_i, and the thresholds θ_i of N_n(f, ·).
For any x ∈ [a, b] and a given n, we may assume that x ∈ [x_{k−1}, x_k] for some k. Fix this k and estimate |f(x) − N_n(f, x)|; there are two possible cases to consider.
Case (i) yields the estimate (12).
Case (ii) yields the estimate (13). From (12) and (13), it follows that (14) holds. Thus, from (14) and the properties of the modulus of continuity, we obtain |f(x) − N_n(f, x)| ≤ C ω(f, (b − a)/n) for every x ∈ [a, b]. This completes the proof of Theorem 2.
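The paper's explicit choice of coefficients, weights, and thresholds is not reproduced here, but the following sketch illustrates a standard construction of the same Theorem-2 type: on a uniform partition of [a, b], steep sigmoidal "steps" centred between the partition points are summed with coefficients equal to the increments of f, and the sup-norm error behaves roughly like ω(f, (b − a)/n). The activation, the steepness rule, and the test function are our own illustrative choices, not the paper's construction.

import numpy as np

def partition_network(f, a, b, n, K=None):
    """One-hidden-layer sigmoidal network on a uniform partition of [a, b]:
    N(x) = f(x_0) + sum_i (f(x_i) - f(x_{i-1})) * step(x - midpoint_i).
    An illustrative Theorem-2-style construction, not the paper's exact one."""
    if K is None:
        K = 20.0 * n / (b - a)        # steepness grows with n so each step stays sharp
    xs = np.linspace(a, b, n + 1)
    mids = 0.5 * (xs[:-1] + xs[1:])
    incs = np.diff(f(xs))
    step = lambda t: 0.5 * (1.0 + np.tanh(K * t))   # smooth 0-1 step (bounded, increasing)
    def N(x):
        x = np.asarray(x, dtype=float)[..., None]
        return f(xs[0]) + np.sum(incs * step(x - mids), axis=-1)
    return N

f = lambda x: np.sin(3 * x) + 0.3 * np.abs(x - 1.0)   # a continuous test function on [0, 2]
a, b = 0.0, 2.0
for n in (10, 40, 160):
    N = partition_network(f, a, b, n)
    grid = np.linspace(a, b, 4001)
    print(n, float(np.max(np.abs(f(grid) - N(grid)))))  # error shrinks roughly like w(f, (b-a)/n)

Note that the step function used here, (1 + tanh(Kt))/2, is bounded and strictly increasing but not odd, which matches the relaxation discussed in Remark 3.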

Remark 3. The activation functions are assumed to be odd functions in [2], while in this paper this assumption is removed. On the other hand, in our construction of the neural network, the threshold values and direction weight values are different from those in [2].

Theorem 4. Suppose σ is a bounded and strictly monotonically increasing function defined on ℝ, f ∈ C[a, b], and the two parameters p and s are suitably chosen. Then there exists a neural network with two weights and one hidden layer, Ñ_n(f, x), built from neurons of the form (1), such that, for every x ∈ [a, b], |f(x) − Ñ_n(f, x)| ≤ C ω(f, (b − a)/n), where the quantities n, ω(f, ·), and C are the same as in Theorem 2.

Proof. We make a fractional transformation u = φ(x) for x ∈ [a, b]. Since φ has no singularity on [a, b], φ is a continuous transformation. Hence φ([a, b]) is also a bounded closed set; moreover, φ is one-to-one on [a, b].
Denote the inverse of the fractional transformation by x = φ⁻¹(u) for u ∈ φ([a, b]). Then g(u) = f(φ⁻¹(u)) is a continuous real-valued function defined on the bounded closed set φ([a, b]). Applying Theorem 2 to g and substituting u = φ(x) back into the resulting BP network turns each hidden unit into a neuron of the form (1); this yields a neural network with two weights and one hidden layer, Ñ_n(f, x) = N_n(g, φ(x)), which satisfies the error bound stated in the theorem for every x ∈ [a, b]. This completes the proof of Theorem 4.

Remark 5. For the values of p and s used in Theorem 4, the network with two weights is neither a BP nor an RBF neural network; thus, Theorem 4 shows that the neural network with two weights has the same approximation ability as the BP and RBF neural networks on any bounded closed set. Moreover, by Theorem 4, for these parameter values this neural network can approximate any continuous function defined on a bounded closed set.

Theorem 6. Suppose σ is a bounded and strictly monotonically increasing function defined on ℝ; D is an unbounded subset of ℝ all of whose accumulation points, apart from ∞, belong to D; f is continuous on D and has a finite limit at infinity; and the two parameters p and s are suitably chosen. Then there exists a neural network with two weights and one hidden layer, Ñ_n(f, x), built from neurons of the form (1), which satisfies, for every x ∈ D, the error bound of Theorem 4 applied to the transformed function g defined in the proof.

Proof. We make a fractional transformation u = ψ(x) for x ∈ D. Since ψ has no singularity on D, ψ is a continuous transformation on D, and ψ(x) tends to a finite limit u_∞ as x → ∞. Let E = ψ(D) ∪ {u_∞}; the inverse transformation ψ⁻¹ is a continuous function on ψ(D). We prove that E is a closed set. If u_0 is an arbitrary accumulation point of E, then u_0 = u_∞ or u_0 is an accumulation point of ψ(D). When u_0 = u_∞, we have u_0 ∈ E. When u_0 ≠ u_∞ is an accumulation point of ψ(D), there exist a sequence (u_k) in ψ(D) and another sequence (x_k) in D such that u_k = ψ(x_k) and u_k → u_0. From u_0 ≠ u_∞ and the inverse transformation ψ⁻¹, it follows that x_k → ∞ does not hold. Hence (x_k) has an accumulation point x_0 that is a finite point; by the assumption on D, x_0 ∈ D, and therefore u_0 = ψ(x_0) ∈ ψ(D) ⊂ E. So E is a closed set. Let g(u) = f(ψ⁻¹(u)) for u ∈ ψ(D); then g is a continuous function on ψ(D). Let g(u_∞) = lim_{x→∞} f(x); it is easy to prove that g is then a continuous function on the bounded closed set E. For the continuous function g defined on E, by means of Theorem 4, there exists a neural network with two weights that approximates g on E within the bound of Theorem 4; substituting u = ψ(x) turns it into a network Ñ_n(f, x) in the original variable, which satisfies the conclusion of Theorem 6. This completes the proof of Theorem 6.
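The particular fractional transformation used in the paper is not reproduced above, but the idea can be illustrated numerically. The sketch below uses one concrete choice, u = x/(1 + x) on D = [0, ∞) (our assumption, not necessarily the paper's transformation), pulls a function with a limit at infinity back to a continuous function g on the bounded closed interval [0, 1], approximates g there with the same partition-based sigmoidal construction as in the earlier sketch, and then evaluates the composed approximant on the unbounded set.

import numpy as np

# Fractional transformation u = x/(1 + x): maps [0, inf) onto [0, 1),
# with u -> 1 as x -> inf; its inverse is x = u/(1 - u).
phi = lambda x: x / (1.0 + x)
phi_inv = lambda u: u / (1.0 - u)

f = np.arctan                  # continuous on [0, inf) with limit pi/2 at infinity
A = np.pi / 2                  # limit value used to extend g to u = 1

def g(u):
    """Pulled-back function on the bounded closed interval [0, 1]."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    out = np.full(u.shape, A)
    out[u < 1.0] = f(phi_inv(u[u < 1.0]))
    return out

# Partition-based sigmoidal network approximating g on [0, 1]
# (same construction idea as in the earlier Theorem 2 sketch).
n = 200
us = np.linspace(0.0, 1.0, n + 1)
mids, incs = 0.5 * (us[:-1] + us[1:]), np.diff(g(us))
step = lambda t: 0.5 * (1.0 + np.tanh(20.0 * n * t))
N_g = lambda u: g(us[0])[0] + np.sum(incs * step(np.asarray(u, dtype=float)[..., None] - mids), axis=-1)

# Composing with the transformation gives an approximant of f on the unbounded set.
xs = np.concatenate([np.linspace(0.0, 50.0, 2000), [1e3, 1e6]])
print(float(np.max(np.abs(f(xs) - N_g(phi(xs))))))   # small uniform error on [0, inf)

This mirrors the structure of the proof: approximation on the compactified set E carries over to the unbounded set D through the transformation and its inverse.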

Remark 7. For the values of p and s used in Theorem 6, the network with two weights is neither a BP nor an RBF neural network; thus, Theorem 6 shows that the neural network with two weights has an approximation ability on unbounded sets that the BP and RBF neural networks do not have. Namely, the network with two weights has a better approximation ability than the BP and RBF neural networks on unbounded sets. Moreover, by Theorem 6, this neural network can approximate any continuous function with a limit at infinity defined on an unbounded set.

3. Conclusions

In this paper, we construct a new BP neural network and prove that the network can approximate any nonlinear continuous function. In our construction, the threshold values and the direction weight values are different from those in [2]. Second, we prove that the neural network with two weights has the same approximation ability as the BP neural network and the RBF neural network for any continuous function defined on any bounded closed set; furthermore, we prove that the neural network with two weights has a better approximation ability than the BP neural network and the RBF neural network, since it can also approximate any continuous function (with a limit at infinity) defined on an unbounded set.