^{†}Corresponding author. Email: syedgru@gmail.com
^{*}Project supported by the Fund from National Board of Higher Mathematics (NBHM), New Delhi (Grant No. 2/48/10/2011R&DII/865).
This paper deals with the H_{∞} state estimation problem for neural networks with discrete and distributed time-varying delays. A novel delay-dependent approach to H_{∞} state estimation is proposed to guarantee the H_{∞} performance and global asymptotic stability of the concerned neural networks. By constructing a Lyapunov–Krasovskii functional and using the linear matrix inequality technique, sufficient conditions for delay-dependent H_{∞} performance are obtained, which can be easily solved by standard numerical algorithms. Finally, numerical examples are given to illustrate the usefulness and effectiveness of the proposed theoretical results.
In the past few decades, many kinds of neural networks, such as cellular neural networks, Hopfield neural networks, Cohen–Grossberg neural networks, recurrent neural networks (RNNs), complex dynamical networks (CDNs), bidirectional associative memory (BAM) neural networks, chaotic neural networks (CNNs), and static neural networks (SNNs), have been studied because they have extensive applications in fields such as fault diagnosis, pattern recognition, signal processing, and parallel computation.^{[1–4]} Some of these applications require the equilibrium points of the designed networks to be stable. Since axonal signal transmission delays often occur in various neural networks and may cause undesirable dynamic behaviors such as oscillation and instability, it is important to study the stability of neural networks.^{[5–8]}
Considerable effort has been dedicated to the stability analysis of neural networks with delays. Most delayed neural networks can be classified as either continuous^{[9, 10]} or discrete.^{[11, 12]} While signal propagation is sometimes instantaneous and can be modeled with discrete delays, it may also be distributed over a period of time, so distributed delays are incorporated into the model.^{[13–15]} Therefore, both discrete and time-varying distributed delays should be taken into account when modeling a realistic neural network.
On the other hand, a neural network is a highly interconnected system with a large number of neurons, and as a result most neural networks are large-scale and complex. In practice, only partial information about the neuron states is available in the outputs of large-scale neural networks, and the complicated structure of such networks makes it very difficult to obtain the neuron states directly. Therefore, it is important to estimate the neuron states through the available measurement outputs, and there have been many remarkable attempts to design state estimators for various types of neural networks.^{[16, 17]} The delay-dependent H_{∞} problems for delayed neural networks have received substantial attention from the control community in the past few years.^{[18, 19]} The H_{∞} control of switched neutral-type neural networks was presented in Ref. [20] to establish robust exponential stability. However, to the best of the authors' knowledge, the H_{∞} state estimation of static neural networks with discrete and distributed time-varying delays via an augmented Lyapunov approach has not yet been considered, which motivates this study.
In this paper, we study the delay-dependent H_{∞} state estimation problem for a class of neural networks with discrete and distributed time-varying delays. Our main aim is to design a delay-dependent state estimation gain matrix such that the concerned system is globally asymptotically stable with a prescribed H_{∞} level of disturbance attenuation for all admissible parameters. A sufficient condition for the H_{∞} state estimation is presented in terms of a linear matrix inequality (LMI), obtained by using an augmented Lyapunov–Krasovskii functional (LKF) together with a zero function, which guarantees the global asymptotic stability of the concerned neural networks. Finally, numerical examples are given to illustrate the usefulness and effectiveness of the proposed method.
The following notations are used throughout this paper. ℛ^{n} and ℛ^{n×n} denote, respectively, the n-dimensional Euclidean space and the set of all n × n real matrices. The notation * represents the entries implied by symmetry. The transpose and inverse of a matrix A are denoted by A^{T} and A^{−1}, respectively. X > 0 means that the matrix X is real symmetric positive definite with appropriate dimensions. I denotes the identity matrix with appropriate dimensions. Let
Consider the following neural network with discrete and distributed timevarying delays:
where x(t) = [x_{1}(t), x_{2}(t), … , x_{n}(t)]^{T} ∈ ℛ^{n} is the state vector of the network at time t ≥ 0, n is the number of neurons, y(t) ∈ ℛ^{m} is the network measurement, z(t) ∈ ℛ^{q}, to be estimated, is a linear combination of the states, and w(t) ∈ ℛ^{p} is the noise input belonging to L_{2}[0, ∞). Here A = diag{a_{1}, … , a_{n}} is a diagonal matrix with a_{i} > 0, i = 1, 2, … , n; B, W_{0}, and W_{1} represent the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; C, D, B_{1}, B_{2}, and H are known real constant matrices with appropriate dimensions. g(x(t)) = [g(x_{1}(t)), … , g(x_{n}(t))]^{T} ∈ ℛ^{n} denotes the neuron activation function at time t, and J = [J_{1}, … , J_{n}]^{T} ∈ ℛ^{n} is a constant external input vector. ϕ(t) is the initial condition. τ(t) and d(t) denote the discrete and distributed time-varying delays, respectively, and are assumed to satisfy
where
Assumption 1 Each neuron activation function g_{i}(·) (i = 1, 2, … , n) is continuous and bounded, and satisfies the following condition:
with
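As a rough illustration of the dynamics described by system (1), the following sketch simulates a two-neuron instance by forward-Euler discretization. All parameter values (A, B, W_{0}, W_{1}, the delays, and the initial condition) are hypothetical, chosen only so that the trajectory is well behaved; they are not taken from the paper's examples, and the delays are held constant for simplicity.

```python
import numpy as np

# Hypothetical parameters for a 2-neuron instance of system (1).
A  = np.diag([1.5, 1.2])                    # self-feedback (diagonal, a_i > 0)
B  = np.array([[0.3, -0.1], [0.2, 0.4]])    # connection weights
W0 = np.array([[0.1, 0.2], [-0.2, 0.1]])    # discretely delayed weights
W1 = np.array([[0.05, 0.0], [0.0, 0.05]])   # distributively delayed weights
J  = np.zeros(2)                            # external input
g  = np.tanh                                # bounded activation (Assumption 1)

dt, T = 0.001, 5.0
tau, d = 0.2, 0.3                           # constant delays for simplicity
n_tau, n_d = int(tau / dt), int(d / dt)
steps = int(T / dt)
off = max(n_tau, n_d)

x_hist = np.zeros((steps + off + 1, 2))
x_hist[:off + 1] = np.array([0.5, -0.3])    # constant initial condition phi

for k in range(steps):
    t = off + k
    x = x_hist[t]
    x_tau = x_hist[t - n_tau]                        # x(t - tau)
    dist = dt * g(x_hist[t - n_d:t]).sum(axis=0)     # ∫_{t-d}^{t} g(x(s)) ds
    dx = -A @ x + B @ g(x) + W0 @ g(x_tau) + W1 @ dist + J
    x_hist[t + 1] = x + dt * dx                      # forward Euler step

print(np.linalg.norm(x_hist[-1]))  # state decays toward the origin
```

With these (assumed) weights, the self-feedback term −Ax dominates the delayed couplings, so the trajectory converges toward the origin, consistent with the kind of asymptotic stability the paper analyzes.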
We consider the following state estimator for the neural network (1):
where
Define the error
where e(t) = [e_{1}(t), e_{2}(t), … , e_{n}(t)]^{T} ∈ ℛ ^{n} is the state vector of the transformed system and
where k ∈ ℛ , k ≠ 0.
Definition 1 Given a prescribed level of noise attenuation γ > 0, the error system (5) is said to be globally asymptotically stable with noise attenuation level γ if there is a proper state estimator (4) such that the equilibrium point of the resulting error system (5) with w(t) = 0 is globally asymptotically stable, and
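The attenuation condition in Definition 1 compares the L_{2} energy of the estimation error output with that of the disturbance. A minimal numeric sketch, with an assumed disturbance and an assumed error response (neither taken from the paper), shows how the achieved attenuation level can be computed from sampled trajectories:

```python
import numpy as np

# Hypothetical sampled signals: error output z_tilde and disturbance w.
dt = 0.01
t = np.arange(0, 10, dt)
w = np.exp(-0.5 * t) * np.sin(3 * t)               # disturbance in L2[0, inf)
z_tilde = 0.4 * np.exp(-0.8 * t) * np.sin(3 * t)   # assumed error response

# Discretized L2 energies and the attenuation level achieved on this trajectory.
energy_z = np.sum(z_tilde**2) * dt
energy_w = np.sum(w**2) * dt
gamma_achieved = np.sqrt(energy_z / energy_w)
print(gamma_achieved)
```

The H_{∞} criterion of Definition 1 requires this ratio to stay below the prescribed γ for all admissible disturbances, not just one sample trajectory; the script only illustrates how the quantity is measured.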
Lemma 1 (Schur complement^{[21]}) Let M, P, Q be given matrices such that Q > 0, then
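The Schur complement argument can be verified numerically. The sketch below assumes the common block form [[M, P], [P^{T}, −Q]] < 0 ⇔ M + PQ^{−1}P^{T} < 0 with Q > 0; the exact block structure used in Ref. [21] may differ, so this is only a consistency check on the standard statement:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Q > 0 and arbitrary M, P (assumed shapes, matching the standard lemma form).
Qh = rng.standard_normal((n, n))
Q = Qh @ Qh.T + n * np.eye(n)                    # positive definite by construction
P = rng.standard_normal((n, n))
M = -(P @ np.linalg.solve(Q, P.T)) - np.eye(n)   # so that M + P Q^{-1} P^T = -I < 0

# Both the full block matrix and its Schur complement are negative definite.
block = np.block([[M, P], [P.T, -Q]])
eigs_block = np.linalg.eigvalsh(block)
eigs_schur = np.linalg.eigvalsh(M + P @ np.linalg.solve(Q, P.T))
print(eigs_block.max() < 0, eigs_schur.max() < 0)  # both negative definite
```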
Lemma 2^{[22]} For any constant matrix M ∈ ℛ^{n×n} with M = M^{T} > 0, a scalar η > 0, and a vector function w : [0, η] → ℛ^{n} such that the integrations concerned are well defined, we have
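Lemma 2 is the vector Jensen inequality, which in its usual form reads η ∫_{0}^{η} w^{T}(s)Mw(s) ds ≥ (∫_{0}^{η} w(s) ds)^{T} M (∫_{0}^{η} w(s) ds). A quick numeric check on a sampled vector function, with an assumed M > 0 and an arbitrary w, confirms the direction of the inequality:

```python
import numpy as np

# Numeric check of Jensen's inequality (Lemma 2) with assumed M and w.
eta, ds = 2.0, 0.001
s = np.arange(0, eta, ds)
w = np.vstack([np.sin(s), np.cos(2 * s)]).T     # w : [0, eta] -> R^2, sampled
M = np.array([[2.0, 0.5], [0.5, 1.0]])          # M = M^T > 0

# lhs ≈ eta * ∫ w^T M w ds  (per-sample quadratic forms via einsum)
lhs = eta * np.sum(np.einsum('ij,jk,ik->i', w, M, w)) * ds
iw = w.sum(axis=0) * ds                         # ≈ ∫ w ds
rhs = iw @ M @ iw                               # (∫w)^T M (∫w)
print(lhs >= rhs)
```

The discrete analogue of the inequality holds exactly for the Riemann sums above (it is a Cauchy–Schwarz inequality in the M-weighted inner product), so the check is robust to the step size.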
Lemma 3^{[23]} For real matrices P > 0, M_{i} (i = 1, 2, 3) with appropriate dimensions, and τ (t) satisfying Eq. (2), we have
where
Lemma 4^{[24]} For any scalar τ(t) ≥ 0 and any constant matrix Q ∈ ℛ^{n×n} with Q = Q^{T} > 0, the following inequality holds:
where ξ(t) is defined in Lemma 3 and V is a free-weighting matrix with appropriate dimensions.
Lemma 5^{[25]} The inequalities
are equivalent to the following condition:
where R_{1}, R_{2}, and Δ are constant matrices with appropriate dimensions, m ∈ [0, β] ⊂ ℛ, and β > 0.
Theorem 1 For given scalars
where
with
Proof Define the Lyapunov functional candidate as follows:
where
By calculating the time derivative of V(x_{t}), we obtain
By using Lemma 2, we obtain
From LMI (10), we have
where
where
By using Lemma 4, we have
In addition, for positive diagonal matrices Γ _{1} > 0, Γ _{2} > 0, the following equations hold based on formula (6):
By combining inequalities (10)–(18), we obtain
where the inequality
is used, ζ (t) is defined by
where
Obviously, if
when
In this section, we present a delay-dependent sufficient condition for the solvability of the H_{∞} state estimation problem for the concerned neural network.
Theorem 2 Consider the neural network (1) and let γ > 0 be a prescribed scalar. The H_{∞} state estimation problem is solvable if there exist matrices P > 0, P_{11} > 0, P_{12}, P_{22} > 0, Q_{1} > 0, Q_{2}, Q_{3} > 0, R > 0, S > 0, T_{1} > 0, T_{2} > 0, Z_{1} > 0, Z_{2}, Z_{3} > 0, T > 0, and any matrices M_{l}, N_{l}, U_{m}, V_{m} (l = 1, 2, … , 6; m = 1, 2) with appropriate dimensions, such that the following LMIs hold:
where
with
Proof Define K = P^{− 1}G; then pre- and post-multiplying LMIs (7) and (8) by diag
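The gain recovery step K = P^{−1}G in the proof can be carried out numerically once the LMI variables are available. In the sketch below, the values of P and G are placeholders, not solutions of LMIs (7) and (8); the point is only that K is obtained by a linear solve rather than an explicit inverse:

```python
import numpy as np

# Hypothetical LMI solutions: P must be symmetric positive definite.
P = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed LMI variable, P > 0
G = np.array([[0.8, -0.2], [0.5, 1.1]])  # assumed LMI variable

# K = P^{-1} G, computed via a linear solve (better conditioned than inv(P) @ G).
K = np.linalg.solve(P, G)
print(np.allclose(P @ K, G))
```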
Remark 1 To show the reduced conservatism of our stability criteria, we consider the following system, a special case of system (1), which reduces to a neural network with time-varying delays described by
Using the same method as in Theorem 1, we can obtain the following results.
Corollary 1 For given scalars
where
with
Remark 2 Recently, the H_{∞} state estimation of static neural networks with time-varying delays was studied in Ref. [27]. Furthermore, the delay-dependent H_{∞} state estimation of neural networks with discrete and distributed delays was investigated in Ref. [28]. The H_{∞} cluster synchronization and state estimation for delayed complex dynamical networks were presented in Ref. [29]. In Theorem 2, we present a sufficient condition ensuring that the error system (5) is globally asymptotically stable with an H_{∞} performance index γ > 0. The delay-dependent condition is derived by constructing an augmented Lyapunov–Krasovskii functional and applying Lemmas 3 and 4.
Example 1 Consider the neural network (27) with the following matrix parameters:
By solving LMI in Corollary 1 with
Example 2 Consider the neural network (1) with the following matrix parameters:
By solving LMI in Theorem 2 with
The state estimator gain matrix is then obtained as
The minimum H_{∞ } performance index γ with different
Example 3 Consider the neural network (1) with the following matrix parameters:
By solving LMI in Theorem 2 with
The result is presented in Table 2.
In this work, we have studied the H_{∞} state estimation problem for neural networks with discrete and distributed time-varying delays. Improved delay-dependent H_{∞} state estimation criteria have been established in terms of linear matrix inequalities by constructing an appropriate augmented LKF for the delayed neural networks. It is shown that a desired state estimator gain matrix can be constructed when the given linear matrix inequalities are feasible. Numerical examples are given to demonstrate the effectiveness and usefulness of the proposed method, and the results are compared with existing methods. In this paper, we have used augmented LKFs to obtain less conservative results; however, the augmented LKFs are constructed with only single integral terms. We would like to point out that it is possible to extend our main results to more general discrete-time uncertain neutral-type systems with interval time-varying and distributed delays by using an augmented LKF approach with triple integral terms. These results will appear in the near future.