Chinese Physics B ›› 2010, Vol. 19 ›› Issue (10): 100201. doi: 10.1088/1674-1056/19/10/100201

$\mathscr{L}_2$–$\mathscr{L}_\infty$ learning of dynamic neural networks

Choon Ki Ahn   

  1. Department of Automotive Engineering, Seoul National University of Technology, 172 Gongneung 2-dong, Nowon-gu, Seoul 139-743, Korea
  • Received: 2009-11-21  Revised: 2010-04-05  Online: 2010-10-15  Published: 2010-10-15
  • Supported by:
    Project supported by a grant from the Korean Ministry of Education, Science and Technology (the Regional Core Research Program/Center for Healthcare Technology Development).

Abstract: This paper proposes an $\mathscr{L}_2$–$\mathscr{L}_\infty$ learning law as a new learning method for dynamic neural networks with external disturbance. Based on a linear matrix inequality (LMI) formulation, the $\mathscr{L}_2$–$\mathscr{L}_\infty$ learning law not only guarantees asymptotic stability of the dynamic neural network but also attenuates the effect of the external disturbance to a prescribed level in the $\mathscr{L}_2$–$\mathscr{L}_\infty$ induced norm. It is shown that designing the $\mathscr{L}_2$–$\mathscr{L}_\infty$ learning law for such networks reduces to solving a set of LMIs, which can be done efficiently with standard numerical packages. A numerical example demonstrates the validity of the proposed learning law.
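For reference, the $\mathscr{L}_2$–$\mathscr{L}_\infty$ (energy-to-peak) induced norm invoked in the abstract is conventionally defined as follows, for an operator $\mathcal{G}$ mapping a disturbance $w \in \mathscr{L}_2[0,\infty)$ to an error signal $e$ (the symbols $\mathcal{G}$, $w$ and $e$ are generic notation, not taken from the paper):

$$\|\mathcal{G}\|_{\mathscr{L}_2\text{–}\mathscr{L}_\infty} = \sup_{0 \neq w \in \mathscr{L}_2[0,\infty)} \frac{\|e\|_{\mathscr{L}_\infty}}{\|w\|_{\mathscr{L}_2}}, \qquad \|e\|_{\mathscr{L}_\infty} = \sup_{t \ge 0} \sqrt{e^\top(t)\,e(t)}, \qquad \|w\|_{\mathscr{L}_2} = \left( \int_0^\infty w^\top(t)\,w(t)\,\mathrm{d}t \right)^{1/2},$$

so a constraint $\|\mathcal{G}\|_{\mathscr{L}_2\text{–}\mathscr{L}_\infty} < \gamma$ bounds the peak of the error by $\gamma$ times the energy of the disturbance.

To illustrate the remark that such designs "can be done efficiently with standard numerical packages", the following minimal sketch checks the textbook energy-to-peak LMI pair for a linear system $\dot{x} = Ax + Bw$, $z = Cx$ using CVXPY. The matrices A, B, C, the level gamma and the solver choice are hypothetical illustrations; these are the standard linear-system conditions, not the paper's LMIs for dynamic neural networks.

# A minimal sketch, assuming CVXPY and NumPy are available.
# Feasibility of P > 0 with A P + P A' + B B' < 0 and C P C' < gamma^2 I
# certifies an L2-Linf (energy-to-peak) level gamma for x' = Ax + Bw, z = Cx.
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])  # hypothetical stable system matrix
B = np.array([[1.0],
              [1.0]])        # hypothetical disturbance input matrix
C = np.array([[1.0, 0.0]])   # hypothetical output matrix
gamma = 1.0                  # hypothetical performance level

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin to emulate strict inequalities

constraints = [
    P >> eps * np.eye(n),
    A @ P + P @ A.T + B @ B.T << -eps * np.eye(n),
    C @ P @ C.T << gamma**2 * np.eye(C.shape[0]),
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("L2-Linf level gamma certified:", prob.status == cp.OPTIMAL)

The smallest certifiable level can be searched for in the same framework by replacing the fixed gamma**2 with a nonnegative scalar variable and minimizing it, since the last constraint is linear in that variable.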

Key words: $\mathscr{L}_2$–$\mathscr{L}_\infty$ learning law, dynamic neural networks, linear matrix inequality, Lyapunov stability theory

PACS: 02.10.Yn (Matrix theory), 02.30.Yy (Control theory), 07.05.Mh (Neural networks, fuzzy logic, artificial intelligence)