Project/Area Number | 13680463 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | TOKAI UNIVERSITY |
Principal Investigator | KAMIMURA Ryotaro, Tokai University, Information Technology Center, Professor (80176643) |
Co-Investigator (Kenkyū-buntansha) |
UCHIDA Osamu, Tokai University, Department of Information Technology and Electronics, Assistant Professor (50329306)
KAMIMURA Taeko, Senshu University, Department of Letters, Professor (30205926)
NAKANISHI Shohachiro, Tokai University, Department of Information Technology and Electronics, Professor (30056254)
|
Project Period (FY) | 2001 – 2003 |
Project Status | Completed (Fiscal Year 2003) |
Budget Amount |
¥3,600,000 (Direct Cost: ¥3,600,000)
Fiscal Year 2003: ¥900,000 (Direct Cost: ¥900,000)
Fiscal Year 2002: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2001: ¥1,700,000 (Direct Cost: ¥1,700,000)
|
Keywords | Neural networks / Unsupervised learning / Supervised learning / Information maximization / Back-propagation / Radial-basis function / Cost / Competitive learning / Information content / Feature extraction / Feature discovery / Generalization ability / Information theory / Self-organizing map / Language acquisition / Giving-and-receiving verbs / Natural language processing |
Research Abstract |
We propose a new information-theoretic approach to neural computing. Our basic postulate is that living systems strategically control information in order to survive under highly uncertain outer conditions; for example, they must maximize the information they hold about the outer environment to protect themselves against its destructive forces. Applying this principle to neural networks, we proposed a new information-theoretic competitive learning method that differs fundamentally from conventional methods: competition is realized by maximizing mutual information. Building on this method, we developed the following five approaches to neural computing.
(1) Cost-sensitive information maximization: information is maximized while the associated cost is minimized. This method produces internal representations more faithful to the input patterns.
(2) Supervised information-theoretic competitive learning: unsupervised information-theoretic competitive learning is extended to supervised learning, using multi-layered networks in which an output layer is added to the first competitive layer.
(3) Teacher-directed learning: a new supervised learning method based on information maximization that uses no error back-propagation, making it both efficient and biologically sound.
(4) Gaussian activation functions: our earlier methods used sigmoid activation functions, which caused problems in increasing information; to remedy this, we use Gaussian activation functions.
(5) Unification of information maximization and minimization: we unify information maximization and minimization, because the shortcomings of each can be resolved within this framework.
We present here the current state of our research on strategic information control.
|
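The core idea in the abstract, competitive learning realized by maximizing mutual information between input patterns and competitive units with Gaussian activations, can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the specific normalization, the finite-difference gradient ascent, and all function names and demo data are assumptions.

```python
import numpy as np

def firing_probs(X, W, sigma=1.0):
    """p(j|s): probability that competitive unit j fires for input pattern s.
    Gaussian activations (cf. approach (4)), normalized over units per pattern."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # squared distances
    v = np.exp(-d2 / (2.0 * sigma ** 2))
    return v / v.sum(axis=1, keepdims=True)

def mutual_information(X, W, sigma=1.0):
    """I(units; patterns) = (1/S) * sum_s sum_j p(j|s) log(p(j|s) / p(j)),
    assuming uniform p(s) = 1/S over the S input patterns."""
    p_js = firing_probs(X, W, sigma)
    p_j = p_js.mean(axis=0)                  # marginal firing probability p(j)
    p_js = np.clip(p_js, 1e-300, 1.0)        # guard against log(0)
    return float((p_js * np.log(p_js / p_j)).sum(axis=1).mean())

def train(X, W, sigma=1.0, lr=0.1, steps=50, eps=1e-5):
    """Gradient ascent on mutual information via finite differences (a simple
    stand-in for an analytic update rule)."""
    W = W.astype(float).copy()
    for _ in range(steps):
        base = mutual_information(X, W, sigma)
        grad = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            Wp = W.copy()
            Wp[idx] += eps
            grad[idx] = (mutual_information(X, Wp, sigma) - base) / eps
        W += lr * grad
    return W
```

Maximizing the mutual information forces the units to specialize: with two well-separated input clusters and two units, each unit comes to respond to one cluster and the information approaches its maximum of log 2 nats, whereas identical weight vectors carry zero information.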