IMPROVEMENT OF CONVERGENCE OF LEARNING OF MULTI-LAYER NEURAL NETWORKS AND APPLICATION FOR SEARCH ENGINE
Project/Area Number |
13680472
|
Research Category |
Grant-in-Aid for Scientific Research (C)
|
Allocation Type | Single-year Grants |
Section | General |
Research Field |
Intelligent informatics
|
Research Institution | TOKYO METROPOLITAN COLLEGE OF TECHNOLOGY |
Principal Investigator |
HARA Kazuyuki TOKYO METROPOLITAN COLLEGE OF TECHNOLOGY, DEPARTMENT OF ELECTRICAL AND INFORMATION ENGINEERING, ASSOCIATE PROFESSOR (30311004)
|
Co-Investigator(Kenkyū-buntansha) |
NAKAYAMA Kenji KANAZAWA UNIVERSITY, GRADUATE SCHOOL OF NATURAL SCIENCE AND TECHNOLOGY, PROFESSOR (00207945)
|
Project Period (FY) |
2001 – 2002
|
Project Status |
Completed (Fiscal Year 2002)
|
Budget Amount |
¥1,300,000 (Direct Cost: ¥1,300,000)
Fiscal Year 2002: ¥500,000 (Direct Cost: ¥500,000)
Fiscal Year 2001: ¥800,000 (Direct Cost: ¥800,000)
|
Keywords | Multilayer neural networks / Error distribution / Improving convergence of learning / Margin / Ensemble learning / On-line learning / Simple perceptron / Hierarchical neural network / Improvement of convergence / Bias in the number of training examples / Symmetry breaking |
Research Abstract |
In this study, we investigated improving the convergence of learning in multilayer neural networks and its application to a search engine. The results are summarized as follows: (1) Biased classification due to unequal numbers of data per class. We investigated a learning method in which the probability of updating the connection weights is proportional to the mean squared error. It balances the contributions of data with large and small errors, so that the minority class becomes learnable. (2) Learning method to obtain early symmetry breaking. We investigated a learning method for multilayer neural networks that updates only one connection weight at a time, to avoid stagnation of the learning. (3) Perceptron learning with a margin. We introduced a margin a la Gardner to improve perceptron learning. Our algorithm is superior to Hebbian learning in the early stage of learning. (4) Analysis of ensemble learning with linear learning machines. We analyzed the generalization error of ensemble learning as a function of the number of weak learners K. As a result, we showed that in the limit as K goes to infinity, the generalization error is half that of a single perceptron.
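Result (3) above refers to perceptron learning with a Gardner-style margin: a pattern triggers a weight update not only when it is misclassified, but whenever its stability falls below a margin κ. The following is a minimal sketch of that idea, not the project's actual algorithm; the parameter names (`kappa`, `lr`, `epochs`) and the data setup are illustrative assumptions.

```python
import numpy as np

def perceptron_with_margin(X, y, kappa=0.5, lr=0.1, epochs=5000):
    """Perceptron learning with a margin (a la Gardner).

    X: (n_samples, n_features) inputs; y: labels in {-1, +1}.
    A pattern (x, t) is updated whenever its stability t * (w . x)
    is at most the margin kappa, not only when it is misclassified
    (the classical perceptron rule is the special case kappa = 0).
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for x, t in zip(X, y):
            if t * np.dot(w, x) <= kappa:  # inside the margin: update
                w += lr * t * x
                updated = True
        if not updated:  # all patterns have stability > kappa
            break
    return w

# Illustrative usage on synthetic linearly separable data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])
X = rng.normal(size=(50, 2))
y = np.where(X @ w_true >= 0, 1.0, -1.0)
w = perceptron_with_margin(X, y, kappa=0.5)
```

For separable data this converges in finitely many updates, and the learned `w` classifies every training pattern with stability strictly above `kappa`, which is one way a margin can speed up the early stage of learning compared with plain Hebbian updates.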
|
Report (3 results)
Research Products (20 results)