Study on the cooperation mechanism of many learning machines and its dynamic behavior
Project/Area Number | 16500146 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Sensitivity informatics/Soft computing |
Research Institution | Tokyo Metropolitan College of Industrial Technology (2006), Tokyo Metropolitan College of Technology (2004-2005) |
Principal Investigator | HARA Kazuyuki, Tokyo Metropolitan College of Industrial Technology, Department of Monozukuri Engineering, Professor (30311004) |
Co-Investigator (Kenkyū-buntansha) | MIYOSHI Seiji, Kobe City College of Technology, Department of Electronic Engineering, Assistant Professor (10270307) |
Project Period (FY) | 2004 – 2006 |
Project Status | Completed (Fiscal Year 2006) |
Budget Amount | ¥3,100,000 (Direct Cost: ¥3,100,000) |
Fiscal Year 2006: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 2005: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 2004: ¥1,500,000 (Direct Cost: ¥1,500,000)
Keywords | ensemble learning / cooperation mechanism / dynamic process / on-line learning / simple perceptron / statistical mechanics / mutual learning / linear perceptron / nonlinear perceptron / statistical mechanics of information / majority vote |
Research Abstract |
Ensemble learning algorithms, such as bagging and AdaBoost, try to improve on the performance of a weak learning machine by combining many weak learning machines; such algorithms have recently received considerable attention. We analyzed the dynamics of the generalization error of ensemble learning using statistical mechanics methods within the framework of on-line learning. Within this framework, the overlap (direction cosine) between the teacher and the initial student weight vectors plays an important role in ensemble learning. When the overlaps between the teacher and the students are homogeneous, a simple average of the student outputs can be used to integrate the ensemble (bagging). From our analysis, we found that in the noise-free case the generalization error equals half that of a single linear perceptron when the number of linear perceptrons K becomes infinite. In addition, we found that for finite K the generalization error converges to that of the infinite case as O(1/K), both with and without noise. In the inhomogeneous case, the generalization error can be improved by introducing weights when averaging the outputs of the learning machines (i.e., using a weighted average rather than a simple average), and these weights should be adapted to minimize the generalization error (i.e., parallel boosting). In ensemble learning there is no interaction between the students. In mutual learning, by contrast, learning is performed between two students who have previously learned from a teacher. The knowledge each student has obtained from the teacher is therefore exchanged, which may improve the students' performance, and this interaction may mimic the integration mechanism of ensemble learning. We showed that mutual learning asymptotically converges to bagging. Moreover, in the limit where the step size goes to zero, the student with the larger initial overlap transiently passes through a state of parallel boosting during mutual learning.
|
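As a rough, hypothetical illustration of the setting described in the abstract (not the authors' actual formulation or code), the following toy Python script trains K independent linear perceptrons on-line from a common teacher and compares the generalization error of one student with that of the simple average of all students, i.e. the bagging-style integration discussed above. The dimension N, learning rate eta, number of steps, and normalizations are arbitrary choices made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000        # input dimension (illustrative value)
    K = 10          # number of student linear perceptrons
    eta = 0.3       # learning rate (illustrative value)
    steps = 2 * N   # on-line learning steps; comparison made mid-transient

    # Teacher weight vector B with |B|^2 = N; independent random initial students J_k
    B = rng.normal(size=N)
    B *= np.sqrt(N) / np.linalg.norm(B)
    J = rng.normal(size=(K, N))

    def gen_error(w):
        # Generalization error <(B.x - w.x)^2>/2 over inputs x with components ~ N(0, 1/N),
        # which reduces to |B - w|^2 / (2N).
        d = B - w
        return d @ d / (2.0 * N)

    for t in range(steps):
        x = rng.normal(size=N) / np.sqrt(N)   # one new example per step (on-line learning)
        target = B @ x                        # noise-free teacher output
        for k in range(K):
            # Gradient-descent update of each student; students do not interact
            J[k] += eta * (target - J[k] @ x) * x

    eps_single = gen_error(J[0])
    eps_ensemble = gen_error(J.mean(axis=0))  # averaging weights = averaging outputs (linearity)
    print(f"single student  eps_g = {eps_single:.4f}")
    print(f"K={K} ensemble eps_g = {eps_ensemble:.4f}")
    print(f"ratio ensemble/single = {eps_ensemble / eps_single:.3f}")

Because the students are linear, averaging their weight vectors is equivalent to averaging their outputs, so the ensemble error can be read off directly from the averaged weights. According to the analysis summarized in the abstract, this ratio approaches 1/2 as K goes to infinity in the noise-free case; the exact value observed in such a toy run depends on the illustrative parameter choices.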
Report (4 results)
Research Products (24 results)