2005 Fiscal Year Final Research Report Summary
Symbol Processing System Modeled after Brains
Project/Area Number | 15500095
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Intelligent informatics
Research Institution | Keio University
Principal Investigator | SAKURAI Akito, Keio University, Faculty of Science and Technology, Professor (00303339)
Project Period (FY) | 2003 – 2005
Keywords | artificial neural networks / recurrent neural networks / grammar learning
Research Abstract |
We investigated a new framework of neural network learning composed of multiple reinforcement learning agents, among which several legitimate candidate modules coexist. We invented a mechanism that facilitates competitive learning among the reinforcement learning agents and confirmed its validity by computer simulations.

We further investigated another type of grammar acquisition, using recurrent neural networks of two different types: networks of one type monitor those of the other and modify themselves based on the monitored observations. We found that the networks are able to learn grammatical categories and are robust against lesion.

We conducted experiments on the acquisition of shift-reduce parsers, with the ATIS corpus of the Penn Treebank as the training corpus and inductive logic programming (ILP) as the fundamental learning paradigm. To alleviate the drawbacks of existing learning methods (high execution time and memory requirements), we employed grammatical categories as the learning units. We invented new methods to generate rationalized negative examples based on grammatical categories and to relearn those negative examples by investigating where they are in fact misclassified. We confirmed that the accuracy improved to a little less than 90%.

Finite state automata are expressive enough to represent knowledge in the brain, but it is well known that they are too versatile to be learned successfully. We therefore studied methods to learn and communicate them approximately, and found reinforcement learning methods suitable for this purpose. We invented a method that prepares a large number of grammatical categories, tries to use them in communication, and selects the best ones. We implemented the method in recurrent neural networks and conducted numerical simulations. The results are promising in the sense that the original grammatical category structure is reconstructed by paying attention only to training errors (not to generalization capabilities).
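The report does not spell out the competitive mechanism among agents. The following minimal Python sketch shows one plausible reading under stated assumptions: several candidate modules learn the same task, and control passes to whichever module currently scores best. The bandit task, the epsilon-greedy learners, the recency-weighted score, and the winner-take-all selection rule are all illustrative assumptions, not the project's actual design.

```python
# A minimal sketch of competitive learning among reinforcement learning
# agents ("candidate modules"). The bandit task, the epsilon-greedy value
# learners, the recency-weighted score, and the winner-take-all selection
# rule are illustrative assumptions, not the report's actual mechanism.
import random

K_ARMS = 5
TRUE_MEANS = [random.gauss(0.0, 1.0) for _ in range(K_ARMS)]

def pull(arm):
    """Stochastic reward from one arm of the toy bandit task."""
    return random.gauss(TRUE_MEANS[arm], 1.0)

class Module:
    """One candidate reinforcement learning agent."""
    def __init__(self, epsilon):
        self.epsilon = epsilon          # exploration rate of this module
        self.q = [0.0] * K_ARMS         # incremental value estimates
        self.n = [0] * K_ARMS           # pull counts per arm
        self.score = 0.0                # recency-weighted performance

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(K_ARMS)
        return max(range(K_ARMS), key=lambda a: self.q[a])

    def learn(self, arm, reward):
        self.n[arm] += 1
        self.q[arm] += (reward - self.q[arm]) / self.n[arm]
        self.score += 0.05 * (reward - self.score)

modules = [Module(eps) for eps in (0.3, 0.1, 0.01)]
for t in range(3000):
    if random.random() < 0.1:           # occasionally audition a module
        module = random.choice(modules) # other than the current winner
    else:                               # competition: the best-scoring
        module = max(modules, key=lambda m: m.score)  # module takes control
    arm = module.act()
    module.learn(arm, pull(arm))        # only the acting module is updated

print("final module scores:", [round(m.score, 2) for m in modules])
```

Since only the acting module is updated, modules compete for both control and training experience, which is one simple way competition among reinforcement learning agents can be realized.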
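For reference, the shift-reduce regime targeted by the parser-acquisition experiments can be stated in a few lines. The sketch below is not the learned ILP parser: the toy grammar, lexicon, and greedy reduce-first control strategy are hypothetical stand-ins for the decision rules that the experiments induce from the ATIS corpus.

```python
# Minimal shift-reduce parsing loop. The toy grammar, lexical categories,
# and greedy reduce-first control are illustrative assumptions; the real
# experiments learn the control decisions from the ATIS corpus via ILP.

RULES = {                    # RHS tuple -> LHS category
    ("VERB", "PRON", "NP"): "S",
    ("DET", "NOUN"): "NP",
}

LEXICON = {"show": "VERB", "me": "PRON", "the": "DET", "flights": "NOUN"}

def shift_reduce(words):
    stack, buffer = [], [LEXICON[w] for w in words]
    while buffer or len(stack) > 1:
        for rhs, lhs in RULES.items():    # reduce whenever a rule matches
            if tuple(stack[-len(rhs):]) == rhs:
                stack[-len(rhs):] = [lhs]
                break
        else:
            if not buffer:                # no action applies: parse fails
                return None
            stack.append(buffer.pop(0))   # otherwise shift the next word
    return stack[0]

print(shift_reduce("show me the flights".split()))  # -> S
```

Acquiring a shift-reduce parser then amounts to learning, from treebank-derived positive and negative examples, when to shift and which reduction to apply; one way to read the rationalized negative examples mentioned above is as "do not apply this action here" cases.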