Symbol Processing System Modeled after Brains
Project/Area Number | 13680438 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Keio University |
Principal Investigator | SAKURAI Akito, Keio University, Faculty of Science and Technology, Professor (00303339) |
Project Period (FY) | 2001 – 2002 |
Project Status | Completed (Fiscal Year 2002) |
Budget Amount | ¥3,100,000 (Direct Cost: ¥3,100,000)
Fiscal Year 2002: ¥1,500,000 (Direct Cost: ¥1,500,000)
Fiscal Year 2001: ¥1,600,000 (Direct Cost: ¥1,600,000)
Keywords | Recurrent Neural Networks / Language Models / Counter Languages / Machine Learning / Stochastic Learning Algorithm |
Research Abstract |
During the research we encountered puzzling experimental results implying that the representation capability of recurrent neural networks (RNNs) is more limited than usually believed. The results were puzzling because they exhibited, for example, learnability that was at once limited and unstable. We investigated further, obtained results explaining why RNN learning is possible at all, and devised methods to circumvent the insufficient capability.

(a) If noise tolerance is required, then general counters are not learnable, and therefore stacks are not learnable either. Based on these results, we proposed a single-turn counter, one that can no longer count up once it has counted down, and showed constructively that the single-turn counter, and likewise any finite-turn counter, is implementable, while the infinite-turn counter is not. In consequence, an RNN can represent at most a finite state automaton with finite-turn counters, and the experimental results that appear to show learnability of counters in fact show at most the learnability of finite-turn counters, not of general counters. (A minimal illustrative sketch of such a counter follows.)
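The report contains no code; the following Python sketch, with hypothetical names of our own (FiniteTurnCounter, recognizes_anbn), only illustrates the single-turn counter idea: a counter that may count up freely but, once it has counted down, can never count up again, generalized here to a budget of k direction reversals.

    # Illustrative sketch only; names and turn-budget API are assumptions,
    # not taken from the project report.
    class FiniteTurnCounter:
        def __init__(self, max_turns=1):
            self.value = 0
            self.turns = 0        # direction reversals used so far
            self.going_up = True  # current counting direction
            self.max_turns = max_turns

        def _reverse(self):
            self.turns += 1
            if self.turns > self.max_turns:
                raise RuntimeError("turn budget exhausted")
            self.going_up = not self.going_up

        def inc(self):
            if not self.going_up:   # counting up again costs a turn
                self._reverse()
            self.value += 1

        def dec(self):
            if self.going_up:       # switching to counting down costs a turn
                self._reverse()
            if self.value == 0:
                raise RuntimeError("counter cannot go below zero")
            self.value -= 1

    def recognizes_anbn(s):
        # Accept a^n b^n with one single-turn counter plus a trivial FSA.
        c = FiniteTurnCounter(max_turns=1)
        try:
            for ch in s:
                if ch == "a":
                    c.inc()
                elif ch == "b":
                    if c.value == 0:
                        return False   # more b's than a's
                    c.dec()
                else:
                    return False
            return c.value == 0
        except RuntimeError:
            return False               # an 'a' after a 'b' needs a second turn

With max_turns=1 this accepts exactly a^n b^n, a non-regular language, and a second single-turn counter would already suffice for a^n b^n c^n, which is not even context-free; this illustrates how the counter hierarchy of finding (c) below cuts across Chomsky's hierarchy.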
(b) Theoretically, a finite state automaton cannot be learned without a suitable learning bias, and for RNNs it is in general impossible to prove or disprove the equivalence of two automata learned by RNNs. We proposed a new stochastic learning algorithm for RNNs whose computation units are classical perceptrons. A bias naturally introduced by the algorithm makes it possible to learn a finite state automaton. Since the state transitions representable by such an RNN range over a finite space, we are guaranteed to obtain a finite state automaton representation of an RNN of this type. The algorithm is guaranteed to converge with probability one if a solution exists, although the expected time to convergence may be infinite. (A speculative illustration appears after point (c) below.)

(c) We characterized the languages generated by a finite state automaton with finite- or single-turn counters. These languages form a hierarchical structure different from Chomsky's hierarchy.
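The abstract does not spell out the learning rule, so the following is only a speculative sketch of one such scheme, not the authors' algorithm: an RNN built from hard-threshold perceptrons with weights drawn from a bounded integer grid has only finitely many distinct configurations, so blindly resampling the weights converges with probability one to a network consistent with the training data whenever one exists in the grid, while the expected time may be very large, mirroring the caveat above. All function names, the weight grid, and the acceptance convention are illustrative assumptions.

    import random

    def step(weights, state, symbol):
        # One RNN step: each unit is a hard-threshold (classical) perceptron
        # reading the previous state bits plus the current input bit.
        inputs = state + (symbol,)
        return tuple(
            1 if sum(w * x for w, x in zip(ws, inputs)) + b > 0 else 0
            for ws, b in weights
        )

    def consistent(weights, data, n_units):
        # Acceptance convention (assumed): the first state bit after the
        # whole string has been read is the accept/reject verdict.
        for string, label in data:
            state = (0,) * n_units
            for sym in string:
                state = step(weights, state, sym)
            if state[0] != label:
                return False
        return True

    def stochastic_search(data, n_units=2, w_range=2, seed=0):
        # Resample integer weights until the network fits the data; over a
        # finite grid this halts with probability 1 if a solution exists.
        rng = random.Random(seed)
        n_in = n_units + 1  # previous state bits plus one input bit
        while True:
            weights = [
                ([rng.randint(-w_range, w_range) for _ in range(n_in)],
                 rng.randint(-w_range, w_range))
                for _ in range(n_units)
            ]
            if consistent(weights, data, n_units):
                return weights

    # Example: fit 'the string contains at least one 1', a regular language.
    data = [((0,), 0), ((1,), 1), ((0, 0), 0), ((0, 1), 1),
            ((1, 0), 1), ((1, 1), 1)]
    weights = stochastic_search(data)

Because the state space of such a network is the finite set {0, 1}^n_units, the learned transition function can be read off directly as a finite state automaton, which is the representational guarantee mentioned in (b).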
Report (3 results)
Research Products (10 results)