
Symbol Processing System Modeled after Brains

Research Project

Project/Area Number 13680438
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Single-year Grants
Section General
Research Field Intelligent informatics
Research Institution Keio University

Principal Investigator

SAKURAI Akito  Keio University, Faculty of Science and Technology, Professor (00303339)

Project Period (FY) 2001 – 2002
Project Status Completed (Fiscal Year 2002)
Budget Amount *help
¥3,100,000 (Direct Cost: ¥3,100,000)
Fiscal Year 2002: ¥1,500,000 (Direct Cost: ¥1,500,000)
Fiscal Year 2001: ¥1,600,000 (Direct Cost: ¥1,600,000)
Keywords Recurrent Neural Networks / Language Models / Counter Languages / Machine Learning / Stochastic Learning Algorithms
Research Abstract

During the research, we encountered puzzling experimental results implying that the representation capability of recurrent neural networks (RNNs) is more limited than usually believed. The results were puzzling because they exhibited, for example, learnability that was limited and unstable. We carried out further investigation to explain why RNN learning is possible and to develop methods to circumvent the insufficient capability.
(a) If noise tolerance is required, then general counters are not learnable, and therefore stacks are not learnable either. Based on this result, we proposed a single-turn counter, which cannot count up once it has counted down, and we showed constructively that the single-turn counter (and, in the same way, the finite-turn counter) is implementable, while the infinite-turn counter is not. Consequently, we showed that an RNN can represent at most a finite state automaton with finite-turn counters, and that the experimental results showing learnability of counters in fact show at most the learnability of finite-turn counters, not of general counters.
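The single-turn restriction can be illustrated with a small sketch (an assumed illustration, not the report's construction): the counter increments freely, but once it decrements for the first time, any further increment is rejected.

```python
# Hypothetical sketch of a single-turn counter: it may count up, then count
# down, but after its first decrement ("the turn") it can never count up again.

class SingleTurnCounter:
    def __init__(self):
        self.value = 0
        self.turned = False  # becomes True at the first decrement

    def inc(self):
        if self.turned:
            return False  # counting up after the turn is forbidden
        self.value += 1
        return True

    def dec(self):
        if self.value == 0:
            return False  # cannot go below zero
        self.turned = True
        self.value -= 1
        return True

c = SingleTurnCounter()
for _ in range(3):
    c.inc()               # count up to 3
c.dec()                   # first decrement marks the turn
assert c.inc() is False   # further increments are rejected
```

A finite-turn counter would generalize this by allowing a fixed number of up/down alternations before locking.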
(b) Theoretically, a finite state automaton cannot be learned without a suitable learning bias, and in the RNN case it is in general impossible to prove or disprove the equivalence of two automata learned by RNNs. We proposed a new stochastic learning algorithm for RNNs with classical perceptrons as their computation units. A bias naturally introduced by the algorithm makes it possible to learn a finite state automaton. Since the state transitions represented by such an RNN range over a finite space, we are guaranteed to obtain a finite state automaton representation of an RNN of this type. The algorithm is guaranteed to converge with probability one if a solution exists, although the expected time to convergence might be infinite.
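The flavor of the convergence guarantee can be sketched as follows. This is a deliberately simplified stand-in, not the report's algorithm: a stochastic search over the weights of a classical (hard-threshold) perceptron finds a consistent hypothesis with probability one whenever one exists, even though the expected search time can be unbounded.

```python
# Hedged sketch: stochastic search over hard-threshold perceptron weights.
# If a separating weight vector exists, random sampling hits the (positive-
# measure) solution region with probability one, though possibly very slowly.
import random

def perceptron_out(w, x):
    # classical perceptron unit: hard threshold on weighted sum plus bias w[-1]
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0 else 0

def stochastic_fit(samples, dim, seed=0, max_iter=100_000):
    # propose random weights until every sample is classified correctly
    rng = random.Random(seed)
    for _ in range(max_iter):
        w = [rng.uniform(-1, 1) for _ in range(dim + 1)]
        if all(perceptron_out(w, x) == y for x, y in samples):
            return w
    return None

# AND is linearly separable, so a solution exists and the search terminates
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = stochastic_fit(data, dim=2)
assert w is not None
```

Because each proposal is independent, convergence holds with probability one given a solution, mirroring the property stated above; the price is that the expected number of proposals can be infinite when the solution region is vanishingly small.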
(c) We characterized the languages generated by a finite state automaton with finite- or single-turn counters. These languages form a hierarchical structure different from Chomsky's hierarchy.
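As an assumed illustration of the language classes involved (not an example taken from the report): the language { aⁿbⁿ } can be recognized by a finite automaton equipped with one single-turn counter, whereas a string like "abab" forces the counter to turn more than once and is rejected.

```python
# Illustrative recognizer for { a^n b^n } using a single-turn counter:
# the counter counts up on 'a', turns on the first 'b', and may never
# count up again afterwards.

def accepts_anbn(s):
    count, turned = 0, False
    for ch in s:
        if ch == 'a':
            if turned:
                return False  # a second turn would be needed
            count += 1
        elif ch == 'b':
            if count == 0:
                return False  # more b's than a's so far
            turned = True
            count -= 1
        else:
            return False      # alphabet is {a, b}
    return count == 0

assert accepts_anbn("aaabbb")
assert not accepts_anbn("aabbb")
assert not accepts_anbn("abab")  # requires more than one turn
```

Counting turns in this way is one concrete lens on why these classes stratify differently from Chomsky's hierarchy: the number of permitted turns, rather than the type of memory, separates the levels.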

Report

(3 results)
  • 2002 Annual Research Report
  • 2002 Final Research Report Summary
  • 2001 Annual Research Report
  • Research Products

    (10 results)

All Other

All Publications (10 results)

  • [Publications] Akito Sakurai: "A Fast and Convergent Stochastic MLP Learning Algorithm" International Journal of Neural Systems. 11. 573-584 (2001)

    • Description
From the Final Research Report Summary (Japanese)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] Akito Sakurai, Daisuke Hyodo: "Simple recurrent neural networks and random indexing" Proc. International Conference on Information Processing. (2002)

    • Description
From the Final Research Report Summary (Japanese)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] T. Harada, O. Araki, A. Sakurai: "Learning Context-Free Grammars with Recurrent Neural Networks" Proc. International Joint Conference on Neural Networks. 2602-2607 (2001)

    • Description
From the Final Research Report Summary (Japanese)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] Akito Sakurai: "A Fast and Convergent Stochastic MLP Learning Algorithm" International Journal of Neural Systems. Vol.11. 573-584 (2001)

    • Description
From the Final Research Report Summary (English)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] Akito Sakurai, Daisuke Hyodo: "Simple recurrent neural networks and random indexing" Proc. International Conference on Information Processing (CD-ROM, unnumbered). (2002)

    • Description
From the Final Research Report Summary (English)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] T. Harada, O. Araki, A. Sakurai: "Learning Context-Free Grammars with Recurrent Neural Networks" Proc. International Joint Conference on Neural Networks. 2602-2607 (2001)

    • Description
From the Final Research Report Summary (English)
    • Related Report
      2002 Final Research Report Summary
  • [Publications] Akito Sakurai: "A Fast and Convergent Stochastic MLP Learning Algorithm" International Journal of Neural Systems. 11. 573-584 (2001)

    • Related Report
      2002 Annual Research Report
  • [Publications] Akito Sakurai, Daisuke Hyodo: "Simple recurrent neural networks and random indexing" Proc. International Conference on Information Processing (CD-ROM). (2002)

    • Related Report
      2002 Annual Research Report
  • [Publications] T. Harada, O. Araki, A. Sakurai: "Learning Context-Free Grammars with Recurrent Neural Networks" Proc. International Joint Conference on Neural Networks. 2602-2607 (2001)

    • Related Report
      2002 Annual Research Report
  • [Publications] Akito Sakurai: "A Fast and Convergent Stochastic MLP Learning Algorithm" International Journal of Neural Systems. Vol.11, No.6. 573-583 (2001)

    • Related Report
      2001 Annual Research Report


Published: 2001-04-01   Modified: 2016-04-21  

