Mammalian-like neural networks for dynamic information processing and their learning algorithm
Project/Area Number | 04805032 |
Research Category | Grant-in-Aid for General Scientific Research (C) |
Allocation Type | Single-year Grants |
Research Field | Electronic communication systems engineering |
Research Institution | The University of Electro-Communications |
Principal Investigator | TAKAHASHI Haruhisa, The University of Electro-Communications, Faculty of Electro-Communications, Communications and Systems, Associate Professor (90135418) |
Co-Investigator (Kenkyū-buntansha) |
TAKEDA Mitsuo, The University of Electro-Communications, Faculty of Electro-Communications, Communications and Systems, Professor (00114926)
TOMITA Etsuji, The University of Electro-Communications, Faculty of Electro-Communications, Communications and Systems, Professor (40016598)
|
Project Period (FY) | 1992 – 1993 |
Project Status | Completed (Fiscal Year 1993) |
Budget Amount |
¥2,000,000 (Direct Cost: ¥2,000,000)
Fiscal Year 1993: ¥800,000 (Direct Cost: ¥800,000)
Fiscal Year 1992: ¥1,200,000 (Direct Cost: ¥1,200,000)
|
Keywords | PAC learning / Recurrent network / VC dimension / Speech recognition / Neural network / Sample complexity / Generalization / Classification noise |
Research Abstract |
(1) It is mathematically investigated what kinds of internal representations are separable by a single output unit of a three-layer feedforward neural network. A topologically described necessary and sufficient condition is given for a partition of the input space to be classifiable by the output unit, and an efficient algorithm is proposed for checking whether a given partition of the input space results in linear separation at the output unit (a generic separability test is sketched after this abstract).
(2)(3) These papers improve the sample complexity needed for reliable generalization in the framework of PAC learning. By introducing an ill-posed learning algorithm that takes the worst error over the candidate network realizations attained by minimizing the empirical error, the order of the sample complexity can be refined, whereas previous methods bound the error uniformly over the whole configuration space. The essential VC dimension of a concept class, which is smaller than or equal to the number of modifiable system parameters, is introduced for calculating the generalization error in place of the traditional VC dimension analysis (the classical bound being refined is recalled below). Learning under classification noise is also treated.
(4) This paper proposes a very simple recurrent neural network (VSRN) architecture: a three-layer network that contains only self-loop recurrent connections in the hidden layer (a minimal sketch follows). The role of the recurrent connections is explained by the network dynamics, and their function is acquired by learning from finite examples, as in mammalian behavior. Through the learning process, some characteristic responses observed in mammalian auditory systems, namely on-neurons, off-neurons, and on-off-neurons, are found to be acquired automatically by the network. Using these characteristic functions, the architecture can perform phoneme spotting in real time. Simulation experiments are carried out to investigate the recognition performance.
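For illustration of item (1), a minimal generic test of whether a labeled partition of points is linearly separable, phrased as a linear-programming feasibility problem. This is a standard substitute sketch, not the topological algorithm proposed in the paper; the function name linearly_separable is our own.

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    # X: (n, d) array of points; y: (n,) array of labels in {-1, +1}.
    # Strict separability is equivalent to feasibility of
    #   y_i * (w . x_i + b) >= 1  for all i
    # (margin 1 by rescaling), i.e. -y_i * [x_i, 1] . [w, b] <= -1.
    n, d = X.shape
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1))
    return res.success

# Example: the XOR partition of the unit square is not linearly separable.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
print(linearly_separable(X, y))  # False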
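For context on items (2) and (3), the classical VC-dimension sample-complexity bound that the papers refine, in schematic form; constants are omitted, and the precise definition of the essential VC dimension is given in the papers themselves, not here.

% Classical PAC bound: with probability at least 1 - \delta, a hypothesis
% minimizing empirical error over a class of VC dimension d has true
% error at most \varepsilon once the sample size reaches
\[
  m(\varepsilon, \delta) \;=\; O\!\left(\frac{1}{\varepsilon}
    \left(d \,\log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right)\right).
\]
% The refinement described in the abstract replaces d by the essential
% VC dimension d_{\mathrm{ess}} \le d, itself at most the number of
% modifiable system parameters, lowering the order of m.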
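A minimal sketch of the VSRN forward pass described in (4): a three-layer network whose only recurrence is a self-loop on each hidden unit. The tanh hidden nonlinearity, the linear output layer, and the names W_in, r, W_out are illustrative assumptions, not taken from the paper.

import numpy as np

class VSRN:
    # Three-layer network; the only recurrent weights are the per-unit
    # self-loops r (a diagonal recurrence in the hidden layer).
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))    # input -> hidden
        self.r = rng.normal(0.0, 0.1, n_hidden)               # hidden self-loops
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))  # hidden -> output
        self.b_h = np.zeros(n_hidden)
        self.b_o = np.zeros(n_out)

    def forward(self, frames):
        # frames: (T, n_in) sequence, e.g. spectral frames of speech.
        # Each hidden unit sees the current input plus its own previous
        # activation scaled by its self-loop weight; this one-unit memory
        # is what lets units behave as on-, off-, or on-off-detectors.
        h = np.zeros_like(self.r)
        outputs = []
        for x in frames:
            h = np.tanh(self.W_in @ x + self.r * h + self.b_h)
            outputs.append(self.W_out @ h + self.b_o)
        return np.array(outputs)  # (T, n_out): one score vector per frame

# Usage: per-frame scores that a phoneme-spotting stage could threshold.
net = VSRN(n_in=12, n_hidden=20, n_out=5)
scores = net.forward(np.random.default_rng(1).normal(size=(50, 12)))

Restricting recurrence to self-loops keeps the number of recurrent parameters equal to the number of hidden units, which is what makes the network "very simple" compared with a fully recurrent hidden layer.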
|
Report (3 results)
Research Products (19 results)