Real-time speech recognition and model selection via recurrent neural networks
Project/Area Number | 06650401
Research Category | Grant-in-Aid for General Scientific Research (C)
Allocation Type | Single-year Grants
Research Field | Information and Communication Engineering
Research Institution | The University of Electro-Communications
Principal Investigator | TAKAHASHI Haruhisa, The University of Electro-Communications, Dept. of Communications and Systems Engineering, Associate Professor (90135418)
Co-Investigator (Kenkyū-buntansha) | YOSHIDA Toshinobu, The University of Electro-Communications, Dept. of Computer Sciences and Information Mathematics, Associate Professor (30114341)
TOMITA Etsuji, The University of Electro-Communications, Dept. of Communications and Systems Engineering, Professor (40016598)
Project Period (FY) | 1994 – 1995
Project Status | Completed (Fiscal Year 1995)
Budget Amount | ¥2,000,000 (Direct Cost: ¥2,000,000)
Fiscal Year 1995: ¥500,000 (Direct Cost: ¥500,000)
Fiscal Year 1994: ¥1,500,000 (Direct Cost: ¥1,500,000)
Keywords | Speech recognition / Neural networks / Machine learning / Probably Approximately Correct
Research Abstract |
We pursued the theme of this project by intensively investigating the theoretical basis of learning. In the first year we developed a very simple recurrent neural network (VSRN) architecture: a three-layer network whose only recurrent connections are self-loops in the hidden layer. The role of these recurrent connections is explained by the network dynamics, and their function is acquired by learning from finite examples, much as in mammalian systems. Through the learning process, several characteristic functions observed in mammalian auditory systems were found to be acquired automatically by the network.
In the second year we mainly investigated the theoretical framework of how our network can learn well, proposing a new method for analyzing generalization performance. To this end we undertook a comparison of learning and hypothesis testing, which led to the novel notion of the regular interpolation dimension and to an ill-disposed learning algorithm that produces ill-disposed hypotheses. This unites learning and hypothesis testing under a common viewpoint, so that the inequalities underlying hypothesis testing can be applied directly to estimating ill-disposed hypotheses on training examples. The regular interpolation dimension is no greater than the number of modifiable system parameters. We analyzed the ill-disposed learning algorithm both in the PAC learning model and in an average-case setting, obtaining bounds on learning curves and sample complexity in terms of the regular interpolation dimension that are more explicit than those in terms of the VC dimension. The results were applied and extended to other algorithms, such as the Gibbs algorithm and inconsistent learning, to obtain explicit bounds on their learning curves and sample complexity.
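The abstract names the VSRN architecture but does not reproduce its equations. The following is a minimal sketch of such a network, assuming a standard sigmoid activation and one self-loop weight per hidden unit (no cross-connections among hidden units); all names, dimensions, and initializations are illustrative, and training code is omitted.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class VSRN:
        """Sketch of a 'very simple recurrent network': a three-layer net
        whose only recurrence is a self-loop weight on each hidden unit.
        Details are assumptions, not taken from the report."""

        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))    # input -> hidden
            self.r = rng.normal(0.0, 0.1, n_hidden)               # per-unit self-loop weights
            self.b = np.zeros(n_hidden)                           # hidden biases
            self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))  # hidden -> output

        def forward(self, xs):
            """Run one input sequence xs (T x n_in); return outputs (T x n_out)."""
            h = np.zeros(len(self.r))
            ys = []
            for x in xs:
                # Self-loop recurrence: each unit sees only its own previous activation.
                h = sigmoid(self.W_in @ x + self.r * h + self.b)
                ys.append(self.W_out @ h)
            return np.array(ys)

    # Example: a 20-frame sequence of 12-dim feature vectors (e.g. filter-bank frames).
    net = VSRN(n_in=12, n_hidden=8, n_out=4)
    ys = net.forward(np.random.randn(20, 12))
    print(ys.shape)  # (20, 4)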
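For context on the form of the bounds mentioned above, the classical PAC sample complexity for a consistent learner over a hypothesis class of VC dimension d_VC is the standard result (this is background, not the report's own bound):

    m(\varepsilon,\delta) = O\!\left(\frac{1}{\varepsilon}\left(d_{\mathrm{VC}}\log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right)\right)

According to the abstract, analogous but more explicit bounds hold with d_VC replaced by the regular interpolation dimension, which is at most the number of modifiable system parameters; the exact form of those bounds is not reproduced here.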