Development and Applications of Learning Algorithms for Neural Networks
Project/Area Number | 02650235
Research Category | Grant-in-Aid for General Scientific Research (C)
Allocation Type | Single-year Grants
Research Field | Electronic Communication Systems Engineering
Research Institution | The University of Electro-Communications
Principal Investigator | TAKAHASHI Haruhisa, The University of Electro-Communications, Faculty of Electro-Communications, Dept. of Communications and Systems, Associate Professor (90135418)
Co-Investigator (Kenkyū-buntansha) | TAKEDA Mitsuo, The University of Electro-Communications, Faculty of Electro-Communications, Dept. of Communications and Systems, Professor (00114926)
TOMITA Etsuji, The University of Electro-Communications, Faculty of Electro-Communications, Dept. of Communications and Systems, Professor (40016598)
Project Period (FY) | 1990 – 1991
Project Status | Completed (Fiscal Year 1991)
Budget Amount | ¥2,200,000 (Direct Cost: ¥2,200,000)
Fiscal Year 1991: ¥1,000,000 (Direct Cost: ¥1,000,000)
Fiscal Year 1990: ¥1,200,000 (Direct Cost: ¥1,200,000)
Keywords | Neural Network / Learning Algorithm / Back Propagation / Recurrent Propagation / Linear Separation / Linear Separability
Research Abstract |
(1) We investigate mathematically what kinds of internal representations are separable by a single output unit of a three-layer feedforward neural network. A topologically formulated necessary and sufficient condition is given for a partition of the input space to be classifiable by the output unit. An efficient algorithm is then proposed for checking whether a given partition of the input space results in linear separation at the output unit.
(2) We propose a new recurrent propagation learning algorithm. A biologically plausible neurodynamics is derived, from which a fast algorithm for computing fixed points is obtained by applying an approximation of the stochastic process. The sensitivity of the network is also derived, and from it a new recurrent propagation learning algorithm is proposed. Our algorithm runs about ten times faster than previous ones, and some instability problems in recurrent propagation are overcome. Simulation results are given comparing recurrent propagation with backpropagation and mean-field networks.
(3) Associative memory is realized through the recurrent propagation learning rule as equilibrium states of the network. Since the number of hidden units can be increased without restriction, the information capacity can be made sufficiently large; this is the main advantage of the proposed network over previous networks that have no hidden units. In our experiments, printer fonts are used as the patterns to be memorized. The data show that the greater the number of hidden units, the higher the rates of memorization and correct association.
(4) A new learning network is proposed. Although the backpropagation learning method has succeeded in many applications, it suffers from difficulties such as the local-minimum problem, excessively long learning times, and limited capability to approximate continuous mappings. We combine competitive learning and supervised learning in order to approximate continuous mappings. Simulation results show that the proposed learning algorithm runs much faster than the backpropagation method and has a reliable convergence property.
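The linear-separability check in item (1) is stated topologically in the abstract; as a minimal illustrative sketch (not the authors' algorithm), the classical perceptron convergence theorem gives a practical test: the perceptron rule converges on a labelled point set if and only if the set is linearly separable, so running it with an iteration cap distinguishes the two cases. All names below are hypothetical.

```python
import numpy as np

def is_linearly_separable(X, y, max_epochs=1000):
    """Perceptron-based test: returns (True, w) if the perceptron rule
    converges (points are linearly separable), else (False, w).
    Labels y must be in {-1, +1}."""
    Xa = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xa.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # perceptron update
                errors += 1
        if errors == 0:              # one full pass with no mistakes
            return True, w
    return False, w                  # did not converge within the cap
```

On Boolean inputs this correctly accepts AND and rejects XOR, the standard example of a partition that cannot be linearly separated by a single output unit.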
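Item (2) rests on computing fixed points of a recurrent network's dynamics. A minimal sketch of the generic idea, assuming tanh units and a simple iteration rather than the paper's stochastic approximation, is:

```python
import numpy as np

def fixed_point(W, b, x0, tol=1e-8, max_iter=1000):
    """Iterate x <- tanh(W x + b) until the update falls below tol.
    This converges when the map is a contraction, e.g. for small
    enough weights W. (Illustrative sketch, hypothetical names.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.tanh(W @ x + b)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x  # best iterate if tolerance was not reached
```

The returned state satisfies x = tanh(W x + b) up to the tolerance, which is the equilibrium that a recurrent propagation rule would train against.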
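Item (4) combines competitive and supervised learning to approximate continuous mappings. One common way to realize such a combination (a counterpropagation-style sketch under our own assumptions, not the network described in the report) is a winner-take-all prototype layer whose winning unit's output weights are trained supervised:

```python
import numpy as np

def train_hybrid(X, Y, n_units=8, epochs=50, lr_c=0.1, lr_s=0.1):
    """Competitive layer (winner-take-all prototypes) plus a supervised
    output layer. Hypothetical names; illustrative only."""
    # spread the initial prototypes evenly over the training inputs
    idx = np.linspace(0, len(X) - 1, n_units).astype(int)
    centers = X[idx].astype(float)
    out_w = np.zeros((n_units, Y.shape[1]))
    for _ in range(epochs):
        for x, y in zip(X, Y):
            k = np.argmin(np.linalg.norm(centers - x, axis=1))  # competition
            centers[k] += lr_c * (x - centers[k])  # unsupervised prototype update
            out_w[k] += lr_s * (y - out_w[k])      # supervised output update
    return centers, out_w

def predict(centers, out_w, x):
    """Output of the winning unit for input x."""
    k = np.argmin(np.linalg.norm(centers - x, axis=1))
    return out_w[k]
```

Each prototype converges toward the centroid of the inputs it wins, and its output weights toward the mean target of that region, giving a piecewise-constant approximation of the mapping that trains in a single fast pass per sample, with no error backpropagation.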
|
Report (3 results)
Research Products (20 results)