1997 Fiscal Year Final Research Report Summary
Studies on an Optimum Design Method for Multilayer Neural Networks with Minimum Network Size
Project/Area Number | 07650422 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Information and Communication Engineering |
Research Institution | Kanazawa University |
Principal Investigator | NAKAYAMA Kenji, Kanazawa University, Graduate School of Natural Science & Technology, Professor (00207945) |
Co-Investigator (Kenkyū-buntansha) |
WAN Youha, Kanazawa University, Faculty of Engineering, Lecturer (10283095)
IKEDA Kazushi, Kanazawa University, Faculty of Engineering, Lecturer (10262552)
|
Project Period (FY) | 1995 – 1997 |
Keywords | Neural Networks / Multilayer Neural Networks / Activation Functions / Learning / Pattern Classification / Hidden Layer |
Research Abstract |
1. Pattern Classification by Multilayer Neural Networks. In signal detection based on frequency components, accurate detection by linear methods is difficult when the number of signal samples is limited. Multilayer neural networks can provide high classification performance. The vectors of signals with few samples or low SNR are usually distributed randomly in the N-dimensional space, so the boundary separating these vectors becomes very complicated. Such boundaries can be formed by exploiting the nonlinearity of the neurons in multilayer NNs.
2. Selection of Minimum Training Data for Generalization. A data selection method has been proposed that selects data belonging to different classes and lying across the class boundary. These data can guarantee generalization; that is, data not used in training can still be separated effectively.
3. Selection of Minimum Training Data for On-Line Training. In on-line applications, data are applied successively to the neural network. A method has been proposed that selects the useful data while holding the number of retained training data to a minimum. Through several kinds of examples, the proposed method was confirmed to be useful.
4. Optimization of Activation Functions. The network size required for a given application depends strongly on the activation (nonlinear) functions. A method for simultaneously learning both the connection weights and the activation functions has been proposed. The parity check problem, a difficult task for multilayer neural networks, can be solved effectively with the minimum number of hidden units.
|
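The report does not spell out how "data across the boundary" (point 2) are identified. As a rough illustration only, a simple nearest-neighbour heuristic can approximate the idea: keep a sample when its nearest neighbour belongs to the other class, since such pairs straddle the class boundary. The function name and the selection criterion below are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def select_boundary_pairs(X, y):
    """Illustrative heuristic (not the report's method): keep only
    samples whose nearest neighbour belongs to a different class,
    i.e. pairs of points that straddle the class boundary."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    nearest = d.argmin(axis=1)           # index of nearest neighbour
    mask = y[nearest] != y               # neighbour is in the other class
    return X[mask], y[mask]
```

Applied to two well-separated clusters with a pair of points near the gap, only the two gap points survive, which is the kind of minimal training set the report argues is sufficient for generalization.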
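For point 4, the report only states that connection weights and activation functions are learned simultaneously. One common way to realize this, sketched below under that assumption, is to give each hidden unit a trainable sigmoid gain g and update it by gradient descent alongside the weights; the 2-bit parity (XOR) task serves as the test case. The parameterization and function names are illustrative, not the report's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_parity(X, t, n_hidden=2, lr=0.5, epochs=3000, seed=0):
    """Gradient descent that updates the connection weights and a
    per-hidden-unit activation gain g at the same time (a sketch of
    'simultaneous learning of weights and activation functions')."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
    b2 = np.zeros(1)
    g = np.ones(n_hidden)                      # trainable activation gains
    losses = []
    for _ in range(epochs):
        # forward pass: hidden units use sigmoid(g * net input)
        h_net = X @ W1 + b1
        h = sigmoid(g * h_net)
        y = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((y - t) ** 2)))
        # backward pass (mean squared error)
        d_out = (y - t) * y * (1.0 - y)          # dL/d(output net)
        d_hid = (d_out @ W2.T) * h * (1.0 - h)   # dL/d(g * h_net)
        gW2, gb2 = h.T @ d_out, d_out.sum(axis=0)
        gg = (d_hid * h_net).sum(axis=0)         # gradient w.r.t. gains
        gW1 = X.T @ (d_hid * g)
        gb1 = (d_hid * g).sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
        g -= lr * gg
    return (W1, b1, W2, b2, g), losses
```

Because the gains reshape the nonlinearity during training, the network can adapt its activation functions to the task rather than relying on a fixed sigmoid, which is the mechanism the report credits for solving parity with few hidden units.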
Research Products (12 results)