Budget Amount
¥6,000,000 (Direct Cost: ¥6,000,000)
Fiscal Year 1995: ¥2,800,000 (Direct Cost: ¥2,800,000)
Fiscal Year 1994: ¥3,200,000 (Direct Cost: ¥3,200,000)
Research Abstract
In this study, we first formalized the problem of training a neural network as an inverse problem in function approximation. Next, we provided necessary and sufficient conditions for optimal generalization capability in terms of the number of hidden units, the basis functions, and the weights connected to each hidden unit. Furthermore, we gave a methodology for constructing neural networks with optimal generalization capability. From this methodology, we can see that there are infinitely many neural networks with the same generalization capability. From among them, we specified the ones that are most robust with respect to certain kinds of faults which may occur in actual use. Concretely, we gave methods for deciding the weights in neural networks which optimally suppress the influence of each of the following faults: proportional errors in the weights, a connection fault, and a stuck-at-gamma fault. Moreover, we gave methods for deciding not only the weights but also the basis functions of the hidden units which optimally suppress the above three faults. Next, we constructed a method for incremental learning in which only the current network and one new datum are used to obtain a new network at each step, while maintaining the property that the network generalizes optimally with respect to all of the data learned so far. We think these results have the potential to be applied to the problem of active learning. Moreover, we gave a solution to the problem of over-learning which occurs when training neural networks with the error-backpropagation algorithm: we introduced the concept of admissibility, defined by the relationship between two learning criteria, and, based on it, devised methods for choosing training data that prevent over-learning.
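
As a rough illustration of one of the fault-suppression ideas mentioned above (not the method developed in this project), the sketch below assumes a linear-in-parameters network f(x) = sum_i w_i * phi_i(x) and chooses weights that minimize the expected training error when each weight suffers an independent proportional error w_i * (1 + eps_i); the expectation reduces to a generalized ridge criterion. The function name robust_weights and the noise variance sigma2 are illustrative assumptions.

```python
# Illustrative sketch only: weights robust to proportional weight errors
# for a linear-in-parameters network f(x) = sum_i w_i * phi_i(x).
# Phi is the N x M design matrix with Phi[n, i] = phi_i(x_n); sigma2 is an
# assumed variance of the proportional error on each weight.
import numpy as np

def robust_weights(Phi, y, sigma2):
    # With faults w_i -> w_i * (1 + eps_i), eps_i zero-mean with variance
    # sigma2 and independent across hidden units, the expected squared error
    #   E || Phi (w * (1 + eps)) - y ||^2
    # equals || Phi w - y ||^2 + sigma2 * sum_i w_i^2 * sum_n Phi[n, i]^2,
    # so the minimizer is a generalized ridge (Tikhonov-type) solution.
    D = np.diag(np.sum(Phi ** 2, axis=0))  # per-unit penalty weights
    return np.linalg.solve(Phi.T @ Phi + sigma2 * D, Phi.T @ y)
```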
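
Likewise, the incremental-learning step described above can be sketched, under the simplifying assumption of a fixed set of Gaussian basis functions, as a recursive least-squares style update that forms the new weights from the current network and one new datum only. This stand-in only approximates an optimally generalizing (minimum-norm) solution and is not the exact procedure of this project; all names here (gaussian_design, IncrementalLearner, the synthetic data) are hypothetical.

```python
# Illustrative sketch only: incremental learning for a fixed-basis network,
# where each step uses just the current network and one new datum.
import numpy as np

def gaussian_design(X, centers, width=0.3):
    """Design matrix of Gaussian basis functions (stand-in hidden units)."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

class IncrementalLearner:
    def __init__(self, n_units, delta=1e3):
        self.w = np.zeros(n_units)        # current weight vector
        self.P = delta * np.eye(n_units)  # running estimate of (Phi^T Phi)^{-1}

    def update(self, phi, y):
        """Incorporate one new datum (phi = hidden-unit outputs, y = target)."""
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        self.w = self.w + gain * (y - phi @ self.w)  # correct error on the new datum
        self.P = self.P - np.outer(gain, Pphi)
        return self.w

# Tiny demo on synthetic data (all values are illustrative assumptions).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0])
centers = np.linspace(-1.0, 1.0, 10)[:, None]
Phi = gaussian_design(X, centers)
learner = IncrementalLearner(n_units=10)
for phi_n, y_n in zip(Phi, y):  # present one datum at a time
    learner.update(phi_n, y_n)
```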