Budget Amount
¥3,400,000 (Direct Cost: ¥3,400,000)
Fiscal Year 2002: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 2001: ¥2,100,000 (Direct Cost: ¥2,100,000)
Fiscal Year 2000: ¥700,000 (Direct Cost: ¥700,000)
Research Abstract
Several biologically plausible fundamental principles for neural networks have been proposed; among them, "mapping" and "relaxation" are particularly important. If both mapping and relaxation are reasonable and really exist in a biological brain, it is natural to suppose that they operate separately but cooperate within a higher framework. To investigate the possibility of such a higher framework, we proposed new models of module-based artificial neural networks and studied their dynamical properties. In general, static neurons realize the mapping function, while dynamical neurons carry out the relaxation. Following this idea, we proposed, as a goal, a module-based neural network composed of static and dynamical neurons, and discussed what effects the integration of mapping and relaxation can produce.

The proposed network is obtained by minimizing a specially designed total energy function. Although the derivation is similar to that of the well-known Hopfield network, the state variables of the network include, in addition to the output and the potential of the dynamical neurons, another type of state variable obtained by converting the direct output of the dynamical neurons through a mapping function. If the mapping function is realized by a layered network with only static neurons, that layered network comprises a forward subnet and a backward subnet: the connection weights in both subnets are modified based on error signals propagated through the backward subnet, and, at the same time, the final output of the backward subnet drives the overall network dynamics. As a result, the proposed network offers a higher framework in which relaxation is carried out in a warped space through the cooperation of static and dynamical neurons. Furthermore, it gives an interpretation of the back-propagated error signals in delta-rule learning: although the backward subnet that computes the delta values is usually regarded as virtual, in the proposed network it must actually exist in order to carry out the relaxation.
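The idea of relaxation in a warped space can be sketched as follows. This is an illustrative toy example, not the paper's exact model: a Hopfield-type energy is evaluated on a warped state z = f(v) produced by a static mapping layer, and the gradient is propagated back to the dynamical neurons through the mapping's Jacobian (playing the role of the backward subnet). All names and parameter choices (`W_map`, `n_dyn`, `n_map`, the step size) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dyn, n_map = 8, 6            # dynamical neurons / warped-space variables

# Static mapping subnet: one fixed layer converting the dynamical-neuron
# output v into the warped-space state z = f(v).
W_map = rng.normal(scale=0.5, size=(n_map, n_dyn))
def mapping(v):
    return np.tanh(W_map @ v)

# Symmetric connection weights in the warped space, so that a
# Hopfield-type energy function exists there.
A = rng.normal(size=(n_map, n_map))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)

def energy(v):
    z = mapping(v)
    return -0.5 * z @ W @ z    # energy evaluated on the warped state

# Gradient relaxation of the dynamical neurons: the chain rule through
# the mapping layer stands in for the backward subnet here.
v = rng.normal(size=n_dyn)
eta = 0.05
E_start = energy(v)
for _ in range(200):
    z = mapping(v)
    dE_dz = -W @ z                          # gradient in the warped space
    dz_dv = (1.0 - z**2)[:, None] * W_map   # Jacobian of the tanh mapping
    v -= eta * dz_dv.T @ dE_dz              # propagate back to v
E_end = energy(v)
```

With a small step size the relaxation descends the energy surface as seen through the mapping, i.e. `E_end` does not exceed `E_start`; the descent direction in v-space is warped by the Jacobian of the static subnet, which is the cooperation the abstract describes.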