Project/Area Number | 05505003 |
Research Category | Grant-in-Aid for Developmental Scientific Research (A) |
Allocation Type | Single-year Grants |
Research Field | Electronic materials/Electric materials |
Research Institution | TOHOKU UNIVERSITY |
Principal Investigator | SHIBATA Tadashi, Associate Professor, Department of Electrical Engineering, Faculty of Engineering, Tohoku University (00187402) |
Co-Investigator (Kenkyū-buntansha) | MORITA Mizuho, Associate Professor, Graduate School of Information Sciences, Tohoku University (50157905); OHMI Tadahiro, Professor, Department of Electrical Engineering, Faculty of Engineering, Tohoku University (20016463) |
Project Period (FY) | 1993 – 1994 |
Project Status | Completed (Fiscal Year 1994) |
Budget Amount | ¥32,900,000 (Direct Cost: ¥32,900,000); Fiscal Year 1994: ¥10,100,000 (Direct Cost: ¥10,100,000); Fiscal Year 1993: ¥22,800,000 (Direct Cost: ¥22,800,000) |
Keywords | NEURAL NETWORK / SELF LEARNING / HARDWARE LEARNING / NEURON MOS TRANSISTOR / SYNAPSE / EEPROM / BACK PROPAGATION / GENERALIZATION / LEARNING ALGORITHM / HEBB RULE / ANALOG NEURAL NETWORK / FLOATING GATE / HARDWARE LEARNING ALGORITHM |
Research Abstract |
Neural network hardware having an on-chip self-learning capability has been developed using a high-functionality device called the Neuron MOS Transistor (vMOS) as a key circuit element. A vMOS can perform weighted summation of multiple input signals and thresholding, all at the single-transistor level, based on charge sharing among multiple capacitors. An electronic synapse cell has been constructed with six transistors by merging a floating-gate EEPROM memory cell into a new-concept vMOS differential source-follower circuit. The synapse can represent both positive (excitatory) and negative (inhibitory) weights under a single V_DD power supply and is free from standby power dissipation. Excellent linearity in the weight-updating characteristics of the synapse memory has also been established by employing a simple self-feedback regime in each cell, making the synapse fully compatible with the on-chip self-learning architecture of vMOS neural networks. A new hardware-oriented learning algorithm called Hardware Backpropagation (HBP) has been developed by simplifying and modifying the original Backpropagation (BP) algorithm. As a result, all learning actions are controlled only by digital signals with simple on-chip digital circuitry, enabling direct implementation of the learning algorithm on the chip; the analog nature of the learning control is provided by vMOS circuit technology. A new concept of "learning enhancement" has been introduced in order to guarantee the long-term stability of the learned state of analog neural networks. After optimization of the circuit parameters by extensive computer simulation, it has been demonstrated that HBP is superior to the original BP both in learning performance and in generalization capability. The basic operation of the vMOS neural network having all of the above features has been experimentally verified using test circuits fabricated by a double-polysilicon CMOS process.
|
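The vMOS operating principle described in the abstract (weighted summation and thresholding by charge sharing) can be illustrated with a short numerical sketch: to first order, the floating-gate potential is the capacitive-coupling-weighted average of the input voltages, and the transistor turns on when that potential exceeds its threshold. The function below is only a schematic model with hypothetical capacitance and threshold values; it is not the fabricated circuit.

```python
import numpy as np

# Schematic model of a neuron-MOS (vMOS) gate, assuming ideal charge sharing:
# the floating-gate potential is the capacitance-weighted average of the input
# voltages, and the device "fires" when that potential exceeds a threshold.
# All parameter values here are hypothetical, for illustration only.
def vmos_neuron(input_voltages, coupling_caps, c_parasitic=0.0, v_threshold=0.5):
    """Return 1 if the floating-gate potential exceeds v_threshold, else 0."""
    phi_fg = np.dot(coupling_caps, input_voltages) / (coupling_caps.sum() + c_parasitic)
    return int(phi_fg > v_threshold)

# Example: three binary inputs coupled through capacitors of different sizes.
inputs = np.array([1.0, 0.0, 1.0])   # input voltages (V)
caps = np.array([2.0, 1.0, 1.0])     # coupling capacitances (arbitrary units)
print(vmos_neuron(inputs, caps))     # prints 1 (weighted average 0.75 > 0.5)
```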
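The abstract does not give the details of the Hardware Backpropagation (HBP) algorithm, only that it simplifies standard backpropagation so that all learning actions are driven by digital control signals. As a purely illustrative sketch of that general idea (not the published HBP rule), the update below moves each weight by a fixed step whose direction is a digital sign signal derived from the output error.

```python
import numpy as np

# Illustrative only: a sign-based weight update for a single-layer network,
# meant to convey the idea of learning controlled by digital (sign) signals.
# This is NOT the project's HBP algorithm; the actual update rule is not
# given in the abstract.
def sign_based_update(weights, inputs, error, step=0.01):
    """Shift each weight by a fixed step in the digital direction sign(error * input)."""
    direction = np.sign(np.outer(error, inputs))  # entries are -1, 0, or +1
    return weights + step * direction

weights = np.zeros((1, 3))
x = np.array([1.0, 0.0, 1.0])
err = np.array([0.4])                 # target output minus actual output
weights = sign_based_update(weights, x, err)
print(weights)                        # [[0.01 0.   0.01]]
```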