2019 Fiscal Year Research-status Report
Neuromorphic processor using superconducting adiabatic quantum flux circuits
Project/Area Number | 18K13801 |
Research Institution | Yokohama National University |
Principal Investigator | AYALA Christopher, Yokohama National University, Institute of Advanced Sciences, Specially Appointed Faculty (Associate Professor) (90772195) |
Project Period (FY) | 2018-04-01 – 2021-03-31 |
Keywords | superconductor / microprocessor / neuron / computing / adiabatic / aqfp / tensor / eda |
Outline of Annual Research Achievements |
In this second year of the three-year project, we completed the integration of the circuit placement and routing tools (developed in the first year) into a semi-custom design environment for implementing superconductor adiabatic quantum-flux-parametron microprocessors (DOI: 10.1088/1361-6668/ab7ec3). This design environment can generate combinational logic circuits through an optimized logic synthesis flow. We used this environment to create a prototype superconductor microprocessor chip called MANA: Monolithic Adiabatic iNtegration Architecture. It is the first chip to demonstrate practical logic and memory operations integrated on the same chip using adiabatic superconductor logic. The work will be presented at the 2020 Symposia on VLSI Technology and Circuits.
|
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
The progress of this research is going rather smoothly. We have an established design flow and design methodology with which we can develop our computing ideas. This year, we used the established design flow to successfully design and physically test a prototype microprocessor chip. This is a very important step towards achieving practical chips using adiabatic superconductor circuits. Although the microprocessor is not neuromorphic, it confirms that we have the pieces in place, namely the ability to integrate logic and memory on the same chip, to continue our study of neuromorphic architectures, which will be the focus of the final year of this project. We have also explored the idea of implementing a bfloat16 (a floating-point format for machine learning) accelerator.
|
Strategy for Future Research Activity |
The main focus of this year is to develop, tape out, and test components for a neuromorphic architecture. This includes the design of a bfloat16 accelerator to carry out machine-learning calculations using a fused multiply-add architecture suitable for machine-learning datapaths such as the Tensor core. Our most immediate step is to optimize a 16-bit adder architecture and combine it with operand compression trees, as this is the key component of the accelerator. We will also investigate superconductor flux-biasing circuits for weight modulation.
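To illustrate the arithmetic such an accelerator would implement, the following sketch models bfloat16 (the upper 16 bits of an IEEE-754 single-precision value, rounded to nearest-even) and a fused multiply-add that rounds once at the end. This is a software illustration only, not the hardware design: the actual adder architecture, compression trees, and accumulator precision of the planned accelerator are not specified here, and the function names are hypothetical.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a value to bfloat16 precision: keep the upper 16 bits of the
    IEEE-754 binary32 encoding, rounding to nearest-even."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    # Round-to-nearest-even bias on the 16 bits being discarded.
    bias = 0x7FFF + ((bits >> 16) & 1)
    bits = (bits + bias) & 0xFFFF0000
    return struct.unpack('>f', struct.pack('>I', bits))[0]

def bf16_fma(a: float, b: float, acc: float) -> float:
    """Fused multiply-add on bfloat16 operands: the product and sum are
    formed at full precision, with a single rounding step at the end."""
    return to_bfloat16(to_bfloat16(a) * to_bfloat16(b) + acc)
```

For example, `to_bfloat16(3.14159265)` yields 3.140625, since only 8 significand bits survive; a fused datapath avoids the intermediate rounding that separate multiply and add steps would each introduce.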
|
Causes of Carryover |
Articles purchased and travel expenses incurred in this fiscal year cost slightly less than estimated, leaving a small amount of funds to be carried over. This remaining amount will be used for travel expenses in the next fiscal year.
|