Research Project/Area Number | 18K13801
Research Institution | 横浜国立大学
Principal Investigator | アヤラ クリストファー, 横浜国立大学, 先端科学高等研究院, Specially Appointed Faculty (Associate Professor) (90772195)
Project Period (FY) | 2018-04-01 – 2021-03-31
Keywords | superconductor / microprocessor / neuron / computing / adiabatic / AQFP / tensor / EDA
Outline of Annual Research Achievements |
In the second year of this three-year project, we completed the integration of the circuit placement and routing tools developed in the first year into a semi-custom design environment for implementing superconductor adiabatic quantum-flux-parametron (AQFP) microprocessors (DOI: 10.1088/1361-6668/ab7ec3). The design environment can generate combinational logic circuits through an optimized logic synthesis flow. We used it to create a prototype superconductor microprocessor chip called MANA (Monolithic Adiabatic iNtegration Architecture), the first chip to demonstrate practical logic and memory operations integrated on the same die using adiabatic superconductor logic. This work will be presented at the 2020 Symposia on VLSI Technology and Circuits.
Current Status of Research Progress |
2: Research has progressed on the whole more or less as planned.
Reason |
Research is progressing smoothly. We have an established design flow and design methodology with which we can develop our computing ideas. This year, we used that flow to successfully design and physically test a prototype microprocessor chip, a very important step toward practical chips built from adiabatic superconductor circuits. Although the microprocessor is not neuromorphic, it confirms that the necessary pieces are in place, namely the ability to integrate logic and memory on one chip, so that we can continue our study of neuromorphic architectures, which will be the focus of the final year of this project. We have also explored the idea of implementing a bfloat16 (a floating-point format for machine learning) accelerator.
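As an illustration only (not part of the report): bfloat16 keeps the sign bit, the full 8-bit exponent, and only the top 7 mantissa bits of an IEEE-754 float32, so a minimal software reference model can be obtained by truncating the low 16 bits of the float32 encoding. The function names below are hypothetical, and real hardware would typically round rather than truncate.

    # Minimal sketch: bfloat16 modeled as a truncated IEEE-754 float32.
    import struct

    def float32_to_bfloat16_bits(x: float) -> int:
        """Return the 16-bit bfloat16 pattern obtained by truncating a float32."""
        f32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
        return f32_bits >> 16  # keep sign (1), exponent (8), top mantissa bits (7)

    def bfloat16_bits_to_float(bits: int) -> float:
        """Expand a bfloat16 bit pattern back to a Python float for checking."""
        return struct.unpack("<f", struct.pack("<I", (bits & 0xFFFF) << 16))[0]

    if __name__ == "__main__":
        for v in (1.0, 3.140625, -0.00123):
            b = float32_to_bfloat16_bits(v)
            print(f"{v} -> 0x{b:04x} -> {bfloat16_bits_to_float(b)}")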
Strategy for Future Research Activity |
The main focus of this year is to develop, tape out, and test components for a neuromorphic architecture. This includes the design of a bfloat16 accelerator that carries out machine-learning calculations using a fused multiply-add architecture suitable for machine-learning datapaths such as the Tensor core. Our most immediate step is to optimize a 16-bit adder architecture and combine it with operand compression trees, as this is the key component of the accelerator. We will also investigate superconductor flux-biasing circuits for weight modulation.
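A minimal sketch of the operand-compression idea, written as a Python reference model rather than the actual circuit design (which is not described in this report): 3:2 carry-save compressors reduce a set of operands, such as the partial products of a multiplier, down to two words, so that only one carry-propagating addition is needed at the end of a fused multiply-add datapath. The 16-bit width and all names here are illustrative assumptions.

    # Illustrative reference model only; the actual AQFP adder design is not
    # given in the report. A 3:2 carry-save compressor takes three operands
    # and produces a sum word and a carry word without propagating carries.
    MASK16 = 0xFFFF  # assume a 16-bit datapath for illustration

    def compress_3_to_2(a, b, c):
        """One compressor level: three operands in, (sum, carry) out."""
        s = a ^ b ^ c                                # bitwise sum, no carry chain
        carry = ((a & b) | (a & c) | (b & c)) << 1   # majority bits, shifted up
        return s & MASK16, carry & MASK16

    def compression_tree(operands):
        """Reduce many operands to two, then resolve with one final carry add."""
        ops = [x & MASK16 for x in operands]
        while len(ops) > 2:
            s, carry = compress_3_to_2(ops.pop(), ops.pop(), ops.pop())
            ops += [s, carry]
        return sum(ops) & MASK16                     # the lone carry-propagate add

    if __name__ == "__main__":
        vals = [0x1234, 0x0F0F, 0x00FF, 0x3333]
        assert compression_tree(vals) == sum(vals) & MASK16

In real hardware the compressor levels operate in parallel; the sequential loop above only models the arithmetic, not the circuit structure.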
Causes of Carryover |
Articles purchased and travel expenses incurred in this fiscal year cost slightly less than estimated, leaving a small amount of funds to carry over. The carried-over amount will be used for travel expenses in the next fiscal year.