
Development of Deep Neural Network Accelerator Utilizing Approximate Computing

Research Project

Project/Area Number 19H04079
Research Category

Grant-in-Aid for Scientific Research (B)

Allocation Type Single-year Grants
Section General
Review Section Basic Section 60040: Computer system-related
Research Institution Tokyo Institute of Technology

Principal Investigator

YU JAEHOON  Tokyo Institute of Technology, Institute of Innovative Research, Associate Professor (70726976)

Co-Investigator (Kenkyū-buntansha) HASHIMOTO Masanori  Kyoto University, Graduate School of Informatics, Professor (80335207)
Project Period (FY) 2019-04-01 – 2022-03-31
Project Status Completed (Fiscal Year 2021)
Budget Amount
¥17,550,000 (Direct Cost: ¥13,500,000, Indirect Cost: ¥4,050,000)
Fiscal Year 2021: ¥4,940,000 (Direct Cost: ¥3,800,000, Indirect Cost: ¥1,140,000)
Fiscal Year 2020: ¥4,940,000 (Direct Cost: ¥3,800,000, Indirect Cost: ¥1,140,000)
Fiscal Year 2019: ¥7,670,000 (Direct Cost: ¥5,900,000, Indirect Cost: ¥1,770,000)
Keywords deep neural network / approximate computing / deep learning / inference accelerator / accelerator / NPU / neural network / hardware accelerator / approximate computation / power efficiency / Deep Neural Network / distillation / training data reduction
Outline of Research at the Start

To bring rapidly spreading deep learning to IoT edge devices, this project develops a deep neural network (DNN) accelerator that achieves energy efficiency three orders of magnitude higher than GPUs. Cross-layer optimization spanning algorithms, architecture, circuit technology, and design technology strips away the redundancy inherent in DNN structure and computation, and in-memory approximate computing dramatically raises computational energy efficiency. The first half of the project develops an accelerator for highly energy-efficient inference; the second half develops an accelerator capable of online learning, and both are implemented in VLSI. The project thereby contributes to realizing IoT systems capable of online reinforcement learning.
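
The publications listed under Research Products below include a logarithm-approximate floating-point multiplier, one concrete instance of the approximate computing pursued here. The classic Mitchell-style idea behind such multipliers is that the bit pattern of a positive float, read as an integer, approximates its base-2 logarithm, so a single integer addition can replace the mantissa multiplier. The following Python sketch illustrates that general technique only; it is not the specific circuit proposed in the project's papers.

    import numpy as np

    def mitchell_fp32_mul(a: float, b: float) -> float:
        # Mitchell-style approximate multiply: the bit pattern of a
        # positive float32, read as an integer, is roughly a fixed-point
        # encoding of 127 + log2(x), so adding two bit patterns
        # approximates adding logarithms, i.e. multiplying. One exponent
        # bias (127 << 23 = 0x3F800000) is subtracted after the addition.
        # Positive, normal (non-zero, finite) inputs only.
        ia = np.array(a, dtype=np.float32).view(np.uint32)
        ib = np.array(b, dtype=np.float32).view(np.uint32)
        # The integer add replaces the mantissa multiplier; the product
        # is always underestimated, by at most about 11%.
        return float((ia + ib - np.uint32(0x3F800000)).view(np.float32))

    if __name__ == "__main__":
        for a, b in [(3.0, 7.0), (1.5, 1.5), (0.1, 250.0)]:
            print(f"{a} * {b}: exact = {a * b}, approx = {mitchell_fp32_mul(a, b)}")

For example, 1.5 × 1.5 comes out as 2.0 instead of 2.25, the worst case of the approximation (about -11%); the titles of the papers below indicate that such multipliers nevertheless remain applicable to power-efficient neural network training.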

Outline of Final Research Achievements

We devised approximate computing methods for the training and inference of deep neural networks during the three-year research period, and proposed an arithmetic circuit and an inference accelerator to support them. These results have been published at six international conferences and in two journals. One of the most notable achievements is Hiddenite, an inference accelerator presented at ISSCC 2022, often called the Olympics of chip design. Hiddenite significantly reduces the memory requirements of deep neural networks by using random weights. Although we implemented Hiddenite in a relatively old 40 nm process, it showed processing efficiency equal to or better than that of inference accelerators fabricated in state-of-the-art processes.
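
The "hidden network" in the Hiddenite presentation title below refers to keeping weights frozen at random values and learning only a binary mask that selects a subnetwork. Because the random weights can be regenerated on demand from a small seed (the "on-chip model construction" in the title), only one mask bit per weight has to be held in memory. The following Python sketch, with illustrative names and layer sizes, shows this general idea; it is not the Hiddenite implementation.

    import numpy as np

    def random_weights(seed: int, shape: tuple) -> np.ndarray:
        # "On-chip model construction": frozen random weights are
        # regenerated from a small seed, so they are never stored.
        return np.random.default_rng(seed).standard_normal(shape, dtype=np.float32)

    def hidden_layer(x: np.ndarray, seed: int, mask: np.ndarray) -> np.ndarray:
        # Only the 1-bit-per-weight mask selecting the hidden subnetwork
        # is kept in memory; masked-out connections contribute nothing.
        w = random_weights(seed, mask.shape)
        return np.maximum(x @ (w * mask), 0.0)  # ReLU activation

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.standard_normal((1, 64)).astype(np.float32)
        # In the real approach the mask is found by training; here it is
        # a placeholder keeping roughly half the connections.
        mask = (rng.random((64, 32)) < 0.5).astype(np.float32)
        y = hidden_layer(x, seed=42, mask=mask)
        print(y.shape)  # (1, 32)

In this sketch the 64x32 mask occupies 2,048 bits, versus 65,536 bits for the same layer stored as fp32 weights, a 32x reduction in model memory, consistent with the claim above that random weights significantly reduce memory requirements.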

Academic Significance and Societal Importance of the Research Achievements

The academic significance of this research lies in having carried out analysis and optimization through a cross-layer approach covering deep learning algorithms, architecture, circuit technology, and design technology, and in having clarified which approaches are effective for removing unnecessary redundancy and excess precision from deep neural networks.
These results also relax the computational and power resource constraints on using deep neural networks, and the work has great societal significance in that it substantially widens the range of applications to which such networks can be applied.

Report

(4 results)
  • 2021 Annual Research Report / Final Research Report (PDF)
  • 2020 Annual Research Report
  • 2019 Annual Research Report
  • Research Products

    (8 results)


Journal Article (1 result, of which peer reviewed: 1) / Presentation (6 results, of which int'l joint research: 6) / Patent (Industrial Property Rights) (1 result)

  • [Journal Article] Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training (2020)

    • Author(s)
      Cheng TaiYu, Masuda Yutaka, Chen Jun, Yu Jaehoon, Hashimoto Masanori
    • Journal Title

      Integration

      Volume: 74 Pages: 19-31

    • DOI

      10.1016/j.vlsi.2020.05.002

    • Related Report
      2020 Annual Research Report
    • Peer Reviewed
  • [Presentation] Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet (2022)

    • Author(s)
      Hirose Kazutoshi, Yu Jaehoon, Ando Kota, Okoshi Yasuyuki, Garcia-Arias Angel Lopez, Suzuki Junnosuke, Chu Thiem Van, Kawamura Kazushi, Motomura Masato
    • Organizer
      2022 IEEE International Solid-State Circuits Conference (ISSCC)
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier (2020)

    • Author(s)
      TaiYu Cheng, Jaehoon Yu, Masanori Hashimoto
    • Organizer
      IEEE International Symposium on Circuits and Systems
    • Related Report
      2020 Annual Research Report
    • Int'l Joint Research
  • [Presentation] ProgressiveNN: Achieving Computational Scalability without Network Alteration by MSB-first Accumulative Computation (2020)

    • Author(s)
      Junnosuke Suzuki, Kota Ando, Kazutoshi Hirose, Kazushi Kawamura, Thiem Van Chu, Masato Motomura, Jaehoon Yu
    • Organizer
      International Symposium on Computing and Networking (CANDAR)
    • Related Report
      2020 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Minimizing Power for Neural Network Training with Logarithm-Approximate Floating-Point Multiplier (2019)

    • Author(s)
      TaiYu Cheng, Jaehoon Yu, Masanori Hashimoto
    • Organizer
      2019 29th International Symposium on Power and Timing Modeling, Optimization and Simulation (PATMOS)
    • Related Report
      2019 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Distilling Knowledge for Non-Neural Networks (2019)

    • Author(s)
      Shota Fukui, Jaehoon Yu, Masanori Hashimoto
    • Organizer
      2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
    • Related Report
      2019 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Training Data Reduction using Support Vectors for Neural Networks (2019)

    • Author(s)
      Toranosuke Tanio, Kouya Takeda, Jaehoon Yu, Masanori Hashimoto
    • Organizer
      2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
    • Related Report
      2019 Annual Research Report
    • Int'l Joint Research
  • [Patent (Industrial Property Rights)] Neural Network Circuit Device (2021)

    • Inventor(s)
      MOTOMURA Masato, YU Jaehoon
    • Industrial Property Rights Holder
      MOTOMURA Masato, YU Jaehoon
    • Industrial Property Rights Type
      Patent
    • Filing Date
      2021
    • Related Report
      2021 Annual Research Report


Published: 2019-04-18   Modified: 2023-01-30  
