Project/Area Number | 20H04249 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Review Section | Basic Section 61030: Intelligent informatics-related |
Research Institution | Institute of Physical and Chemical Research |
Principal Investigator | ZHAO Qibin, RIKEN, Center for Advanced Intelligence Project, Team Leader (30599618) |
Co-Investigators (Kenkyū-buntansha) |
CAO Jianting, Saitama Institute of Technology, Faculty of Engineering, Professor (20306989)
YOKOTA Tatsuya, Nagoya Institute of Technology, Graduate School of Engineering, Associate Professor (80733964)
|
Project Period (FY) | 2020-04-01 – 2024-03-31 |
Project Status | Completed (Fiscal Year 2023) |
Budget Amount |
¥17,550,000 (Direct Cost: ¥13,500,000, Indirect Cost: ¥4,050,000)
Fiscal Year 2023: ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2022: ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2021: ¥4,290,000 (Direct Cost: ¥3,300,000, Indirect Cost: ¥990,000)
Fiscal Year 2020: ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
|
Keywords | tensor networks / machine learning / adversarial robustness / tensor decomposition / robustness |
Outline of Research at the Start |
Tensor networks (TNs) have recently gained increasing attention in machine learning, data mining, and computer vision, owing to their effectiveness for efficient computation and model compression in deep learning. However, many open problems remain unexplored, which limits their impact in machine learning. Our research therefore aims to investigate the fundamental theory of TNs and to develop scalable, efficient learning algorithms for them. Moreover, we will explore which challenging problems in machine learning can be solved with TN technology.
|
Outline of Final Research Achievements |
We studied and developed advanced tensor decomposition and tensor network representation methods for incomplete and noisy data tensors. To improve their practicality, we also developed algorithms for optimal tensor network structure search, as well as flexible nonlinear tensor decomposition methods based on deep neural networks. Moreover, we studied the adversarial robustness of deep neural networks whose model parameters are given tensor representations, and developed several novel approaches for adversarial purification. Finally, our research findings can be adopted in applications such as multi-modal learning and hyperspectral image processing.
|
Academic Significance and Societal Importance of the Research Achievements |
Our research further advances fundamental tensor methods for machine learning. We have shown that tensor methods are powerful for structured data analysis and practically useful for representing the parameters of deep neural networks, resulting in more efficient and robust models.
|