2023 Fiscal Year Final Research Report
Development of temporal feature extraction method based on dynamical chaos and its application to video recognition
Project/Area Number | 21K17772 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61010: Perceptual information processing-related |
Research Institution | Chubu University |
Principal Investigator | Hirakawa Tsubasa, Chubu University, Center for AI, Mathematics and Data Science, Lecturer (60846690) |
Project Period (FY) | 2021-04-01 – 2024-03-31 |
Keywords | Deep learning / Transformer / Network Pruning / Large-scale pre-trained models / Foundation models |
Outline of Final Research Achievements | In this research project, we proposed methods for extracting important features and parameters from deep learning models, especially models that handle temporal transitions, such as those for video data. Specifically, we proposed effective pruning-based feature extraction methods for Long Short-Term Memory (LSTM), which has long been widely used in deep learning models for sequential data, and for the Transformer and Vision Transformer (ViT), which have achieved high recognition accuracy and seen wide adoption in recent years. |
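The report does not describe the pruning procedure itself. As a minimal sketch of the general idea, the following shows generic magnitude-based weight pruning in NumPy; the function name and the choice of magnitude as the importance criterion are illustrative assumptions, not the authors' published method:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    This is the simplest importance criterion for pruning: a weight's
    absolute value is taken as a proxy for its importance to the model.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep only weights above the threshold
    return weights * mask

# Example: prune half of an 8x8 weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, 0.5)
```

In practice the same masking idea is applied per layer (or per attention head in a Transformer), and the network is usually fine-tuned afterwards to recover accuracy.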
Free Research Field | Computer Vision |
Academic Significance and Societal Importance of the Research Achievements | The pruning technique developed in this project removes redundant parameters from deep learning models, making it possible to compact and reduce the power consumption of the increasingly large network models of recent years. As a result, high-performance image recognition models can be applied to a wide variety of image recognition data without requiring large-scale computing resources. |