Project/Area Number | 22K17960 |
Research Category | Grant-in-Aid for Early-Career Scientists |
Allocation Type | Multi-year Fund |
Review Section | Basic Section 61030: Intelligent informatics-related |
Research Institution | National Institute of Advanced Industrial Science and Technology |
Principal Investigator | SALESDESOUZA LINCON, National Institute of Advanced Industrial Science and Technology, Information Technology and Human Factors, Researcher (40912481) |
Project Period (FY) | 2022-04-01 – 2026-03-31 |
Project Status | Granted (Fiscal Year 2023) |
Budget Amount | ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2025: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2024: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Fiscal Year 2023: ¥1,820,000 (Direct Cost: ¥1,400,000, Indirect Cost: ¥420,000)
Fiscal Year 2022: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000) |
Keywords | Subspace learning / Deep neural networks / Manifold optimization / Subspace methods / Pattern recognition |
Outline of Research at the Start |
We research a new algorithm for pattern recognition, that is, for computer programs that allow a machine to automatically recognize regularities in data, such as target objects and events. We mainly focus on the case of recognizing patterns in multiple given images of one object, addressing some limitations of the current technology called deep learning.
|
Outline of Annual Research Achievements |
In fiscal year 2023, we continued working on problems of deep learning, attempting to alleviate them by integrating subspace learning into the deep learning framework. We worked on the tasks of action recognition (AR) and domain adaptation (DA): for AR, we devised a new method called slow feature subspace, which improves the capture of temporal information in videos; for DA, we devised a new method dubbed domain-sum feature transform, which works efficiently in the multi-target domain scenario, a current challenge. We demonstrated the effectiveness of these methods on their respective tasks through experiments on real image data. We also studied their theoretical underpinnings in Grassmannian geometry in order to build a strong theoretical foundation for these new methods (a generic sketch of the underlying subspace representation is given below).
|
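The sketch below illustrates the generic subspace-method building block that the above work relies on: a set of deep features is summarized by a low-dimensional linear subspace, and two such subspaces are compared through the cosines of their principal angles, i.e., through their relative position on the Grassmann manifold. This is a minimal illustrative example in Python/NumPy under those general assumptions, not the project's slow feature subspace or domain-sum feature transform; the function names, feature dimensions, and data are hypothetical.

# Minimal sketch (not the project's actual implementation): represent a set of
# deep features as a linear subspace and compare two sets by principal angles,
# the basic building block of subspace methods on the Grassmann manifold.
import numpy as np

def feature_subspace(features, dim):
    """Orthonormal basis of the top-`dim` principal directions of a feature set.

    features: (n_samples, n_features) array, e.g. CNN features of one image set.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    # Left singular vectors of the transposed, centered data span the principal subspace.
    u, _, _ = np.linalg.svd(centered.T, full_matrices=False)
    return u[:, :dim]  # (n_features, dim) orthonormal basis

def subspace_similarity(basis_a, basis_b):
    """Mean squared cosine of the principal angles between two subspaces.

    Equals 1 when the subspaces coincide and 0 when they are orthogonal.
    """
    # Singular values of A^T B are the cosines of the principal angles.
    cosines = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(np.mean(cosines ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two hypothetical sets of 128-dimensional deep features (e.g. from video frames).
    set_a = rng.standard_normal((50, 128))
    set_b = set_a + 0.1 * rng.standard_normal((50, 128))  # perturbed copy of set_a
    set_c = rng.standard_normal((50, 128))                # unrelated set
    sub_a, sub_b, sub_c = (feature_subspace(s, dim=5) for s in (set_a, set_b, set_c))
    print("similar sets:  ", subspace_similarity(sub_a, sub_b))  # close to 1
    print("unrelated sets:", subspace_similarity(sub_a, sub_c))  # noticeably lower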
Current Status of Research Progress |
2: Research has progressed on the whole more than it was originally planned.
Reason
We have been able to combine subspace learning and deep neural networks to improve performance in the tasks of image set recognition, domain adaptation, and action recognition. We also studied the underlying theoretical mechanisms of our newly created techniques and how they relate to other methods, which is useful for expanding our understanding of these models.
|
Strategy for Future Research Activity |
We will work on new ways to combine subspace learning and deep neural networks that can address their limitations and improve performance.
|