
Development of learning subspace-based methods for pattern recognition

Research Project

Project/Area Number 22K17960
Research Category

Grant-in-Aid for Early-Career Scientists

Allocation Type Multi-year Fund
Review Section Basic Section 61030: Intelligent informatics-related
Research Institution National Institute of Advanced Industrial Science and Technology

Principal Investigator

SALESDESOUZA LINCON  National Institute of Advanced Industrial Science and Technology, Department of Information Technology and Human Factors, Researcher (40912481)

Project Period (FY) 2022-04-01 – 2026-03-31
Project Status Granted (Fiscal Year 2023)
Budget Amount
¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2025: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2024: ¥1,040,000 (Direct Cost: ¥800,000, Indirect Cost: ¥240,000)
Fiscal Year 2023: ¥1,820,000 (Direct Cost: ¥1,400,000, Indirect Cost: ¥420,000)
Fiscal Year 2022: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Keywords subspace learning / deep neural networks / manifold optimization / subspace methods / pattern recognition
Outline of Research at the Start

We research new algorithms for pattern recognition, that is, computer programs that allow a machine to automatically recognize regularities in data, such as target objects and events. We mainly focus on recognizing patterns from multiple given images of one object, addressing some limitations of the current technology called deep learning.
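The core subspace idea behind this setting (recognition from multiple images of one object) can be illustrated with a minimal, generic sketch in Python/NumPy. This is an illustration of the classical subspace approach, not the project's actual code: each image set is modeled by a low-dimensional linear subspace obtained from an SVD, and two sets are compared through the canonical angles between their subspaces. The function names are hypothetical.

    import numpy as np

    def image_set_subspace(X, dim):
        # X: d x n matrix whose columns are vectorized images of one object.
        # The subspace basis is given by the leading left singular vectors.
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :dim]

    def subspace_similarity(U1, U2):
        # Singular values of U1^T U2 are the cosines of the canonical angles
        # between the two subspaces; average their squares as a similarity.
        s = np.linalg.svd(U1.T @ U2, compute_uv=False)
        return float(np.mean(s ** 2))

    # Toy usage with random data standing in for vectorized images.
    rng = np.random.default_rng(0)
    gallery = image_set_subspace(rng.standard_normal((256, 40)), dim=5)
    probe = image_set_subspace(rng.standard_normal((256, 30)), dim=5)
    print(subspace_similarity(gallery, probe))

A classifier built on this idea assigns a probe image set to the class whose reference subspace yields the highest similarity.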

Outline of Annual Research Achievements

In fiscal year 2023, we continued working on problems of deep learning, attempting to alleviate them by integrating subspace learning into the deep learning framework. We worked on the tasks of action recognition (AR) and domain adaptation (DA): for AR, we devised a new method called slow feature subspace, which improves the capture of temporal information in videos; for DA, we devised a new method dubbed domain-sum feature transform, which works efficiently in the multi-target domain scenario, a current challenge. We showcased the effectiveness of these methods in their respective tasks through experiments on real image data. We also studied their theoretical underpinnings in Grassmannian geometry, in order to build a strong theoretical foundation for these new methods.
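As background for the slow-feature idea mentioned above, the sketch below shows a minimal linear Slow Feature Analysis (SFA) in Python/NumPy; it is a generic textbook-style illustration under assumed inputs, not the slow feature subspace method itself, which builds a subspace representation on top of such features. Frame features are whitened, and the directions whose outputs change most slowly over time are obtained from the covariance of temporal differences.

    import numpy as np

    def linear_sfa(X, n_features):
        # X: (T, d) sequence of per-frame feature vectors.
        # Returns a (d, n_features) projection whose outputs vary slowly in time.
        X = X - X.mean(axis=0)
        # Whitening: decorrelate the features and normalize their variance.
        cov = X.T @ X / (len(X) - 1)
        eigval, eigvec = np.linalg.eigh(cov)
        keep = eigval > 1e-10
        W = eigvec[:, keep] / np.sqrt(eigval[keep])
        Z = X @ W
        # Slowest directions minimize the variance of temporal differences.
        dZ = np.diff(Z, axis=0)
        dcov = dZ.T @ dZ / (len(dZ) - 1)
        _, dvec = np.linalg.eigh(dcov)  # eigenvalues in ascending order
        return W @ dvec[:, :n_features]

    # Toy usage: mixed slow and fast sinusoids; the slow component dominates
    # the first extracted feature.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 500)
    sources = np.column_stack([np.sin(0.2 * t), np.sin(5.0 * t)])
    X = sources @ rng.standard_normal((2, 16))
    P = linear_sfa(X, n_features=2)
    slow_features = (X - X.mean(axis=0)) @ P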

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

We have been able to combine subspace learning and deep neural networks to improve performance in tasks of image set recognition, domain adaptation, and action recognition.
We studied the underlying theoretical mechanisms of our newly created techniques and how they relate to other methods, which is useful for expanding our understanding of these models.

Strategy for Future Research Activity

We will work on new ways to combine subspace learning and deep neural networks that address their problems and improve performance.

Report (2 results)

  • 2023 Research-status Report
  • 2022 Research-status Report

Research Products (7 results)

Journal Article (4 results) (of which Int'l Joint Research: 4, Peer Reviewed: 4, Open Access: 1); Presentation (3 results) (of which Int'l Joint Research: 1)

  • [Journal Article] Domain-Sum Feature Transformation For Multi-Target Domain Adaptation2023

    • Author(s)
      Takumi Kobayashi, Lincon Souza, Kazuhiro Fukui
    • Journal Title

      Proceedings of the British Machine Vision Conference (BMVC)

      Volume: 2023 Pages: 0197-0197

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Slow feature subspace: A video representation based on slow feature analysis for action recognition2023

    • Author(s)
Suzana Rita Alves Beleza, Erica K. Shimomoto, Lincon S. Souza, Kazuhiro Fukui
    • Journal Title

      Machine Learning with Applications

      Volume: 14 Pages: 100493-100493

    • DOI

      10.1016/j.mlwa.2023.100493

    • Related Report
      2023 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Grassmannian learning mutual subspace method for image set recognition2023

    • Author(s)
      Lincon S. Souza, Naoya Sogi, Bernardo B. Gatto, Takumi Kobayashi, Kazuhiro Fukui
    • Journal Title

      Neurocomputing

      Volume: 517 Pages: 20-33

    • DOI

      10.1016/j.neucom.2022.10.040

    • Related Report
      2022 Research-status Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Temporal-stochastic tensor features for action recognition2022

    • Author(s)
Bojan Batalo, Lincon S. Souza, Bernardo B. Gatto, Naoya Sogi, Kazuhiro Fukui
    • Journal Title

      Machine Learning with Applications

      Volume: 10 Pages: 100407-100407

    • DOI

      10.1016/j.mlwa.2022.100407

    • Related Report
      2022 Research-status Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Presentation] Domain-Sum Feature Transformation For Multi-Target Domain Adaptation2023

    • Author(s)
      Takumi Kobayashi, Lincon Souza, Kazuhiro Fukui
    • Organizer
      British Machine Vision Conference (BMVC)
    • Related Report
      2023 Research-status Report
  • [Presentation] Analysis of Temporal Tensor Datasets on Product Grassmann Manifold2022

    • Author(s)
      Bojan Batalo, Lincon S. Souza, Naoya Sogi, Bernardo B. Gatto, Kazuhiro Fukui
    • Organizer
      CVPR 2022 Workshop on Vision Datasets Understanding
    • Related Report
      2022 Research-status Report
    • Int'l Joint Research
  • [Presentation] Environmental sound classification based on CNN latent subspaces2022

    • Author(s)
      Maha Mahyub, Lincon S. Souza, Bojan Batalo, Kazuhiro Fukui
    • Organizer
      International Workshop on Acoustic Signal Enhancement (IWAENC 2022)
    • Related Report
      2022 Research-status Report


Published: 2022-04-19   Modified: 2024-12-25  
