
Developing models to simulate information processing and internal representations of human visual motion perception

Research Project

Project/Area Number 20H00603
Research Category

Grant-in-Aid for Scientific Research (A)

Allocation Type Single-year Grants
Section General
Review Section Medium-sized Section 61: Human informatics and related fields
Research Institution Kyoto University

Principal Investigator

Nishida Shin'ya  Kyoto University, Graduate School of Informatics, Professor (20396162)

Co-Investigator (Kenkyū-buntansha) Fukiage Taiki  NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Human Information Science Laboratory, Researcher (50869302)
Project Period (FY) 2020-04-01 – 2024-03-31
Project Status Completed (Fiscal Year 2023)
Budget Amount
¥45,370,000 (Direct Cost: ¥34,900,000, Indirect Cost: ¥10,470,000)
Fiscal Year 2023: ¥8,970,000 (Direct Cost: ¥6,900,000, Indirect Cost: ¥2,070,000)
Fiscal Year 2022: ¥8,970,000 (Direct Cost: ¥6,900,000, Indirect Cost: ¥2,070,000)
Fiscal Year 2021: ¥8,970,000 (Direct Cost: ¥6,900,000, Indirect Cost: ¥2,070,000)
Fiscal Year 2020: ¥18,460,000 (Direct Cost: ¥14,200,000, Indirect Cost: ¥4,260,000)
Keywords Visual system simulator / Motion perception / Metamer / Artificial neural network / Visual media technology
Outline of Research at the Start

To advance the development of media technology that exploits human characteristics by fusing science and engineering, it is effective to crystallize the diverse cognitive-science findings accumulated to date into computational simulation models. This project focuses on visual motion perception. By combining experimental-analytic methods with data-driven methods (machine learning with artificial neural networks), and supplementing the necessary psychophysical data through laboratory experiments and crowdsourcing, we model motion-vision information processing from low to high levels, so that the intermediate representations and the final output of the motion-vision system can be predicted for arbitrary video input.
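
As background for the low-level front end of such a model, the classical motion-energy computation extracts direction-selective responses from quadrature pairs of space-time filters. The following is a minimal illustrative sketch in Python (not project code; the filter sizes and tuning parameters are arbitrary choices for the example):

```python
import numpy as np
from scipy.signal import convolve2d

def quadrature_pair(sf, tf, size=17, sigma_x=3.0, sigma_t=3.0):
    """Even/odd space-time Gabor filters; the sign of tf sets the preferred direction."""
    r = (size - 1) // 2
    t, x = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing="ij")
    env = np.exp(-(x**2) / (2 * sigma_x**2) - (t**2) / (2 * sigma_t**2))
    phase = 2 * np.pi * (sf * x - tf * t)
    return env * np.cos(phase), env * np.sin(phase)

def opponent_motion_energy(movie, sf=0.125, tf=0.125):
    """movie: 2-D array (time x space). Returns rightward-minus-leftward energy map."""
    energies = []
    for direction in (+tf, -tf):                      # rightward, then leftward tuning
        even, odd = quadrature_pair(sf, direction)
        e = convolve2d(movie, even, mode="same") ** 2 \
          + convolve2d(movie, odd, mode="same") ** 2  # squaring -> phase invariance
        energies.append(e)
    return energies[0] - energies[1]

# A rightward-drifting grating yields positive opponent energy on average.
tt, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
drifting = np.sin(2 * np.pi * (0.125 * xx - 0.125 * tt))
print(opponent_motion_energy(drifting).mean() > 0)   # True
```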

Outline of Final Research Achievements

In order to understand human information processing scientifically and to develop innovative information technology, we worked on the functional elucidation and modelling of the human visual system. For visual motion perception, we developed a psychophysical method for visualizing the motion flow map that humans perceive, and showed the limits of how well existing vision-science models and state-of-the-art computer vision models predict human perception. We proposed a new motion detection model that combines trainable motion-energy sensing with spatial information integration by a self-attention mechanism, and showed that it can predict many aspects of human perceptual characteristics. We also revealed a coarse-to-fine matching mechanism in binocular stereopsis, and the feature invariance of the region-segmentation algorithm that operates on temporal asynchronies in stimulus changes.
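
The model described above, trainable motion-energy sensing followed by self-attention-based spatial integration, can be schematized roughly as follows. This is a minimal sketch under assumed layer sizes, not the authors' implementation; the module names, the time-pooling step, and the linear flow readout are illustrative:

```python
import torch
import torch.nn as nn

class TrainableMotionEnergy(nn.Module):
    """Learned space-time filters followed by squaring: a trainable stand-in
    for classical motion-energy units (filter shapes learned from data)."""
    def __init__(self, channels=32, k=7):
        super().__init__()
        # 3-D conv over (time, height, width); channel pairs play the role
        # of the even/odd quadrature pair in the classical model
        self.conv = nn.Conv3d(1, 2 * channels, kernel_size=k, padding=k // 2)
        self.channels = channels

    def forward(self, clip):                   # clip: (B, 1, T, H, W)
        r = self.conv(clip) ** 2               # squaring -> phase invariance
        even, odd = r[:, :self.channels], r[:, self.channels:]
        return even + odd                      # (B, C, T, H, W) energy maps

class AttentiveIntegration(nn.Module):
    """Self-attention over spatial positions: each location pools motion
    evidence from other locations adaptively, instead of with fixed weights."""
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.head = nn.Linear(channels, 2)     # read out a 2-D flow vector

    def forward(self, energy):                 # energy: (B, C, T, H, W)
        feat = energy.mean(dim=2)              # pool over time: (B, C, H, W)
        B, C, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)       # (B, H*W, C)
        mixed, _ = self.attn(tokens, tokens, tokens)   # spatial integration
        flow = self.head(mixed).transpose(1, 2).reshape(B, 2, H, W)
        return flow                            # per-pixel flow estimate

# Shape check on a random clip
model = nn.Sequential(TrainableMotionEnergy(), AttentiveIntegration())
print(model(torch.randn(1, 1, 8, 32, 32)).shape)   # torch.Size([1, 2, 32, 32])
```

Squaring the learned space-time filter outputs mirrors the phase-invariant energy computation of the classical model, while the attention stage replaces fixed spatial pooling with adaptive integration across locations.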

Academic Significance and Societal Importance of the Research Achievements

To develop innovative information technology grounded in a scientific understanding of human information processing, this project aims to crystallize the vast body of scientific knowledge accumulated in vision science into simulation models, creating a framework that enables engineers not versed in vision science to develop technology that exploits human characteristics. As a first step, we built models that simulate human visual information processing, centered on motion perception as a core recognition function, so that intermediate representations and final outputs can be predicted for arbitrary video input. Adopting data-driven research methods, we established a new psychophysical experimental technique for visualizing perceived motion maps and built a deep-learning-based simulation model of human motion perception.

Report

(6 results)
  • 2023 Annual Research Report / Final Research Report (PDF)
  • 2022 Annual Research Report
  • 2021 Annual Research Report
  • 2020 Comments on the Screening Results / Annual Research Report
Research Products

(23 results)


Int'l Joint Research (1 result), Journal Article (5 results; of which Int'l Joint Research: 1, Peer Reviewed: 4, Open Access: 5), Presentation (16 results; of which Int'l Joint Research: 11, Invited: 3), Remarks (1 result)

  • [Int'l Joint Research] National Taiwan University (Other countries/regions)

    • Related Report
      2023 Annual Research Report
  • [Journal Article] Coarse-to-fine interaction on perceived depth in compound grating (2023)

    • Author(s)
      Chen Pei-Yin, Chen Chien-Chung, Nishida Shin'ya
    • Journal Title

      Journal of Vision

      Volume: 23 Issue: 9 Pages: 4852-4852

    • DOI

      10.1167/jov.23.9.4852

    • Related Report
      2023 Annual Research Report
    • Peer Reviewed / Open Access / Int'l Joint Research
  • [Journal Article] Psychophysical measurement of perceived motion flow of naturalistic scenes (2023)

    • Author(s)
      Yang Yung-Hao, Fukiage Taiki, Sun Zitang, Nishida Shin’ya
    • Journal Title

      iScience

      Volume: 26 Issue: 12 Pages: 108307-108307

    • DOI

      10.1016/j.isci.2023.108307

    • Related Report
      2023 Annual Research Report
    • Peer Reviewed / Open Access
  • [Journal Article] Decoupled spatiotemporal adaptive fusion network for self-supervised motion estimation (2023)

    • Author(s)
      Sun Zitang, Luo Zhengbo, Nishida Shin’ya
    • Journal Title

      Neurocomputing

      Volume: 534 Pages: 133-146

    • DOI

      10.1016/j.neucom.2023.03.012

    • Related Report
      2023 Annual Research Report
    • Peer Reviewed / Open Access
  • [Journal Article] Does training with blurred images bring convolutional neural networks closer to humans with respect to robust object recognition and internal representations? (2023)

    • Author(s)
      Yoshihara Sou, Fukiage Taiki, Nishida Shin'ya
    • Journal Title

      Frontiers in Psychology

      Volume: 14 Pages: 1047694-1047694

    • DOI

      10.3389/fpsyg.2023.1047694

    • Related Report
      2022 Annual Research Report
    • Peer Reviewed / Open Access
  • [Journal Article] Towards acquisition of shape bias: Training convolutional neural networks with stepwise image blurring based on human visual development (2021)

    • Author(s)
      Yoshihara Sou, Fukiage Taiki, Nishida Shin'ya
    • Journal Title

      VISION

      Volume: 33 Issue: 1 Pages: 1-5

    • DOI

      10.24636/vision.33.1_1

    • NAID

      130007975944

    • ISSN
      0917-1142, 2433-5630
    • Year and Date
      2021-01-20
    • Related Report
      2020 Annual Research Report
    • Open Access
  • [Presentation] Temporal Dynamics Gap between Position Tracking and Attribute Tracking (2024)

    • Author(s)
      Yen-Ju Chen, Zitang Sun, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Temporal characteristics of perceived motion flow of naturalistic movies (2024)

    • Author(s)
      Yung-Hao Yang, Taiki Fukiage, Zitang Sun, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Acquisition of second-order motion perception by learning to recognize the motion of objects made by non-diffusive materials (2024)

    • Author(s)
      Zitang Sun, Yen-Ju Chen, Yung-Hao Yang, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Modeling of Human Motion Perception Mechanism: A Simulation based on Deep Neural Network and Attention Transformer (2023)

    • Author(s)
      Sun Zitang, Yung-Hao Yang, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Temporal limits of visual segmentation based on temporal asynchrony in luminance, color, motion direction, and their mixtures (2023)

    • Author(s)
      Yen-Ju Chen, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Local image statistics can account for the perceived naturalness of image contrast (2023)

    • Author(s)
      Taiki Fukiage, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] A Comparative Analysis of Visual Motion Perception: Computer Vision Models versus Human Abilities (2023)

    • Author(s)
      Sun Zitang, Yen-Ju Chen, Yung-Hao Yang, Shin’ya Nishida
    • Organizer
      Conference on Cognitive Computational Neuroscience
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network for Adaptive Motion Integration (2023)

    • Author(s)
      Sun Zitang, Yen-Ju Chen, Yung-Hao Yang, Shin’ya Nishida
    • Organizer
      NeurIPS (Conference on Neural Information Processing Systems)
    • Related Report
      2023 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Modeling of Human Motion Perception Mechanism: A Simulation based on Deep Neural Network and Attention Transformer (2023)

    • Author(s)
      Sun Zitang, Yung-Hao Yang, Shin'ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2022 Annual Research Report
  • [Presentation] Psychophysical measurement of perceived motion flow in naturalistic scenes (2022)

    • Author(s)
      Yung-Hao Yang, Taiki Fukiage, Shin’ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2022 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Scientific understanding of human cognitive information processing and development of visual media technology (2022)

    • Author(s)
      Nishida Shin'ya
    • Organizer
      MIRU2022
    • Related Report
      2022 Annual Research Report
    • Invited
  • [Presentation] Towards building models of the human visual system that operate on realistic inputs (2022)

    • Author(s)
      Nishida Shin'ya
    • Organizer
      The 188th CG / 32nd DCC / 231st CVIM Joint Research Meeting
    • Related Report
      2022 Annual Research Report
    • Invited
  • [Presentation] Towards acquisition of shape bias: Training convolutional neural networks with blurred images (2021)

    • Author(s)
      Sou Yoshihara, Taiki Fukiage, Shin'ya Nishida
    • Organizer
      Vision Sciences Society
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research
  • [Presentation] In silico analysis of the effects of experiencing blurred images on the visual system (2021)

    • Author(s)
      Nishida Shin'ya, Yoshihara Sou, Fukiage Taiki
    • Organizer
      Summer Meeting of the Vision Society of Japan
    • Related Report
      2021 Annual Research Report
  • [Presentation] Vision Science for Display Technologies (2021)

    • Author(s)
      Shin'ya Nishida
    • Organizer
      International Display Workshops
    • Related Report
      2021 Annual Research Report
    • Int'l Joint Research / Invited
  • [Presentation] Towards acquisition of shape bias: Training convolutional neural networks with stepwise image blurring based on human visual development (2020)

    • Author(s)
      Yoshihara Sou, Fukiage Taiki, Nishida Shin'ya
    • Organizer
      The Vision Society of Japan
    • Related Report
      2020 Annual Research Report
  • [Remarks] Modelling Human Visual Motion Processing with ...

    • URL

      https://huggingface.co/spaces/Zitang/Self-attention-based-V1MT-motion-model

    • Related Report
      2023 Annual Research Report


Published: 2020-04-28   Modified: 2025-01-30  
