
Developing an integrated account of intentions and affordances for a model of visual attention

Research Project

Project/Area Number 23K11169
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 61010: Perceptual information processing-related
Research Institution Okayama University

Principal Investigator

Yucel Zeynep  Okayama University, Faculty of Environmental, Life, Natural Science and Technology, Associate Professor (20586250)

Project Period (FY) 2023-04-01 – 2025-03-31
Project Status Discontinued (Fiscal Year 2023)
Budget Amount
¥2,210,000 (Direct Cost: ¥1,700,000, Indirect Cost: ¥510,000)
Fiscal Year 2025: ¥650,000 (Direct Cost: ¥500,000, Indirect Cost: ¥150,000)
Fiscal Year 2024: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2023: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Keywords saliency / affordance / segmentation / gaze / perception / action / intention
Outline of Research at the Start

[1] After determining a strategy for functional segmentation of tool objects according to their affordances, we will generate baseline saliency maps from tool images. [2] We will collect gaze data from human subjects and determine the distribution of gaze bias for each functional segment. [3] We will apply spatial modulation to the baseline saliency computed in [1] by introducing attractors and repellers derived from the abovementioned models (a sketch follows below).
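As an illustration of step [3], the following is a minimal sketch of modulating a baseline saliency map with attractors and repellers, assuming Gaussian modulation fields centered on functional tool segments. The function names, the multiplicative combination, and the gain/sigma parameters are hypothetical choices for this sketch, not the project's actual implementation.

```python
import numpy as np

def gaussian_field(shape, center, sigma):
    # 2D Gaussian bump with peak value 1 at center = (row, col).
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def modulate_saliency(baseline, attractors, repellers, sigma=25.0, gain=0.5):
    # Multiplicatively boost saliency near attractors (e.g. centroids of
    # segments that gaze is biased toward) and suppress it near repellers.
    mod = np.ones_like(baseline, dtype=float)
    for c in attractors:
        mod += gain * gaussian_field(baseline.shape, c, sigma)
    for c in repellers:
        mod -= gain * gaussian_field(baseline.shape, c, sigma)
    out = baseline * np.clip(mod, 0.0, None)
    return out / out.sum()  # renormalize to a probability map
```

Placing an attractor at a segment's centroid shifts predicted fixation density toward that segment; the per-segment gaze-bias distributions from step [2] would determine where attractors and repellers go and how strong they are.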

Outline of Annual Research Achievements

We examined the performance of four recent saliency models (EML-NET, SalGAN, DeepGaze IIE, and DeepGaze) on images of hand tools. These objects have distinct segments with different roles, and studies suggest that tool segments inherently attract human attention. We tested the models on a dataset containing both tool and non-tool images, then compared their predictions with human gaze data using six criteria. The results show that the models predict saliency for tool images considerably less accurately than for non-tool images, which points to a need to address this limitation in saliency modeling for tool-specific contexts.
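The six comparison criteria are not listed in this report. The sketch below shows two standard measures from saliency benchmarking, NSS (normalized scanpath saliency) and AUC-Judd, for a single image, assuming a real-valued saliency map and a binary fixation mask of the same shape; it is an illustrative sketch, not the project's evaluation code.

```python
import numpy as np

def nss(saliency, fix_mask):
    # Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return s[fix_mask.astype(bool)].mean()

def auc_judd(saliency, fix_mask):
    # AUC treating fixated pixels as positives and all other pixels as
    # negatives, via the rank-sum (Mann-Whitney U) form of the ROC area;
    # ties are broken arbitrarily, which is fine for continuous-valued maps.
    fix = fix_mask.astype(bool)
    pos, neg = saliency[fix], saliency[~fix]
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg])))[: len(pos)]
    return (ranks.sum() - len(pos) * (len(pos) - 1) / 2) / (len(pos) * len(neg))
```

Aggregating each metric separately over the tool and non-tool subsets is what exposes the tool-specific performance gap reported above.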

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

We have demonstrated that existing state-of-the-art saliency models are less effective at representing eye gaze patterns over tool images than at representing gaze patterns over other images from ordinary daily-life scenes. This justifies an effort to improve the existing methods.

Strategy for Future Research Activity

We focus on the state-of-the-art visual saliency prediction model DeepGaze IIE and refine it to account for this bias. Since the integration of transfer learning into saliency prediction over the last decade has notably enhanced prediction performance, we will first curate a custom image dataset featuring tool, non-tool, and ambiguous images, and record empirical gaze data from human participants for use in fine-tuning. In this way, we will improve the model's performance for this specific stimulus category and evaluate it with the information gain (IG) and normalized scanpath saliency (NSS) metrics (an IG sketch follows below).
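Of the two metrics named above, NSS is sketched in the achievements section; information gain (IG) measures, in bits per fixation, how much a predicted fixation density improves on a baseline density (typically a center-prior model fit to training fixations). A minimal sketch, assuming both maps are non-negative arrays and fixations are given as a binary mask:

```python
import numpy as np

def information_gain(pred, baseline, fix_mask, eps=1e-12):
    # IG in bits/fixation: average log-likelihood advantage of the
    # predicted density over the baseline density at fixated pixels.
    fix = fix_mask.astype(bool)
    p = pred / pred.sum()          # normalize to probability densities
    q = baseline / baseline.sum()
    return float(np.mean(np.log2(p[fix] + eps) - np.log2(q[fix] + eps)))
```

A fine-tuned model should raise IG on the tool subset without lowering it on non-tool images, which is one way to monitor the planned fine-tuning.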

Report

(1 result)
  • 2023 Research-status Report
  • Research Products (4 results)

Presentation (4 results)

  • [Presentation] Effect of tool specificity on the performance of DNN-based saliency prediction methods (2023)

    • Author(s)
      Kengo Matsui, Timothee Languille, Zeynep Yucel
    • Organizer
      International Conference on Smart Computing and Artificial Intelligence (SCAI 2023)
    • Related Report
      2023 Research-status Report
  • [Presentation] Experiment design and verification for assessing the acquisition of strategic planning ability (2023)

    • Author(s)
      Natchanon Manatphaiboon, Shogo Hamachi, Zeynep Yucel, Pattara Leelaprute, Akito Monden
    • Organizer
      International Conference on Learning Technologies and Learning Environments (LTLE 2023)
    • Related Report
      2023 Research-status Report
  • [Presentation] Dependence of perception of vocabulary difficulty on contexture (2023)

    • Author(s)
      Parisa Supitayakul, Rika Kuramitsu, Zeynep Yucel, Akito Monden, Koichi Takeuchi
    • Organizer
      International Conference on Learning Technologies and Learning Environments (LTLE 2023)
    • Related Report
      2023 Research-status Report
  • [Presentation] Using a personality-aware recommendation system for comparing inventory performances (2023)

    • Author(s)
      Natsu Nishimura, Zeynep Yucel, Akito Monden
    • Organizer
      International Conference on Smart Computing and Artificial Intelligence (SCAI 2023)
    • Related Report
      2023 Research-status Report


Published: 2023-04-13   Modified: 2024-12-25  
