
Developing an integrated account of intentions and affordances for a model of visual attention

Research Project

Project/Area Number: 23K11169
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type: Multi-year Fund
Application Category: General
Review Section: Basic Section 61010: Perceptual information processing-related
Research Institution: Okayama University

Principal Investigator

Yucel Zeynep, Okayama University, Faculty of Environmental, Life, Natural Science and Technology, Associate Professor (20586250)

Project Period (FY): 2023-04-01 – 2025-03-31
Project Status: Discontinued (FY2023)
Budget Amount *Note
Total: 2,210 thousand yen (Direct Cost: 1,700 thousand yen, Indirect Cost: 510 thousand yen)
FY2025: 650 thousand yen (Direct Cost: 500 thousand yen, Indirect Cost: 150 thousand yen)
FY2024: 780 thousand yen (Direct Cost: 600 thousand yen, Indirect Cost: 180 thousand yen)
FY2023: 780 thousand yen (Direct Cost: 600 thousand yen, Indirect Cost: 180 thousand yen)
Keywords: saliency / affordance / segmentation / gaze / perception / action / intention
Outline of Research at the Start

[1] After determining a strategy for functional segmentation of tool objects according to their affordances, we will generate baseline saliency maps from tool images. [2] We will collect gaze data from human subjects and determine the distribution of the gaze bias for each functional segment. [3] We will apply spatial modulation to the baseline saliency computed in [1] by introducing attractors and repellers derived from the models mentioned above.
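A minimal sketch of the modulation step [3], assuming the attractors and repellers are realized as Gaussian weights placed at the centroids of the functional segments. The function names, weights (w_att, w_rep), and spread (sigma) below are illustrative placeholders, not parameters taken from the project plan.

```python
import numpy as np

def gaussian_bump(shape, center, sigma):
    """2-D Gaussian centred at `center` = (row, col) with spread `sigma`."""
    rows, cols = np.indices(shape)
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def modulate_saliency(baseline, attractors, repellers, w_att=0.5, w_rep=0.5, sigma=25.0):
    """Spatially modulate a baseline saliency map (values in [0, 1]).

    attractors / repellers: lists of (row, col) centroids of functional segments.
    Weights and sigma are placeholder values chosen for illustration only.
    """
    out = baseline.astype(float)
    for c in attractors:
        out += w_att * gaussian_bump(out.shape, c, sigma)   # boost gaze-attracting segments
    for c in repellers:
        out -= w_rep * gaussian_bump(out.shape, c, sigma)   # suppress gaze-repelling regions
    out = np.clip(out, 0.0, None)
    return out / (out.max() + 1e-8)                         # renormalize to [0, 1]
```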

Outline of Annual Research Achievements

We examined the performance of four recent saliency models (EML-NET, SalGAN, DeepGaze IIE, and DeepGaze) on images of hand tools. These objects have distinct segments with different functional roles, and prior studies suggest that tool segments inherently attract human attention. We tested the models on a dataset containing both tool and non-tool images, and compared their predictions against human gaze data using six evaluation criteria. The results show that the models predict saliency less accurately for tool images than for non-tool images, which points to a need to address this limitation in saliency modeling for tool-specific contexts.
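The six criteria are not named in this summary; the sketch below only illustrates how such a per-category comparison could be organized, using the linear correlation coefficient (CC), one common saliency metric, as a stand-in. All function and variable names are hypothetical, not the project's actual evaluation code.

```python
import numpy as np

def pearson_cc(pred, fixation_density, eps=1e-8):
    """Correlation coefficient between a predicted saliency map and an
    empirical fixation density map of the same shape."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    f = (fixation_density - fixation_density.mean()) / (fixation_density.std() + eps)
    return float((p * f).mean())

def score_by_category(predictions, densities, categories):
    """Average the metric separately per image category, e.g. 'tool' vs. 'non-tool'."""
    scores = {}
    for pred, dens, cat in zip(predictions, densities, categories):
        scores.setdefault(cat, []).append(pearson_cc(pred, dens))
    return {cat: float(np.mean(vals)) for cat, vals in scores.items()}
```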

Current Status of Research Progress

2: Progressing smoothly on the whole

Reason

We have demonstrated that the existing state-of-the-art saliency models do not represent eye gaze patterns over tool images as accurately as they represent gaze patterns over other images of ordinary daily-life scenes. This justifies an effort to improve the existing methods.

Strategy for Future Research Activity

We will focus on the state-of-the-art visual saliency prediction model DeepGaze IIE and refine it to account for this bias. Since the integration of transfer learning into saliency prediction over the last decade has notably enhanced prediction performance, we will first curate a custom image dataset featuring tool, non-tool, and ambiguous images and record empirical gaze data from human participants for use in fine-tuning. In this way, we will improve the model's performance for this specific stimulus category and evaluate it with the IG and NSS metrics.
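IG (information gain over a baseline density, in bits per fixation) and NSS (normalized scanpath saliency, the mean z-scored saliency at fixated pixels) follow their standard definitions in the saliency literature. A minimal sketch of both, assuming boolean fixation maps and density maps that are renormalized to sum to one:

```python
import numpy as np

def nss(saliency_map, fixations, eps=1e-8):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.
    `fixations` is a boolean array of the same shape as `saliency_map`."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + eps)
    return float(s[fixations].mean())

def information_gain(p_model, p_baseline, fixations, eps=1e-20):
    """Information gain (bits per fixation) of a model density over a baseline
    density (e.g. a center-bias model); both are renormalized to sum to 1."""
    pm = p_model / p_model.sum()
    pb = p_baseline / p_baseline.sum()
    return float(np.mean(np.log2(pm[fixations] + eps) - np.log2(pb[fixations] + eps)))
```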

Report (1 result)
  • 2023 Annual Research Report

Research Products (4 results)

Presentations (4 results)

  • [Presentation] Effect of tool specificity on the performance of DNN-based saliency prediction methods (2023)

    • Author(s)
      Kengo Matsui, Timothee Languille, Zeynep Yucel
    • Conference name
      International Conference on Smart Computing and Artificial Intelligence (SCAI 2023)
    • Related Report
      2023 Annual Research Report
  • [Presentation] Experiment design and verification for assessing the acquisition of strategic planning ability (2023)

    • Author(s)
      Natchanon Manatphaiboon, Shogo Hamachi, Zeynep Yucel, Pattara Leelaprute, Akito Monden
    • Conference name
      International Conference on Learning Technologies and Learning Environments (LTLE 2023)
    • Related Report
      2023 Annual Research Report
  • [Presentation] Dependence of perception of vocabulary difficulty on contexture (2023)

    • Author(s)
      Parisa Supitayakul, Rika Kuramitsu, Zeynep Yucel, Akito Monden, Koichi Takeuchi
    • Conference name
      International Conference on Learning Technologies and Learning Environments (LTLE 2023)
    • Related Report
      2023 Annual Research Report
  • [Presentation] Using a personality-aware recommendation system for comparing inventory performances (2023)

    • Author(s)
      Natsu Nishimura, Zeynep Yucel, Akito Monden
    • Conference name
      International Conference on Smart Computing and Artificial Intelligence (SCAI 2023)
    • Related Report
      2023 Annual Research Report


Published: 2023-04-13   Last Modified: 2024-12-25
