Eye Guidance in Immersive Display Environments

Research Project

Project/Area Number 18K11415
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Review Section Basic Section 61020: Human interface and interaction-related
Research Institution Osaka Institute of Technology

Principal Investigator

Hashimoto Wataru  Osaka Institute of Technology, Faculty of Information Science and Technology, Professor (80323278)

Project Period (FY) 2018-04-01 – 2023-03-31
Project Status Completed (Fiscal Year 2022)
Budget Amount
¥3,120,000 (Direct Cost: ¥2,400,000, Indirect Cost: ¥720,000)
Fiscal Year 2020: ¥520,000 (Direct Cost: ¥400,000, Indirect Cost: ¥120,000)
Fiscal Year 2019: ¥2,080,000 (Direct Cost: ¥1,600,000, Indirect Cost: ¥480,000)
Fiscal Year 2018: ¥520,000 (Direct Cost: ¥400,000, Indirect Cost: ¥120,000)
Keywords spatial resolution / blur / temporal resolution / frame rate / contraction distortion / HMD / sense of immersion / postural sway / frame rate control / temporal resolution control / resolution control / effective field of view / gaze point / depth of field / image distortion / head movement
Outline of Final Research Achievements

We explored methods for guiding the user's gaze in immersive displays that surround the user with imagery, with the aim of inducing gaze shifts in conjunction with head movements. First, we found that gaze guidance by spatial resolution control (blurring) can direct the user's gaze even to targets outside the current field of view; however, the discomfort of feeling forced to look made unconscious guidance difficult. Next, the method using contraction distortion of the image generally did not contribute to gaze guidance, but it became more effective when the guidance target glided across the image, and it induced many head movements. Finally, for gaze guidance by temporal resolution control, we found that attention tended to be directed to the region with the lower frame rate, although there were individual differences.
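
As a rough illustration of the first technique, the sketch below is not the project's implementation: the function name, the radial mask shape, and all parameters are our own assumptions. It keeps a window around the guidance target at full spatial resolution and blurs the rest of the frame, so the single sharp region draws the eye:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def guide_gaze_by_blur(frame, target_xy, sharp_radius=80.0, sigma=6.0):
        # Blend a blurred copy of the frame with the original, keeping only
        # a window around target_xy (pixel coordinates) at full resolution.
        h, w = frame.shape[:2]
        blurred = gaussian_filter(frame, sigma=(sigma, sigma, 0))
        ys, xs = np.mgrid[0:h, 0:w]
        dist = np.hypot(xs - target_xy[0], ys - target_xy[1])
        # Mask is 1.0 (sharp) within sharp_radius of the target and falls
        # smoothly to 0.0 (fully blurred) at twice that distance.
        mask = np.clip(1.0 - (dist - sharp_radius) / sharp_radius, 0.0, 1.0)
        return mask[..., None] * frame + (1.0 - mask[..., None]) * blurred

    # Example: draw attention toward the right half of a random test frame.
    frame = np.random.rand(480, 640, 3)
    out = guide_gaze_by_blur(frame, target_xy=(560, 240))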

Academic Significance and Societal Importance of the Research Achievements

In immersive video environments, we explored gaze-guidance methods that make users want to look around them, even if doing so requires moving their heads. Methods such as blinking stimuli in the peripheral visual field already exist; the distinctive feature of this research is that it alters the original image as little as possible, focusing instead on spatial resolution control (blur), contraction distortion of the image, and temporal resolution control (frame rate). Although gaze guidance below the level of awareness proved difficult, we showed that gaze can be guided by attracting the user's attention.
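
The temporal resolution control mentioned above can be pictured with a similarly hedged sketch, again an illustrative assumption rather than the project's code (class name, region format, and hold interval are all invented here): pixels inside one rectangle are held for several frames so that region updates at a lower frame rate than the rest of the image, the condition under which the study found attention tends to be drawn:

    import numpy as np

    class RegionalFrameRateControl:
        # Hold the pixels of one rectangle for several frames so that region
        # updates at a lower frame rate than the rest of the image.
        def __init__(self, region, hold_frames=4):
            self.region = region            # (x0, y0, x1, y1) in pixels
            self.hold_frames = hold_frames  # refresh the patch every N frames
            self.count = 0
            self.held = None

        def apply(self, frame):
            x0, y0, x1, y1 = self.region
            if self.count % self.hold_frames == 0:
                self.held = frame[y0:y1, x0:x1].copy()
            self.count += 1
            out = frame.copy()
            out[y0:y1, x0:x1] = self.held   # stale patch = low frame rate
            return out

    # Example: the left quarter of a 640x480 stream updates at 1/4 the rate.
    ctrl = RegionalFrameRateControl(region=(0, 0, 160, 480), hold_frames=4)
    frames = (np.random.rand(480, 640, 3) for _ in range(8))
    processed = [ctrl.apply(f) for f in frames]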

Report

(6 results)
  • 2022 Annual Research Report / Final Research Report (PDF)
  • 2021 Research-status Report
  • 2020 Research-status Report
  • 2019 Research-status Report
  • 2018 Research-status Report
Research Products

(6 results)


Journal Article (1 result, of which peer reviewed: 1) / Presentation (5 results)

  • [Journal Article] Presenting a Sense of Self-motion by Transforming the Rendering Area Based on the Movement of the User’s Viewpoint (2021)

    • Author(s)
      Tomoya Yamashita, Wataru Hashimoto, Satoshi Nishiguchi, and Yasuharu Mizutani
    • Journal Title

      HCI International 2021 - Late Breaking Posters

      Volume: Part I, Pages: 410-417

    • Related Report
      2021 Research-status Report
    • Peer Reviewed
  • [Presentation] Proposal of a Method for Inducing Vection Through Slight Movements of the Rendering Plane During Locomotion in VR Space Using an Immersive HMD (2022)

    • Author(s)
      Tomoya Yamashita, Wataru Hashimoto, Satoshi Nishiguchi, Yasuharu Mizutani
    • Organizer
      The 27th Annual Conference of the Virtual Reality Society of Japan
    • Related Report
      2022 Annual Research Report
  • [Presentation] An Omnidirectional Stereoscopic Image Display Based on Pepper's Ghost Using a Rotating Half-Mirror (2022)

    • Author(s)
      Ryuichi Shibata, Wataru Hashimoto, Yasuharu Mizutani, Satoshi Nishiguchi
    • Organizer
      The 27th Annual Conference of the Virtual Reality Society of Japan
    • Related Report
      2022 Annual Research Report
  • [Presentation] An Attempt at Presenting Vection by Modifying the Rendering Plane in an Immersive Video System (2021)

    • Author(s)
      Tomoya Yamashita, Wataru Hashimoto, Satoshi Nishiguchi, Yasuharu Mizutani
    • Organizer
      The Institute of Image Electronics Engineers of Japan
    • Related Report
      2021 Research-status Report
  • [Presentation] An Attempt at Presenting a Sense of Locomotion Through Rendering-Plane Movement and Scaling Based on the Acceleration of the User's Viewpoint (2020)

    • Author(s)
      Tomoya Yamashita, Wataru Hashimoto
    • Organizer
      The Virtual Reality Society of Japan
    • Related Report
      2020 Research-status Report
  • [Presentation] A Study of Gaze Guidance Using Contraction Distortion of Images (2019)

    • Author(s)
      Ryo Araki, Ryoichi Takeuchi, Wataru Hashimoto, Yasuharu Mizutani, Satoshi Nishiguchi
    • Organizer
      IPSJ Interaction 2019
    • Related Report
      2018 Research-status Report

Published: 2018-04-23   Modified: 2024-01-30  
