
Adaptive object handling by sharing embodiment and biological information

Research Project

Project/Area Number 18H04108
Research Category

Grant-in-Aid for Scientific Research (A)

Allocation Type Single-year Grants
Section General
Review Section Medium-sized Section 61: Human informatics and related fields
Research Institution The University of Tokyo

Principal Investigator

KUNIYOSHI Yasuo  The University of Tokyo, Graduate School of Information Science and Technology, Professor (10333444)

Co-Investigator (Kenkyū-buntansha) NAGAKUBO Akihiko  National Institute of Advanced Industrial Science and Technology (AIST), Department of Information Technology and Human Factors, Senior Researcher (00357617)
Project Period (FY) 2018-04-01 – 2021-03-31
Project Status Completed (Fiscal Year 2021)
Budget Amount
¥44,330,000 (Direct Cost: ¥34,100,000, Indirect Cost: ¥10,230,000)
Fiscal Year 2020: ¥12,350,000 (Direct Cost: ¥9,500,000, Indirect Cost: ¥2,850,000)
Fiscal Year 2019: ¥15,860,000 (Direct Cost: ¥12,200,000, Indirect Cost: ¥3,660,000)
Fiscal Year 2018: ¥16,120,000 (Direct Cost: ¥12,400,000, Indirect Cost: ¥3,720,000)
Keywords Robotics / Embodiment / Teleoperation / Deep imitation learning / Gaze measurement / Neural information measurement / Virtual reality
Outline of Final Research Achievements

Deep imitation learning enables the learning of complex visuomotor skills from raw pixel inputs. However, this approach suffers from overfitting to the training images: the neural network is easily distracted by task-irrelevant objects. In this research, we use the human gaze, measured by a head-mounted eye-tracking device, to discard task-irrelevant visual distractions. We propose a mixture-density-network-based behavior cloning method that learns to imitate the human gaze. The model predicts gaze positions from raw pixel images and crops the images around the predicted gaze. Only these cropped images are used to compute the output action. This cropping procedure removes visual distractions because the gaze is rarely fixated on task-irrelevant objects. We evaluated our method on several manipulation tasks, including handling multiple objects, needle threading, picking up a small object, knot tying, and banana peeling.
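The pipeline described above can be sketched in a few lines of PyTorch: a mixture-density network is trained to imitate the measured human gaze, the image is cropped around the most probable predicted gaze, and a separate policy network maps only that crop to an action. The network sizes, the crop_around_gaze helper, the 64-pixel crop, and the 7-dimensional action below are illustrative assumptions for a minimal sketch, not the architecture actually used in the project.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GazeMDN(nn.Module):
    """Predicts a 2-D gaze position from a raw image with a mixture-density head."""

    def __init__(self, n_mixtures: int = 8):
        super().__init__()
        self.n_mixtures = n_mixtures
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Per mixture component: 1 weight logit, 2-D mean, 2-D standard deviation.
        self.head = nn.Linear(32, n_mixtures * 5)

    def forward(self, image):
        p = self.head(self.encoder(image)).view(-1, self.n_mixtures, 5)
        log_pi = F.log_softmax(p[..., 0], dim=-1)   # mixture weights
        mu = torch.tanh(p[..., 1:3])                # gaze in normalized [-1, 1]^2
        sigma = F.softplus(p[..., 3:5]) + 1e-4      # positive standard deviations
        return log_pi, mu, sigma

    def nll(self, image, human_gaze):
        """Behavior-cloning loss: negative log-likelihood of the measured gaze."""
        log_pi, mu, sigma = self(image)
        comp = torch.distributions.Normal(mu, sigma)
        log_prob = comp.log_prob(human_gaze.unsqueeze(1)).sum(-1)   # (B, K)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()


def crop_around_gaze(image, gaze, crop=64):
    """Cuts a crop x crop window around the predicted gaze (normalized coordinates)."""
    b, _, h, w = image.shape
    xs = ((gaze[:, 0] + 1) / 2 * w - crop / 2).clamp(0, w - crop).long().tolist()
    ys = ((gaze[:, 1] + 1) / 2 * h - crop / 2).clamp(0, h - crop).long().tolist()
    return torch.stack([image[i, :, y:y + crop, x:x + crop]
                        for i, (x, y) in enumerate(zip(xs, ys))])


class CropPolicy(nn.Module):
    """Maps the gaze-centered crop (and only the crop) to a robot action."""

    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, action_dim),
        )

    def forward(self, crop):
        return self.net(crop)


if __name__ == "__main__":
    gaze_net, policy = GazeMDN(), CropPolicy()
    image = torch.rand(2, 3, 128, 128)        # dummy camera frames
    human_gaze = torch.rand(2, 2) * 2 - 1     # dummy eye-tracker readings
    expert_action = torch.rand(2, 7)          # dummy teleoperation actions

    gaze_loss = gaze_net.nll(image, human_gaze)          # train the gaze MDN
    with torch.no_grad():
        log_pi, mu, _ = gaze_net(image)
        best = log_pi.argmax(dim=-1)                     # most probable component
        pred_gaze = mu[torch.arange(mu.size(0)), best]
    crops = crop_around_gaze(image, pred_gaze)
    action_loss = F.mse_loss(policy(crops), expert_action)   # behavior cloning on crops
    print(float(gaze_loss), float(action_loss))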

Academic Significance and Societal Importance of the Research Achievements

This research concerns fundamental technology for improving the object-manipulation capability of autonomous robots. With current robot technology, objects that vary widely in shape and stiffness, such as food ingredients, are difficult to model and therefore could not be handled. We focused on deep imitation learning. We developed a system that allows a human to teleoperate a robot intuitively, together with a platform for measuring the operator's gaze during teleoperation. Exploiting the fact that humans direct their attention to the parts of a scene that carry important information during object manipulation, we imitate the human gaze so that imitation learning uses only the most relevant information even in complex environments and with complex target objects, and we demonstrated that this markedly improves performance.

Report

(4 results)
  • 2021 Final Research Report ( PDF )
  • 2020 Annual Research Report
  • 2019 Annual Research Report
  • 2018 Annual Research Report
  • Research Products

    (22 results)

Journal Article (7 results) (of which Int'l Joint Research: 2 results, Peer Reviewed: 7 results, Open Access: 1 result), Presentation (13 results) (of which Int'l Joint Research: 4 results), Remarks (2 results)

  • [Journal Article] Third-party evaluation of robot hand designs using a mechanical glove (2021)

    • Author(s)
      金井嵩幸, 大村 吉幸, 長久保 晶彦, 國吉 康夫
    • Journal Title

      Journal of the Robotics Society of Japan

      Volume: 39

    • NAID

      130008069771

    • Related Report
      2020 Annual Research Report
    • Peer Reviewed
  • [Journal Article] Gaze-Based Dual Resolution Deep Imitation Learning for High-Precision Dexterous Robot Manipulation (2021)

    • Author(s)
      Kim Heecheol、Ohmura Yoshiyuki、Kuniyoshi Yasuo
    • Journal Title

      IEEE Robotics and Automation Letters

      Volume: 6 Issue: 2 Pages: 1630-1637

    • DOI

      10.1109/lra.2021.3059619

    • Related Report
      2020 Annual Research Report
    • Peer Reviewed / Open Access
  • [Journal Article] Using Human Gaze to Improve Robustness Against Irrelevant Objects in Robot Manipulation Tasks (2020)

    • Author(s)
      Kim Heecheol、Ohmura Yoshiyuki、Kuniyoshi Yasuo
    • Journal Title

      IEEE Robotics and Automation Letters

      Volume: 5 Issue: 3 Pages: 4415-4422

    • DOI

      10.1109/lra.2020.2998410

    • Related Report
      2020 Annual Research Report
    • Peer Reviewed
  • [Journal Article] Adversarial Imitation Learning between Agents with Different Numbers of State Dimensions (2019)

    • Author(s)
      Yoshida Taketo、Kuniyoshi Yasuo
    • Journal Title

      2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE)

      Volume: - Pages: 179-186

    • DOI

      10.1109/aike.2019.00040

    • Related Report
      2018 Annual Research Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Generating an image of an object’s appearance from somatosensory information during haptic exploration (2019)

    • Author(s)
      Sekiya Kento、Ohmura Yoshiyuki、Kuniyoshi Yasuo
    • Journal Title

      2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

      Volume: - Pages: 8132-8137

    • DOI

      10.1109/iros40897.2019.8967795

    • Related Report
      2018 Annual Research Report
    • Peer Reviewed
  • [Journal Article] Efficient Event-Driven Forward Kinematics of Open Kinematic Chains with O(Log n) Complexity (2018)

    • Author(s)
      Wakatabe Ryo、Morita Kohei、Cheng Gordon、Kuniyoshi Yasuo
    • Journal Title

      2018 IEEE International Conference on Robotics and Automation (ICRA)

      Volume: - Pages: 6975-6982

    • DOI

      10.1109/icra.2018.8461211

    • Related Report
      2018 Annual Research Report
    • Peer Reviewed / Int'l Joint Research
  • [Journal Article] Synaptic excitatory-inhibitory balance affect information integration via attractor dynamics (2018)

    • Author(s)
      Keiko Fujii, Yoshiyuki Ohmura, Yasuo Kuniyoshi
    • Journal Title

      Association for the Scientific Study of Consciousness (ASSC2018)

      Volume: -

    • Related Report
      2018 Annual Research Report
    • Peer Reviewed
  • [Presentation] Transformer-based deep imitation learning for dual-arm robot manipulation (2021)

    • Author(s)
      Heecheol Kim, Yoshiyuki Ohmura, Yasuo Kuniyoshi
    • Organizer
      2021 IEEE/RSJ International Conference on Intelligent Robots and Systems
    • Related Report
      2020 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Unsupervised temporal segmentation using models that discriminate between demonstrations and unintentional actions (2021)

    • Author(s)
      Takayuki Komatsu, Yoshiyuki Ohmura, and Yasuo Kuniyoshi
    • Organizer
      2021 IEEE/RSJ International Conference on Intelligent Robots and Systems
    • Related Report
      2020 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Unsupervised Learning of shape-invariant Lie group transformer by embedding ordinary differential equation (2021)

    • Author(s)
      Takumi Takada, Yoshiyuki Ohmura, and Yasuo Kuniyoshi
    • Organizer
      2021 IEEE International Conference on Development and Learning
    • Related Report
      2020 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Learning to grasp multiple objects with a robot hand using tactile information (2021)

    • Author(s)
      Kento Sekiya, Yoshiyuki Ohmura, Yasuo Kuniyoshi
    • Organizer
      The 26th Robotics Symposia
    • Related Report
      2020 Annual Research Report
  • [Presentation] Generating an image of an object's appearance from somatosensory information during haptic exploration (2019)

    • Author(s)
      K. Sekiya, Y. Ohmura, Y. Kuniyoshi
    • Organizer
      2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
    • Related Report
      2019 Annual Research Report
    • Int'l Joint Research
  • [Presentation] Generating an object's appearance from somatosensory information during haptic exploration (2019)

    • Author(s)
      関谷研人、大村吉幸、國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2019 (Robomech2019)
    • Related Report
      2019 Annual Research Report
  • [Presentation] LSTM-based prediction of deformable-object deformation during object manipulation (2019)

    • Author(s)
      伊藤龍一郎、金井嵩幸, 大村吉幸, 新山龍馬, 國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2019 (Robomech2019)
    • Related Report
      2019 Annual Research Report
  • [Presentation] LSTM-based prediction of deformable-object deformation during object manipulation (2019)

    • Author(s)
      伊藤龍一郎,金井嵩幸,大村吉幸,新山龍馬,國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2019 (Robomech2019)
    • Related Report
      2018 Annual Research Report
  • [Presentation] Generating an object's appearance from somatosensory information during haptic exploration (2019)

    • Author(s)
      関谷研人,大村吉幸,國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2019 (Robomech2019)
    • Related Report
      2018 Annual Research Report
  • [Presentation] Hierarchization of reward-free skill acquisition in reinforcement learning (2019)

    • Author(s)
      皿海孝典,狩野泉実,國吉康夫
    • Organizer
      The 33rd Annual Conference of the Japanese Society for Artificial Intelligence (JSAI2019)
    • Related Report
      2018 Annual Research Report
  • [Presentation] Spontaneous acquisition of object properties using a multi-degree-of-freedom robot arm (2019)

    • Author(s)
      小松 高歩, 金 東敏, 鈴木 裕真, 國吉 康夫
    • Organizer
      The 8th Annual Meeting of the Japanese Society for Developmental Neuroscience
    • Related Report
      2018 Annual Research Report
  • [Presentation] Adjusting exploration and exploitation in deep reinforcement learning via parameter noise that accounts for reward contribution (2018)

    • Author(s)
      狩野泉実,田中一敏,新山龍馬,國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2018 (Robomech2018)
    • Related Report
      2018 Annual Research Report
  • [Presentation] Understanding generative models through the free-energy principle and adaptive behavior based on environment recognition (2018)

    • Author(s)
      荻島諒也, 米倉将吾, 國吉康夫
    • Organizer
      Robotics and Mechatronics Conference 2018 (Robomech2018)
    • Related Report
      2018 Annual Research Report
  • [Remarks] Kuniyoshi and Niiyama Laboratory website

    • URL

      http://www.isi.imi.i.u-tokyo.ac.jp/

    • Related Report
      2020 Annual Research Report
  • [Remarks] Kuniyoshi and Niiyama Laboratory website

    • URL

      http://www.isi.imi.i.u-tokyo.ac.jp/?lang=ja

    • Related Report
      2019 Annual Research Report 2018 Annual Research Report

Published: 2018-04-23   Modified: 2023-01-30  
