
Model-free Robot Programming with Visual/Depth/Force Information

Research Project

Project/Area Number 15K05890
Research Category

Grant-in-Aid for Scientific Research (C)

Allocation Type Multi-year Fund
Section General
Research Field Intelligent mechanics/Mechanical systems
Research Institution Yokohama National University

Principal Investigator

Maeda Yusuke  Yokohama National University, Graduate School of Engineering, Associate Professor (50313036)

Research Collaborator NAKAGAWA Yoshinori  
FUJIURA Keiichi  
YONEOKA Yuya  
IMAI Kenta  
AIZAWA Kouki  
Project Period (FY) 2015-04-01 – 2018-03-31
Project Status Completed (Fiscal Year 2017)
Budget Amount *help
¥4,420,000 (Direct Cost: ¥3,400,000、Indirect Cost: ¥1,020,000)
Fiscal Year 2017: ¥1,040,000 (Direct Cost: ¥800,000、Indirect Cost: ¥240,000)
Fiscal Year 2016: ¥2,080,000 (Direct Cost: ¥1,600,000、Indirect Cost: ¥480,000)
Fiscal Year 2015: ¥1,300,000 (Direct Cost: ¥1,000,000、Indirect Cost: ¥300,000)
Keywords Intelligent Robotics / Robot / Robot Teaching / View-based Approach
Outline of Final Research Achievements

We studied "view-based teaching/playback," an appearance-based, model-free robot programming method. By developing techniques such as visualizing force information through photoelasticity, we were able to apply the method to a variety of tasks that use visual, depth, and force information, and we demonstrated successful applications including pushing and wall-tracking.
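The core idea of view-based teaching/playback can be illustrated with a toy sketch: during teaching, a human guides the robot while (image, motion) pairs are recorded; during playback, the robot looks up the stored view nearest to its current camera image and replays the associated motion. The class and function names below are hypothetical, and this nearest-neighbor lookup is only a minimal stand-in for the learned mappings used in the project (e.g., the autoencoder- and deep-learning-based variants listed under Research Products), not the authors' actual implementation.

```python
# Toy sketch of view-based teaching/playback (hypothetical names,
# not the project's actual system).

def flatten(img):
    """Flatten a 2-D grayscale image (list of rows) into one vector."""
    return [p for row in img for p in row]

def distance(a, b):
    """Sum of squared pixel differences between two image vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ViewBasedTeachingPlayback:
    def __init__(self):
        self.samples = []  # recorded (image_vector, motion_command) pairs

    def teach(self, image, motion):
        """Teaching phase: store an appearance/motion pair."""
        self.samples.append((flatten(image), motion))

    def playback(self, image):
        """Playback phase: replay the motion of the nearest stored view."""
        vec = flatten(image)
        _, motion = min(self.samples, key=lambda s: distance(s[0], vec))
        return motion

# Toy usage: 2x2 "images" stand in for camera frames.  In the project's
# force-control tasks, a photoelastic image that visualizes contact force
# could serve as an additional input channel alongside the camera view.
ctrl = ViewBasedTeachingPlayback()
ctrl.teach([[0, 0], [0, 0]], "move_forward")
ctrl.teach([[9, 9], [9, 9]], "push")
print(ctrl.playback([[8, 9], [9, 8]]))  # -> push
```

Because the mapping is from raw appearance to motion, no geometric model of the robot or the objects is needed, which is what makes the approach "model-free."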

Report

(4 results)
  • 2017 Annual Research Report / Final Research Report (PDF)
  • 2016 Research-status Report
  • 2015 Research-status Report
Research Products

(8 results)

Journal Article (1 result; of which Peer Reviewed: 1, Open Access: 1), Presentation (6 results; of which Int'l Joint Research: 2), Remarks (1 result)

  • [Journal Article] View-based Teaching/Playback Based on Force Visualization with Photoelasticity (2018)

    • Author(s)
Yoshinori Nakagawa, Soichi Ishii, Yusuke Maeda
    • Journal Title

Transactions of the Society of Instrument and Control Engineers

      Volume: 54

    • Related Report
      2017 Annual Research Report
    • Peer Reviewed / Open Access
  • [Presentation] View-based Teaching/Playback Using an Autoencoder (2018)

    • Author(s)
Keiichi Fujiura, Yusuke Maeda
    • Organizer
23rd Robotics Symposia
    • Related Report
      2017 Annual Research Report
  • [Presentation] Realization of Contour-following Tasks by View-based Teaching/Playback with Photoelasticity (2017)

    • Author(s)
Yoshinori Nakagawa, Yusuke Maeda
    • Organizer
22nd Robotics Symposia
    • Place of Presentation
Isobe Garden (Annaka, Gunma)
    • Year and Date
      2017-03-15
    • Related Report
      2016 Research-status Report
  • [Presentation] View-based Teaching/Playback Using Deep Learning (2017)

    • Author(s)
Keiichi Fujiura, Yusuke Maeda
    • Organizer
JSME Conference on Robotics and Mechatronics 2017
    • Related Report
      2017 Annual Research Report
  • [Presentation] Lighting- and Occlusion-robust View-based Teaching/Playback for Model-free Robot Programming (2016)

    • Author(s)
      Yusuke Maeda, Yoshito Saito
    • Organizer
      14th Int. Conf. on Intelligent Autonomous Systems (IAS-14)
    • Place of Presentation
Shanghai (China)
    • Year and Date
      2016-07-03
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Presentation] View-Based Teaching/Playback with Photoelasticity for Force-Control Tasks (2016)

    • Author(s)
      Yoshinori Nakagawa, Soichi Ishii, Yusuke Maeda
    • Organizer
      14th Int. Conf. on Intelligent Autonomous Systems (IAS-14)
    • Place of Presentation
Shanghai (China)
    • Year and Date
      2016-07-03
    • Related Report
      2016 Research-status Report
    • Int'l Joint Research
  • [Presentation] View-based Teaching/Playback Using GPGPU (2016)

    • Author(s)
Yuya Yoneoka, 長谷川 文美, Yusuke Maeda
    • Organizer
JSME Conference on Robotics and Mechatronics 2016 (ROBOMECH 2016)
    • Place of Presentation
Pacifico Yokohama (Yokohama, Kanagawa)
    • Year and Date
      2016-06-08
    • Related Report
      2016 Research-status Report
  • [Remarks] Maeda Laboratory Website

    • URL

      http://www.iir.me.ynu.ac.jp/index-j.html

    • Related Report
      2017 Annual Research Report 2015 Research-status Report

Published: 2015-04-16   Modified: 2019-03-29  
