Object Segmentation and Successive Modeling Based on Human Action Observation
Project/Area Number | 23500242
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Research Field | Perception information processing / Intelligent robotics
Research Institution | Osaka University
Principal Investigator | MAE Yasushi, Osaka University, Graduate School of Engineering Science, Associate Professor (50304027)
Project Period (FY) | 2011 – 2013
Project Status | Completed (Fiscal Year 2013)
Budget Amount | ¥5,330,000 (Direct Cost: ¥4,100,000, Indirect Cost: ¥1,230,000)
Fiscal Year 2013: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2012: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Fiscal Year 2011: ¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000)
Keywords | Action-environment recognition / Object modeling / Action monitoring / Intelligent robotics
Research Abstract |
I developed a method that automatically acquires appearance models of objects moved by humans in everyday environments. It addresses the problem of automatically segmenting everyday objects visually in living environments without prior knowledge of the objects. Experimental results show that the method can automatically acquire object images as appearance models by observing human actions that move objects by hand in everyday life.
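The core idea of the abstract (a moved object segments itself, because the pixels it occupies change when a human hand displaces it) can be illustrated with simple frame differencing. This is a minimal sketch of that general principle, not the project's actual pipeline; the function names, threshold, and synthetic images below are assumptions for illustration.

```python
import numpy as np

def segment_moved_object(frame_before, frame_after, threshold=30):
    """Return a binary mask of pixels that changed between two frames.

    A crude stand-in for motion-based segmentation: when a hand moves
    an object, the pixels it occupied before and after the action
    differ, so thresholded differencing localizes the object region.
    """
    diff = np.abs(frame_after.astype(np.int16) - frame_before.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Axis-aligned bounding box (top, left, bottom, right) of a mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return top, left, bottom + 1, right + 1

# Synthetic example: a bright 10x10 "object" is moved between corners.
before = np.zeros((64, 64), dtype=np.uint8)
after = np.zeros((64, 64), dtype=np.uint8)
before[5:15, 5:15] = 200   # object's old position
after[40:50, 40:50] = 200  # object's new position

mask = segment_moved_object(before, after)
print(bounding_box(mask))  # box spanning both positions: (5, 5, 50, 50)
appearance = after[40:50, 40:50]  # cropped region, usable as an appearance model
```

In a real system one would track the hand, difference frames only around grasp/release events, and clean the mask with morphology before cropping the appearance model.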
Report (4 results)
Research Products (24 results)