Interactive Teaching of Task-Oriented Visual Recognition
Project/Area Number | 16500111
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Single-year Grants
Section | General
Research Field | Perception information processing / Intelligent robotics
Research Institution | Osaka University |
Principal Investigator | MIURA Jun, Osaka University, Graduate School of Engineering, Associate Professor (90219585)
Co-Investigators (Kenkyū-buntansha) |
SHIRAI Yoshiaki, Ritsumeikan University, College of Information Science and Engineering, Professor (50206273)
SAKIYAMA Takuro, Niihama National College of Technology, Department of Electrical Engineering and Information Science, Assistant Professor (70335371)
SHIMADA Nobutaka, Ritsumeikan University, College of Information Science and Engineering, Associate Professor (10294034)
Project Period (FY) | 2004 – 2005
Project Status | Completed (Fiscal Year 2005)
Budget Amount | ¥3,700,000 (Direct Cost: ¥3,700,000)
Fiscal Year 2005: ¥1,200,000 (Direct Cost: ¥1,200,000)
Fiscal Year 2004: ¥2,500,000 (Direct Cost: ¥2,500,000)
Keywords | robot teaching / interactive teaching / visual recognition / personal service robot / mobile manipulator / interactive vision / elevator boarding and exiting
Research Abstract |
We have been investigating interactive teaching of personal service robots that assist us in ordinary home and office environments. The results of this research are as follows:

1. Representation of task models. A task model describes the steps of operations, the information necessary for each operation, and a teaching method for each piece of information. Before executing an operation, the robot examines the task model and checks whether it already has the necessary information; if not, it asks the human operator to teach it. In this way, the robot leads the interaction with the human (a schematic sketch of this check follows the abstract).

2. Representation of objects. We examined what kinds of information should be taught to the robot for common objects in homes and offices, and classified it into an invariable part (shape, view, and mechanism) and a variable part (pose and state of the mechanism); a sketch of this decomposition is also given below. We developed concrete object models and GUI-based teaching methods for several objects such as refrigerators.

3. Teaching of object information. We developed a method for easily teaching informative positions in the scene that assist the robot's object recognition; the positions are indicated to the robot with an LED-based device. From this position information, together with the task model and object models, the robot determines which positions to focus on and which image features to use for recognition.

4. Interactive vision. Object recognition sometimes fails, for example because of changes in illumination. We developed a method in which the robot asks for the information needed to complete recognition, based on an analysis of the current recognition result and an estimation of the possible situations; the robot requests the information that best discriminates between those situations (see the disambiguation sketch below).

5. Development of the experimental system. We developed a personal service robot equipped with a 6-DOF manipulator, a hand, three cameras, and a laser range finder.

6. Experiments. We performed experiments on two tasks: fetching a can from a refrigerator and taking an elevator. In these experiments, we successfully taught the robot several kinds of information used for visual recognition, for example the shape and size of refrigerators, the positions of the handles on their doors, the doors and buttons of an elevator, and the route to the elevator hall.
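The following is a minimal sketch of the robot-led interaction described in item 1, assuming a simple dictionary-based knowledge store; the names TaskStep and execute_task are hypothetical illustrations, not this project's actual implementation. (All sketches below are in Python.)

```python
from dataclasses import dataclass

@dataclass
class TaskStep:
    operation: str        # e.g. "open the refrigerator door"
    required_info: list   # names of the information items the step needs
    teaching_method: dict # info item -> how the operator teaches it

def execute_task(task, knowledge):
    """Robot-led interaction: ask the operator only for missing items."""
    for step in task:
        for item in step.required_info:
            if item not in knowledge:
                method = step.teaching_method[item]
                # In the real system the robot would prompt the operator here
                # (e.g. via GUI or the LED-based pointing device).
                knowledge[item] = input(f"Please teach '{item}' ({method}): ")
        print(f"Executing: {step.operation}")

task = [
    TaskStep("approach the refrigerator", ["refrigerator pose"],
             {"refrigerator pose": "LED pointing"}),
    TaskStep("open the door", ["handle position"],
             {"handle position": "GUI click on a camera image"}),
]
execute_task(task, knowledge={})
```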
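A minimal sketch of the invariable/variable split of item 2; the field names and the refrigerator dimensions are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class InvariablePart:            # taught once, does not change
    shape: dict                  # e.g. sizes in metres
    views: list                  # reference appearance features / images
    mechanism: str               # e.g. "hinged door with a handle"

@dataclass
class VariablePart:              # updated while the robot operates
    pose: Optional[Tuple[float, float, float, float]] = None  # x, y, z, theta
    mechanism_state: str = "unknown"                          # e.g. "door closed"

@dataclass
class ObjectModel:
    name: str
    invariable: InvariablePart
    variable: VariablePart = field(default_factory=VariablePart)

refrigerator = ObjectModel(
    name="refrigerator",
    invariable=InvariablePart(
        shape={"width": 0.6, "height": 1.7, "depth": 0.65},
        views=[],
        mechanism="hinged door with a handle",
    ),
)
```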
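A minimal sketch of the disambiguation idea in item 4, using a simple even-split heuristic to pick the most discriminative question; the situations, questions, confidence values, and the heuristic itself are invented for illustration and are not taken from this project.

```python
def choose_question(hypotheses, questions):
    """Pick the question whose yes/no answer splits the confidence most evenly.

    hypotheses: situation label -> current confidence
    questions:  question text   -> set of situations for which the answer is yes
    """
    total = sum(hypotheses.values())
    best_q, best_imbalance = None, float("inf")
    for q, yes_set in questions.items():
        yes_mass = sum(p for h, p in hypotheses.items() if h in yes_set)
        imbalance = abs(2 * yes_mass - total)   # 0 means a perfect 50/50 split
        if imbalance < best_imbalance:
            best_q, best_imbalance = q, imbalance
    return best_q

hyps = {"door is open": 0.40, "illumination changed": 0.35, "wrong object": 0.25}
qs = {"Is the refrigerator door open?": {"door is open"},
      "Has the room lighting changed?": {"illumination changed"}}
print(choose_question(hyps, qs))   # -> "Is the refrigerator door open?"
```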