Project/Area Number | 14102018 |
Research Category | Grant-in-Aid for Scientific Research (S) |
Allocation Type | Single-year Grants |
Research Field | Intelligent mechanics/Mechanical systems |
Research Institution | The University of Tokyo |
Principal Investigator | ISHIKAWA Masatoshi, The University of Tokyo, Graduate School of Information Science and Technology, Professor (40212857) |
Co-Investigator (Kenkyū-buntansha) |
NAMIKI Akio, The University of Tokyo, Graduate School of Information Science and Technology, Lecturer (40376611)
KOMURO Takashi, The University of Tokyo, Graduate School of Information Science and Technology, Lecturer (10345118)
OKU Hiromasa, The University of Tokyo, Graduate School of Information Science and Technology, Research Associate (40401244)
KAGAMI Shingo, The University of Tokyo, Graduate School of Information Science and Technology, Research Associate (90361542)
ISHII Idaku, Hiroshima University, Graduate School of Engineering, Associate Professor (40282686)
HASHIMOTO Koichi, The University of Tokyo, Graduate School of Information Science and Technology, Associate Professor (80228410)
|
Project Period (FY) | 2002 – 2006 |
Project Status | Completed (Fiscal Year 2006) |
Budget Amount |
Total: ¥103,870,000 (Direct Cost: ¥79,900,000, Indirect Cost: ¥23,970,000)
Fiscal Year 2006: ¥14,300,000 (Direct Cost: ¥11,000,000, Indirect Cost: ¥3,300,000)
Fiscal Year 2005: ¥33,410,000 (Direct Cost: ¥25,700,000, Indirect Cost: ¥7,710,000)
Fiscal Year 2004: ¥32,630,000 (Direct Cost: ¥25,100,000, Indirect Cost: ¥7,530,000)
Fiscal Year 2003: ¥10,530,000 (Direct Cost: ¥8,100,000, Indirect Cost: ¥2,430,000)
Fiscal Year 2002: ¥13,000,000 (Direct Cost: ¥10,000,000, Indirect Cost: ¥3,000,000)
|
Keywords | Intelligent robot / Mechanical engineering, control / Network / Smart sensor information system / High-speed vision / Parallel distributed processing system / Multi-camera vision / Network vision / Vision chip / Sensor feedback / Sensor fusion / Hierarchical parallel structure / Distributed network / Reconfigurable / Real-time control |
Research Abstract |
The purpose of this study is to develop a recognition-behavior integration system that can act flexibly and at high speed in real environments. Using basic technologies such as sensory-motor integration based on a hierarchical parallel architecture, ultra-high-speed recognition and motion, and a distributed network processing architecture, we have developed a high-speed real-time processing system in which a large number of sensors and actuators are connected over a distributed network. The main results are as follows.
1. A high-speed vision network system with a TCP/IP communication function. In order to track many targets that move at high speed over a wide area without losing them to occlusion, we developed a distributed cooperative network of multiple vision systems.
2. A distributed sensor-network information processing algorithm. We proposed a sensor fusion method called DTKF (Delay-Tolerant Kalman Filter). DTKF is based on the Kalman filter and scales well as the number of sensors increases. The method was verified in a numerical simulation of a target tracking task with high-speed vision sensors (an illustrative sketch of the delay-tolerant filtering idea follows this abstract).
3. A multi-high-speed visual feedback system. We developed a visual feedback system built from multiple high-speed vision systems. By using the several vision systems to fill in information missing from any single view, 3D tracking of multiple targets that is robust against occlusion was achieved, together with an improved measurement range, improved accuracy, and occlusion avoidance (a triangulation sketch follows this abstract).
4. Sensory-motor fusion based on a hierarchical parallel processing architecture. In order to connect a large number of sensor systems and robot systems, we developed a parallel distributed processing system that integrates real-time processing systems, robot systems, and sensor systems. In this system, three real-time processing systems made by dSPACE Inc. are connected to each other over a CAN bus, and two robot arms, three robot hands, three high-speed vision systems (six camera heads), and other sensors (tactile sensors, force sensors, and so on) are connected through high-speed I/O ports (a CAN message sketch follows this abstract).
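Sketch for item 2: the published details of DTKF are not reproduced in this record, so the Python sketch below only illustrates the general idea of delay-tolerant Kalman filtering, namely buffering time-stamped measurements from distributed cameras and re-filtering them in time order so that a sample arriving late can still be fused. The constant-velocity model, noise levels, and class name are illustrative assumptions, not the project's implementation.

# Hedged sketch of delay-tolerant Kalman filtering (not the published DTKF):
# late, time-stamped measurements are buffered and the window is re-filtered
# in time order, so a delayed sample is still fused into the estimate.
import numpy as np


class DelayTolerantKF:
    """Constant-velocity Kalman filter that tolerates late measurements."""

    def __init__(self, q=1e-3, r=1e-2):
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])   # observe image-plane (x, y)
        self.Q = q * np.eye(4)                  # process noise
        self.R = r * np.eye(2)                  # measurement noise
        self.buffer = []                        # (timestamp, measurement)

    def _step(self, x, P, z, dt):
        # Predict over the actual time gap dt, then update with position z.
        F = np.array([[1., 0., dt, 0.],
                      [0., 1., 0., dt],
                      [0., 0., 1., 0.],
                      [0., 0., 0., 1.]])
        x = F @ x
        P = F @ P @ F.T + self.Q
        S = self.H @ P @ self.H.T + self.R
        K = P @ self.H.T @ np.linalg.inv(S)
        x = x + K @ (z - self.H @ x)
        P = (np.eye(4) - K @ self.H) @ P
        return x, P

    def add_measurement(self, t, z):
        # Insert the (possibly delayed) sample in time order and re-filter
        # the buffered window.  A production filter would keep state
        # snapshots instead of replaying from the start of the window.
        self.buffer.append((t, np.asarray(z, dtype=float)))
        self.buffer.sort(key=lambda m: m[0])
        x, P, t_prev = np.zeros(4), np.eye(4), self.buffer[0][0]
        for tm, zm in self.buffer:
            x, P = self._step(x, P, zm, max(tm - t_prev, 1e-6))
            t_prev = tm
        return x  # fused state estimate [x, y, vx, vy]


kf = DelayTolerantKF()
kf.add_measurement(0.002, [0.10, 0.20])          # on-time sample, camera A
kf.add_measurement(0.001, [0.09, 0.19])          # late sample, camera B
print(kf.add_measurement(0.003, [0.11, 0.21]))   # estimate uses all three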
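Sketch for item 3: the abstract describes filling in information missing from any single view so that 3D tracking survives occlusion. The sketch below shows one standard way to recover a 3D position from whichever cameras currently see the target, linear (DLT) triangulation that simply skips occluded views. The camera matrices and pixel values are invented for illustration; the project's actual calibration and fusion pipeline is not described in this record.

# Hedged sketch: DLT triangulation from however many cameras see the target.
import numpy as np


def triangulate(projections, pixels):
    """Triangulate one 3-D point.  `projections` is a list of 3x4 camera
    matrices, `pixels` the matching list of (u, v) observations or None
    when that camera is occluded.  At least two visible views are needed."""
    rows = []
    for P, uv in zip(projections, pixels):
        if uv is None:              # occluded in this view: just skip it
            continue
        u, v = uv
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    if len(rows) < 4:
        raise ValueError("need at least two unoccluded views")
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]             # homogeneous -> Euclidean


# Two hypothetical calibrated cameras: one at the origin, one offset along x.
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.], [0.]])])

X_true = np.array([0.1, 0.05, 1.0, 1.0])
uv0 = (P0 @ X_true)[:2] / (P0 @ X_true)[2]
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
print(triangulate([P0, P1], [uv0, uv1]))   # recovers ~[0.1, 0.05, 1.0]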
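Sketch for item 4: the abstract states that the three dSPACE real-time systems are linked over a CAN bus. As a rough illustration of exchanging one sensor reading over CAN, the sketch below uses the python-can package and its in-process virtual interface as a stand-in for the dSPACE toolchain; the message ID and payload layout are invented for the example.

# Hedged sketch of passing a force-sensor reading over a CAN bus, with
# python-can's virtual interface standing in for the real dSPACE/CAN setup.
import struct

import can

FORCE_SENSOR_ID = 0x100   # hypothetical arbitration ID for a force sensor

# Two endpoints on the same virtual channel: a sensor node and a controller.
sensor_bus = can.interface.Bus(channel="cell0", bustype="virtual")
control_bus = can.interface.Bus(channel="cell0", bustype="virtual")

# Sensor side: pack two float32 force components into one 8-byte CAN frame.
payload = struct.pack("<ff", 1.25, -0.50)
sensor_bus.send(can.Message(arbitration_id=FORCE_SENSOR_ID,
                            data=payload, is_extended_id=False))

# Controller side: receive the frame and unpack it for the feedback loop.
msg = control_bus.recv(timeout=1.0)
if msg is not None and msg.arbitration_id == FORCE_SENSOR_ID:
    fx, fy = struct.unpack("<ff", msg.data)
    print(f"force feedback: fx={fx:.2f} N, fy={fy:.2f} N")

sensor_bus.shutdown()
control_bus.shutdown()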
|