Project/Area Number | 13224051
Research Category | Grant-in-Aid for Scientific Research on Priority Areas
Allocation Type | Single-year Grants
Review Section | Science and Engineering
Research Institution | Kyoto University
Principal Investigator | MATSUYAMA Takashi, Kyoto University, Graduate School of Informatics, Professor (10109035)
Co-Investigators (Kenkyū-buntansha) |
SUGIMOTO Akihiko, National Institute of Informatics, Intelligent Systems Research Division, Associate Professor (30314256)
SATO Yoichi, The University of Tokyo, Institute of Industrial Science, Associate Professor (70302627)
MAKI Atsuto, Kyoto University, Graduate School of Informatics, Associate Professor (60362414)
KAWASHIMA Hiroaki, Kyoto University, Graduate School of Informatics, Assistant Professor (40346101)
SUMI Kazuhiko, Kyoto University, Graduate School of Informatics, COE Researcher (90372573)
HABE Hitoshi, Kyoto University, Graduate School of Engineering, Assistant Professor (80346072)
Project Period (FY) | 2001 – 2005
Project Status | Completed (Fiscal Year 2005)
Budget Amount | ¥140,100,000 (Direct Cost: ¥140,100,000)
Fiscal Year 2005: ¥28,000,000 (Direct Cost: ¥28,000,000)
Fiscal Year 2004: ¥36,000,000 (Direct Cost: ¥36,000,000)
Fiscal Year 2003: ¥41,100,000 (Direct Cost: ¥41,100,000)
Fiscal Year 2002: ¥35,000,000 (Direct Cost: ¥35,000,000)
Keywords | Man-machine symbiotic system / Human interface / Hybrid dynamical system / Interaction model / Understanding of human intentions and activities / Analysis of utterance pauses and overlaps in dialogs / Environment-embedded camera network / Wearable active vision sensor / Dynamic interaction / Modeling of timing structures / Interest estimation / Inducement of actions / Proactive interaction / Event-driven control / Dialogue analysis of manzai comedy / Utterance timing / Facial expression analysis / Detection of gaze and gaze targets / Digitization of handheld objects / Wearable vision sensor / Environment-embedded sensor / Motion analysis / Dynamic event recognition / Ubiquitous display / Gaze target / Pointing gesture / Interactive information presentation / Robot interaction / Gaze information / Gaze point control / Motion trajectory estimation / Multi-camera network / Non-contact real-time measurement / Multiple-fingertip trajectory measurement / Man-machine interface
Research Abstract |
In the 21st century, our personal and social activities are conducted in two different domains: the physical real world and the cyber network society. To integrate these domains smoothly and casually, we need a novel interaction model that goes a step beyond the existing "command-and-response" model. In this research, we proposed the concept of "man-machine symbiotic systems" as the next-generation interaction model. Man-machine symbiotic systems are characterized by the fact that they work for humans even when not explicitly instructed. Realizing such a symbiotic system requires real-time bilateral multimodal information processing mechanisms: observing human behaviors to understand their activities and intentions, and supporting humans voluntarily by presenting appropriate information at the appropriate timing. To realize man-machine symbiotic systems, we conducted the following fundamental studies and obtained a number of important results, including theoretical models and real-world interaction systems:
1. Observing human activities (recognition system): We developed a camera-network system and wearable active vision sensor systems for understanding human intentions and activities based on real-time 3D motion measurement.
2. Attracting humans (generation system): We developed an intelligent desk system that presents versatile information on a desktop plate according to two-handed manipulations of objects projected onto the plate.
3. Interacting with humans (interaction model): We proposed a dynamic event representation model that we refer to as a "hybrid dynamical system" (a minimal illustrative sketch appears below). We verified the effectiveness of the model by applying it to classify fine-grained facial expressions and to analyze the synchronization mechanism of mouth motion and speech utterance. We also derived a basic principle for realizing smooth interaction through an analysis of Japanese stand-up comedy (manzai).
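For readers unfamiliar with the term, the following is a minimal sketch of a hybrid dynamical system in the general sense used above: a discrete-state automaton whose states each govern a linear dynamical system, so that a continuous signal (e.g., facial motion) is modeled as switching among a few dynamic regimes. The two modes, the matrices, and the greedy residual-based segmentation below are illustrative assumptions, not the project's actual model or parameters.

```python
# Minimal sketch of a hybrid dynamical system: discrete modes, each
# driving a linear dynamical system (LDS). All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two assumed modes: exponential decay vs. damped rotation.
A = [np.array([[0.99, 0.0], [0.0, 0.95]]),   # mode 0: decay
     np.array([[0.9, -0.3], [0.3, 0.9]])]    # mode 1: rotation
P = np.array([[0.97, 0.03],                  # mode transition
              [0.05, 0.95]])                 # probabilities

def generate(T=300, noise=0.01):
    """Sample a continuous trajectory and its hidden mode sequence."""
    x, mode = np.array([1.0, 0.0]), 0
    xs, modes = [], []
    for _ in range(T):
        mode = rng.choice(2, p=P[mode])              # switch discrete mode
        x = A[mode] @ x + noise * rng.standard_normal(2)  # evolve LDS
        xs.append(x)
        modes.append(mode)
    return np.array(xs), np.array(modes)

def segment(xs):
    """Assign each step to the mode whose LDS best predicts it
    (a greedy stand-in for full EM/Viterbi-style inference)."""
    est = []
    for t in range(1, len(xs)):
        residuals = [np.linalg.norm(xs[t] - Ai @ xs[t - 1]) for Ai in A]
        est.append(int(np.argmin(residuals)))
    return np.array(est)

xs, modes = generate()
est = segment(xs)
print("per-step mode agreement:", np.mean(est == modes[1:]))
```

Running the script prints the fraction of time steps at which the residual-based assignment recovers the hidden mode, illustrating how a single observed signal can be decomposed into a sequence of dynamic regimes with their switching timing.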