Project/Area Number | 10558048 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | Development Research |
Research Field | Intelligent informatics |
Research Institution | The University of Tokyo |
Principal Investigator | ISHIZUKA Mitsuru, The Univ. of Tokyo, Graduate School of Engineering, Professor (50114369) |
Co-Investigator (Kenkyū-buntansha) | IBA Hitoshi, The Univ. of Tokyo, Graduate School of Frontier Sciences, Associate Professor (40302773); DOHI Hiroshi, The Univ. of Tokyo, Graduate School of Frontier Sciences, Assistant (90260504) |
Project Period (FY) | 1998 – 1999 |
Project Status | Completed (Fiscal Year 1999) |
Budget Amount | ¥8,100,000 (Direct Cost: ¥8,100,000) |
  Fiscal Year 1999: ¥2,000,000 (Direct Cost: ¥2,000,000)
  Fiscal Year 1998: ¥6,100,000 (Direct Cost: ¥6,100,000)
Keywords | Anthropomorphic Interface / Interface Agent / Multimodal Interface / Voice-driven WWW Browser / Multimodal Presentation / Anthropomorphic Agent / Markup Language / WWW |
Research Abstract |
New human interface styles that go beyond the current GUI (Graphical User Interface) are being actively pursued. Among them, we have focused our research on the multimodal anthropomorphic interface, an interface style that takes human face-to-face communication as its metaphor. In particular, we have emphasized practical usefulness within the emerging WWW information environment. Building on our previous work on VSA (Visual Software Agent), an interface agent system with a realistic moving face and voice communication capability, we connected the agent to a WWW browser (Netscape Navigator), thereby realizing a new multimodal interface style for the vast information space of the WWW. Going beyond the interface itself toward a new form of multimodal information content, we developed a prototype called VPA (Visual Page Agent), which is embedded in a Web page and interacts with users through realistic moving facial images and spoken dialogue.

We also became interested in multimodal presentation using CG characters as a new and attractive form of information content, and developed MPML (Multimodal Presentation Markup Language), which allows many people to write attractive multimodal presentations easily. Because MPML conforms to XML, it is well suited to distribution over the WWW. MPML and its related tools have reached a practical level and are available as public software.
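The abstract notes that MPML conforms to XML, so presentation scripts can be authored as ordinary documents and distributed over the WWW. As a rough illustration of that general idea only, the Python sketch below parses a small XML presentation script and prints the steps a CG-character player would carry out; the element names (presentation, scene, agent, speak, act) are invented for illustration and do not reproduce the actual MPML tag set or its tools.

```python
# Illustrative sketch only (assumption): the element names below are hypothetical
# and are NOT the actual MPML tags. The goal is simply to show how an
# XML-conformant presentation script can be parsed and stepped through.
import xml.etree.ElementTree as ET

SCRIPT = """\
<presentation>
  <scene background="intro.jpg">
    <agent id="guide">
      <speak>Welcome to this multimodal presentation.</speak>
      <act name="point" target="headline"/>
    </agent>
  </scene>
  <scene background="results.jpg">
    <agent id="guide">
      <speak>This page summarizes the project results.</speak>
    </agent>
  </scene>
</presentation>
"""

def play(script_xml: str) -> None:
    """Walk the presentation tree and print the steps a character player would perform."""
    root = ET.fromstring(script_xml)
    for number, scene in enumerate(root.findall("scene"), start=1):
        print(f"Scene {number}: show background {scene.get('background')}")
        for agent in scene.findall("agent"):
            for action in agent:
                if action.tag == "speak":
                    print(f"  agent '{agent.get('id')}' speaks: {action.text.strip()}")
                elif action.tag == "act":
                    print(f"  agent '{agent.get('id')}' performs '{action.get('name')}'"
                          f" toward '{action.get('target')}'")

if __name__ == "__main__":
    play(SCRIPT)
```

In an actual player, the printed steps would instead drive the animated character and a speech synthesizer, but the overall flow of interpreting scenes and agent actions from an XML script would be similar.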