Automated video content acquisition and its use with the help of a virtual assistant
Project/Area Number | 16300023 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Media informatics/Database |
Research Institution | Kyoto University |
Principal Investigator | NAKAMURA Yuichi, Kyoto University, Academic Center for Computing and Media Studies, Professor (40227947) |
Co-Investigator (Kenkyū-buntansha) |
KUROHASHI Sadao, Kyoto University, Graduate School of Informatics, Professor (50263108)
OZEKI Motoyuki, Kyoto University, Academic Center for Computing and Media Studies, Research Associate (10402744)
KUZUOKA Hideaki, University of Tsukuba, Graduate School of Systems and Information Engineering, Professor (10241796)
UTSURO Takehiro, Kyoto University, Graduate School of Informatics, Lecturer (90263433) |
Project Period (FY) | 2004 – 2006 |
Project Status | Completed (Fiscal Year 2006) |
Budget Amount |
¥11,600,000 (Direct Cost: ¥11,600,000)
Fiscal Year 2006: ¥2,900,000 (Direct Cost: ¥2,900,000)
Fiscal Year 2005: ¥4,400,000 (Direct Cost: ¥4,400,000)
Fiscal Year 2004: ¥4,300,000 (Direct Cost: ¥4,300,000) |
Keywords | multimedia / conversational media / video processing / automated video capturing / automated video editing / natural language processing / video-based media / communication |
Research Abstract |
Recognition of users and environments: To provide support appropriate to the progress of a task, we need automated recognition of what a user is doing or is about to do. In this research, we proposed a novel method that uses two or more pairs of image sensors. With this method, an object held in the hand is reliably detected, and its 3D extent, that is, its volume and location, is obtained in real time using shape-from-silhouette. Observing this volume allows changes in the object's state to be estimated, which serves as a good index of the progress of the work.
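The abstract does not give implementation details, so the following is only a minimal sketch of shape-from-silhouette volume estimation by voxel carving, assuming calibrated cameras; the names (carve_volume, project_fns, silhouettes, and so on) are hypothetical and not taken from the project.

```python
import numpy as np

def carve_volume(silhouettes, project_fns, bounds, voxel_size=0.01):
    """Shape-from-silhouette (voxel carving) sketch.

    silhouettes : list of binary masks (H x W), one per calibrated camera
    project_fns : list of functions mapping a 3D world point (x, y, z)
                  to pixel coordinates (u, v) in the corresponding camera
    bounds      : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) working volume
    Returns an estimate of the object's volume and the centroid of the
    occupied voxels (its location).
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds
    xs = np.arange(x0, x1, voxel_size)
    ys = np.arange(y0, y1, voxel_size)
    zs = np.arange(z0, z1, voxel_size)
    # Every voxel starts as "occupied" and is carved away whenever it
    # projects outside the silhouette in any view.
    grid = np.ones((len(xs), len(ys), len(zs)), dtype=bool)

    for mask, project in zip(silhouettes, project_fns):
        h, w = mask.shape
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                for k, z in enumerate(zs):
                    if not grid[i, j, k]:
                        continue
                    u, v = project((x, y, z))
                    inside = (0 <= v < h and 0 <= u < w
                              and mask[int(v), int(u)])
                    if not inside:
                        grid[i, j, k] = False  # carve this voxel away

    occupied = np.argwhere(grid)
    volume = occupied.shape[0] * voxel_size ** 3
    centroid = None
    if occupied.size:
        centroid = (np.array([x0, y0, z0])
                    + (occupied.mean(axis=0) + 0.5) * voxel_size)
    return volume, centroid
```

Tracking the returned volume and centroid over successive frames would yield the kind of progress index the abstract refers to.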
Teaching environment using acquired videos: We developed a novel multimedia system for instructing or guiding work. The system observes the user through image and speech recognition and provides related information or appropriate advice drawn from pre-recorded video archives. The distinctive feature of our media is that the system quietly observes the user and interrupts only when help is really needed, for example, when the user is at a standstill or asks a question. Otherwise, the system merely presents related information that may be useful and requires no response from the user.
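The interruption behavior is described only at the policy level, so the sketch below simply encodes that decision rule; the observation fields (is_standstill, asked_question, related_clips) are hypothetical stand-ins for the system's image and speech recognizers.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Sequence


class Action(Enum):
    STAY_QUIET = auto()          # keep observing, show nothing new
    PRESENT_PASSIVELY = auto()   # show related clips, no response required
    INTERRUPT_WITH_HELP = auto() # actively offer help


@dataclass
class Observation:
    # Hypothetical fields; the project's actual recognizers are not specified here.
    is_standstill: bool           # no task progress detected for a while
    asked_question: bool          # speech recognition detected a question
    related_clips: Sequence[str]  # clips retrieved from the video archive


def decide(obs: Observation) -> Action:
    """Interrupt only when the user really needs help; otherwise stay unobtrusive."""
    if obs.asked_question or obs.is_standstill:
        return Action.INTERRUPT_WITH_HELP
    if obs.related_clips:
        return Action.PRESENT_PASSIVELY
    return Action.STAY_QUIET
```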
Agent behaviors for a virtual assistant: 1. We examined the relationship between the nonverbal behaviors of artificial agents and human meta-representation, a kind of mental representation. As the evaluation environment, we developed a behavior detection system and a CG agent that can perform some basic behaviors. We found a regular correspondence between the agent's nonverbal behaviors and human meta-representation, as well as a tendency for subjects to be favorably impressed by the agent when their meta-representation was positive. 2. We proposed a novel framework for supporting work and daily life by realizing an artificial agent that mediates between an automated support system and the human using it. With natural cooperation from the human, the support system can achieve sufficient accuracy. We built a prototype with a dog-style robot (AIBO) as the agent, implemented some of the functions mentioned above, and conducted an experiment with it; recognition performance improved, and users' impressions were mostly better than without the agent.
|
Report (4 results)
Research Products (26 results)