Project/Area Number | 07680409 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Oita University |
Principal Investigator | ENDO Tsutomu, Oita University, Faculty of Engineering, Professor (10112294) |
Co-Investigator (Kenkyū-buntansha) |
KAGAWA Tsuneo, Oita University, Faculty of Engineering, Research Associate (90253773)
OHKI Hidehiro, Oita University, Faculty of Engineering, Research Associate (80194091)
|
Project Period (FY) | 1995 – 1997 |
Project Status | Completed (Fiscal Year 1997) |
Budget Amount |
¥2,200,000 (Direct Cost: ¥2,200,000)
Fiscal Year 1997: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 1996: ¥600,000 (Direct Cost: ¥600,000)
Fiscal Year 1995: ¥1,000,000 (Direct Cost: ¥1,000,000)
|
Keywords | Knowledge acquisition / Multimedia / Multimodal interface / Natural language understanding / Natural language generation / Gesture recognition / Information integration / Cooperative understanding |
Research Abstract |
This research aims to implement a multimedia communication system for problem solving and knowledge acquisition, focusing on first-grade mathematics.

1. Dialogue-based problem solving system. We developed a problem solving system based on co-reference between first-grade mathematics drill texts and dialogue with a teacher. It was designed with emphasis on when in the problem solving process the system should utter, what sentence it should utter, and how it should interpret the response using contextual information. The constituents of the context are the surface and case structure of utterances, the intention and attention of the speaker, the situation of the dialogue, and world knowledge.

2. Cooperative understanding of utterances and gestures. The teacher generally utters a sentence while making a gesture such as pointing to objects in the text or drawing pictures. We proposed a method of cooperative understanding of utterances and gestures in dialogue. The point of interest in the gesture is the movement of the teacher's fingertip or pen, and the location of the pen tip is detected from moving images. We then presented the relationship between the features extracted from the moving points and the linguistic information in the utterance, as well as a procedure for interpreting the gesture.

3. Knowledge acquisition, speech processing, and face recognition. First, on the assumption that knowledge acquired from dialogue is heuristics for problem solving, we presented a method of knowledge generalization and reuse. We then confirmed through a speech recognition experiment that a speech interface can be used in the dialogue-based problem solving process. Finally, because facial expression is important information in multimodal dialogue, we proposed a method of detecting face elements such as the eyes, nose, and mouth, using local autocorrelation features, discriminant analysis, and a genetic algorithm.
|
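Item 1 of the abstract names the constituents of the dialogue context (surface and case structure, the speaker's intention and attention, the dialogue situation, world knowledge) without giving a concrete data structure. The following is a minimal, hypothetical sketch of how such a context might be carried along and consulted to resolve a deictic pupil response such as "this one"; the class and field names are illustrative assumptions, not taken from the reported system.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueContext:
    """Hypothetical container for the context constituents named in the abstract."""
    surface: str = ""                                   # surface form of the last utterance
    case_structure: dict = field(default_factory=dict)  # e.g. {"verb": "solve", "object": "problem"}
    intention: str = ""                                 # e.g. "ask-choice", "give-hint"
    attention: str = ""                                 # object currently pointed at or talked about
    situation: str = ""                                 # e.g. "waiting-for-answer"
    world_knowledge: dict = field(default_factory=dict) # arithmetic facts, drill knowledge

def interpret_response(response: str, ctx: DialogueContext) -> str:
    """Resolve a deictic or elliptical response against the context.

    A full system would also use the case structure and world knowledge;
    here only the attention slot is consulted, for illustration.
    """
    if response.strip().lower() in ("this one", "that one", "it"):
        return ctx.attention or "(unresolved reference)"
    return response

# Usage: the teacher has just pointed at problem 3 and asked which to solve next.
ctx = DialogueContext(
    surface="Which problem should we solve next?",
    case_structure={"verb": "solve", "object": "problem"},
    intention="ask-choice",
    attention="problem 3",
    situation="waiting-for-answer",
)
print(interpret_response("this one", ctx))   # -> "problem 3"
```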
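Item 2 describes detecting the tip of the teacher's pen or fingertip from moving images and relating features of the moving points to the linguistic information in the utterance. The sketch below is one plausible realization, assuming grayscale frames as NumPy arrays; the frame-differencing tip detector and the pointing/drawing split are editorial assumptions, not the method actually used in the project.

```python
import numpy as np

def detect_tip(prev_frame: np.ndarray, frame: np.ndarray, thresh: float = 25.0):
    """Return an (x, y) candidate for the moving pen or fingertip, or None.

    Hypothetical sketch: the moving region is found by absolute frame
    differencing, and the tip is taken as the lowest moving pixel, assuming
    the hand enters the drill sheet from below.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None
    i = int(np.argmax(ys))          # lowest moving pixel in the image
    return int(xs[i]), int(ys[i])

def classify_gesture(track: list) -> str:
    """Rough split into 'pointing' vs 'drawing' from the tip trajectory length."""
    if len(track) < 2:
        return "pointing"
    pts = np.asarray(track, dtype=float)
    path_len = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    return "drawing" if path_len > 50.0 else "pointing"

# Usage with synthetic frames: a bright dot is dragged across a dark page.
frames = [np.zeros((120, 160), dtype=np.uint8) for _ in range(5)]
for t, f in enumerate(frames):
    f[100 - 2 * t, 20 + 30 * t] = 255
track = []
for prev, cur in zip(frames, frames[1:]):
    tip = detect_tip(prev, cur)
    if tip is not None:
        track.append(tip)
print(track, classify_gesture(track))
```

In the reported method, the gesture class and the tracked points would then be matched against deictic expressions in the utterance (for example, a pointing gesture against "this problem"); the matching step is not sketched here.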
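Item 3 mentions detecting face elements with local autocorrelation features, discriminant analysis, and a genetic algorithm. The sketch below covers only the feature-extraction and linear-scoring steps, using a small subset of local autocorrelation masks (zeroth and first order); the weights are placeholders rather than trained parameters, and the genetic-algorithm search over candidate windows is only indicated in a comment.

```python
import numpy as np

# Displacements for first-order local autocorrelation within a 3x3 neighbourhood.
DISPLACEMENTS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def local_autocorrelation(window: np.ndarray) -> np.ndarray:
    """Zeroth- and first-order local autocorrelation features of a grey window.

    f_0 = sum_r I(r),  f_d = sum_r I(r) * I(r + d)  for each displacement d.
    Only a small subset of the usual mask set, for illustration.
    """
    img = window.astype(float) / 255.0
    h, w = img.shape
    feats = [float(img.sum())]
    for dy, dx in DISPLACEMENTS:
        a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]  # I(r)
        b = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]    # I(r + d)
        feats.append(float((a * b).sum()))
    return np.asarray(feats)

def discriminant_score(feats: np.ndarray, w: np.ndarray, b: float) -> float:
    """Linear discriminant: positive -> face element, negative -> background.

    w and b would come from discriminant analysis on labelled eye/nose/mouth
    windows; the values used below are placeholders, not trained parameters.
    """
    return float(w @ feats + b)

# Usage: score one candidate window. In the full method, a genetic algorithm
# would propose the window positions and scales to be scored.
rng = np.random.default_rng(0)
window = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
x = local_autocorrelation(window)
w = np.array([0.1, -0.3, 0.2, 0.4, -0.1])   # placeholder weights
print(x, discriminant_score(x, w, b=-0.5))
```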