Project/Area Number | 10480072 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Intelligent informatics |
Research Institution | Kyoto Institute of Technology |
Principal Investigator | KUROKAWA Takao, Kyoto Institute of Technology, Graduate School of Science and Technology, Professor (00029539) |
Co-Investigator (Kenkyū-buntansha) | OGATA Masahito, Kyoto Institute of Technology, Graduate School of Science and Technology, Research Associate (50273545) |
Project Period (FY) | 1998 – 2000 |
Project Status | Completed (Fiscal Year 2000) |
Budget Amount | ¥1,500,000 (Direct Cost: ¥1,500,000)
Fiscal Year 2000: ¥700,000 (Direct Cost: ¥700,000)
Fiscal Year 1999: ¥800,000 (Direct Cost: ¥800,000)
|
Keywords | Japanese Sign Language / Machine interpretation / Japanese / Facial expression / Human model / Image synthesis-by-rule / Sign animation / Human body model / X-ray inspection |
Research Abstract |
This research aimed at improving the communication environment for hearing-impaired people by means of a system that translates between Japanese Sign Language (JSL) and Japanese. To this end we developed elemental technologies and integrated many of them into one system that transforms Japanese sentences into JSL ones.
(1) A sign sentence database was built based on a corpus of video images taken from TV Sign News. We used it to find JSL grammatical rules concerning verbs, pointing, tense and so on. We also discovered semantic and syntactic rules of the non-manual expressions that accompany manual signs, including eye opening, nodding and blinking.
(2) We studied a method of recognizing manual signs using a neural network and finite automata, each automaton accepting one specific sign word, and achieved a recognition rate of 75 to 80% for JSL words (see the first sketch after this record).
(3) For the language processing in Japanese-to-JSL translation, we proposed a new framework using case frames, which represent the semantic structure of a Japanese or JSL sentence. In this framework the case frame of a Japanese sentence is transformed into that of the corresponding JSL sentence. The JSL expression as a symbolic sign list can easily be derived from the sign case frame, so its sign animation can be synthesized at once (see the second sketch below). We also made it possible to synthesize animations that respect several grammatical rules.
(4) The human model used for sign animations was improved so that its three-dimensional movements can be judged easily. Complicated hand trajectories in three-dimensional space were programmed, each represented by a simple code (see the third sketch below).
(5) The above technologies were integrated into one system, which was to be applied in medical settings such as instructing hearing-impaired patients during stomach X-ray inspection.
|
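
The word-recognition scheme in item (2) pairs a neural-network front end with one finite automaton per sign word. The following is a minimal, purely illustrative Python sketch of such a per-word automaton; the symbol alphabet, the gloss "HELLO" and the transitions are hypothetical and are not taken from the project.

```python
# Illustrative sketch: one finite automaton per JSL word, driven by frame-level
# symbols (e.g. quantized hand-shape / movement labels from a neural-network
# classifier). All symbols and glosses below are invented for the example.

class SignWordAutomaton:
    """Deterministic finite automaton that accepts one sign word."""

    def __init__(self, word, transitions, start, accepting):
        self.word = word                  # JSL gloss this automaton recognizes
        self.transitions = transitions    # dict: (state, symbol) -> next state
        self.start = start
        self.accepting = accepting        # set of accepting states

    def accepts(self, symbols):
        state = self.start
        for s in symbols:
            state = self.transitions.get((state, s))
            if state is None:             # no transition defined: reject
                return False
        return state in self.accepting


# Hypothetical automaton for the gloss "HELLO": flat hand raised, then moved forward.
hello = SignWordAutomaton(
    word="HELLO",
    transitions={(0, "flat_hand_up"): 1, (1, "flat_hand_up"): 1,
                 (1, "move_forward"): 2, (2, "move_forward"): 2},
    start=0,
    accepting={2},
)

def recognize(symbols, automata):
    """Return the glosses of all word automata that accept the observed sequence."""
    return [a.word for a in automata if a.accepts(symbols)]

print(recognize(["flat_hand_up", "flat_hand_up", "move_forward"], [hello]))  # ['HELLO']
```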
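
Item (3) translates through case frames: the semantic frame of a Japanese sentence is mapped to a JSL frame, from which the symbolic sign list is read off for animation. The sketch below shows this flow under assumed data structures; the toy lexicon, case names and sign order are invented for the example and do not reflect the project's actual rules.

```python
# Illustrative sketch of case-frame-based Japanese-to-JSL translation.
# Lexicon, case labels and linearization order are hypothetical.

JA_TO_JSL_GLOSS = {"私": "ME", "本": "BOOK", "読む": "READ"}   # toy bilingual lexicon

def japanese_to_jsl_frame(ja_frame):
    """Map each slot of a Japanese case frame to the corresponding JSL gloss."""
    return {"predicate": JA_TO_JSL_GLOSS[ja_frame["predicate"]],
            "cases": {c: JA_TO_JSL_GLOSS[w] for c, w in ja_frame["cases"].items()}}

def jsl_frame_to_sign_list(jsl_frame, order=("agent", "object")):
    """Linearize a JSL case frame into a symbolic sign list (predicate last)."""
    signs = [jsl_frame["cases"][c] for c in order if c in jsl_frame["cases"]]
    return signs + [jsl_frame["predicate"]]

# "私は本を読む" ("I read a book") as a toy Japanese case frame.
ja_frame = {"predicate": "読む", "cases": {"agent": "私", "object": "本"}}
print(jsl_frame_to_sign_list(japanese_to_jsl_frame(ja_frame)))  # ['ME', 'BOOK', 'READ']
```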
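
Item (4) represents each pre-programmed three-dimensional hand trajectory by a simple code. A hypothetical illustration of how coded trajectories might be stored and expanded into key frames for the human model (codes and coordinates are invented):

```python
# Illustrative sketch: a library of coded 3-D hand trajectories that an animation
# synthesizer can reference symbolically and expand into a dense path.

TRAJECTORIES = {
    "ARC_R":  [(0.0, 0.0, 0.0), (0.1, 0.2, 0.1), (0.3, 0.2, 0.2)],  # rightward arc (x, y, z)
    "LINE_F": [(0.0, 0.0, 0.0), (0.0, 0.0, 0.2), (0.0, 0.0, 0.4)],  # straight push forward
}

def sample(code, steps_per_segment=4):
    """Linearly interpolate the key points of a coded trajectory into a dense path."""
    pts = TRAJECTORIES[code]
    path = []
    for (x0, y0, z0), (x1, y1, z1) in zip(pts, pts[1:]):
        for i in range(steps_per_segment):
            t = i / steps_per_segment
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0)))
    path.append(pts[-1])
    return path

print(len(sample("ARC_R")))  # 9 interpolated hand positions for the coded arc
```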