Budget Amount
¥1,500,000 (Direct Cost : ¥1,500,000)
Fiscal Year 2000 : ¥700,000 (Direct Cost : ¥700,000)
Fiscal Year 1999 : ¥800,000 (Direct Cost : ¥800,000)
This research aimed to improve communication environments for the hearing impaired by means of a system that translates between Japanese Sign Language (JSL) and Japanese in both directions. To this end, we developed elemental technologies and integrated many of them into one system that transformed Japanese sentences into JSL ones.
(1) A sign sentence database was built based on a corpus of video images taken from TV Sign News. We used it to find JSL grammatical rules concerning verbs, pointing, tense, and so on. We also discovered semantic and syntactic rules of non-manual expressions accompanying manual signs, including eye opening, nodding, and blinking.
(2) We studied a method of recognizing manual signs using a neural network together with finite automata, each of which accepted a specific sign word, and achieved a recognition rate of 75 to 80% on JSL words.
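The word-level automata in (2) can be illustrated with a minimal sketch: one automaton per sign word, fed a sequence of discrete feature symbols (e.g. quantized hand shapes and motions). The feature alphabet, states, and the two example words below are hypothetical; the report does not describe the actual features used.

```python
class WordAutomaton:
    """A finite automaton that accepts the feature sequence of one sign word."""

    def __init__(self, word, transitions, start, accepting):
        self.word = word
        self.transitions = transitions  # maps (state, symbol) -> next state
        self.start = start
        self.accepting = accepting

    def accepts(self, symbols):
        state = self.start
        for s in symbols:
            state = self.transitions.get((state, s))
            if state is None:  # no transition: reject immediately
                return False
        return state in self.accepting


# Two toy automata over an invented feature alphabet.
HELLO = WordAutomaton("HELLO", {(0, "flat_hand"): 1, (1, "wave"): 2}, 0, {2})
THANKS = WordAutomaton("THANKS", {(0, "flat_hand"): 1, (1, "forward"): 2}, 0, {2})


def recognize(symbols, automata):
    """Return the sign words whose automata accept the observed sequence."""
    return [a.word for a in automata if a.accepts(symbols)]


print(recognize(["flat_hand", "wave"], [HELLO, THANKS]))  # ['HELLO']
```

In a real pipeline, the neural network would emit the feature symbols frame by frame, and each candidate word's automaton would run over that stream.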
(3) For the language processing in Japanese-to-JSL translation, we proposed a new framework using case frames that represented the semantic structure of a Japanese or a JSL sentence. In this framework, the case frame of a Japanese sentence was transformed into that of the corresponding JSL sentence. The JSL expression, as a symbolic list, could easily be derived from the sign case frame, so its sign animation could be synthesized immediately. In addition, we were able to synthesize animations that were correct with respect to several grammatical rules.
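The case-frame transfer in (3) can be sketched as follows. The slot names, the toy lexicon, and the assumed JSL sign order are all invented for illustration; the actual frames and transfer rules are not given in the report.

```python
# A Japanese sentence analysed into a case frame: a predicate plus
# case-labelled arguments ("the child eats an apple").
ja_frame = {"pred": "taberu", "agent": "kodomo", "object": "ringo"}

# Toy transfer lexicon: Japanese content words -> JSL sign labels.
LEXICON = {"taberu": "EAT", "kodomo": "CHILD", "ringo": "APPLE"}


def transfer(frame):
    """Map a Japanese case frame to a JSL case frame, slot by slot."""
    return {slot: LEXICON[word] for slot, word in frame.items()}


def linearize(jsl_frame):
    """Derive the symbolic sign list from the JSL case frame.

    An agent-object-predicate order is assumed here purely for
    illustration.
    """
    order = ["agent", "object", "pred"]
    return [jsl_frame[slot] for slot in order if slot in jsl_frame]


signs = linearize(transfer(ja_frame))
print(signs)  # ['CHILD', 'APPLE', 'EAT']
```

The symbolic list at the end is the form from which, per the report, a sign animation can be synthesized directly.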
(4) The human model used for sign animations was improved so that its three-dimensional movements could be judged more easily. Complicated trajectories of the hands in three-dimensional space were programmed, and each was represented by a simple code.
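The trajectory coding in (4) might look like the sketch below: each simple code names a pre-programmed 3D curve sampled into hand positions. The code names and curve shapes here are invented; the report only states that each programmed trajectory is denoted by one code.

```python
import math


def trajectory(code, steps=5):
    """Return a list of (x, y, z) hand positions for a coded movement."""
    ts = [i / (steps - 1) for i in range(steps)]
    if code == "LINE_FWD":  # straight forward motion along z
        return [(0.0, 0.0, t) for t in ts]
    if code == "ARC_RIGHT":  # quarter circle sweeping to the right
        return [(math.sin(t * math.pi / 2), 0.0, 1 - math.cos(t * math.pi / 2))
                for t in ts]
    if code == "CIRCLE":  # full circle in the x-y plane
        return [(math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), 0.0)
                for t in ts]
    raise ValueError(f"unknown trajectory code: {code}")


path = trajectory("LINE_FWD")
print(path[0], path[-1])  # (0.0, 0.0, 0.0) (0.0, 0.0, 1.0)
```

Storing curves under short codes keeps the animation data compact: a sign's hand movement becomes a sequence of codes rather than raw coordinates.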
(5) The above technologies were integrated into one system, which was to be applied in medical settings such as instructing hearing-impaired patients during stomach X-ray examinations.