2004 Fiscal Year Final Research Report Summary
Vision-based Sign Language Recognition in Complicated Background and Occlusion
Project/Area Number | 15300058 |
Research Category | Grant-in-Aid for Scientific Research (B) |
Allocation Type | Single-year Grants |
Section | General |
Research Field | Perception information processing / Intelligent robotics |
Research Institution | Osaka University |
Principal Investigator |
SHIRAI Yoshiaki Osaka University, Graduate School of Engineering, Professor (50206273)
|
Co-Investigator (Kenkyū-buntansha) |
MIURA Jun Osaka University, Graduate School of Engineering, Associate Professor (90219585)
SAKIYAMA Takuro Osaka University, Graduate School of Engineering, Research Associate (70335371)
SHIMADA Nobutaka Ritsumeikan University, College of Information Science and Engineering, Associate Professor (10295371)
|
Project Period (FY) | 2003 – 2004 |
Keywords | Hand posture estimation / Sign Language Recognition / Complex background / 3-D Shape Model / Appearance variation modeling / Tracking in image sequence / Transition Network / Hidden Markov Model |
Research Abstract |
This research developed techniques for recognizing sign language words from image sequences without restrictions on the background. It comprises the three sub-themes below.

1. Hand region detection and tracking in complex backgrounds using a sign language word dictionary. The hand region and its shape changes were stably detected and tracked, even when the hand moves quickly against cluttered backgrounds, using a transition network built from examples of sign language scenes. The network registers hand state models mainly by motion features when the hand moves quickly, and by shape features when the motion is slow. A novel model-matching criterion based on the probabilities of image-contrast occurrence in both the hand and the background was proposed.

2. Sign language word recognition based on spatio-temporal image features. 2-D image features, including the number of finger-like parts, the principal axes of the hand region, the position of the hand relative to the face, and 2-D motion parameters, were extracted from about 50 sign language words and used to train Hidden Markov Models (HMMs) for word recognition. Because mutual occlusion occurs between the two hands and the face, each region is tracked using textures stored in the preceding frames when occlusion is detected. The recognition success rate was improved by two-stage matching: first, rough matching by an HMM using only position and motion features, followed by precise matching by an HMM using all of the features.

3. Hand shape estimation based on a 3-D shape model and learning of appearance deformation. 3-D shape parameters were estimated from a monocular image sequence by silhouette matching. A large number of possible hand silhouette images are generated in advance, and each silhouette is registered in a model database together with the joint angles and viewing parameters that produced it. Because the human hand has many degrees of freedom, the models must be sparsely sampled; to compensate, a locally-compressed manifold architecture, which extracts and stores the possible appearance changes around each model in a parametric way, was newly proposed. A PC-cluster estimation system was built, achieving estimation at 10 fps even for heavily self-occluded hand shapes.
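The two-stage matching in sub-theme 2 can be sketched as a coarse-then-fine word search. This is a minimal illustration only: the feature layout, the vocabulary, and the toy template-distance score below are invented for clarity, whereas the actual system scored candidates with trained HMMs.

```python
def score(sequence, model, feature_idx):
    # Toy stand-in for an HMM log-likelihood: negative squared distance
    # between observed frames and a per-frame template, restricted to
    # the selected feature indices.
    return -sum(
        (frame[i] - ref[i]) ** 2
        for frame, ref in zip(sequence, model)
        for i in feature_idx
    )

def two_stage_recognize(sequence, vocabulary, coarse_idx, all_idx, keep=2):
    # Stage 1: rough matching using only position/motion features,
    # keeping the top-scoring candidate words.
    coarse = sorted(
        vocabulary,
        key=lambda w: score(sequence, vocabulary[w], coarse_idx),
        reverse=True,
    )[:keep]
    # Stage 2: precise matching of the survivors using all features
    # (here, position plus the number of finger-like parts).
    return max(coarse, key=lambda w: score(sequence, vocabulary[w], all_idx))

# Hypothetical vocabulary: each frame is (x, y, finger_count).
vocabulary = {
    "HELLO": [(0.0, 0.0, 2), (1.0, 1.0, 2)],
    "THANKS": [(0.0, 0.0, 5), (1.0, 1.0, 5)],
}
observed = [(0.0, 0.1, 5), (1.0, 0.9, 5)]
print(two_stage_recognize(observed, vocabulary, [0, 1], [0, 1, 2]))
```

The coarse stage cannot separate the two words here (identical hand trajectories), so the finger-count feature in the precise stage decides between them, which mirrors why the two-stage scheme improved the recognition rate.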
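The database lookup at the core of sub-theme 3 can be sketched as nearest-neighbor search over pre-rendered silhouettes. Everything below is an assumed toy setup: silhouettes are tiny binary tuples compared by Hamming distance, and each entry carries a pose label in place of real joint angles and viewing parameters. The report's actual contribution, the locally-compressed manifold that models appearance change around each sparse sample, is not reproduced here.

```python
def hamming(a, b):
    # Count differing silhouette pixels between two binary images.
    return sum(x != y for x, y in zip(a, b))

def estimate_pose(observed, database):
    # Return the pose record of the database entry whose pre-rendered
    # silhouette is closest to the observed one.
    best = min(database, key=lambda entry: hamming(observed, entry["silhouette"]))
    return best["pose"]

# Hypothetical database: silhouette -> pose pairs rendered offline.
database = [
    {"silhouette": (1, 1, 0, 0), "pose": "fist"},
    {"silhouette": (1, 1, 1, 1), "pose": "open"},
]
print(estimate_pose((0, 1, 0, 0), database))
```

Because the hand's degrees of freedom force the database to be sparse, a plain nearest neighbor like this is only a starting point; interpolating appearance changes around each sample is what makes sparse coverage workable.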
|
Research Products (17 results)