Project/Area Number |
06555111
|
Research Category |
Grant-in-Aid for Scientific Research (A)
|
Allocation Type | Single-year Grants |
Section | Trial (試験) |
Research Field |
Information and Communication Engineering
|
Research Institution | SEIKEI UNIVERSITY |
Principal Investigator |
OOKURA Motohiro SEIKEI UNIVERSITY, DEPARTMENT OF ENGINEERING, ASSOCIATE PROFESSOR (30119341)
|
Co-Investigator (Kenkyū-buntansha) |
HARASHIMA Hiroshi UNIVERSITY OF TOKYO, DEPARTMENT OF ENGINEERING, PROFESSOR (60011201)
CHIBA Hirohiko SYUKUTOKU UNIVERSITY, DEPARTMENT OF SOCIOLOGY, ASSOCIATE PROFESSOR (40207296)
YAMADA Hiroshi KAWAMURA WOMAN'S UNIVERSITY, DEPARTMENT OF GENERAL EDUCATION, ASSOCIATE PROFESSOR (80191328)
MORISHIMA Shigeo SEIKEI UNIVERSITY, DEPARTMENT OF ENGINEERING, ASSOCIATE PROFESSOR (10200411)
|
Project Period (FY) |
1994 – 1996
|
Project Status |
Completed (Fiscal Year 1996)
|
Budget Amount |
¥12,100,000 (Direct Cost: ¥12,100,000)
Fiscal Year 1996: ¥2,200,000 (Direct Cost: ¥2,200,000)
Fiscal Year 1995: ¥3,600,000 (Direct Cost: ¥3,600,000)
Fiscal Year 1994: ¥6,300,000 (Direct Cost: ¥6,300,000)
|
Keywords | EXPRESSION SYNTHESIS / EMOTION SPACE / EXPRESSION RECOGNITION / EXPRESSION ESTIMATION / ACTION UNIT / TEXTURE MAPPING / NEURAL NETWORK / LIP SYNCHRONIZATION / COMPUTER GRAPHICS / FACIAL EXPRESSION ANALYSIS / 3D MODEL / MOTION CAPTURE / COMPUTER VISION / HUMAN-COMPUTER INTERACTION / KANSEI (AFFECTIVE) INFORMATION PROCESSING / FACS / MODELING / AGENT / 3D GRAPHICS |
Research Abstract |
In this research, a system for measuring the three-dimensional structure of the face was first constructed. Markers are placed on the surface of the face, and the facial geometry is captured by two cameras, one frontal and one lateral, which track each marker. With this system, a quantitative description of the Facial Action Coding System (FACS) and of lip-shape features during utterance was obtained. The system also makes it possible to synthesize a facial expression faithfully by deforming a 3D wireframe model and texture-mapping a frontal face image onto it.

Next, the six basic expressions were realized as combinations of action units, and a 3D emotion space was constructed by a neural network trained for identity mapping. A specific facial expression corresponds to a point in this emotion space, which therefore supports both analysis and synthesis of facial expressions. Evaluation of the space showed that expressions vary continuously within it and that it is psychologically well founded.

Finally, automatic facial expression recognition was realized without any markers. Spatial-frequency features are computed in the mouth and eye regions and converted to one of the emotion categories by a neural network. The system runs in real time, and its recognition performance is comparable to human subjective judgment.
|
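The two-camera marker measurement described in the abstract can be sketched as follows. Under an orthographic approximation, the front camera observes the (x, y) coordinates of a marker and the side camera observes its (z, y) coordinates, so the 3D position can be assembled directly. This is a minimal illustration; the coordinates, the orthographic model, and the y-averaging are assumptions, not the project's actual calibration or tracking method.

```python
import numpy as np

def reconstruct_marker(front_xy, side_zy):
    """Combine front-view (x, y) and side-view (z, y) into one 3-D point.

    Assumes orthographic projection: the front camera drops z, the side
    camera drops x.  Real systems would also need camera calibration.
    """
    x, y_front = front_xy
    z, y_side = side_zy
    # The y coordinate is seen by both cameras; average it to reduce noise.
    return np.array([x, (y_front + y_side) / 2.0, z])

# Illustrative marker observations (arbitrary units, not real data).
markers_front = [(12.0, 30.0), (20.0, 31.0)]
markers_side  = [(5.0, 30.2), (7.5, 30.8)]
points = np.array([reconstruct_marker(f, s)
                   for f, s in zip(markers_front, markers_side)])
```

Tracking every marker this way over a frame sequence yields the deforming 3D geometry used to drive the wireframe model.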
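The "neural network for identity mapping" that builds the 3D emotion space can be read as an autoencoder whose 3-unit bottleneck becomes the emotion space: each expression, described as an action-unit vector, maps to a point in three dimensions. The sketch below is a toy NumPy version under that reading; the layer sizes, learning rate, and random binary "action unit" data are illustrative assumptions, not the original network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IdentityMappingNet:
    """Autoencoder: action-unit vector -> 3-D bottleneck -> reconstruction."""

    def __init__(self, n_in, n_hidden=3, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def encode(self, x):
        # The bottleneck activation is the point in the 3-D emotion space.
        return sigmoid(x @ self.W1 + self.b1)

    def forward(self, x):
        h = self.encode(x)
        return h, sigmoid(h @ self.W2 + self.b2)

    def train_step(self, X):
        # One gradient-descent step on mean squared reconstruction error;
        # the identity-mapping target is the input itself.
        h, y = self.forward(X)
        err = y - X
        d2 = err * y * (1.0 - y)
        d1 = (d2 @ self.W2.T) * h * (1.0 - h)
        self.W2 -= self.lr * h.T @ d2 / len(X)
        self.b2 -= self.lr * d2.mean(axis=0)
        self.W1 -= self.lr * X.T @ d1 / len(X)
        self.b1 -= self.lr * d1.mean(axis=0)
        return float((err ** 2).mean())

# Toy data: six "basic expressions" as binary action-unit combinations.
X = rng.integers(0, 2, (6, 12)).astype(float)
net = IdentityMappingNet(n_in=12)
losses = [net.train_step(X) for _ in range(2000)]
emotion_points = net.encode(X)  # each expression -> one 3-D point
```

Because the encoder and decoder are learned together, the same space supports both directions the abstract mentions: analysis (encode an expression to a point) and synthesis (decode a point back to action units).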
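The marker-free recognizer computes spatial-frequency features in the mouth and eye regions and maps them to an emotion category. A minimal sketch of that pipeline is below, using low-frequency 2D FFT magnitudes as the features; the region coordinates, feature size, and the nearest-prototype classifier standing in for the report's neural network are all assumptions for illustration.

```python
import numpy as np

def spatial_frequency_features(region, n_low=4):
    """Low-frequency magnitudes of the 2-D FFT, flattened to a vector."""
    spectrum = np.abs(np.fft.fft2(region))
    return spectrum[:n_low, :n_low].ravel()

def extract(face):
    # Hypothetical fixed crops for the eye and mouth regions of a 64x64 face.
    eyes = face[10:30, 10:54]
    mouth = face[40:60, 16:48]
    return np.concatenate([spatial_frequency_features(eyes),
                           spatial_frequency_features(mouth)])

def classify(feat, prototypes):
    """Nearest prototype over the emotion categories (NN stand-in)."""
    names = list(prototypes)
    dists = [np.linalg.norm(feat - prototypes[n]) for n in names]
    return names[int(np.argmin(dists))]

# Synthetic stand-in images, one per basic-emotion category.
rng = np.random.default_rng(1)
faces = {name: rng.random((64, 64)) for name in
         ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]}
prototypes = {name: extract(img) for name, img in faces.items()}

# A slightly perturbed "anger" face should map back to its own category.
noisy = faces["anger"] + 0.01 * rng.normal(size=(64, 64))
label = classify(extract(noisy), prototypes)
```

Keeping only the low-frequency coefficients makes the features compact and tolerant of small pixel-level changes, which is what allows such a pipeline to run in real time.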