A study on an incremental lip-sync technique for speech animation
Project/Area Number | 24700191
Research Category | Grant-in-Aid for Young Scientists (B)
Allocation Type | Multi-year Fund
Research Field | Perception information processing / Intelligent robotics
Research Institution | Gunma National College of Technology (2015); Japan Advanced Institute of Science and Technology (2012-2014)
Principal Investigator | KAWAMOTO SHINICHI, Gunma National College of Technology, Department of Electronic and Information Engineering, Lecturer (70418507)
Project Period (FY) | 2012-04-01 – 2016-03-31
Project Status | Completed (Fiscal Year 2015)
Budget Amount | ¥4,420,000 (Direct Cost: ¥3,400,000, Indirect Cost: ¥1,020,000)
Fiscal Year 2014: ¥1,170,000 (Direct Cost: ¥900,000, Indirect Cost: ¥270,000)
Fiscal Year 2013: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2012: ¥1,950,000 (Direct Cost: ¥1,500,000, Indirect Cost: ¥450,000)
Keywords | Lip-sync / Animation
Outline of Final Research Achievements | We developed an incremental lip-sync method based on viseme-dependent filtering and incremental viseme recognition. The method allows simple customization of mouth movement that takes mouth-movement velocity into account, without requiring a multi-modal database of speech and mouth movement for training. Our approach takes a speech signal and CG character data as inputs, and outputs blending weights for each mouth shape based on blendshapes, a basic animation technique widely used in CG software. First, speech is converted on the fly into a viseme sequence by a viseme recognizer. Then, viseme-dependent filters are applied to generate the blending weights. Finally, lip-sync animation is generated by blendshapes with the calculated weights. As a result, the proposed method can synthesize incremental lip-sync animation with a delay of about 300 ms, and the mouth movement stays synchronized with the speech when the input speech is delayed by the same amount.
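To make the pipeline above concrete, here is a minimal sketch in Python. It is not the project's implementation: the viseme inventory, the choice of a one-pole smoothing filter as the "viseme-dependent filter", and the per-viseme coefficients are all illustrative assumptions. It only shows how an incrementally recognized viseme stream could be mapped to blendshape blending weights without any audio-visual training data.

# Sketch (assumptions, not the authors' code): an incremental viseme stream is
# turned into blendshape blending weights by viseme-dependent smoothing, and
# the weights drive a standard blendshape mesh.
import numpy as np

VISEMES = ["sil", "a", "i", "u", "e", "o"]  # assumed viseme inventory

# Assumed viseme-dependent filter parameter: a per-viseme smoothing factor
# controlling how fast the mouth moves toward each target shape.
SMOOTHING = {"sil": 0.25, "a": 0.45, "i": 0.40, "u": 0.35, "e": 0.40, "o": 0.45}

class IncrementalLipSync:
    """Turns a stream of recognized visemes into blendshape weights."""

    def __init__(self, visemes=VISEMES):
        self.visemes = visemes
        self.weights = {v: 0.0 for v in visemes}  # current blending weights

    def step(self, viseme):
        """Consume one recognized viseme frame; return updated weights.

        Each weight is smoothed with a one-pole filter whose coefficient
        depends on the current target viseme, so mouth-movement speed can
        be customized per viseme without a multi-modal training database.
        """
        alpha = SMOOTHING[viseme]  # viseme-dependent filter coefficient
        for v in self.visemes:
            target = 1.0 if v == viseme else 0.0
            self.weights[v] += alpha * (target - self.weights[v])
        return dict(self.weights)

def blend_mesh(neutral, deltas, weights):
    """Standard blendshape evaluation: neutral mesh plus weighted deltas."""
    out = neutral.copy()
    for v, w in weights.items():
        out += w * deltas[v]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.standard_normal((100, 3))                     # toy mouth mesh
    deltas = {v: rng.standard_normal((100, 3)) * 0.1 for v in VISEMES}
    lipsync = IncrementalLipSync()
    for frame_viseme in ["sil", "a", "a", "i", "u", "sil"]:     # recognizer output
        w = lipsync.step(frame_viseme)
        mesh = blend_mesh(neutral, deltas, w)
        print(frame_viseme, {k: round(x, 2) for k, x in w.items()})

The per-viseme coefficient is one possible knob behind the outline's "customization in consideration of mouth movement velocity": a larger value snaps the mouth toward the target shape faster, a smaller one slows it down, and no speech/mouth-movement database is needed to set it.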
Report (5 results)
Research Products (6 results)