2015 Fiscal Year Final Research Report
A study on an incremental lip-sync technique for speech animation
Project/Area Number | 24700191 |
Research Category | Grant-in-Aid for Young Scientists (B) |
Allocation Type | Multi-year Fund |
Research Field | Perception information processing/Intelligent robotics |
Research Institution | Gunma National College of Technology (2015) Japan Advanced Institute of Science and Technology (2012-2014) |
Principal Investigator | KAWAMOTO SHINICHI Gunma National College of Technology, Department of Electronic and Information Engineering, Lecturer (70418507) |
Project Period (FY) | 2012-04-01 – 2016-03-31 |
Keywords | Lip-sync |
Outline of Final Research Achievements |
We developed an incremental lip-sync method based on viseme-dependent filtering and incremental viseme recognition. The method offers a simple way to customize mouth movement, taking movement velocity into account, without requiring a multi-modal database of speech and mouth movement for training. The system takes a speech signal and CG character data as inputs, and outputs blending weights for each mouth shape based on blendshapes, a basic animation technique widely used in CG software. First, a viseme recognizer converts the speech to a viseme sequence on the fly. Next, viseme-dependent filters are applied to generate the blending weights. Finally, lip-sync animation is produced by applying the calculated blending weights to the blendshapes. As a result, the proposed method can synthesize incremental lip-sync animation with a delay of only about 300 ms, keeping the mouth movement synchronized with the input speech at that same delay.
|
Free Research Field | Speech information processing |
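The pipeline described in the outline — an incoming viseme stream filtered into per-mouth-shape blendshape weights — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the viseme labels, the per-viseme smoothing constants `ALPHA`, and the single-pole filter are all assumptions chosen to show how a viseme-dependent filter can control mouth-movement velocity.

```python
# Minimal sketch of incremental lip-sync via viseme-dependent filtering.
# Visemes, shape names, and smoothing constants are illustrative assumptions.

VISEMES = ["sil", "a", "i", "u", "e", "o"]

# Viseme-dependent smoothing factor: a larger value means faster mouth movement.
ALPHA = {"sil": 0.2, "a": 0.5, "i": 0.4, "u": 0.3, "e": 0.4, "o": 0.35}

def lipsync_weights(viseme_stream):
    """Incrementally map a viseme sequence to blendshape weight frames.

    Each yielded frame is a dict {viseme: weight}; the weights are smoothed
    toward the current viseme's target shape with that viseme's own filter
    constant, so movement speed can be customized per viseme.
    """
    weights = {v: 0.0 for v in VISEMES}
    weights["sil"] = 1.0  # start with the mouth closed
    for viseme in viseme_stream:
        target = {v: (1.0 if v == viseme else 0.0) for v in VISEMES}
        a = ALPHA[viseme]  # filter constant chosen by the current viseme
        for v in VISEMES:
            weights[v] += a * (target[v] - weights[v])
        yield dict(weights)

# Incremental use: a weight frame is available as soon as each viseme arrives.
frames = list(lipsync_weights(["sil", "a", "a", "i"]))
```

Because each frame depends only on visemes seen so far, the filter adds no look-ahead latency of its own; in the reported system the overall delay of about 300 ms comes mainly from the incremental viseme recognition stage.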