Development of a command input interface using lip motion features in consideration of mental and physical conditions
Project/Area Number | 24500140
Research Category | Grant-in-Aid for Scientific Research (C)
Allocation Type | Multi-year Fund
Section | General
Research Field | Media informatics/Database
Research Institution | Akita University
Principal Investigator | NISHIDA Makoto, Akita University, Graduate School of Engineering, Professor (70091816)
Co-Investigator (Kenkyū-buntansha) | KAGEYAMA Yoichi, Akita University, Graduate School of Engineering and Resource Science, Professor (40292362)
Project Period (FY) | 2012-04-01 – 2015-03-31
Project Status | Completed (Fiscal Year 2014)
Budget Amount | ¥5,070,000 (Direct Cost: ¥3,900,000, Indirect Cost: ¥1,170,000)
Fiscal Year 2014: ¥1,560,000 (Direct Cost: ¥1,200,000, Indirect Cost: ¥360,000)
Fiscal Year 2013: ¥1,300,000 (Direct Cost: ¥1,000,000, Indirect Cost: ¥300,000)
Fiscal Year 2012: ¥2,210,000 (Direct Cost: ¥1,700,000, Indirect Cost: ¥510,000)
Keywords | Human interface / Lips / Psychological change / Motion features / Color information / Image processing / Unvoiced speech / Voiced speech
Outline of Final Research Achievements | This study analyzed lip information (i.e., lip motion features and physical features of the lips) and developed elemental technologies for a highly versatile human-machine interface. The following conclusions were derived: (i) lip shape features of local regions are useful for classifying lip shapes into similar categories, and the proposed method is effective for narrowing down identification targets; (ii) variations in lip motion features can serve as efficient indices for determining the occurrence of amusement; and (iii) lip motions are significantly affected by vocalization, and high recognition rates are obtained when the input data and the registered data share the same voicing state.
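Finding (iii) suggests a template-matching scheme in which an input lip-motion sequence is compared only against registered data recorded in the same voicing state (voiced vs. unvoiced). The following is a minimal, hypothetical Python sketch of that idea, not the project's actual implementation: it assumes per-frame lip-motion feature vectors and uses dynamic time warping as the sequence distance; all names and data are illustrative.

```python
# Hypothetical sketch of voicing-state-aware template matching.
# Sequences are arrays of shape (frames, features); feature extraction
# from lip images is out of scope here.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two feature sequences,
    tolerating differences in utterance speed."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

def recognize(input_seq, voicing_state, registered):
    """Return the command label of the nearest registered template,
    restricted to templates sharing the input's voicing state."""
    candidates = [(label, seq) for label, state, seq in registered
                  if state == voicing_state]
    return min(candidates, key=lambda c: dtw_distance(input_seq, c[1]))[0]

# Usage with synthetic data: two registered commands per voicing state.
rng = np.random.default_rng(0)
registered = [("open",  "voiced",   rng.normal(0.0, 1.0, (20, 4))),
              ("close", "voiced",   rng.normal(2.0, 1.0, (24, 4))),
              ("open",  "unvoiced", rng.normal(0.5, 1.0, (18, 4))),
              ("close", "unvoiced", rng.normal(2.5, 1.0, (22, 4)))]
test = registered[1][2] + rng.normal(0.0, 0.1, (24, 4))  # noisy "close"
print(recognize(test, "voiced", registered))  # -> "close"
```

Restricting the candidate set by voicing state mirrors the reported observation that recognition rates are highest when input and registered data share the same voicing state.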
Report (4 results)
Research Products (28 results)