Project/Area Number | 25370681 |
Research Category | Grant-in-Aid for Scientific Research (C) |
Allocation Type | Multi-year Fund |
Section | General |
Research Field | Foreign language education |
Research Institution | National Institute of Technology, Toyama College (2013-2014, 2016); Fukui National College of Technology (2015) |
Principal Investigator | COOPER T.D., National Institute of Technology, Toyama College, Department of General Education, Associate Professor (70442449) |
Co-Investigator (Kenkyū-buntansha) |
TSUKADA Akira, National Institute of Technology, Toyama College, Department of Electronics and Information Engineering, Professor (40236849)
MATOBA Ryuichi, National Institute of Technology, Toyama College, Department of Electronics and Information Engineering, Associate Professor (30592323)
NARUSE Yoshinori, University of Toyama, Graduate School of Teacher Training and Development, Professor (00249773)
|
Project Period (FY) | 2013-04-01 – 2017-03-31 |
Project Status | Completed (Fiscal Year 2016) |
Budget Amount | ¥4,680,000 (Direct Cost: ¥3,600,000, Indirect Cost: ¥1,080,000)
Fiscal Year 2015: ¥910,000 (Direct Cost: ¥700,000, Indirect Cost: ¥210,000)
Fiscal Year 2014: ¥780,000 (Direct Cost: ¥600,000, Indirect Cost: ¥180,000)
Fiscal Year 2013: ¥2,990,000 (Direct Cost: ¥2,300,000, Indirect Cost: ¥690,000)
|
Keywords | nonverbal communication / nonverbal / gesture recognition / speech recognition / facial analysis / non-verbal communication / facial expression / NVC analysis / communication / facial recognition |
Outline of Final Research Achievements |
Our research involved developing a software-based gesture and speech recognition system that helps Japanese students improve their ability to communicate effectively in English. Because it is difficult for Japanese students to get enough speaking practice and feedback in EFL classes, we created a Virtual Interviewing System that focused on verbal communication. However, non-verbal cues such as gestures and body language are estimated to carry roughly 45-60% of the meaning in an English conversation. The new system has three components: facial recognition (FR), speech recognition (SR), and gesture recognition (GR), each providing feedback on verbal and non-verbal communication. We built upon existing SR and increased its accuracy for Japanese-accented speakers. With FR, the system was able to correctly identify and distinguish between identical twins. Finally, we improved the GR component by adding a Hidden Markov Model, which enabled detection of the key shapes made when gesturing and increased recognition accuracy.
|
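The report does not describe how the Hidden Markov Model was implemented in the GR component. The approach it outlines, detecting key hand shapes from a sequence of frames, is commonly realized by training one HMM per gesture class and classifying an observed sequence by likelihood. The sketch below is a minimal illustration of that idea only, assuming the hmmlearn library; the gesture labels, feature dimensionality, and state counts are hypothetical placeholders, not values from the project.

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical gesture classes and per-frame feature size (not specified in the
# report): each observation is a feature vector describing the hand shape,
# e.g. normalized landmark coordinates produced by the GR front end.
GESTURES = ["open_palm", "pointing", "beat"]
N_FEATURES = 6
N_STATES = 4  # hidden states per gesture model (assumed value)

def train_models(sequences_by_gesture):
    """Fit one Gaussian HMM per gesture class.

    sequences_by_gesture: dict mapping a gesture label to a list of
    (n_frames, N_FEATURES) arrays, one array per recorded example.
    """
    models = {}
    for label, seqs in sequences_by_gesture.items():
        X = np.concatenate(seqs)          # stack all frames of this class
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        model = hmm.GaussianHMM(n_components=N_STATES,
                                covariance_type="diag",
                                n_iter=100)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Return the gesture whose HMM gives the highest log-likelihood."""
    scores = {label: m.score(sequence) for label, m in models.items()}
    return max(scores, key=scores.get)
```

In this formulation, recognizing a "key shape" reduces to asking which trained model best explains the observed frame sequence; a rejection threshold on the winning log-likelihood can be added so that unmodeled movements are not forced into one of the known gesture classes.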