
2021 Fiscal Year Annual Research Report

Development and Practical Evaluation of Teaching Materials for Developing English Nonverbal Communication Skills

Research Project

Project/Area Number 17K02964
Research Institution University of Toyama

Principal Investigator

COOPER T・D  University of Toyama, Academic Assembly (Liberal Arts and Sciences), Associate Professor (70442449)

Co-Investigator (Kenkyū-buntansha) 的場 隆一  National Institute of Technology, Toyama College, Electronics and Computer Engineering, Associate Professor (30592323)
塚田 章  National Institute of Technology, Toyama College, Electronics and Computer Engineering, Professor (40236849)
五味 伸之  National Institute of Technology, Fukui College, Department of Mechanical Engineering, Lecturer (80600634)
成瀬 喜則  University of Toyama, Academic Assembly (Education), Professor (00249773)
Project Period (FY) 2017-04-01 – 2022-03-31
Keywords facial expression / nonverbal communication / Kinect camera / mock job interview / L1–L2 differences
Outline of Annual Research Achievements

In this research, we developed and evaluated a system that assesses students' English communicative ability with a focus on nonverbal communication, namely facial expression. How speakers say something is just as important as what they say (Cooper, 2013), and teaching presentation and interview skills is essential for fostering communicative competence (Canale, 1980).

Data were collected in the context of a mock job interview, a format regarded as a way to boost learners' "confidence and performance" (Hansen, 2009, p. 318). We chose this format for two reasons: (1) when the subject is seated, there is little variation in the three-dimensional (x, y, z) data being collected, and (2) participants receive something of value for their time. Students received electronic feedback drawn from three data types: (A) data from the Kinect sensor, (B) data from the researcher's scoring, and (C) quantitative data from surveys.

Participants were asked typical job-interview questions, first in Japanese by the student researcher and then in English by the primary researcher. Empirical quantitative data were collected by the researcher and the Kinect sensor camera through video recording and video analysis, and qualitative data were collected in a follow-up interview after the analysis results were explained to the interviewees. Interviewees were happy to take part in the experiment, and the results showed that awareness of their own facial expressions was a positive learning outcome.
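The seated-posture rationale above can be made concrete with a small sketch. The function name, sample coordinates, and threshold here are illustrative assumptions, not the project's actual code: it simply measures how much a Kinect-style stream of (x, y, z) head positions varies along each axis, which is the quantity the seated interview keeps small.

```python
import statistics

def axis_spread(samples):
    """Return the population standard deviation along each of x, y, z.

    `samples` is a sequence of (x, y, z) positions in metres, e.g. the
    head-joint positions reported frame-by-frame by a depth sensor.
    """
    xs, ys, zs = zip(*samples)
    return tuple(statistics.pstdev(axis) for axis in (xs, ys, zs))

# Simulated seated interview: the head drifts only about a centimetre.
seated = [
    (0.00, 0.62, 1.50),
    (0.01, 0.62, 1.51),
    (0.00, 0.63, 1.50),
    (0.01, 0.62, 1.50),
]

sx, sy, sz = axis_spread(seated)
print(sx, sy, sz)  # each well under 0.01 m, i.e. minor variation
```

A standing or walking subject would produce spreads an order of magnitude larger, which is why a seated interview keeps the three-dimensional data stable enough to compare across participants.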

  • Research Products

    (3 results)


All: Journal Article (1 result) (of which Int'l Joint Research: 1, Peer Reviewed: 1), Presentation (2 results) (of which Int'l Joint Research: 2)

  • [Journal Article] Extension of iterated learning model based on real-world experiment (2021)

    • Author(s)
      Matoba, R., Yonezawa, T., Hagiwara, S., Cooper, T., & Nakamura, M.
    • Journal Title

      Artificial Life and Robotics

      Volume: 26-2 Pages: 228-234

    • Peer Reviewed / Int'l Joint Research
  • [Presentation] Two Faces: A Virtual Interview Coach for Japanese and English (2022)

    • Author(s)
      Cooper, T.D., Tsukada, A., & Fukugawa, K.
    • Organizer
      18th Annual CamTESOL Conference on English Language Teaching: Teachers as learners. Phnom Penh, Cambodia.
    • Int'l Joint Research
  • [Presentation] IPA 4.0: Incorporating Facial Expression Analysis into the Virtual Interview and Presentation Assistant (2021)

    • Author(s)
Cooper, T.D., Tsukada, A., Matoba, R., & Takashima, Y.
    • Organizer
      EuroCALL Gathering 2020 (European Association for Computer-Assisted Language Learning). Copenhagen, Denmark.
    • Int'l Joint Research

Published: 2022-12-28  
