
2020 Fiscal Year Research-status Report

Articulatory settings of English, Japanese, and Chinese bilingual and trilingual speakers

Research Project

Project/Area Number 20K00606
Research Institution The University of Aizu

Principal Investigator

Wilson Ian  The University of Aizu, School of Computer Science and Engineering, Professor (50444930)

Project Period (FY) 2020-04-01 – 2023-03-31
Keywords articulatory setting / English / Japanese / bilingual / frequency of occurrence
Outline of Annual Research Achievements

Our original plan was to collect ultrasound tongue image video data and high-speed lip and jaw data from bilingual and trilingual Japanese and Chinese participants. However, due to COVID-19 we could not collect any new speech data, because doing so would have required face-to-face, mask-less communication. Instead, we purchased new hardware and software and trained a Research Assistant (RA) to use them. We also read papers on the frequency of occurrence of phonemes in Japanese versus English, and together with our Research Assistant we worked on modelling expected articulatory settings based on the frequency of occurrence of phonemes across English and Japanese.

After implementing precautions against the spread of COVID-19, we hosted Dr. Kikuo Maekawa, president of the Phonetic Society of Japan, and two accompanying professors for a discussion of ultrasound data collection in our laboratory. We also invited Mr. Takayuki Nagamine, a PhD student at Lancaster University (UK), whose visit overlapped with Dr. Maekawa's. Mr. Nagamine's PhD research focuses on articulatory settings, the general topic of this KAKENHI grant. We held very fruitful discussions, and we gave an invited Zoom webinar at the March 2021 National Institute for Japanese Language and Linguistics (NINJAL) Colloquium. Without new data, however, it was not possible to publish a paper.

Current Status of Research Progress

2: Research has progressed on the whole more than it was originally planned.

Reason

Although our FY2020 plan was to use ultrasound and video to collect new speech data from bilingual and trilingual Japanese and Chinese speakers, COVID-19 prevented us from doing so. However, we purchased hardware and software, trained a Research Assistant to use them, reviewed the literature, and brainstormed with other researchers.

To avoid losing time, we moved ahead to part of our FY2021 plan. We read the existing literature on the frequency of occurrence of phonemes in Japanese versus English, and with a Research Assistant we modelled expected articulatory settings based on the frequency of occurrence of phonemes across languages, as sketched below.
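As a rough illustration of this kind of frequency-based modelling (the report does not specify the actual method used), the sketch below computes an expected articulatory setting for a language as a frequency-weighted average of per-phoneme articulatory feature values. All phoneme frequencies and feature values shown are hypothetical placeholders, not measured data.

```python
# Illustrative sketch only: the modelling method, phoneme frequencies, and
# articulatory feature values below are hypothetical placeholders.

def expected_setting(freqs, features):
    """Frequency-weighted mean of per-phoneme articulatory feature values."""
    total = sum(freqs.values())
    dims = next(iter(features.values())).keys()
    return {
        dim: sum(freqs[p] * features[p][dim] for p in freqs) / total
        for dim in dims
    }

# Hypothetical relative phoneme frequencies for two languages.
freq_en = {"i": 8.0, "a": 10.0, "u": 3.0}
freq_ja = {"i": 9.0, "a": 12.0, "u": 8.0}

# Hypothetical articulatory features per phoneme
# (e.g., normalised tongue height and frontness on a 0-1 scale).
feats = {
    "i": {"height": 0.9, "frontness": 0.9},
    "a": {"height": 0.1, "frontness": 0.5},
    "u": {"height": 0.8, "frontness": 0.1},
}

print("English expected setting:", expected_setting(freq_en, feats))
print("Japanese expected setting:", expected_setting(freq_ja, feats))
```

Under this toy assumption, a language whose frequent phonemes cluster in a particular articulatory region would be predicted to have a resting setting biased toward that region; the real modelling would use measured frequencies and articulatory data.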

Thus, although forced to switch the order of some items in our research, we made reasonable progress and are prepared to collect data face-to-face when the situation permits.

Strategy for Future Research Activity

As soon as the COVID-19 situation improves and vaccinations become widespread in Japan, we will begin speech data collection. Until then, we will continue to model articulatory settings based on phoneme frequencies of occurrence. We will also prepare Mandarin Chinese stimuli and have them checked by a native speaker of Mandarin Chinese, and we will begin searching for bilingual and trilingual participants who can join us as soon as data collection becomes safely possible.

Causes of Carryover

Because of COVID-19, phonetic speech data could not be collected in FY2020, so there were no participants to whom honoraria needed to be paid. In addition, with no data collection taking place, less computer equipment was needed, and it has not yet been purchased. We plan to purchase that equipment as soon as it becomes clear that we can safely collect data face-to-face in the laboratory.

  • Research Products

    (1 result)

  • [Presentation] Articulatory Settings: An Overview and Some Japanese Data (2020)

    • Author(s)
      Ian WILSON
    • Organizer
      第115回国立国語研究所コロキウム (115th NINJAL Colloquium)
    • Invited

Published: 2021-12-27  
