
2017 Fiscal Year Annual Research Report

Individual-based and automatic clustering of World Englishes and its application to assist international communication

Research Project

Project/Area Number 26240022
Research Institution The University of Tokyo

Principal Investigator

峯松 信明 (Nobuaki Minematsu)  The University of Tokyo, Graduate School of Engineering (Faculty of Engineering), Professor (90273333)

Co-Investigator (Kenkyū-buntansha) 牧野 武彦 (Takehiko Makino)  Chuo University, Faculty of Economics, Professor (00269482)
山内 豊 (Yutaka Yamauchi)  Tokyo International University, Faculty of Commerce, Professor (30306245)
齋藤 大輔 (Daisuke Saito)  The University of Tokyo, Graduate School of Engineering (Faculty of Engineering), Lecturer (40615150)
Project Period (FY) 2014-04-01 – 2018-03-31
Keywords World Englishes / international communication / pronunciation mapping / pronunciation clustering / structural representation of pronunciation / foreign language education / listening difficulty / shadowing
Outline of Annual Research Achievements

Throughout this project, the main goal has been the classification and clustering of World Englishes pronunciations, that is, mapping the diverse pronunciations of English. Pronunciation mapping here means visualizing how much a given speaker's English pronunciation differs from that of others, and where that speaker's English is located among the Englishes spoken around the world (World Englishes). As the research progressed, however, we realized that what most needs to be estimated is not the difference in pronunciation (surface phonetic differences) but the difference in ease or difficulty of listening (a cognitive difference): which kinds of English a given listener finds easy or hard to understand. In other words, some varieties of English are easy to understand even when they differ from one's own, and pronunciation differences sometimes hinder communication and sometimes do not. Once the focus is on this cognitive difference, acoustic analysis of learners' speech alone is insufficient; the listener's listening behavior and attitude must also be analyzed. Furthermore, among the causes of reduced intelligibility we must consider not only accent but also ambient noise, transmission characteristics such as telephone channels, and voice differences due to differences in physique. To give technical answers to these two points, we extended the project period by one year and examined a method for revealing the listening difficulty and aversion felt by a listener: have the listener shadow the speech and examine how the shadowed speech breaks down. The results showed that analyzing the listener's (shadower's) shadowing speech predicts the listening difficulty the listener feels more accurately than analyzing the learner's speech does. In addition, to train listening that is robust against various kinds of noise and distortion, we focused on High Variability Phonetic Training (HVPT), proposed extending it with speech analysis-synthesis techniques, and obtained good results. This work was highly evaluated; for example, the presentation of the technically extended HVPT method received an outstanding presentation award.
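The shadowing-based measurement summarized above corresponds to the DNN-posteriorgram/DTW approach listed under Research Products (e.g., "Automatic Scoring of Shadowing Speech based on DNN Posteriors and their DTW", INTERSPEECH 2017). The following is only a minimal illustrative sketch of that general idea, not the project's actual implementation: it assumes per-frame phoneme posteriorgrams have already been extracted for a reference reading and for a listener's shadowed reproduction, aligns them by DTW, and treats the normalized alignment cost as a rough proxy for how badly the shadowed speech has broken down. The function name, the cosine local distance, and the length normalization are all assumptions.

```python
import numpy as np

def posteriorgram_dtw_cost(ref, shadow):
    """Align two phoneme posteriorgram sequences with DTW and return the
    length-normalized alignment cost.  A higher cost suggests the shadower's
    reproduction deviates more from the reference, which can serve as a
    crude proxy for perceived listening difficulty.

    ref, shadow: (T1, P) and (T2, P) arrays of per-frame phoneme posteriors
    (rows sum to 1), e.g. produced by a DNN acoustic model.
    """
    T1, T2 = len(ref), len(shadow)
    # Frame-level distance: 1 - cosine similarity between posterior vectors.
    norm_r = ref / (np.linalg.norm(ref, axis=1, keepdims=True) + 1e-8)
    norm_s = shadow / (np.linalg.norm(shadow, axis=1, keepdims=True) + 1e-8)
    dist = 1.0 - norm_r @ norm_s.T              # (T1, T2) local cost matrix

    # Standard DTW recursion with deletion / insertion / match steps.
    acc = np.full((T1 + 1, T2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],      # deletion
                                                 acc[i, j - 1],      # insertion
                                                 acc[i - 1, j - 1])  # match
    # Normalize by a path-length proxy so utterances of different lengths compare.
    return acc[T1, T2] / (T1 + T2)

# Hypothetical usage with random posteriorgrams (120 and 95 frames, 40 phonemes);
# in practice the scores would be correlated with listeners' difficulty ratings.
ref = np.random.dirichlet(np.ones(40), size=120)
shadow = np.random.dirichlet(np.ones(40), size=95)
print(posteriorgram_dtw_cost(ref, shadow))
```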

Research Progress Status

Because FY2017 was the final year of the project, this section is left blank.

Strategy for Future Research Activity

Because FY2017 was the final year of the project, this section is left blank.

  • Research Products

    (28 results)


Journal Article (5 results) (of which Peer Reviewed: 5 results, Open Access: 5 results) Presentation (13 results) (of which Int'l Joint Research: 13 results) Book (1 result) Funded Workshop (9 results)

  • [Journal Article] Support for Diverse Foreign Language Education Using Speech Analysis, Synthesis, and Recognition Technologies2018

    • Author(s)
Nobuaki Minematsu
    • Journal Title

The Journal of the Acoustical Society of Japan (日本音響学会誌)

Volume: 74 Pages: 525-530

    • DOI

      10.20697/jasj.74.9_525

    • Peer Reviewed / Open Access
  • [Journal Article] Many-to-many and Completely Parallel-data-free Voice Conversion Based on Eigenspace DNN2018

    • Author(s)
      Hashimoto Tetsuya、Minematsu Nobuaki、Saito Daisuke
    • Journal Title

      IEEE/ACM Transactions on Audio, Speech, and Language Processing

Volume: 27 Pages: 332-341

    • DOI

      10.1109/TASLP.2018.2878949

    • Peer Reviewed / Open Access
  • [Journal Article] Wasserstein GAN and Waveform Loss-Based Acoustic Model Training for Multi-Speaker Text-to-Speech Synthesis Systems Using a WaveNet Vocoder2018

    • Author(s)
      Zhao Yi、Takaki Shinji、Luong Hieu-Thi、Yamagishi Junichi、Saito Daisuke、Minematsu Nobuaki
    • Journal Title

      IEEE Access

Volume: 6 Pages: 60478-60488

    • DOI

      10.1109/ACCESS.2018.2872060

    • Peer Reviewed / Open Access
  • [Journal Article] Accent Sandhi Estimation of Tokyo Dialect of Japanese Using Conditional Random Fields2017

    • Author(s)
      Masayuki SUZUKI, Ryo KUROIWA, Keisuke INNAMI, Shumpei KOBAYASHI, Shinya SHIMIZU, Nobuaki MINEMATSU, Keikichi HIROSE
    • Journal Title

      IEICE Transactions on Information and Systems

Volume: E100.D Pages: 655-661

    • DOI

      10.1587/transinf.2016AWI0004

    • Peer Reviewed / Open Access
  • [Journal Article] Development and Evaluation of Online Infrastructure to Aid Teaching and Learning of Japanese Prosody2017

    • Author(s)
      Nobuaki MINEMATSU, Ibuki NAKAMURA, Masayuki SUZUKI, Hiroko HIRANO, Chieko NAKAGAWA, Noriko NAKAMURA, Yukinori TAGAWA, Keikichi HIROSE, Hiroya HASHIMOTO
    • Journal Title

      IEICE Transactions on Information and Systems

Volume: E100.D Pages: 662-669

    • DOI

      10.1587/transinf.2016AWI0007

    • Peer Reviewed / Open Access
  • [Presentation] Inter-learner shadowing with speech technologies enables automatic and objective measurement of comprehensibility of learners' utterances2019

    • Author(s)
      Nobuaki Minematsu, Yusuke Inoue, Daisuke Saito, Yutaka Yamauchi and Kumi Kanamura
    • Organizer
      AAAL
    • Int'l Joint Research
  • [Presentation] Computer-aided High Variability Phonetic Training to Improve Robustness of Learners' Listening Comprehension2019

    • Author(s)
      Haoyu Zhang, Yusuke Inoue, Daisuke Saito, Nobuaki Minematsu, Yutaka Yamauchi
    • Organizer
      ICPhS
    • Int'l Joint Research
  • [Presentation] A Study of Objective Measurement of Comprehensibility through Native Speakers' Shadowing of Learners' Utterances2018

    • Author(s)
      Yusuke Inoue, Suguru Kabashima, Daisuke Saito, Nobuaki Minematsu, Kumi Kanamura, Yutaka Yamauchi
    • Organizer
      INTERSPEECH
    • Int'l Joint Research
  • [Presentation] Natives’ shadowability as objectively measured comprehensibility of non-native speech2018

    • Author(s)
      N. Minematsu, Y. Inoue, S. Kabashima, D. Saito, Y. Yamauchi, K. Kanamura
    • Organizer
      ISAPh
    • Int'l Joint Research
  • [Presentation] DNN-based Scoring of Language Learners' Proficiency Using Learners' Shadowings and Native Listeners' Responsive Shadowings2018

    • Author(s)
Suguru Kabashima, Yusuke Inoue, Daisuke Saito, Nobuaki Minematsu
    • Organizer
      Spoken Language Technology
    • Int'l Joint Research
  • [Presentation] Automatic Scoring of Shadowing Speech based on DNN Posteriors and their DTW2017

    • Author(s)
      J. Yue, F. Shiozawa, S. Toyama, Y. Yamauchi, K. Ito, D. Saito, N. Minematsu
    • Organizer
      INTERSPEECH
    • Int'l Joint Research
  • [Presentation] Acoustic-to-articulatory mapping based on mixture of probabilistic canonical correlation analysis2017

    • Author(s)
      H. Uchida, D. Saito, N. Minematsu
    • Organizer
      INTERSPEECH
    • Int'l Joint Research
  • [Presentation] Development and Maintenance of Practical and In-service Systems for Recording Shadowing Utterances and Their Assessment2017

    • Author(s)
      J. Yue, D. Saito, N. Minematsu, Y. Yamauchi
    • Organizer
      SLaTE
    • Int'l Joint Research
  • [Presentation] New Features and Effectiveness of Suzuki-kun, the First and Only Prosodic Reading Tutor of Tokyo Japanese2017

    • Author(s)
      N. Minematsu, D. Saito
    • Organizer
      SLaTE
    • Int'l Joint Research
  • [Presentation] Investigation of teacher-selected sentences and machine-suggested sentences in terms of correlation between human ratings and GOP-based machine scores2017

    • Author(s)
      Y. Yamauchi, J. Yue, K. Ito, N. Minematsu
    • Organizer
      SLaTE
    • Int'l Joint Research
  • [Presentation] A recording of bilingual acoustic-articulatory data from a Japanese-Chinese bilingual speaker with a 3D-EMA system2017

    • Author(s)
H. Uchida, T. Hashimoto, D. Saito, N. Minematsu
    • Organizer
      ISSP
    • Int'l Joint Research
  • [Presentation] An automatic evaluation system of L2 oral simultaneous reproduction2017

    • Author(s)
Y. Yamauchi and N. Minematsu
    • Organizer
      ACTFL
    • Int'l Joint Research
  • [Presentation] Introduction to OJAD for practical prosody training of Japanese2017

    • Author(s)
      N. Minematsu
    • Organizer
      ACTFL
    • Int'l Joint Research
  • [Book] Digital resources for learning Japanese2018

    • Author(s)
      Motoko Ueyama, Irena Srdanovic
    • Total Pages
      235
    • Publisher
      Bononia University Press
    • ISBN
      978-88-6923-297-8
  • [Funded Workshop] Tutorial Workshop of OJAD (Warsaw, Poland)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Luzern, Switzerland)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Bern, Switzerland)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Gothenburg, Sweden)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Beijing, China)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Tainan, Taiwan)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Taipei, Taiwan)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Taichung, Taiwan)2017

  • [Funded Workshop] Tutorial Workshop of OJAD (Eugene, USA)2017


Published: 2019-12-27  
