Acoustic-to-Articulatory Inversion for Pronunciation Training

07 November 2023, Version 1
This content is an early or alternative research output and has not been peer-reviewed by Cambridge University Press at the time of posting.

Abstract

Visual feedback of articulators using Electromagnetic Articulography (EMA) has been shown to aid the acquisition of non-native speech sounds. However, physical EMA sensors are expensive and invasive, making them impractical for providing real-world pronunciation feedback. Our work focuses on using neural Acoustic-to-Articulatory Inversion (AAI) models to map speech directly to EMA sensor positions. Self-Supervised Learning (SSL) speech models, such as HuBERT, can produce representations of speech that have been shown to significantly improve performance on AAI tasks. Probing experiments have indicated that certain layers and iterations of SSL models produce representations that may yield better inversion performance than others. In this work, we build on these probing results to create an AAI model that improves upon a state-of-the-art baseline inversion model, and we evaluate the model’s suitability for second language pronunciation training.

Keywords

Acoustic-to-Articulatory Inversion
Pronunciation Training
