Mouth2Audio: intelligible audio synthesis from videos with distinctive vowel articulation
Garg, Saurabh; Ruan, Haoyao; Hamarneh, Ghassan; Behne, Dawn Marie; Jongman, Allard; Sereno, Joan; Wang, Yue
Journal article
Published version
Permanent link: https://hdl.handle.net/11250/3112680
Publication date: 2023
Collections:
- Institutt for psykologi [3143]
- Publikasjoner fra CRIStin - NTNU [38679]
Abstract
Humans use both auditory and facial cues to perceive speech, especially when auditory input is degraded, indicating a direct association between visual articulatory and acoustic speech information. This study investigates how well the audio signal of a word can be synthesized from visual speech cues. Specifically, we synthesized audio waveforms of the vowels in monosyllabic English words from motion trajectories extracted from image sequences in video recordings of the same words. The articulatory movements were recorded in two speech styles: plain and clear. We designed a deep network, trained separately for each speech style, that maps mouth landmark motion trajectories to audio using a spectrogram- and formant-based custom loss. Human and automatic evaluations show that our framework can generate identifiable audio of the target vowels from distinct mouth landmark movements using visual cues. Our results further demonstrate that intelligible audio can be synthesized for unseen talkers independent of the training data.
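The abstract mentions a spectrogram- and formant-based custom loss but does not specify its form. As a rough illustration of how such a combined objective could be structured, the sketch below (in NumPy, with hypothetical function names; the crude spectral-peak picking stands in for true formant tracking, which the authors' actual loss would handle differently) sums a spectrogram reconstruction term and a weighted spectral-peak term:

```python
import numpy as np


def spectral_peaks(spec, n_peaks=2):
    """Crude formant proxy (illustrative): frequency-bin indices of the
    n_peaks largest-magnitude bins in each time frame of a magnitude
    spectrogram with shape (freq_bins, frames)."""
    top_bins = np.argsort(spec, axis=0)[-n_peaks:, :]  # top bins per frame
    return np.sort(top_bins, axis=0).astype(float)


def combined_loss(pred_spec, target_spec, formant_weight=0.5):
    """Hypothetical spectrogram- and formant-based loss: mean squared
    error on the spectrograms plus a weighted mean squared error on
    the coarse spectral-peak positions."""
    spec_term = np.mean((pred_spec - target_spec) ** 2)
    formant_term = np.mean(
        (spectral_peaks(pred_spec) - spectral_peaks(target_spec)) ** 2
    )
    return spec_term + formant_weight * formant_term
```

With identical predicted and target spectrograms both terms vanish, so the loss is zero; penalizing peak positions separately is one way to push a model toward vowel-discriminating formant structure rather than only overall spectral shape.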