Plain-to-clear speech video conversion for enhanced intelligibility
Sachdeva, Shubam; Ruan, Haoyao; Hamarneh, Ghassan; Behne, Dawn Marie; Jongman, Allard; Sereno, Joan; Wang, Yue
Peer reviewed, Journal article
Published version
Permanent link
https://hdl.handle.net/11250/3067981
Publication date
2023
Collections
- Institutt for psykologi
- Publikasjoner fra CRIStin - NTNU
Original version
International Journal of Speech Technology. 2023, 26, 163-184. 10.1007/s10772-023-10018-z
Abstract
Clearly articulated speech, relative to plain-style speech, has been shown to improve intelligibility. We examine whether visible speech cues in video alone can be systematically modified to enhance clear-speech visual features and improve intelligibility. We extract clear-speech visual features of English words varying in vowels, produced by multiple male and female talkers. Via a frame-by-frame, image-warping-based video generation method with a controllable parameter (the displacement factor), we apply the extracted clear-speech visual features to videos of plain speech to synthesize clear-speech videos. We evaluate the generated videos using a robust, state-of-the-art AI lip reader as well as human intelligibility testing. The contributions of this study are: (1) we successfully extract relevant visual cues for video modifications across speech styles and achieve enhanced intelligibility for AI; (2) this work suggests that universal, talker-independent clear-speech features may be used to modify any talker's visual speech style; (3) we introduce the "displacement factor" as a way of systematically scaling the magnitude of displacement modifications between speech styles; and (4) the generated videos are high definition, making them ideal candidates for human-centric intelligibility and perceptual training studies.
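To make the frame-by-frame image-warping idea concrete, the sketch below warps each plain-speech frame so that its facial landmarks move toward clear-speech positions, scaled by a displacement factor. This is a minimal illustration under stated assumptions, not the authors' implementation: the landmark coordinates, the per-landmark plain-to-clear displacement field, and all function names here are hypothetical inputs, and the warp is done with scikit-image's piecewise-affine transform as a stand-in for the paper's warping method.

```python
# Illustrative sketch only: warp plain-speech frames toward clear-speech
# landmark positions, scaled by a controllable displacement factor.
# Landmarks and displacements are assumed to come from an external
# feature-extraction step (not shown here).
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_frame(frame, plain_pts, clear_offsets, displacement_factor=1.0):
    """Warp one plain-speech frame toward clear-speech landmark positions.

    frame               : (H, W, 3) image of plain speech
    plain_pts           : (N, 2) landmark coordinates (x, y) in this frame
    clear_offsets       : (N, 2) per-landmark plain-to-clear displacements
    displacement_factor : scales how far landmarks move (0 = unchanged,
                          1 = full measured plain-to-clear displacement)
    """
    target_pts = plain_pts + displacement_factor * clear_offsets
    tform = PiecewiseAffineTransform()
    # warp() expects an inverse map (output coords -> input coords),
    # so the transform is estimated from target back to plain positions.
    tform.estimate(target_pts, plain_pts)
    # Note: a piecewise-affine warp only deforms the region inside the
    # convex hull of the landmarks (here, presumably the mouth area).
    return warp(frame, tform, output_shape=frame.shape)

def warp_video(frames, landmarks_per_frame, offsets_per_frame, factor):
    """Apply the warp frame by frame to synthesize a clear-speech video."""
    return [
        warp_frame(f, pts, off, factor)
        for f, pts, off in zip(frames, landmarks_per_frame, offsets_per_frame)
    ]
```

In this reading, intermediate displacement-factor values would interpolate between the original plain-speech video and a fully clear-speech-styled one, which mirrors the systematic scaling of modification magnitude described in contribution (3).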