|dc.description.abstract||This dissertation investigated audiovisual (AV) perception of speech and music in situations where visual information starts before the auditory onset and provides a prediction about an upcoming corresponding sound. Previous electroencephalography (EEG) research has shown that this visually based prediction can modulate early processing of auditory signals in the brain, leading to suppressed and speeded-up early event-related potentials (ERPs), such as the N1 and P2, in AV compared to auditory-only perception. However, the influence of previous experience on this prediction in AV perception has received little attention. To explore this influence, the current project examined musical experience by comparing N1 and P2 amplitudes and latencies between musicians and non-musicians. In addition, this project extends previous research by investigating the predictive effect of visual cues in AV perception using a time-frequency approach, inter-trial phase coherence (ITPC), in the delta, theta, alpha, and beta oscillations. ERP suppression and latency reduction resulting from predictive visual cues in AV perception were evaluated for four previously developed AV models.
Musical experience influenced AV speech and music perception. In AV speech perception, seeing facial articulation that precedes and predicts the auditory speech being produced led to reduced ERPs and ITPCs, relative to auditory-only speech, for both musicians and non-musicians. However, only musicians showed a reduced N1 and suppression of alpha oscillation in AV speech. In AV music perception, seeing finger and hand movements that precede and predict the auditory music being produced led to reduced ERPs and ITPCs, relative to auditory-only music, for both groups. However, only musicians showed reduced beta oscillation in AV music perception. These results indicate that early sensory processing in AV perception can be modified by musical experience. Furthermore, the calculated differences among the four AV models led to different patterns of results for the N1 and P2, indicating that these models are not comparable.
Collectively, these results indicate that previous AV experience, such as that attained through musical training, influences the predictive mechanisms in AV speech and music perception. Moreover, regardless of previous musical experience, the AV interaction models applied in previous research are not interchangeable.||en_US