Show simple item record

dc.contributor.advisor: Svendsen, Torbjørn [nb_NO]
dc.contributor.author: Amdal, Ingunn [nb_NO]
dc.date.accessioned: 2014-12-19T13:29:27Z
dc.date.available: 2014-12-19T13:29:27Z
dc.date.created: 2007-06-22 [nb_NO]
dc.date.issued: 2002 [nb_NO]
dc.identifier: 122454 [nb_NO]
dc.identifier.isbn: 82-471-5502-8 [nb_NO]
dc.identifier.uri: http://hdl.handle.net/11250/249709
dc.description.abstract: To achieve a robust system, the variation seen across different speaking styles must be handled. An investigation of standard automatic speech recognition techniques for different speaking styles showed that lexical modelling using general-purpose variants gave small improvements, but the errors differed from those obtained with only one canonical pronunciation per word. Modelling the variation in the acoustic models (using context dependency and/or speaker-dependent adaptation) gave a significant improvement, but the resulting performance for non-native and spontaneous speech was still far from that for read speech. In this dissertation a complete data-driven approach to rule-based lexicon adaptation is presented, where the effect of the acoustic models is incorporated in the rule pruning metric. Reference and alternative transcriptions were aligned by dynamic programming, with a data-driven method to derive the phone-to-phone substitution costs. The costs were based on the statistical co-occurrence of phones, the association strength. Rules for pronunciation variation were derived from this alignment and pruned using a new metric based on acoustic log likelihood. Well-trained acoustic models are capable of modelling much of the variation seen, and using the acoustic log likelihood to assess the pronunciation rules prevents the lexical modelling from adding variation already accounted for, as shown for direct pronunciation variation modelling. For the non-native task, data-driven pronunciation modelling by learning pronunciation rules gave a significant performance gain, and acoustic log likelihood rule pruning performed better than rule probability pruning. For spontaneous dictation, the pronunciation variation experiments did not improve performance. The answer to how to better model the variation in spontaneous speech seems to lie neither in the acoustic nor the lexical modelling. The main differences between read and spontaneous speech are the grammar used and disfluencies such as restarts and long pauses. The language model may thus be the best starting point for further research towards better performance for this speaking style. (The alignment step is illustrated in the sketch after this record.) [nb_NO]
dc.language: eng [nb_NO]
dc.publisher: Fakultet for informasjonsteknologi, matematikk og elektroteknikk [nb_NO]
dc.relation.ispartofseries: Dr.ingeniøravhandling, 0809-103X; 2002:100 [nb_NO]
dc.subject: pronunciation variation [en_GB]
dc.subject: lexical modelling [en_GB]
dc.subject: automatic speech recognition [en_GB]
dc.subject: non-native speech [en_GB]
dc.subject: TECHNOLOGY: Electrical engineering, electronics and photonics: Electronics [en_GB]
dc.title: Learning pronunciation variation: A data-driven approach to rule-based lexicon adaptation for automatic speech recognition [nb_NO]
dc.type: Doctoral thesis [nb_NO]
dc.source.pagenumber: 182 [nb_NO]
dc.contributor.department: Norges teknisk-naturvitenskapelige universitet, Fakultet for informasjonsteknologi, matematikk og elektroteknikk [nb_NO]
dc.description.degree: dr.ing. [nb_NO]
dc.description.degree: dr.ing. [en_GB]
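
The abstract describes aligning reference and alternative phone transcriptions by dynamic programming, with substitution costs derived from the statistical co-occurrence of phones (association strength), and pronunciation rules then read off the alignment. As an illustration only, the minimal Python sketch below shows that alignment step; it uses pointwise mutual information as a stand-in for the association-strength measure, and the function names, cost formula, and toy counts are assumptions rather than the thesis's actual implementation (the acoustic log-likelihood rule pruning is not shown).

from collections import Counter
import math


def association_costs(pair_counts):
    """Turn phone co-occurrence counts into substitution costs.

    pair_counts maps (reference_phone, observed_phone) -> count from a
    first-pass alignment. Phones that co-occur more often than chance
    (high association strength) get a low substitution cost.
    """
    total = sum(pair_counts.values())
    ref_tot, obs_tot = Counter(), Counter()
    for (r, o), c in pair_counts.items():
        ref_tot[r] += c
        obs_tot[o] += c
    costs = {}
    for (r, o), c in pair_counts.items():
        # Pointwise mutual information as a simple association measure
        # (an assumption here); stronger association -> cheaper substitution.
        pmi = math.log(c * total / (ref_tot[r] * obs_tot[o]))
        costs[(r, o)] = max(0.0, 1.0 - pmi)
    return costs


def align(ref, alt, costs, ins_del=1.0, default_sub=2.0):
    """Align two phone sequences with dynamic programming, using the
    data-driven substitution costs; returns (reference, alternative)
    pairs, where '-' marks an insertion or deletion."""
    n, m = len(ref), len(alt)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * ins_del
    for j in range(1, m + 1):
        D[0][j] = j * ins_del
    sub = lambda i, j: 0.0 if ref[i - 1] == alt[j - 1] else costs.get((ref[i - 1], alt[j - 1]), default_sub)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j - 1] + sub(i, j),
                          D[i - 1][j] + ins_del,
                          D[i][j - 1] + ins_del)
    # Backtrace to recover the aligned phone pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] + sub(i, j):
            pairs.append((ref[i - 1], alt[j - 1])); i -= 1; j -= 1
        elif i > 0 and D[i][j] == D[i - 1][j] + ins_del:
            pairs.append((ref[i - 1], '-')); i -= 1
        else:
            pairs.append(('-', alt[j - 1])); j -= 1
    return pairs[::-1]


if __name__ == "__main__":
    # Toy, hypothetical counts: 't' is often realised as 'd'.
    counts = Counter({('t', 't'): 50, ('t', 'd'): 20, ('d', 'd'): 40, ('a', 'a'): 60})
    costs = association_costs(counts)
    print(align(list("tad"), list("dad"), costs))
    # -> [('t', 'd'), ('a', 'a'), ('d', 'd')]

Each aligned pair that differs (here t -> d) is a candidate pronunciation rule; in the thesis such candidates are subsequently pruned with the acoustic log-likelihood metric rather than kept unconditionally.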

