
dc.contributor.advisor  Kristiansen, Lill  nb_NO
dc.contributor.advisor  Warakagoda, Narada  nb_NO
dc.contributor.advisor  Kvale, Knut  nb_NO
dc.contributor.author  Schie, Thormod  nb_NO
dc.date.accessioned  2014-12-19T14:13:08Z
dc.date.available  2014-12-19T14:13:08Z
dc.date.created  2010-09-05  nb_NO
dc.date.issued  2006  nb_NO
dc.identifier  349038  nb_NO
dc.identifier  ntnudaim:1266  nb_NO
dc.identifier.uri  http://hdl.handle.net/11250/262046
dc.description.abstract  This thesis presents a mobile multimodal service platform that enables users to interact with automated services from a standard mobile terminal in a user-friendly and efficient way. The concept of multimodality was introduced to the world of mobile devices and services because of the limitations that conventional interaction methods pose. There is an emerging trend that people want to be more mobile and have access to different services while on the move. To meet these user needs, service providers develop mobile services. The problem is that these services are becoming ever more complex and require more interaction from the user. The paradox is that the mobile devices from which these new services are accessed have not evolved at the same speed. Most mobile devices sold on the market today basically consist of a display and a simple keypad, so navigating and operating a mobile service requires both patience and dexterity. The multimodal system worked on in this thesis is a speech-centric multimodal platform based on a client-server architecture. The user connects the multimodal client to the multimodal server and can thereafter interact with a multimodal service using both speech commands and a touch-sensitive display to point at objects in the graphical user interface. In response to user queries, the system can present results using both graphics and synthesized speech. This may not sound like a revolutionary new concept, but what the multimodal interface provides is the possibility of simultaneous input: the user may point at an icon on the display while simultaneously issuing a speech command. Interpreted one by one, these two inputs carry no meaning, but interpreted together they constitute a meaningful user query. In this way, the user can interact according to his or her own preferences. The result of the query is also multimodal, i.e. the system presents the results according to the user's preference, which should give a better user experience. The thesis looks into multimodality and relevant technology, and then specifies requirements for the mobile multimodal service. Based on the requirements, a mobile multimodal solution is elaborated. The implementation and the solution presented in the thesis are based on a multimodal platform to which, among others, Telenor R&D has contributed. The original multimodal platform bases the communication between client and server on a WLAN connection. To improve mobility, functionality for connecting client and server over third-generation mobile network technology, referred to as 3G or more specifically UMTS, is implemented. Further, an analysis of how well the implementation covers the specified requirements is performed. Finally, the multimodality and the presented solution are discussed, with emphasis on reliability, usability and openness.  nb_NO
dc.language  eng  nb_NO
dc.publisher  Institutt for telematikk  nb_NO
dc.subject  ntnudaim  no_NO
dc.subject  SIE7 kommunikasjonsteknologi  no_NO
dc.subject  Telematikk  no_NO
dc.title  Mobile Multimodal Service for a 3G-terminal  nb_NO
dc.type  Master thesis  nb_NO
dc.source.pagenumber  103  nb_NO
dc.contributor.department  Norges teknisk-naturvitenskapelige universitet, Fakultet for informasjonsteknologi, matematikk og elektroteknikk, Institutt for telematikk  nb_NO
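
The abstract above describes simultaneous speech-and-touch input whose parts are meaningless alone but form one query when interpreted together. Below is a minimal sketch of such time-windowed fusion, for illustration only: all class and method names here (TouchEvent, SpeechEvent, fuse, FUSION_WINDOW_MILLIS) are hypothetical and do not represent the thesis's actual platform API.

    import java.util.Optional;

    public class FusionSketch {

        /** A tap on the touch-sensitive display, resolved to an object id. */
        record TouchEvent(String objectId, long timestampMillis) {}

        /** A recognized speech command, e.g. "show opening hours for this". */
        record SpeechEvent(String command, long timestampMillis) {}

        /** A combined, interpretable user query. */
        record MultimodalQuery(String command, String objectId) {}

        // Assumption: inputs arriving within this window count as one multimodal act.
        static final long FUSION_WINDOW_MILLIS = 2000;

        /**
         * Neither input alone is a complete query: the speech command lacks a
         * referent and the tap lacks an intent. Fused, they form one query.
         */
        static Optional<MultimodalQuery> fuse(SpeechEvent speech, TouchEvent touch) {
            if (Math.abs(speech.timestampMillis() - touch.timestampMillis())
                    <= FUSION_WINDOW_MILLIS) {
                return Optional.of(new MultimodalQuery(speech.command(), touch.objectId()));
            }
            return Optional.empty();
        }

        public static void main(String[] args) {
            var tap = new TouchEvent("icon_42", 1000);
            var utterance = new SpeechEvent("show opening hours for this", 1800);
            fuse(utterance, tap).ifPresent(q ->
                    System.out.println("Query: " + q.command() + " -> " + q.objectId()));
        }
    }

The design point the sketch illustrates is late fusion: each modality is recognized independently, and only the combination within a short time window yields a complete query, so the user is free to mix pointing and speech according to preference.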

