Combining Speech User Interfaces of Different Applications
spoken dialog system, combining speech applications, natural language processing, dialog modeling
Song, Dongyi
2006
English
Universitätsbibliothek der Ludwig-Maximilians-Universität München
Song, Dongyi (2006): Combining Speech User Interfaces of Different Applications. Dissertation, LMU München: Fakultät für Mathematik, Informatik und Statistik
PDF: song_dongyi.pdf (1MB)

Abstract

Recent technological advances allow for building real-time, interactive, multi-modal dialog systems for a wide variety of applications, ranging from information systems to communication systems interacting with back-end services. To retrieve or update information from various information systems, the user has to interact, among other man-machine interfaces, with speech dialog systems, sometimes simultaneously. This will inevitably lead to a situation where a user has to interact with multiple speech dialog systems within a single thread of activity. Exposing users to such an environment of diverse speech interfaces increases cognitive load and thus degrades usability. An integrated, speech-enabled access layer to all available information from different applications would allow the user to access information more efficiently and easily. This dissertation proposes a novel approach to building such an integrated speech user interface by combining the existing speech user interfaces of different applications automatically or semi-automatically. By analyzing the dialog specifications of the different applications, functional and semantic overlaps between the applications are recognized. These overlaps are resolved at the level of the dialog specification, so that the integrated speech user interface provides transparent access to the different applications, solves the problem of task sharing, and enables information sharing among the applications.
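
The abstract's central mechanism, analyzing dialog specifications to detect functional and semantic overlaps between applications, can be illustrated with a minimal sketch. Everything below (the DialogSpec class, the task and slot names, and the detect-by-intersection rule) is a hypothetical simplification invented for illustration; it is not the dissertation's actual formalism or data model.

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class DialogSpec:
    """Drastically simplified dialog specification: each task names the
    semantic slots (pieces of information) it collects from the user.
    This representation is assumed for the sketch only."""
    app: str
    tasks: dict[str, set[str]] = field(default_factory=dict)

def semantic_overlaps(a: DialogSpec, b: DialogSpec) -> list[tuple[str, str, set[str]]]:
    """Return (task_of_a, task_of_b, shared_slots) for every pair of
    tasks that collect at least one common semantic slot."""
    overlaps = []
    for (ta, slots_a), (tb, slots_b) in product(a.tasks.items(), b.tasks.items()):
        shared = slots_a & slots_b
        if shared:
            overlaps.append((ta, tb, shared))
    return overlaps

# Hypothetical example: a flight-booking app and a hotel-booking app both
# ask for a destination city and a date, so their tasks overlap semantically.
flights = DialogSpec("flights", {"book_flight": {"city", "date", "airline"}})
hotels = DialogSpec("hotels", {"book_hotel": {"city", "date", "room_type"}})

for task_a, task_b, shared in semantic_overlaps(flights, hotels):
    print(f"{flights.app}.{task_a} / {hotels.app}.{task_b} share slots: {sorted(shared)}")
```

In this toy setting, an integrated speech interface would prompt for the shared slots only once and hand the values to both back-end applications, which corresponds to the kind of information sharing across applications that the abstract describes.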