Research Article

Online Multimodal interaction for Speech interpretation

by  Vaishali Ingle, Aditi Deshpande
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 1 - Issue 19
Published: February 2010
Authors: Vaishali Ingle, Aditi Deshpande
DOI: 10.5120/398-594

Vaishali Ingle and Aditi Deshpande. Online Multimodal interaction for Speech interpretation. International Journal of Computer Applications 1, 19 (February 2010), 81-85. DOI=10.5120/398-594

                        @article{ 10.5120/398-594,
                        author  = { Vaishali Ingle and Aditi Deshpande },
                        title   = { Online Multimodal interaction for Speech interpretation },
                        journal = { International Journal of Computer Applications },
                        year    = { 2010 },
                        volume  = { 1 },
                        number  = { 19 },
                        pages   = { 81-85 },
                        doi     = { 10.5120/398-594 },
                        publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2010
                        %A Vaishali Ingle
                        %A Aditi Deshpande
                        %T Online Multimodal interaction for Speech interpretation
                        %J International Journal of Computer Applications
                        %V 1
                        %N 19
                        %P 81-85
                        %R 10.5120/398-594
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

In this paper, we describe an implementation of multimodal interaction for speech interpretation to enable access to the Web. EMMA, whose latest version became a W3C Recommendation on 10 February 2009, is used to translate speech signals into a format that the application language can interpret, greatly simplifying the process of adding multiple input modes to an application. EMMA annotates the interpretation of user input. The lattice is designed by considering the model, the architecture, and the input modalities. The interpretation of the user's input is generated by a speech signal interpretation process.
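To illustrate the kind of annotation the abstract describes, the sketch below (not taken from the paper) builds a small EMMA 1.0 document in which a speech recognizer's competing hypotheses are grouped in `emma:one-of` and annotated with `emma:confidence` and `emma:tokens`, as defined in the W3C EMMA 1.0 Recommendation, and then picks the highest-confidence interpretation. The utterance, element names like `destination`, and the confidence values are hypothetical.

```python
# Minimal sketch: parsing an EMMA 1.0 annotation of speech input.
# The sample document and its application-specific <destination>
# element are illustrative assumptions, not from the paper.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"

# Hypothetical recognizer output: two candidate interpretations of one
# utterance, each carrying emma:confidence and emma:tokens annotations.
emma_doc = f"""
<emma:emma version="1.0" xmlns:emma="{EMMA_NS}">
  <emma:one-of id="r1" emma:medium="acoustic" emma:mode="voice">
    <emma:interpretation id="int1" emma:confidence="0.75"
                         emma:tokens="flights to boston">
      <destination>Boston</destination>
    </emma:interpretation>
    <emma:interpretation id="int2" emma:confidence="0.68"
                         emma:tokens="flights to austin">
      <destination>Austin</destination>
    </emma:interpretation>
  </emma:one-of>
</emma:emma>
"""

def best_interpretation(xml_text):
    """Return (tokens, destination, confidence) of the highest-confidence
    emma:interpretation in the document."""
    root = ET.fromstring(xml_text)
    interps = root.iter(f"{{{EMMA_NS}}}interpretation")
    best = max(interps, key=lambda e: float(e.get(f"{{{EMMA_NS}}}confidence")))
    tokens = best.get(f"{{{EMMA_NS}}}tokens")
    dest = best.find("destination").text
    return tokens, dest, float(best.get(f"{{{EMMA_NS}}}confidence"))

print(best_interpretation(emma_doc))
```

An application layer would typically consume only the selected interpretation's semantic payload (here, `destination`), leaving the confidence and token annotations to the dialogue manager.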

References
  • EMMA: Extensible MultiModal Annotation markup language, W3C Recommendation, 10 February 2009. http://www.w3.org/TR/2009/REC-emma-20090210
  • Multimodality: Simple Technologies Drive a New Breed of Complex Application Input. http://www.devx.com/wireless/Article/27878/1763
  • Michael Johnston, AT&T Labs Research. An Overview of EMMA: Extensible MultiModal Annotation. http://www.research.att.com/~johnston/
  • W3C Multimodal Interaction Activity. http://www.w3.org/2002/mmi/
  • EMMA: Extensible MultiModal Annotation 1.0: Implementation Report. http://www.w3.org/2002/mmi/2008/emma-ir/#NotesOnTesting
Index Terms
Computer Science
Information Sciences
Keywords

EMMA, SALT, X+V, speech modality, annotation, Token, XPath, interpretation lattice
