Developing rich multimodal mobile applications using W3C Standards (tutorial)

Event details

Date: Coordinated Universal Time
Location: San Francisco, USA
Speakers: Nagesh Kharidi and Raj Tumuluri

Using Openstream’s Cue-me™ platform and the AT&T Speech Mashup, participants will learn how the Multimodal Architecture based on World Wide Web Consortium (W3C) standards can be used to develop rich multimodal applications. The session will present the W3C Multimodal Interaction (MMI) architecture and show how it integrates the components of a multimodal system (speech recognition, text-to-speech, ink annotation, handwriting, camera, etc.) into a smoothly coordinated application. Working hands-on with Openstream’s Cue-me™ Studio, participants will learn how to combine these components to build rich interactions into an application in a portable, platform-independent way. The resulting application will be deployed and run on devices running iOS, Android, Windows, and other operating systems.
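
For context, the coordination the MMI architecture provides rests on standardized life-cycle events that an Interaction Manager exchanges with modality components (recognizers, synthesizers, ink, camera, and so on). As a minimal sketch, the XML below shows a StartRequest event, as defined in the W3C MMI Architecture and Interfaces specification, asking a speech-recognition modality component to begin processing; the Source, Target, Context, RequestID, and grammar URL values are illustrative placeholders, not part of the tutorial materials.

    <!-- Interaction Manager asks a speech-recognition modality
         component to start; all identifier values are illustrative. -->
    <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <mmi:StartRequest mmi:Context="ctx-1"
                        mmi:Source="im-1"
                        mmi:Target="asr-1"
                        mmi:RequestID="req-1">
        <!-- Content the component should run, e.g. a speech grammar -->
        <mmi:ContentURL mmi:href="http://example.com/grammar.grxml"/>
      </mmi:StartRequest>
    </mmi:mmi>

The component acknowledges with a StartResponse carrying the same Context and RequestID values, and later reports its result (for example, an EMMA-annotated recognition) in a DoneNotification, which is how the Interaction Manager keeps the various modalities coordinated.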