EMMA 1.1: Enriched information from speech and multimodal inputs

Event details

Location: San Francisco, USA
Speakers: Deborah Dahl and Michael Johnston

Since EMMA 1.0 became a W3C Recommendation in February 2009, there have been numerous implementations, including AT&T's Speech Mashup, Openstream's Cue-Me platform, and Microsoft's Tellme platform. These implementations have generated extensive feedback requesting new features and clarifications of existing ones, and the recently published EMMA 1.1 draft addresses that feedback. This talk introduces the new features proposed for EMMA 1.1. Of particular interest are (1) standardized support for human annotation, (2) enhanced grammar support, including active grammars and inline specification, (3) better support for specifying the parameters of a recognizer or other EMMA processing component, and (4) enhanced interoperability with EmotionML, which supports the growing role of emotion detection in CRM contexts.
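
To make the feature list concrete, here is a minimal sketch of what an EMMA 1.1 result combining an inline grammar (feature 2) with an EmotionML annotation (feature 4) might look like. The emma:emma, emma:grammar, and emma:interpretation elements and their attributes follow EMMA 1.0; the embedded SRGS payload and the placement of the emo:emotion element inside the interpretation are illustrative assumptions based on the proposals described above, not verbatim syntax from the 1.1 draft.

    <emma:emma version="1.1"
        xmlns:emma="http://www.w3.org/2003/04/emma"
        xmlns:emo="http://www.w3.org/2009/10/emotionml">

      <!-- Feature (2): inline grammar specification. EMMA 1.0 only
           allowed emma:grammar with a ref="uri" attribute; embedding
           the SRGS content directly, as below, is an assumed form of
           the proposed inline specification. -->
      <emma:grammar id="gram1">
        <grammar xmlns="http://www.w3.org/2001/06/grammar"
                 xml:lang="en-US" version="1.0"
                 root="request" mode="voice">
          <rule id="request" scope="public">cancel my account</rule>
        </grammar>
      </emma:grammar>

      <emma:interpretation id="int1"
          emma:medium="acoustic" emma:mode="voice"
          emma:confidence="0.82"
          emma:grammar-ref="gram1"
          emma:tokens="cancel my account">
        <action>cancel-account</action>

        <!-- Feature (4): EmotionML interoperability. The category set
             is the standard "big6" vocabulary from the EmotionML
             vocabularies note; attaching the emo:emotion element here,
             inside the interpretation, is an assumption. -->
        <emo:emotion
            category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
          <emo:category name="anger" confidence="0.70"/>
        </emo:emotion>
      </emma:interpretation>
    </emma:emma>

In a sketch like this, the additions are purely additive: a consumer that understands only EMMA 1.0 can still read the interpretation and its application semantics, while an emotion-aware CRM component can pick up the emo:emotion annotation.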