Abstract
In recent years, multimodal interfaces have gained momentum as an alternative to traditional WIMP interaction styles. Existing multimodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of multimodal interaction engines offering native fusion support across different levels of abstraction to fully exploit the power of multimodal interactions. We present Mudra, a unified multimodal interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. Our innovative architecture for multimodal interaction encourages the use of software engineering principles such as modularisation and composition to support a growing set of input modalities as well as to enable the integration of existing or novel multimodal fusion engines.
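To illustrate the fact-base idea described above, the following is a minimal sketch, not Mudra's actual declarative rule language or API: a toy Python fact base in which a hypothetical fusion rule derives a high-level command fact from low-level gesture and speech facts.

```python
# Illustrative sketch only: mimics the idea of a central fact base with
# rules that derive higher-level facts from lower-level input events.
# All names (FactBase, delete_command_rule, the fact tuples) are invented
# for this example and do not appear in the Mudra paper.

class FactBase:
    def __init__(self):
        self.facts = []
        self.rules = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def assert_fact(self, fact):
        """Add a fact, then re-run all rules so derived facts are asserted too."""
        self.facts.append(fact)
        for rule in self.rules:
            for derived in rule(self.facts):
                if derived not in self.facts:
                    self.assert_fact(derived)

# Hypothetical fusion rule: a pointing gesture combined with the spoken
# word "delete" yields a high-level "delete object" command fact.
def delete_command_rule(facts):
    points = [f for f in facts if f[0] == "gesture" and f[1] == "point"]
    speech = [f for f in facts if f[0] == "speech" and f[1] == "delete"]
    if points and speech:
        target = points[-1][2]  # most recently pointed-at object
        return [("command", "delete", target)]
    return []

fb = FactBase()
fb.add_rule(delete_command_rule)
fb.assert_fact(("gesture", "point", "object42"))  # low-level stream event
fb.assert_fact(("speech", "delete", None))        # recognised speech token
print(fb.facts[-1])  # the derived high-level command fact
```

In the actual framework, such rules are written in a declarative rule language rather than imperative Python, which is what allows fusion logic at different abstraction levels to be modularised and composed.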
Original language | English |
---|---|
Title of host publication | Proceedings of the 13th international conference on multimodal interfaces (ICMI '11) |
Publisher | ACM Press |
Pages | 97-104 |
Number of pages | 8 |
ISBN (Print) | 9781450306416 |
DOIs | |
Publication status | Published - 2011 |
Externally published | Yes |
Keywords
- declarative programming
- rule language
- multimodal interaction
- multimodal fusion
- Mudra