Conference Paper

Visual Salience and Perceptual Grouping in Multimodal Interactivity


Romary, Laurent
Max Planck Digital Library, Max Planck Society


Landragin, F., Bellalem, N., & Romary, L. (2001). Visual Salience and Perceptual Grouping in Multimodal Interactivity. In International Workshop on Information Presentation and Natural Multimodal Dialogue (pp. 151-155).

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-877D-8
This paper deals with the pragmatic interpretation of multimodal referring expressions in man-machine dialogue systems. We show the importance of building a structure of the visual context at the semantic level, in order to enrich the range of possible interpretations and to enable fusing this structure with those obtained from the linguistic and gestural semantic analyses. Visual salience and perceptual grouping are the two notions that guide this structuring. We therefore propose a hierarchy of salience criteria linked to an algorithm that detects salient objects, as well as guidelines for grouping algorithms. Integrating the results of all these algorithms is a complex problem; we propose simple heuristics to reduce this complexity, and we conclude on the usability of such heuristics in actual systems.
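The abstract's central idea, a hierarchy of salience criteria used to pick out the most salient object in a visual scene, can be sketched roughly as follows. This is a minimal illustration only: the criteria names, their ordering, and the object attributes below are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch of ranking scene objects by an ordered hierarchy
# of salience criteria. Criteria earlier in the list dominate later
# ones, which lexicographic tuple comparison captures directly.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    pointed_at: bool = False          # gesture salience (assumed criterion)
    recently_mentioned: bool = False  # discourse salience (assumed criterion)
    color_contrast: float = 0.0       # visual prominence (assumed criterion)
    size: float = 0.0                 # visual prominence (assumed criterion)

# Criteria ordered from strongest to weakest; each maps an object to a
# comparable score. A later criterion only breaks ties among earlier ones.
CRITERIA = [
    lambda o: o.pointed_at,
    lambda o: o.recently_mentioned,
    lambda o: o.color_contrast,
    lambda o: o.size,
]

def most_salient(objects):
    """Return the object ranked highest by the criteria hierarchy."""
    return max(objects, key=lambda o: tuple(c(o) for c in CRITERIA))

scene = [
    SceneObject("small red square", color_contrast=0.9, size=1.0),
    SceneObject("large blue circle", size=5.0),
    SceneObject("triangle", recently_mentioned=True, size=2.0),
]
print(most_salient(scene).name)  # → triangle
```

Here the triangle wins despite being smaller, because discourse salience outranks purely visual prominence in this assumed ordering; the fusion with linguistic and gesture analyses discussed in the paper would then weigh such rankings against the referring expression itself.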