  From unsupervised to supervised categorization in vision and haptics

Gaissert, N., Wallraven, C., & Bülthoff, I. (2009). From unsupervised to supervised categorization in vision and haptics. Poster presented at 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-C40F-9
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-136E-D
Genre: Poster

Files


Creators

Creators:
Gaissert, N. (1, 2), Author
Wallraven, C. (1, 2), Author
Bülthoff, I. (1, 2), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Categorization studies have primarily focused on the visual percept of objects, but in everyday life humans combine percepts from different modalities. To better understand this cue combination and the mechanisms underlying categorization, we performed several categorization tasks visually and haptically and compared the two modalities. All experiments used the same set of complex, parametrically-defined, shell-like objects based on three shape parameters (see figure and [Gaissert, N., C. Wallraven and H. H. Bülthoff: Analyzing perceptual representations of complex, parametrically-defined shapes using MDS. Eurohaptics 2008, 265-274]). For the visual task, we used printed pictures of the objects; for the haptic experiments, 3D plastic models were produced with a 3D printer and explored by blindfolded participants using both hands.
Three categorization tasks were performed, in each of which all objects were presented to participants simultaneously. In an unsupervised task, participants categorized the objects into as many groups as they liked. In a semi-supervised task, participants had to form exactly three groups. In a supervised task, participants received three prototype objects (see figure) and sorted all other objects into three categories defined by the prototypes. Each categorization was repeated until the same groups were formed twice in a row. The number of repetitions needed was the same across modalities, showing that the task was equally hard visually and haptically.
For more detailed analyses, we generated similarity matrices recording which stimulus was grouped with which other stimulus. As a measure of consistency – within and across modalities as well as within and across tasks – we calculated cross-correlations between these matrices (see figure). Correlations within modalities were always higher than across modalities. In addition, as expected, the more constrained the task, the more consistently participants grouped the stimuli. Critically, multi-dimensional scaling analysis of the similarity matrices showed that all three shape parameters were perceived visually and haptically in all categorization tasks, but that the weighting of the parameters depended on the modality. In line with our previous results, this demonstrates the remarkable robustness of visual and haptic processing of complex shapes.
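The similarity-matrix analysis described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual code: the groupings below are hypothetical, and the co-occurrence construction (1 where two objects landed in the same group) and Pearson correlation of the matrix upper triangles are standard stand-ins for the paper's unspecified procedures.

```python
import numpy as np

def cooccurrence_matrix(grouping, n_objects):
    """Binary similarity matrix: entry (i, j) is 1 if objects i and j
    were sorted into the same group, else 0."""
    m = np.zeros((n_objects, n_objects))
    for group in grouping:
        for i in group:
            for j in group:
                m[i, j] = 1
    return m

def matrix_correlation(a, b):
    """Pearson correlation of the upper triangles (diagonal excluded)
    of two similarity matrices."""
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Hypothetical groupings of 6 objects by one participant per modality
visual_groups = [[0, 1, 2], [3, 4], [5]]
haptic_groups = [[0, 1], [2, 3, 4], [5]]

v = cooccurrence_matrix(visual_groups, 6)
h = cooccurrence_matrix(haptic_groups, 6)
print(round(matrix_correlation(v, h), 2))  # → 0.32
```

Averaging such matrices over participants and repetitions, then feeding the result to multi-dimensional scaling, would recover a perceptual space whose axes can be compared to the three generating shape parameters.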

Details

Language(s): -
 Dates: 2009-07
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTeX Citekey: 5943
 Degree: -

Event

Title: 10th International Multisensory Research Forum (IMRF 2009)
Place of Event: New York, NY, USA
Start-/End Date: 2009-06-29 - 2009-07-02

Legal Case


Project information


Source 1

Title: 10th International Multisensory Research Forum (IMRF 2009)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: 679
Start / End Page: -
Identifier: -