

Meeting Abstract

Disentangling cross-modal top-down predictive control by actively manipulating arbitrarily learned associations

MPS-Authors

Dwarakanath, A
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Dwarakanath, A., & Kayser, C. (2013). Disentangling cross-modal top-down predictive control by actively manipulating arbitrarily learned associations. In 10th Göttingen Meeting of the German Neuroscience Society, 34th Göttingen Neurobiology Conference (p. 134).


Cite as: http://hdl.handle.net/21.11116/0000-0001-4F5C-1
Abstract
Various studies have characterised the brain as an efficient coding system, describing sensory processing as optimised to the statistics of incoming natural stimuli. A key framework in this respect is predictive coding, which asserts that the brain actively predicts an upcoming sensory stimulus via top-down control mechanisms rather than passively registering it. This top-down control involves propagating a prediction to the primary sensory area, where the error signal between the prediction and the incoming stimulus is computed and passed to higher areas for refinement of the prediction. Accurate predictions hence lead to a decrease in activity in early sensory areas, a presumed signature of predictive coding that several studies have used to confirm the theory. However, many of these studies used high-level stimuli such as speech, for which subjects have strong, innate priors and which come with potentially confounding contextual variables and changes in attention. To account for such confounds in tests of predictive coding, we designed non-contextual priors consisting of arbitrary associations of random perceptual features. The visual stimuli were Gabor patches of six orientations (0° to 165°) paired with pseudo-natural acoustic soundscapes created by filtering a natural sound into six frequency bands (128 Hz to 8192 Hz, logarithmic steps). For each subject, one orientation was randomly associated with one frequency band, and the subject was exposed to short presentations (1.5 s) of these pairs for 15 min. Subsequently, we tested the impact of this learned predictive association on stimulus recognition in supraliminal, subliminal and occluded conditions using a 2AFC task. Stimuli were rendered subliminal using each subject's contrast detection threshold, and occluded stimuli were masked to 50% with white pixel noise.
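The stimulus parameterisation described above (six orientations, six log-spaced frequency bands, one random orientation-sound pairing per subject) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the abstract gives only the endpoints, so the even orientation spacing and the exact band centre frequencies are assumptions.

```python
import numpy as np

# Six Gabor orientations spanning 0 deg to 165 deg (even spacing assumed).
orientations = np.linspace(0.0, 165.0, 6)  # 0, 33, 66, 99, 132, 165 deg

# Six frequency bands from 128 Hz to 8192 Hz in logarithmic (octave-like) steps.
bands = np.logspace(np.log2(128), np.log2(8192), 6, base=2.0)  # Hz

# One random orientation <-> frequency-band association per subject,
# mirroring the 15 min training phase described in the abstract.
rng = np.random.default_rng(seed=1)
pairing = dict(zip(orientations, rng.permutation(bands)))
```

Each subject would then be trained only on the pairs in their own `pairing` dictionary, so the association carries no prior meaning before training.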
During the test phase we presented stimuli both as pairs preserving the previously learned pairing (control phase) and with sounds paired to different orientations, in order to test whether the acoustic predictor changes the perceived orientation. Responses were analysed by calculating bias and d' for each trial type, and a shift in the 50% response bias was taken as evidence for a predictive effect on perceptual decisions. The key finding is that in the test condition the bias shifted towards the orientation predicted by the sound in subliminal, catch and occluded trials. Our results hence provide evidence for predictive coding and top-down biasing in the context of arbitrary audio-visual associations.
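The per-trial-type bias and d' measures mentioned above are standard signal-detection quantities. A minimal sketch, assuming the conventional equal-variance Gaussian model; the function name and the smoothing correction are illustrative additions, not taken from the abstract:

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, fas, crs):
    """Compute sensitivity d' and criterion (bias) c from trial counts.

    Uses a log-linear correction (+0.5 per cell) so that rates of
    exactly 0 or 1 do not produce infinite z-scores.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

Under this convention an unbiased observer has `criterion == 0`; a shift of the criterion between the control and re-paired test trials is the kind of signature the abstract interprets as a predictive, sound-driven bias on perceived orientation.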