
Released

Poster

Integration of Multiple Visual Inputs in the Blowfly

MPS-Authors

Bierig,  K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Hardcastle, B., Schwyn, D., Bierig, K., & Krapp, H. (2016). Integration of Multiple Visual Inputs in the Blowfly. Poster presented at AVA Christmas Meeting 2015, London, UK.


Cite as: http://hdl.handle.net/21.11116/0000-0000-7B78-0
Abstract
The stabilization of gaze may involve multiple sensory systems. In blowflies, two visual pathways provide input to the gaze stabilization system: the high-resolution compound eyes and the simple dorsal ocelli. Individually, these pathways cover different dynamic input ranges, incur different processing delays, and suffer from different levels of sensor and processing noise. Information from multiple sensory pathways must be integrated in order to effect appropriate movements of the head to stabilize gaze; however, it is not entirely clear how this happens. Using high-speed videography, we investigated the combination of information from the two visual pathways at the behavioral output. We measured compensatory rotations of the head in response to a simulated roll rotation of a false horizon around the fly, oscillating at up to 10 Hz. We found that the ocellar input reduces the response delay by an average of 5 ms but does not significantly affect the response gain or bandwidth. Our results suggest a nonlinear integration of compound eye and ocellar information. We are now performing intracellular recordings from elements along the visuomotor pathway likely to be involved in the integration of motion vision and ocellar signals, in response to the same visual stimulus used to evoke head movements in our behavioral experiments. This will allow us to study how signals affected by different processing delays along the two visual pathways are combined to ultimately reduce the delay of the behavioral output.
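The abstract reports a response-delay measurement between a roll stimulus and the compensatory head rotation. The poster does not describe the analysis method, but one standard way to estimate such a delay is the lag of the peak cross-correlation between the stimulus and response traces. The sketch below is purely illustrative: the signal names, sampling rate, and the simulated 5 ms delay are assumptions for the example, not data or code from the study.

```python
import numpy as np

# Illustrative sketch: estimate the delay between a 10 Hz roll stimulus
# and a simulated compensatory head-roll response via cross-correlation.
# All parameters here are assumed for demonstration only.

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of simulated recording
delay_s = 0.005                  # simulated 5 ms response delay

stimulus = np.sin(2 * np.pi * 10 * t)           # 10 Hz horizon roll
head = -np.sin(2 * np.pi * 10 * (t - delay_s))  # delayed, sign-inverted
                                                 # compensatory response

def estimate_delay(stim, resp, fs):
    """Return the lag (in seconds) maximizing the cross-correlation
    between the stimulus and the sign-inverted compensatory response."""
    xcorr = np.correlate(-resp, stim, mode="full")
    lags = np.arange(-len(stim) + 1, len(stim))
    return lags[np.argmax(xcorr)] / fs

print(f"estimated delay: {estimate_delay(stimulus, head, fs) * 1000:.1f} ms")
```

With noise-free sinusoids the estimate recovers the simulated 5 ms exactly; real traces would require detrending and averaging over trials, details the poster does not specify.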