Keywords: -
Abstract:
The prevalent means of characterizing stimulus selectivity in sensory neurons is to estimate their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble.
This approach treats each spike as an independent message but ignores the possibility that information might be conveyed through patterns of neural activity that are distributed across space or time.
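For concreteness, a minimal sketch of this spike-triggered averaging step in Python; the array names, shapes, and binning are illustrative assumptions and not tied to any particular dataset:

    import numpy as np

    def spike_triggered_average(stimulus, spikes, n_lags=10):
        # stimulus: (T, d) array of stimulus frames over time
        # spikes:   (T,)   array of binned spike counts
        # n_lags:   number of stimulus frames preceding each bin to average over
        T, d = stimulus.shape
        sta = np.zeros((n_lags, d))
        n_spikes = 0
        for t in range(n_lags, T):
            if spikes[t] > 0:
                # accumulate the stimulus window preceding this bin, weighted by spike count
                sta += spikes[t] * stimulus[t - n_lags:t]
                n_spikes += spikes[t]
        return sta / max(n_spikes, 1)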
In the retina, for example, visual stimuli are analyzed by several parallel channels with different spatiotemporal filtering properties. How can we define the receptive field of a whole population of neurons, rather than just a single neuron?
Imaging methods (such as voltage-sensitive dye imaging) yield measurements of neural activity that do not contain spiking events at all. How can receptive fields be derived from this kind of data?
Even for single neurons, there is evidence that multiple features of the neural response, for example spike patterns or latencies, can carry information. How can these features be taken into account in the estimation process?
Here, we address the question of how receptive fields can be calculated from such distributed representations. We seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled, as measured by the mutual information between the two signals. As an efficient implementation of this strategy, we use an extension of reverse-correlation methods based on canonical correlation analysis [1]. We evaluate our approach using both simulated data and multi-electrode recordings from rabbit retinal ganglion cells [2]. In addition, we show how the model can be extended to capture nonlinear stimulus-response relationships and to test different coding mechanisms using kernel canonical correlation analysis [3].
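A rough sketch of the central canonical-correlation step, using scikit-learn's CCA as a stand-in implementation; the variable names, data shapes, and synthetic arrays are assumptions for illustration and do not reproduce the authors' code:

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 64))   # stimulus features per time bin (e.g. flattened image patches)
    Y = rng.standard_normal((5000, 20))   # response features per time bin (e.g. spike counts of a neural
                                          # population, imaging signals, or latency/pattern features)

    # CCA finds paired projections such that the projected stimulus and the projected response
    # are maximally correlated; for jointly Gaussian signals this also maximizes their mutual
    # information. The leading stimulus projection then plays the role of a population
    # receptive field, and the matching response projection is the coupled activity pattern.
    cca = CCA(n_components=2)
    cca.fit(X, Y)
    stimulus_filters = cca.x_weights_     # stimulus features most reliably coupled to the response
    response_patterns = cca.y_weights_    # corresponding patterns of neural activity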