
Record


Released

Poster

Standardizing and benchmarking data analysis for calcium imaging

MPG Authors
There are no MPG authors in this publication.
External Resources

Link (any full text)

Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Berens, P., Theis, L., Stone, J., Sofroniew, N., Tolias, A., Bethge, M., et al. (2017). Standardizing and benchmarking data analysis for calcium imaging. Poster presented at Computational and Systems Neuroscience Meeting (COSYNE 2017), Salt Lake City, UT, USA.


Citation link: https://hdl.handle.net/21.11116/0000-0000-C50B-6
Abstract
Two-photon laser scanning microscopy with fluorescent calcium indicators is widely used to measure the activity of large populations of neurons. Extracting biologically relevant signals of interest without manual intervention remains a challenge. Two key problems are identifying image regions corresponding to individual neurons, and then detecting the timing of individual spikes from their derived fluorescence traces. The neuroscience community still lacks automated and agreed-upon solutions to these problems. Motivated by algorithm benchmarking efforts in computer vision and machine learning, we built two web-based benchmarking systems, Neurofinder (http://neurofinder.codeneuro.org) and Spikefinder (http://spikefinder.codeneuro.org), to compare algorithm performance on standardized datasets. Both were built with modular and modern open-source tools, allowing easy reuse for other data analysis problems.

Neurofinder considers the problem of identifying neuron somata in fluorescence movies. We assembled a collection of training datasets from multiple labs in a standardized format, each with labeled regions defined manually, in some cases guided by activity-independent anatomical markers. Algorithm results are submitted through a web application and evaluated on independent test data whose labels have not been made public. Evaluation metrics separately assess the accuracy of neuronal locations and shapes. Submitted results are stored in a database, and the metrics are presented in a leaderboard.

Spikefinder considers the problem of detecting spike times from fluorescence traces, building on a recent quantitative comparison of existing spike inference algorithms (Theis et al., 2016). Here, we assembled training data with simultaneously measured calcium traces and electrophysiologically recorded action potentials. The performance of submitted algorithms is evaluated on a test dataset using several metrics, including correlation, information gain, and standard measures from signal detection. Both challenges are currently running with publicly contributed algorithms. We hope this approach will both improve our understanding of how current algorithms perform and generate new crowd-sourced solutions to current and future analysis problems.
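The two evaluation pipelines described in the abstract can be made concrete with short sketches. The first is a minimal illustration of a Neurofinder-style region score: predicted neuron centers are greedily matched to labeled centers within a pixel threshold, yielding precision and recall. The function name, the distance threshold, and the greedy matching strategy are illustrative assumptions; the challenge's actual metrics also assess the shapes of matched regions, which this sketch omits.

    import numpy as np

    def match_centers(pred_centers, true_centers, max_dist=5.0):
        # Greedy nearest-neighbor matching of predicted to labeled
        # neuron centers (illustrative only; the real Neurofinder
        # metrics also score pixel-level overlap of matched shapes).
        pred = [np.asarray(c, float) for c in pred_centers]
        true = [np.asarray(c, float) for c in true_centers]
        unmatched = list(range(len(true)))
        hits = 0
        for p in pred:
            if not unmatched:
                break
            dists = [np.linalg.norm(p - true[j]) for j in unmatched]
            k = int(np.argmin(dists))
            if dists[k] <= max_dist:  # count a hit within max_dist pixels
                hits += 1
                unmatched.pop(k)
        precision = hits / len(pred) if pred else 0.0
        recall = hits / len(true) if true else 0.0
        return precision, recall

A Spikefinder-style correlation score can be sketched in the same spirit, assuming a predicted spike rate and ground-truth spike counts already sampled at the same rate; the downsampling factor and binning scheme here are illustrative assumptions, not the challenge's actual settings.

    import numpy as np

    def correlation_score(predicted_rate, true_counts, factor=4):
        # Pearson correlation between a predicted spike rate and the
        # ground-truth spike counts, after averaging both down by
        # `factor` to a coarser temporal resolution.
        def downsample(x):
            x = np.asarray(x, float)
            n = len(x) // factor
            return x[:n * factor].reshape(n, factor).mean(axis=1)
        p = downsample(predicted_rate)
        t = downsample(true_counts)
        p, t = p - p.mean(), t - t.mean()
        denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
        return float((p * t).sum() / denom) if denom > 0 else 0.0

Downsampling before correlating reflects a common design choice in spike-inference evaluation: it forgives small timing jitter while still rewarding algorithms that place spikes in roughly the right bins.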