
Released

Poster

Markerless tracking of user-defined anatomical features with deep learning

MPS-Authors

Bethge, M
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Mathis, A., Mamidanna, P., Abe, T., Cury, K., Murthy, V., Mathis, M., et al. (2018). Markerless tracking of user-defined anatomical features with deep learning. Poster presented at CSF Conference: Hand, Brain and Technology: The Somatosensory System (HBT 2018), Monte Verità, Switzerland.


Cite as: https://hdl.handle.net/21.11116/0000-0002-B7C7-F
Abstract
Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for observing and recording animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time-consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist computer-based tracking, yet markers are intrusive (especially for smaller animals), and their number and location must be determined a priori. We present a highly efficient method for markerless tracking based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in a broad collection of experimental settings: odor trail tracking in mice, egg-laying behavior in Drosophila, and mouse hand articulation in a skilled forelimb task. During the skilled reaching behavior, for example, individual joints can be tracked automatically, and a confidence score is reported for each. Remarkably, even when only a small number of frames are labeled, the algorithm achieves tracking performance on test frames that is comparable to human accuracy.
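
The abstract does not spell out the implementation, but the core idea it describes (reusing a network pretrained on a large image dataset and fine-tuning a small head that predicts one score map per user-defined body part, with the peak of each map giving both the predicted location and a confidence value) can be sketched. The following PyTorch code is a minimal, illustrative sketch under those assumptions, not the authors' implementation; the backbone choice, layer sizes, and helper names are assumptions made here for clarity.

    # Minimal sketch of transfer learning for markerless body-part tracking.
    # A ResNet pretrained on ImageNet serves as the feature extractor; a small
    # deconvolutional head maps its features to per-body-part score maps.
    import torch
    import torch.nn as nn
    from torchvision import models

    class KeypointTracker(nn.Module):
        def __init__(self, num_bodyparts: int):
            super().__init__()
            # Pretrained backbone (downloads ImageNet weights on first use);
            # the final pooling and classifier layers are dropped.
            backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            self.features = nn.Sequential(*list(backbone.children())[:-2])
            # Deconvolution upsamples the coarse feature map and emits one
            # score map ("heatmap") per user-defined body part.
            self.head = nn.ConvTranspose2d(2048, num_bodyparts,
                                           kernel_size=4, stride=2, padding=1)

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            # (N, 3, H, W) -> (N, num_bodyparts, H/16, W/16) score maps
            return self.head(self.features(images))

    def locate(scoremaps: torch.Tensor):
        """Peak (x, y) location and confidence per body part, one image."""
        parts, h, w = scoremaps.shape
        flat = scoremaps.view(parts, -1)
        conf, idx = torch.sigmoid(flat).max(dim=1)  # peak value = confidence
        xy = torch.stack((idx % w, torch.div(idx, w, rounding_mode="floor")),
                         dim=1)
        return xy, conf

    model = KeypointTracker(num_bodyparts=4)   # e.g. four joints of a forelimb
    maps = model(torch.randn(1, 3, 256, 256))  # one (dummy) RGB video frame
    xy, conf = locate(maps[0])
    print(xy.shape, conf)                      # (4, 2) locations, 4 confidences

In practice only a modest number of hand-labeled frames would be used to fine-tune such a head (and optionally the backbone), which is what makes the transfer-learning approach data-efficient; the sigmoid peak value above is one simple way to realize the per-joint confidence score the abstract mentions.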