Keywords:
-
Abstract:
Proprioceptive signals are a critical component of our ability to perform complex movements, identify our posture and adapt to environmental changes. Our movements are generated by a large number of muscles and are sensed via a myriad of different receptor types. Even the most important of these, muscle spindles, carry highly multiplexed information. For instance, arm movements are sensed via distributed and individually ambiguous activity patterns of muscle spindles, which depend on relative joint configurations rather than the absolute hand position. This high-dimensional input (~50 muscles for a human arm) of distributed information poses a challenging decoding problem for the nervous system. Given the diversity in muscle activity, what computations does the proprioceptive system need to perform to sense our movements?

Here we studied a proprioceptive variant of the handwritten character recognition task to gain insight into these potential computations. We focussed on classifying handwritten characters from the muscle length configurations required to draw them. We started from a dataset of pen-tip trajectories recorded while subjects wrote individual single-stroke characters of the Latin alphabet (Williams et al., ICANN 2006) and employed a musculoskeletal model of the human upper limb to generate the muscle length configurations corresponding to drawing these trajectories in multiple horizontal and vertical planes. Using this model, we created a large, scalable dataset of muscle length configurations corresponding to handwritten characters of varying sizes, locations and orientations (n > 10⁵ samples).
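The abstract does not detail the trajectory-to-muscle pipeline, so the following is only a minimal sketch of the idea: a planar two-link arm with crude linear muscle-length proxies stands in for the full upper-limb musculoskeletal model. The segment lengths, moment arms and helper names (inverse_kinematics, muscle_lengths) are illustrative assumptions, not part of the actual model.

    # Illustrative sketch only: a planar two-link "arm" stands in for the full
    # musculoskeletal model of the human upper limb used in the study.
    import numpy as np

    L1, L2 = 0.30, 0.35          # assumed upper-arm / forearm segment lengths (m)

    def inverse_kinematics(xy):
        """Shoulder and elbow angles of a two-link arm reaching each point in xy."""
        x, y = xy[:, 0], xy[:, 1]
        d2 = x**2 + y**2
        cos_elbow = np.clip((d2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
        elbow = np.arccos(cos_elbow)
        shoulder = np.arctan2(y, x) - np.arctan2(L2 * np.sin(elbow),
                                                 L1 + L2 * np.cos(elbow))
        return np.stack([shoulder, elbow], axis=1)        # (T, 2)

    def muscle_lengths(joint_angles):
        """Crude linear proxies: rest length + moment arm * joint angle."""
        # 4 mono-articular "muscles": flexor/extensor pairs at shoulder and elbow.
        moment_arms = np.array([[ 0.03,  0.00],
                                [-0.03,  0.00],
                                [ 0.00,  0.02],
                                [ 0.00, -0.02]])
        return 0.25 + joint_angles @ moment_arms.T        # (T, 4)

    # Usage: place a pen-tip trajectory in the workspace, then map it to muscle space.
    t = np.linspace(0, 2 * np.pi, 200)
    pen_tip = 0.05 * np.stack([np.cos(t), np.sin(t)], axis=1) + np.array([0.30, 0.20])
    lengths = muscle_lengths(inverse_kinematics(pen_tip))  # (200, 4) time series

Even in this toy setting, translating or rescaling the pen-tip trajectory changes the resulting muscle-length patterns considerably, which is the kind of variability the full dataset covers by drawing the characters at multiple sizes, locations and planes.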
To determine the difficulty of this problem, we trained support vector machines (SVMs) to solve a binary one-vs-all classification task on this dataset, achieving an accuracy of 0.89 ± 0.08 (mean ± s.e.m.). Contrary to naive expectation, reading out the character is much easier at the level of muscles: SVMs trained on the same task using pen-tip coordinates performed markedly worse (0.75 ± 0.14). This suggests that the musculoskeletal system itself serves as a non-linear projection into a higher-dimensional space, which simplifies character recognition.
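As a rough illustration of such a baseline, the sketch below trains one binary "character vs. rest" SVM per class on flattened trajectories and reports the mean ± s.e.m. across classes. The placeholder data, RBF kernel, C value and balanced-accuracy metric are assumptions, not the settings used in the study.

    import numpy as np
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data standing in for the real features: each row is a flattened
    # muscle-length (or pen-tip) trajectory, each label one of 20 characters.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((600, 200 * 25))      # (samples, timesteps * muscles)
    y = rng.integers(0, 20, size=600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # One binary "this character vs. all others" SVM per class, summarized as
    # mean +/- s.e.m. across classes.
    accs = []
    for c in np.unique(y):
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        svm.fit(X_tr, y_tr == c)
        accs.append(balanced_accuracy_score(y_te == c, svm.predict(X_te)))
    accs = np.array(accs)
    print(f"{accs.mean():.2f} +/- {accs.std(ddof=1) / np.sqrt(len(accs)):.2f}")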
Next we focussed on goal-driven deep neural network architectures to achieve higher accuracy. Training deep neural networks requires large, diverse datasets and challenging tasks, and we found that the scalable character recognition dataset we generated is large enough to constrain deep convolutional architectures. We are currently exploring how different deep learning architectures solve the handwritten character classification task, to investigate which representations are learned and which computations are most efficient. We found that convolutional neural networks that factor out temporal and inter-muscle ('spatial') information achieve almost perfect accuracy on the multi-class problem. These preliminary results suggest that neural networks can learn pose-invariant character recognition from muscle configurations.
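One plausible reading of "factoring out temporal and inter-muscle information" is a network that first convolves along the time axis and only then mixes information across muscles. The PyTorch sketch below illustrates this; the layer sizes, kernel widths and the assumed 25-muscle × 200-timestep input are placeholders, not the architecture used in the study.

    import torch
    import torch.nn as nn

    class SpatiotemporalNet(nn.Module):
        """Toy classifier that first filters along time, then across muscles.

        Input: (batch, 1, n_muscles, n_timesteps), e.g. 25 muscles x 200 timesteps.
        """
        def __init__(self, n_muscles=25, n_classes=20):
            super().__init__()
            self.temporal = nn.Sequential(        # 1-D filters along time only
                nn.Conv2d(1, 16, kernel_size=(1, 9), padding=(0, 4)),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 2)),
            )
            self.spatial = nn.Sequential(         # mix information across muscles
                nn.Conv2d(16, 32, kernel_size=(n_muscles, 1)),
                nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, x):
            return self.head(self.spatial(self.temporal(x)))

    # Usage with a dummy batch of muscle-length "movies".
    net = SpatiotemporalNet()
    logits = net(torch.randn(8, 1, 25, 200))      # (8, 20) class scores

Splitting the convolution this way shares the temporal filters across all muscles before any cross-muscle mixing, which is one way such a network might support the pose-invariant readout suggested by the preliminary results.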