  Fine-Grained Semantic Segmentation of Motion Capture Data using Convolutional Neural Networks

Cheema, N. (2019). Fine-Grained Semantic Segmentation of Motion Capture Data using Convolutional Neural Networks. Master Thesis, Universität des Saarlandes, Saarbrücken.

Files

2019 MSc Thesis Noshaba Cheema - Visual Computing.pdf (Any fulltext), 5MB
 
File Permalink: -
Name: 2019 MSc Thesis Noshaba Cheema - Visual Computing.pdf
Description: -
OA-Status: -
Visibility: Restricted (Max Planck Institute for Informatics, MSIN)
MIME-Type / Checksum: application/pdf
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Cheema, Noshaba¹, Author
Slusallek, Philipp², Advisor
Hosseini, Somayeh², Referee
Slusallek, Philipp², Referee
Theobalt, C.³, Referee

Affiliations:
¹ International Max Planck Research School, MPI for Informatics, Max Planck Society, Campus E1 4, 66123 Saarbrücken, DE, ou_1116551
² External Organizations, ou_persistent22
³ Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: -
Abstract: Human motion capture data has been widely used in data-driven character animation. In
order to generate realistic, natural-looking motions, most data-driven approaches require
considerable pre-processing effort, including motion segmentation and annotation. Existing
(semi-)automatic solutions either require hand-crafted features for motion segmentation or do
not produce the semantic annotations required for motion synthesis and for building large-scale
motion databases. In this thesis, a semi-automatic framework for semantic segmentation of
motion capture data based on (semi-)supervised machine learning techniques is developed. The
motion capture data is first transformed into a “motion image” so that common convolutional
neural networks for image segmentation can be applied. Convolutions over the time domain
enable the extraction of temporal information, and dilated convolutions are used to enlarge the
receptive field exponentially with comparably few layers and parameters. The resulting dilated
temporal fully-convolutional model is compared against state-of-the-art models in action
segmentation, as well as against a popular network for sequence modeling. The models are
further tested on noisy and inaccurate training labels, and the developed model is found to be
surprisingly robust and self-correcting.
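The abstract's claim that dilated convolutions enlarge the receptive field exponentially with few layers can be illustrated with a short sketch. The function below is not from the thesis; it assumes the common setup of stacked 1-D dilated convolutions with kernel size 3 and the dilation rate doubling at every layer (d_i = 2^i), which yields a receptive field of 2^(L+1) − 1 frames after L layers:

```python
def receptive_field(num_layers, kernel_size=3):
    """Receptive field (in time steps) of a stack of 1-D dilated
    convolutions whose dilation doubles per layer: d_i = 2**i.

    Each layer with kernel size k and dilation d adds (k - 1) * d
    time steps of context on top of the single starting frame.
    Illustrative only; kernel size and dilation schedule are
    assumptions, not taken from the thesis.
    """
    rf = 1
    for i in range(num_layers):
        rf += (kernel_size - 1) * 2 ** i
    return rf

# With kernel size 3, the receptive field after L layers is 2**(L+1) - 1:
# 1 layer -> 3, 2 layers -> 7, 4 layers -> 31, 8 layers -> 511 frames.
for layers in (1, 2, 4, 8):
    print(layers, receptive_field(layers))
```

Eight such layers already cover 511 time steps, whereas eight ordinary (dilation-1) convolutions of the same kernel size would cover only 17, which is the efficiency argument the abstract alludes to.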

Details

Language(s): eng - English
 Dates: 2019-03-29
 Publication Status: Issued
 Pages: 127 p.
 Publishing info: Saarbrücken : Universität des Saarlandes
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: Cheema_MSc2019
 Degree: Master