  Combining appearance and motion for human action classification in videos

Dhillon, P., Nowozin, S., & Lampert, C. (2009). Combining appearance and motion for human action classification in videos. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 22-29). Piscataway, NJ, USA: IEEE Service Center.

Creators

Creators:
Dhillon, P. S., Author
Nowozin, S. 1, 2, Author
Lampert, C. 1, 2, Author
Affiliations:
1 Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Keywords: -
Abstract: An important cue to high-level scene understanding is to analyze the objects in the scene together with their behavior and interactions. In this paper, we study the problem of classifying activities in videos, an integral component of any scene understanding system, and present a novel approach for recognizing human action categories in videos by combining information from the appearance and motion of human body parts. Our approach is based on tracking human body parts using mixture particle filters and then clustering the particles with local non-parametric clustering, thereby associating a local set of particles with each cluster mode. The trajectories of these cluster modes provide the “motion” information, while the “appearance” information is provided by statistics of the relative motion of each local set of particles over a number of frames. We then use a “Bag of Words” model to build one histogram per video sequence from this set of robust appearance and motion descriptors. These histograms provide characteristic information that helps discriminate among various human actions, which in turn supports a better understanding of the complete scene. We tested our approach on the standard KTH and Weizmann human action datasets, and the results were comparable to state-of-the-art methods. Additionally, our approach is able to distinguish activities that involve motion of the complete body from those in which only certain body parts move. In other words, our method discriminates well between activities with “global body motion”, such as running and jogging, and “local motion”, such as waving and boxing.
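
The abstract describes building one “Bag of Words” histogram per video from appearance and motion descriptors. The sketch below illustrates only that quantization step, under the assumption of a k-means codebook (the abstract does not specify how the vocabulary is built); the descriptor extraction via particle-filter tracking and mode clustering is replaced by random placeholder data, and the codebook size K = 64 is an arbitrary choice, not a value from the paper.

```python
# Illustrative sketch (not the authors' code): per-video Bag-of-Words histograms
# from appearance/motion descriptors. Descriptor extraction is stubbed with
# random data; a k-means codebook is an assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder: each video yields a variable-length set of D-dimensional
# appearance+motion descriptors (e.g. one per tracked cluster mode per window).
D = 32
videos = [rng.normal(size=(rng.integers(50, 200), D)) for _ in range(10)]

# Build a visual vocabulary by k-means over all training descriptors.
K = 64  # codebook size (assumption)
all_descriptors = np.vstack(videos)
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_descriptors)

def bow_histogram(descriptors: np.ndarray) -> np.ndarray:
    """Quantize each descriptor to its nearest codeword and return a
    normalized K-bin histogram for the whole video."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

histograms = np.array([bow_histogram(v) for v in videos])
print(histograms.shape)  # (num_videos, K): features for an action classifier, e.g. an SVM
```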

Details

Language(s):
Date: 2009-06
Publication status: Published
Pages: -
Place, Publisher, Edition: -
Table of contents: -
Review method: -
Identifiers: DOI: 10.1109/CVPR.2009.5204237
BibTex citekey: 5900
Degree: -

Event

Title: 1st International Workshop on Visual Scene Understanding
Venue: Miami, FL, USA
Start/End date: 2009-06-20 - 2009-06-25

Source 1

Title: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Source genre: Conference proceedings
Creators: -
Affiliations: -
Place, Publisher, Edition: Piscataway, NJ, USA : IEEE Service Center
Pages: -
Volume / Issue: -
Article number: -
Start/End page: 22 - 29
Identifier: ISBN: 978-1-4244-3993-5