



Conference Paper

Topic models for semantics-preserving video compression


Lampert, C. H.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society


Wanke, J., Ulges, A., Lampert, C., & Breuel, T. (2010). Topic models for semantics-preserving video compression. In J. Wang, N. Boujemaa, N. Ramirez, & A. Natsev (Eds.), MIR '10: Proceedings of the international conference on Multimedia information retrieval (pp. 275-284). New York, NY, USA: ACM Press.

Cite as: https://hdl.handle.net/21.11116/0000-0002-94D8-3

Most state-of-the-art systems for content-based video understanding require video content to be represented as collections of many low-level descriptors, e.g., histograms of the color, texture, or motion in local image regions. To preserve as much of the information contained in the original video as possible, these representations are typically high-dimensional, which conflicts with the goal of compact descriptors that would allow better efficiency and lower storage requirements.
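The histogram representations referred to above can be illustrated with a minimal sketch: a per-channel color histogram computed over a frame and concatenated into one descriptor. The helper name and bin count are illustrative assumptions, not the paper's exact feature pipeline.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Quantize each RGB channel into `bins` bins and concatenate the
    per-channel histograms into a single normalized descriptor.
    (Hypothetical helper for illustration only.)"""
    frame = np.asarray(frame, dtype=np.uint8)
    hists = []
    for c in range(3):  # R, G, B channels
        h, _ = np.histogram(frame[..., c], bins=bins, range=(0, 256))
        hists.append(h)
    h = np.concatenate(hists).astype(float)
    return h / h.sum()  # normalize to a probability distribution

# A synthetic 64x64 RGB "frame" stands in for real video data.
frame = np.random.randint(0, 256, size=(64, 64, 3))
desc = color_histogram(frame)
print(desc.shape)  # (24,): 3 channels x 8 bins
```

Even this toy descriptor grows quickly once texture and motion channels are added and histograms are computed over many local regions, which is the dimensionality problem the abstract describes.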

In this paper, we address the problem of semantic compression of video, i.e., the reduction of low-level descriptors to a small number of dimensions while preserving most of the semantic information. To this end, we adapt topic models, which have previously been used as compact representations of still images, to take into account the temporal structure of a video as well as multi-modal components such as motion information.
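The basic topic-model reduction step can be sketched with an off-the-shelf latent Dirichlet allocation model: bag-of-visual-words count histograms are mapped to low-dimensional topic-proportion vectors. This is only a generic illustration of the underlying idea; the paper's model additionally handles temporal structure and multiple modalities, which this sketch omits. The codebook size and topic count below are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Synthetic bag-of-visual-words counts: 100 "frames" over a 500-word codebook.
counts = rng.poisson(1.0, size=(100, 500))

# Reduce each 500-d count histogram to a 25-d topic-proportion vector,
# a 20:1 reduction in descriptor dimensionality.
lda = LatentDirichletAllocation(n_components=25, random_state=0)
topics = lda.fit_transform(counts)

print(topics.shape)  # (100, 25); each row is a distribution over topics
```

The resulting topic proportions can then be fed to a classifier in place of the raw histograms, which is the sense in which the compression is "semantics-preserving".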

Experiments on a large-scale collection of YouTube videos show that we can achieve a compression ratio of 20:1 compared to ordinary histogram representations, and of at least 2:1 compared to other dimensionality reduction techniques, without significant loss of prediction accuracy. We also demonstrate improvements from our video-specific extensions that model temporal structure and multiple modalities.