Record


Released

Thesis

Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval

MPG Authors

Grosche, Peter Matthias
Computer Graphics, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Grosche, P. M. (2012). Signal Processing Methods for Beat Tracking, Music Segmentation, and Audio Retrieval. PhD Thesis, Universität des Saarlandes, Saarbrücken. doi:10.22028/D291-26471.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0015-0D64-1
Abstract
The goal of music information retrieval (MIR) is to develop novel strategies
and techniques for organizing, exploring, accessing, and understanding music
data in an efficient manner.
Converting waveform-based audio data into semantically meaningful feature
representations using digital signal processing techniques lies at the center
of MIR and remains a difficult research problem because of the complexity and
diversity of music signals.
In this thesis, we introduce novel signal processing methods
that allow for extracting musically meaningful information from audio signals.
As our main strategy, we exploit musical knowledge about the signals' properties
to derive feature representations that are robust to musical variations while
retaining a high degree of musical expressiveness. We apply this general
strategy to three different areas of MIR:
Firstly, we introduce novel techniques for extracting tempo and beat
information, where we particularly consider challenging music with changing
tempo and soft note onsets. Secondly, we present novel algorithms for the
automated segmentation and analysis of folk song field recordings, where one
has to cope with significant fluctuations in intonation and tempo as well as
recording artifacts. Thirdly, we explore a cross-version approach
to content-based music retrieval based on the query-by-example paradigm. In all
three areas, we focus on application scenarios where strong musical variations
make the extraction of musically meaningful information a challenging task.
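As a concrete illustration of the first area, the sketch below computes a spectral-flux onset envelope and a rough global tempo estimate by autocorrelating that envelope. This is a generic, minimal construction, not the beat tracking methods developed in the thesis; the STFT parameters, the BPM search range, and the synthetic click-track input are assumptions chosen only for readability.

```python
# Minimal, generic sketch: spectral-flux onset envelope plus a rough global
# tempo estimate via autocorrelation. Illustrative only; NOT the thesis's
# method, and all parameter values are assumptions.
import numpy as np
from scipy.signal import stft

def onset_envelope(x, sr, n_fft=2048, hop=512):
    """Spectral-flux novelty curve: half-wave rectified magnitude increase."""
    _, _, X = stft(x, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    mag = np.log1p(100.0 * np.abs(X))        # log compression
    flux = np.diff(mag, axis=1)
    flux[flux < 0] = 0.0                     # keep only energy increases (onsets)
    novelty = flux.sum(axis=0)
    return novelty - novelty.mean(), sr / hop    # curve and its frame rate

def estimate_tempo(novelty, frame_rate, bpm_min=40, bpm_max=240):
    """Pick the autocorrelation lag with the largest value inside a BPM range."""
    ac = np.correlate(novelty, novelty, mode="full")[len(novelty):]  # lags 1..N-1
    lags = np.arange(1, len(novelty))
    bpm = 60.0 * frame_rate / lags
    valid = (bpm >= bpm_min) & (bpm <= bpm_max)
    best_lag = lags[valid][np.argmax(ac[valid])]
    return 60.0 * frame_rate / best_lag

# Usage on a synthetic 1 kHz click track at 120 BPM (purely made-up input):
sr = 22050
t = np.arange(0, 10, 1.0 / sr)
clicks = np.sin(2 * np.pi * 1000 * t) * (np.mod(t, 0.5) < 0.01)
nov, fr = onset_envelope(clicks, sr)
print(round(estimate_tempo(nov, fr)))  # expect ~120 BPM, possibly a tempo octave off
```

Such a simple global estimator is exactly what breaks down on the material the thesis targets, i.e. music with changing tempo and soft note onsets, which motivates the more robust tempo and beat representations developed there.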
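For the segmentation area, a standard generic building block is a self-similarity matrix over a feature sequence, with boundary candidates read off a checkerboard-kernel novelty curve. The sketch below shows that textbook construction on synthetic input with an assumed kernel size; it is not the folk-song-specific segmentation procedure of the thesis.

```python
# Minimal, generic sketch of novelty-based boundary detection from a
# self-similarity matrix (checkerboard-kernel approach). Illustrative only;
# NOT the folk-song segmentation method of the thesis.
import numpy as np

def novelty_curve(features, kernel_size=16):
    """features: (d, n) feature matrix, e.g. a chromagram. Returns one
    boundary-novelty value per frame from a checkerboard kernel slid
    along the diagonal of the self-similarity matrix."""
    f = features / (np.linalg.norm(features, axis=0, keepdims=True) + 1e-12)
    ssm = f.T @ f                                  # cosine self-similarity
    half = kernel_size // 2
    idx = np.arange(-half, half) + 0.5             # avoid a zero row/column
    kernel = np.outer(np.sign(idx), np.sign(idx))  # checkerboard pattern
    taper = np.exp(-0.5 * (idx / (half / 2.0)) ** 2)
    kernel *= np.outer(taper, taper)               # emphasize the kernel center
    n = ssm.shape[0]
    padded = np.pad(ssm, half, mode="edge")
    nov = np.array([np.sum(padded[t:t + 2 * half, t:t + 2 * half] * kernel)
                    for t in range(n)])
    return np.maximum(nov, 0.0)

def pick_boundaries(nov):
    """Local maxima of the novelty curve above mean + one standard deviation."""
    thr = nov.mean() + nov.std()
    return [t for t in range(1, len(nov) - 1)
            if nov[t] > thr and nov[t] > nov[t - 1] and nov[t] >= nov[t + 1]]

# Usage on synthetic features with one abrupt change at frame 100:
rng = np.random.default_rng(0)
part_a = 5.0 * np.eye(12)[:, [0]] + rng.normal(0, 1, (12, 100))
part_b = 5.0 * np.eye(12)[:, [7]] + rng.normal(0, 1, (12, 100))
print(pick_boundaries(novelty_curve(np.concatenate([part_a, part_b], axis=1))))
# expected: a boundary at (or very near) frame 100
```

Field recordings with drifting intonation and tempo, as described in the abstract, blur exactly the block structure this kind of construction relies on, which is why the thesis treats them separately.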
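The third area, cross-version retrieval in the query-by-example paradigm, can likewise be illustrated with a generic chroma-based comparison: query and database recordings are reduced to pitch-class (chroma) statistics and scored by cosine similarity over all twelve cyclic shifts, so that transposed versions of a piece can still match. Again this is only a sketch under assumed parameters (STFT settings, the crude chroma folding, the placeholder `load_audio`), not the retrieval system described in the thesis.

```python
# Minimal, generic sketch of chroma-based query-by-example matching.
# Illustrative only; NOT the thesis's retrieval system. Parameters and the
# simple averaged-chroma scoring are assumptions made for brevity.
import numpy as np
from scipy.signal import stft

def chroma_features(x, sr, n_fft=4096, hop=2048):
    """Fold STFT magnitudes onto the 12 pitch classes (a crude chromagram)."""
    freqs, _, X = stft(x, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    mag = np.abs(X)
    chroma = np.zeros((12, mag.shape[1]))
    for i, f in enumerate(freqs):
        if 27.5 <= f <= 4186.0:                    # keep roughly the piano range
            chroma[int(np.round(12 * np.log2(f / 440.0))) % 12] += mag[i]
    return chroma / (np.linalg.norm(chroma, axis=0) + 1e-12)

def match_score(query_chroma, doc_chroma):
    """Cosine similarity of time-averaged chroma, maximized over all 12
    cyclic shifts so that transposed versions of a piece can still match."""
    q = query_chroma.mean(axis=1)
    d = doc_chroma.mean(axis=1)
    q = q / (np.linalg.norm(q) + 1e-12)
    d = d / (np.linalg.norm(d) + 1e-12)
    return max(float(np.dot(np.roll(q, s), d)) for s in range(12))

# Hypothetical usage (`load_audio` returning (signal, sr) and the paths are
# placeholders, not part of any real API):
#   query = chroma_features(*load_audio("query.wav"))
#   ranked = sorted(database_paths, reverse=True,
#                   key=lambda p: match_score(query, chroma_features(*load_audio(p))))
```

Averaging the chroma over time discards temporal structure entirely; a cross-version retrieval system of the kind the abstract refers to would instead compare chroma sequences, but the shift-maximized scoring above conveys why chroma features are a natural basis for matching different versions of the same piece.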