

Released

Journal Article

A computational cognitive model for the analysis and generation of voice leadings

MPS-Authors

Harrison, Peter M. C.
Research Group Computational Auditory Perception, Max Planck Institute for Empirical Aesthetics, Max Planck Society; Queen Mary University of London

External Resource
No external resources are shared
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Harrison, P. M. C., & Pearce, M. T. (2020). A computational cognitive model for the analysis and generation of voice leadings. Music Perception, 37(3), 208-224. doi:10.1525/mp.2020.37.3.208.


Cite as: http://hdl.handle.net/21.11116/0000-0006-6C39-2
Abstract
Voice leading is a common task in Western music composition whose conventions are consistent with fundamental principles of auditory perception. Here we introduce a computational cognitive model of voice leading, intended both for analyzing voice-leading practices within encoded musical corpora and for generating new voice leadings for unseen chord sequences. This model is feature-based, quantifying the desirability of a given voice leading on the basis of different features derived from Huron’s (2001) perceptual account of voice leading. We use the model to analyze a corpus of 370 chorale harmonizations by J. S. Bach, and demonstrate the model’s application to the voicing of harmonic progressions in different musical genres. The model is implemented in a new R package, “voicer,” which we release alongside this paper.
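The feature-based approach described in the abstract — scoring candidate voicings by perceptually motivated features and preferring the most desirable one — can be sketched roughly as follows. This is an illustrative assumption, not the actual implementation of the "voicer" package: the feature set (total melodic motion, a parallel-fifths penalty), the weights, and all function names here are hypothetical stand-ins for the richer features the paper derives from Huron (2001).

```python
from itertools import combinations_with_replacement

# Hypothetical sketch of feature-based voice-leading scoring.
# Pitches are MIDI note numbers; voicings are sorted low-to-high lists.

def voice_movement(prev, cand):
    # Total melodic motion in semitones between successive voicings
    # (assumes the same number of voices, matched by position).
    return sum(abs(a - b) for a, b in zip(prev, cand))

def has_parallel_fifths(prev, cand):
    # True if any voice pair forms a perfect fifth (7 semitones mod 12)
    # in both chords while actually moving (sustained fifths are fine).
    n = len(prev)
    for i in range(n):
        for j in range(i + 1, n):
            before = (prev[j] - prev[i]) % 12
            after = (cand[j] - cand[i]) % 12
            moved = (prev[i], prev[j]) != (cand[i], cand[j])
            if before == 7 and after == 7 and moved:
                return True
    return False

def candidate_voicings(pitch_classes, low=48, high=72, n_voices=4):
    # Enumerate sorted n-voice voicings within a fixed range whose
    # pitch classes exactly realize the given chord.
    for combo in combinations_with_replacement(range(low, high + 1), n_voices):
        if {p % 12 for p in combo} == set(pitch_classes):
            yield list(combo)

def score(prev, cand, w_move=1.0, w_parallel=10.0):
    # Weighted sum of feature penalties; lower = more desirable.
    return (w_move * voice_movement(prev, cand)
            + w_parallel * has_parallel_fifths(prev, cand))

def best_voicing(prev, pitch_classes):
    # Generate-and-rank: pick the candidate with the lowest penalty.
    return min(candidate_voicings(pitch_classes), key=lambda c: score(prev, c))

# Voice a C major -> F major progression from a given C major voicing.
prev = [48, 55, 64, 72]              # C3 G3 E4 C5
nxt = best_voicing(prev, {5, 9, 0})  # F major pitch classes
```

In this toy version the analysis and generation uses of the model coincide: the same `score` function that ranks candidates for generation could be evaluated on voicings observed in a corpus.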