Abstract
Cognitive research often focuses on reactions driven by experimental conditions, whereas ethological studies frequently rely on observing specific, naturally occurring behaviors. In both cases, subjects are filmed during the study so that their behavior can be coded from video afterwards. Coding should typically be blind to experimental conditions, yet it often requires more information than is present in the video. We introduce a method for blind coding of behavioral videos that addresses both issues through three main innovations. First, and of particular significance for playback studies, it allows the creation of a “soundtrack” of the study, that is, a track composed of synthesized sounds representing different aspects of the experimental conditions, or other events, over time. Second, it facilitates coding behavior using this audio track together with the original video, which can be muted. This enables coding that is blind to conditions, as required, without ignoring other relevant events. Third, our method
makes use of freely available, multi-platform software, including scripts we developed.
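To illustrate the soundtrack idea described above, the sketch below sonifies a table of timed events into a WAV file that can be loaded alongside the (muted) video. This is not the authors' scripts; it is a minimal Python example under assumptions, and the event log, labels, and pitch mapping are hypothetical placeholders.

```python
# Minimal sketch (not the authors' scripts): turning a hypothetical event log
# into a mono WAV "soundtrack", one distinct tone per event type.
import wave

import numpy as np

SAMPLE_RATE = 44_100  # samples per second

# Hypothetical event log: (onset in seconds, duration in seconds, event label)
events = [
    (2.0, 0.5, "playback_A"),
    (7.5, 0.5, "playback_B"),
    (12.0, 0.3, "observer_enters"),
]

# Map each event type to an easily discriminable pitch (Hz); values are arbitrary.
pitch_map = {"playback_A": 440.0, "playback_B": 660.0, "observer_enters": 880.0}


def build_soundtrack(events, total_seconds, sample_rate=SAMPLE_RATE):
    """Return a mono float array with a pure tone at each event's onset."""
    track = np.zeros(int(total_seconds * sample_rate))
    for onset, duration, label in events:
        t = np.arange(int(duration * sample_rate)) / sample_rate
        tone = 0.5 * np.sin(2 * np.pi * pitch_map[label] * t)
        start = int(onset * sample_rate)
        track[start:start + tone.size] += tone
    return track


def write_wav(path, track, sample_rate=SAMPLE_RATE):
    """Write the float track as 16-bit PCM audio."""
    samples = (np.clip(track, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(samples.tobytes())


write_wav("soundtrack.wav", build_soundtrack(events, total_seconds=15.0))
```

The resulting file can be opened next to the video in any annotation tool that accepts an audio track, so the coder hears when events occur without seeing the condition labels.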