How well can we predict emotions in music? What evidence does the published literature offer for explaining which emotions listeners perceive in music when the source material consists of audio examples? To what degree do the results depend on the actual models, emotions, musical/acoustic features, musical materials, or participants?
To answer these questions, we set out to record and analyse the current state of the art in the literature using a meta-analysis paradigm. We focus on Music Emotion Recognition, hence the acronym metaMER.
The public-facing version of the repository is available at https://tuomaseerola.github.io/metaMER/
We define the aims and methods in the preregistration plan.
Search databases and criteria are documented in studies/search_syntax.qmd.
Data coding and extraction are described in the data template studies/extraction_details.qmd.
Data analysis is covered in the analysis/analysis.qmd document.
The study report is available in the manuscript/manuscript.qmd document.