Scientists Have a New Way to Categorize Music
Being creatures who crave organization, humans like to put things into nice neat piles. That includes music, of course. Who doesn’t sort sounds into words like “rock,” “hip hop” and “pop,” and then into literally hundreds of groups, sub-groups, sub-sub-sub-groups and even sub-sub-sub-sub-groups? That’s worked reasonably well for decades, but maybe there’s something better, something more…scientific.
Researchers from McGill University in Montreal, the University of Cambridge and the Stanford Graduate School of Business took a hard look at how people interact with music. They discovered a way to lump music together based on the personalities of the listeners. Learn these terms: Arousal, Valence and Depth.
From Phys.Org:
There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters.
The study revealed three clusters, which they labeled Arousal, Valence, and Depth.
- Arousal describes intensity and energy in music;
- Valence describes the spectrum of emotions in music (from sad to happy);
- Depth describes intellect and sophistication in music.
They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.
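To make the idea concrete, here is a minimal, hypothetical Python sketch of the general technique the quote describes: rate song excerpts on a handful of attributes, then use a statistical step to boil them down to three composite dimensions. It uses scikit-learn’s factor analysis on made-up attribute names and ratings; it illustrates the kind of procedure involved, not the study’s actual data or pipeline.

```python
# Hypothetical sketch: reduce per-track attribute ratings to three
# composite dimensions, loosely analogous to Arousal, Valence and Depth.
# Attribute names and ratings below are made up for illustration;
# the study's exact procedure and dataset may differ.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Rows = song excerpts, columns = listener ratings (1-7) on each attribute.
attributes = ["intense", "energetic", "sad", "happy", "sophisticated", "complex"]
ratings = np.array([
    [7, 6, 2, 4, 2, 3],   # e.g. a punk track
    [2, 2, 6, 2, 6, 6],   # e.g. a slow jazz ballad
    [5, 6, 1, 7, 2, 2],   # e.g. an upbeat pop song
    [3, 2, 5, 3, 7, 7],   # e.g. an avant-garde piece
    [6, 7, 2, 6, 3, 3],   # e.g. a dance-pop track
    [2, 3, 6, 2, 5, 6],   # e.g. a string quartet
], dtype=float)

# Standardize the ratings, then extract three latent dimensions.
scaled = StandardScaler().fit_transform(ratings)
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(scaled)  # each track's position on the 3 dimensions

# The loadings show which attributes hang together on each dimension,
# which is roughly how labels like Arousal, Valence and Depth get chosen.
for i, loadings in enumerate(fa.components_):
    top = [attributes[j] for j in np.argsort(-np.abs(loadings))[:2]]
    print(f"dimension {i + 1}: strongest attributes -> {top}")
```

Run once over a catalogue already rated on many attributes, a step like this would leave each track with just three composite scores to store and compare, which is roughly the time-and-money argument made above.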