This new Spotify feature listens to your speech, determines your mood, and then recommends appropriate music. Cool or creepy?

Last year, Spotify was granted a patent for tracking personality. The app has found a way to assess “behavioral variables” to offer up super-personalized content. That could mean music, but it will inevitably be used to serve up ultra-targeted advertising.

Now there’s a new patent, “Identification of taste attributes from an audio signal,” that purports to use speech recognition tech to determine the “emotional state, gender, age, or accent” of the user. Those traits can then be used for various types of recommendations (“identifying playable content, based on the processed audio signal content”).

It continues: “intonation, stress, rhythm and the likes of units of speech” could be combined with “acoustic information within a hidden Markov model architecture” (no idea what that means) to help the app determine whether you’re “happy, angry, sad or neutral.” This is done by extracting metadata from your voice or from noise in the background.
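To make the idea a little more concrete, here’s a toy sketch of what mapping acoustic features to those four emotional states might look like. This is purely illustrative: the patent describes a hidden Markov model over acoustic information, whereas this uses invented feature names and simple thresholds just to show the shape of the idea.

```python
# Toy illustration only. The patent describes a hidden Markov model over
# acoustic features; this rule-based stand-in (with invented feature names
# and thresholds) just shows how intonation, stress, and rhythm cues might
# map to the patent's four states: happy, angry, sad, or neutral.

def classify_mood(pitch_variance: float, energy: float, speech_rate: float) -> str:
    """Map coarse acoustic features (all scaled 0-1) to an emotional state.

    Thresholds are illustrative, not from the patent.
    """
    if energy > 0.7 and pitch_variance > 0.5:
        # Loud speech with lively intonation: excited delivery.
        # Fast excited speech reads as happy; slower as angry.
        return "happy" if speech_rate > 0.5 else "angry"
    if energy < 0.3 and speech_rate < 0.4:
        # Quiet, slow, flat delivery.
        return "sad"
    return "neutral"

print(classify_mood(0.8, 0.9, 0.7))  # lively, fast speech
print(classify_mood(0.1, 0.2, 0.2))  # quiet, slow speech
```

A real system would feed continuous acoustic measurements into a trained statistical model rather than hand-written rules, but the input-to-label flow is the same.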

For example, it might detect traffic noise in the background. That would become a data point for music recommendation.

It gets very complicated from there, but Music Business Worldwide has a good explanation.

Alan Cross is an internationally known broadcaster, interviewer, writer, consultant, blogger and speaker. In his 30+ years in the music business, Alan has interviewed the biggest names in rock, from David Bowie and U2 to Pearl Jam and the Foo Fighters. He’s also known as a musicologist and documentarian through programs like The Ongoing History of New Music.
