
This new Spotify feature listens to your speech, determines your mood, and then recommends appropriate music. Cool or creepy?
Last year, Spotify was granted a patent for tracking personality. The app has found a way to assess “behavioral variables” to offer up super-personalized content. That could mean music, but it will inevitably be used to serve up ultra-targeted advertising.
Now there’s a new patent, “Identification of taste attributes from an audio signal,” that purports to use speech recognition tech to determine the “emotional state, gender, age, or accent” of the user. Those traits can then be used for various types of recommendations (“identifying playable content, based on the processed audio signal content”).
It continues: “intonation, stress, rhythm and the likes of units of speech” could be added to “acoustic information within a hidden Markov model architecture” (no idea what that means), which will help the app determine if you’re “happy, angry, sad or neutral.” This is done by extracting metadata from your voice or from noise in the background.
For example, it might detect traffic noise in the background. That would become a data point for music recommendation.
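To make the idea a little more concrete, here’s a toy sketch of how voice traits might map to a mood label. This is purely illustrative and not Spotify’s actual method: the patent describes a trained hidden Markov model over acoustic features, whereas this uses two crude hand-picked features (loudness and pitch activity) with made-up thresholds.

```python
import math

def prosodic_features(samples):
    """Return (energy, pitch_variation) for a list of float audio samples."""
    # RMS energy: a rough proxy for how loud the speech is.
    energy = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Zero-crossing rate: a very rough stand-in for pitch activity.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return energy, crossings / len(samples)

def guess_mood(samples):
    """Map the two features to the patent's four moods (toy thresholds)."""
    energy, variation = prosodic_features(samples)
    if energy > 0.5:
        return "angry" if variation < 0.1 else "happy"
    return "sad" if variation < 0.1 else "neutral"

# A loud, rapidly oscillating signal reads as lively speech.
excited = [0.8 * (-1) ** i for i in range(100)]
# A quiet, slowly varying signal reads as flat, low-energy speech.
flat = [0.1 * math.sin(i / 50) for i in range(100)]

print(guess_mood(excited))  # happy
print(guess_mood(flat))     # sad
```

A real system would replace the thresholds with a model trained on labeled speech, and would fold in the background-noise signals (like that traffic) as extra features.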
It gets very complicated from there, but Music Business Worldwide has a good explanation.