New technologies are shaking up the music industry. In the AI-powered, on-demand music era of 2020, one truth rings out: data has become the currency that matters. 

The larger your database, the more potential value at your fingertips. That’s why major streaming platforms like Spotify, YouTube and Apple Music dominate – they’ve embraced the latest technology trends to streamline their databases.

To truly monetize the potential value of your music catalogue, you need to adapt to your customers’ changing needs. The music industry is technology-driven – even if traditional structures make it hard for music labels, publishing houses and distribution companies to adapt quickly. As new technologies become mainstream, how your customers use them will affect how you organize your database. 

Let’s explore the four major technology trends affecting the future of the music industry, and why they’re important for music catalogue owners.

Trend 1 – Increased media production & consumption



Since the launch of Spotify in 2008, the amount of music content produced and consumed has skyrocketed. In fact, according to the Wall Street Journal, seven times as many songs are released today as in the 1960s. It’s all fueled by the freemium approach adopted by most streaming services. Users sign up for free and get access to an endless catalogue of content. The flip side? Artists and creators can potentially reach millions of listeners worldwide.

With this incentive, content creators have jumped on board, signing exclusive deals with these platforms. Even the film industry joined in; Martin Scorsese recently went with Netflix for the financing and distribution of The Irishman. While these kinds of agreements lead to more content than we can consume in our lifetime, people prefer it that way. The staggering numbers prove it – Spotify currently has a 40 million song catalogue and 217 million monthly users.

More content plus more demand means one thing: the music industry is swimming in data. But here’s the tricky part – trawling through all that data to categorize it in a way that improves your business is a time-consuming task. Even the most dedicated data scientist would agree.

That’s where classification comes in. For example, Spotify uses genre classification to sort through its massive database, so that users can easily find similar tracks. Clear, reliable and speedy classification will prove critical as content and the demand for it continue to increase.
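As a rough illustration of the idea – not Spotify’s actual system, whose internals aren’t public – a classifier can assign a genre by comparing a track’s audio features against simple genre profiles. Everything below (the feature space, the profile values, the genre names) is invented for the sketch:

```python
import math

# Hypothetical genre "profiles" in a toy feature space:
# (tempo in BPM, energy on a 0-1 scale). Real systems learn
# such representations from audio; these values are illustrative.
GENRE_PROFILES = {
    "ambient": (70, 0.2),
    "hip hop": (95, 0.6),
    "electro": (128, 0.8),
}

def classify_genre(tempo, energy):
    """Return the genre whose profile is closest to the track's features."""
    def distance(profile):
        profile_tempo, profile_energy = profile
        # Scale tempo down so both features contribute comparably.
        return math.hypot((tempo - profile_tempo) / 100, energy - profile_energy)
    return min(GENRE_PROFILES, key=lambda genre: distance(GENRE_PROFILES[genre]))

print(classify_genre(120, 0.75))  # a fast, energetic track → "electro"
```

The design choice here – nearest-profile matching over a couple of numeric features – is the simplest possible stand-in for the feature-based classification the paragraph describes; production systems use learned models over far richer audio embeddings.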

Trend 2 – Voice Control


Perhaps one of the most intuitive ways to retrieve a song from a music catalogue is by using your voice. So-called “voice user interfaces” enable users to quickly play whatever they feel like. Spotify has Spotify Voice, a feature for Premium mobile users. It’s a voice assistant capable of responding to commands such as, “Play something I like”, or “Play some hip hop”.

Apple’s well-known voice assistant, Siri, performs the same function for Apple Music. Users can fully control what, how and when their music is played. Because Apple has spent years collecting data, Siri can respond to slightly more sophisticated commands. These include adding songs to your library or a playlist, and asking for the year, genre or artist of a song.

Both these music streaming giants collect data from our voices through AI approaches known as machine learning and natural language processing. By constantly listening, they gather more data, getting to know the ins and outs of your voice so that they can respond better. But it’s not just about learning to recognize your charming accent.

For a voice assistant to play your song request, it needs to make sense of your database. Once again, it comes down to the clarity of your classification. This is also known as tagging or meta-tagging. The larger your library, the more efficient your tagging needs to be. When someone requests a “Halloween electro song”, the software’s algorithm will scan the database for tagged keywords.
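That keyword scan can be sketched in a few lines. The catalogue, tags and filler-word list below are all invented for illustration – real platforms use far richer metadata and ranking:

```python
# Hypothetical tagged catalogue: each track carries a set of meta-tags.
CATALOGUE = [
    {"title": "Graveyard Shift", "tags": {"halloween", "electro", "dark"}},
    {"title": "Summer Breeze", "tags": {"pop", "upbeat", "summer"}},
    {"title": "Pumpkin Funk", "tags": {"halloween", "funk"}},
]

def search(query):
    """Return titles whose tags cover every keyword in the query."""
    keywords = {word.lower() for word in query.split()}
    # Ignore filler words a voice request typically contains.
    keywords -= {"a", "an", "the", "song", "some", "play"}
    return [track["title"] for track in CATALOGUE if keywords <= track["tags"]]

print(search("Play a Halloween electro song"))  # → ['Graveyard Shift']
```

Note how the quality of the result depends entirely on the tags: if no track in the library were tagged “halloween”, the request would come back empty – which is exactly why the article stresses efficient tagging as libraries grow.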

But it can also scan for a particular emotion – and this is what makes voice control great. As the tone of our voices often reflects our mood, devices will more accurately retrieve a song that matches the exact emotional context you’re after – even if you’re searching for a song made by something incapable of feeling at all.

Trend 3 – AI-generated music


© Endel


Even if it doesn’t have feelings (yet?), artificial intelligence sure gets people going. One of the defining technology trends of the last decade, and maybe the buzzword of 2019, AI clearly presents manifold opportunities. This also applies to the music industry, but not in the way some might think. Although AI-generated music dates back to the Illiac Suite of 1957, it’s attracted more interest during the last decade – just last year, the first music-making AI signed a deal with a major label.

While the quality of AI-generated music keeps improving, an algorithm that can generate Oscar-worthy film scores or emotionally riveting material by itself is a distant reality (in a galaxy, far, far away perhaps…). Currently, AI is used more as a tool for assisting in music creation, generating ideas that producers or artists turn into tracks. Google’s Magenta provides such a tool.
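To make “generating ideas” concrete, here is a deliberately tiny sketch of algorithmic idea generation: a Markov chain that proposes a melodic phrase from a hand-written transition table. This is not how Magenta works internally (it uses neural networks); the note names and transitions are invented for the example:

```python
import random

# Toy transition table: maps a note to plausible next notes.
# A real system would learn these probabilities from a corpus.
TRANSITIONS = {
    "C": ["E", "G", "A"],
    "E": ["G", "C"],
    "G": ["A", "E", "C"],
    "A": ["C", "G"],
}

def generate_melody(start="C", length=8, seed=42):
    """Walk the transition table to propose a short melodic idea."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(" ".join(generate_melody()))
```

The output is raw material, not a finished track – which mirrors the point above: today the human producer still turns the machine’s suggestions into music.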

That said, music catalogue owners need to be aware that AI-generated music will continue to improve. Those looking for alternatives to score their projects may consider exploring it as an option. If anything, the chances are higher that AI-generated music will end up in your catalogue than replace it – and that it will be a collaborative effort rather than an AI original.

Trend 4 – Extended Reality


A new wave of technology trends brings new forms of media content. Enter Extended Reality (XR), a term describing immersive technologies that bridge the physical and virtual world. The two applications most relevant for music catalogue owners are Augmented Reality (AR) and Virtual Reality (VR).

Both of these rely on immersion – how believable the experience is for the user. Music is central to increasing this believability. Every iconic film score, like those by John Williams or Hans Zimmer, creates an emotional connection that draws the viewer into the world. The same applies to XR, especially VR, where spatial audio simulates the virtual space you’re moving through.

Emotional and situational context are therefore critical when tagging your library for AR and VR. You’d need to identify songs that adapt to users’ positioning, movement and changing emotional state – which means tagging songs for mood and other XR-related factors to speed up finding the right track. Moving forward, more content creators are going to require XR-specific soundtracks.
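The tagging described above could feed a selection step like the one below. The mood labels, intensity scale and library entries are all hypothetical – a sketch of matching a track to the scene’s emotional state, not any real XR engine’s API:

```python
# Hypothetical XR-ready tags: each track carries a mood plus an
# intensity level the engine can match against the player's state.
XR_LIBRARY = [
    {"title": "Calm Drift", "mood": "calm", "intensity": 1},
    {"title": "Rising Dread", "mood": "tense", "intensity": 2},
    {"title": "Full Panic", "mood": "tense", "intensity": 3},
]

def pick_track(mood, intensity):
    """Pick the track matching the scene's mood, closest in intensity."""
    candidates = [track for track in XR_LIBRARY if track["mood"] == mood]
    if not candidates:
        return None  # nothing tagged for this mood
    return min(candidates, key=lambda t: abs(t["intensity"] - intensity))["title"]

print(pick_track("tense", 3))  # escalating scene → "Full Panic"
```

Because the engine re-queries as the user’s state changes, the lookup has to be fast – which is the practical reason the article ties XR readiness back to how thoroughly the catalogue is tagged.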