
AI Music Now: 3 Ways AI Can Be Used in the Music Industry

Mention “AI music” and most people seem to think of AI-generated music: a robot, machine, or application composing, creating, and possibly even performing music by itself – essentially what musicians already do very well. So first, let’s address every industry professional’s worst Terminator-induced fear (should they have one): AI will never replace musicians.

Even if music composed and generated by AI is currently riding a wave of hype, we’re far from a scenario where humans aren’t in the mix. The perception of AI infiltrating the industry comes from a lack of attention to what AI can actually do for music professionals. That’s why it’s important to cut through the noise and discuss the use cases that are possible right now.

Let’s look at three ways to use AI in the music industry and why they should be embraced.

AI-based Music Generation

The most popular application of AI in music is AI-generated music itself. You might have heard about AIVA and Endel (which sound like the names of a pair of northern European fairy-tale characters). AIVA, the first AI to be recognized as a composer by the music world, writes entirely original compositions. Last year, Endel, an AI that creates ambient music, signed a distribution deal with Warner Music. Both projects signal a shift towards AI music becoming mainstream.

Generative music systems are built on machine learning algorithms and data. The more data you have, the more examples an algorithm can learn from, leading to better results once it has completed the learning process – known in AI circles as ‘training’. Although AI generation doesn’t deliver supremely high quality yet, some of AIVA’s compositions stack up well against the work of modern composers.
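
To make the training idea concrete, here is a deliberately tiny sketch in Python: a first-order Markov chain that learns note-to-note transitions from a handful of example melodies and samples a new one. The melodies and note names are invented for illustration; real systems like AIVA train far larger models on vast datasets.

```python
import random
from collections import defaultdict

# Toy illustration of "training": learn note-to-note transition
# tendencies from example melodies, then sample a new melody.
melodies = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["C4", "D4", "E4", "G4", "C5"],
    ["G4", "E4", "D4", "C4", "C4"],
]

# "Training": count which note tends to follow which.
transitions = defaultdict(list)
for melody in melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

# "Generation": walk the learned transitions to produce a new melody.
def generate(start="C4", length=8):
    melody, note = [start], start
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

print(generate())  # e.g. ['C4', 'E4', 'G4', 'C5', ...]
```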

If anything, it’s the chance for co-creation that excites today’s musicians. Contemporary artists like Taryn Southern and Holly Herndon use AI technology to varying degrees, with drastically different results. Southern’s pop-ready album I AM AI, released in 2018, was produced with the help of AI music-generation tools such as IBM’s Watson and Google’s Magenta.

Magenta is also available as Magenta Studio, a set of plugins for Ableton Live, a widely used piece of music production software. As more artists begin to play with AI music tools like these, the technology becomes an increasingly valuable creative partner.


AI-based Music Editing

Before music arrives for your listening pleasure, it undergoes a lengthy editing process. This covers everything from mixing the stems – the grouped elements of a song, like vocals and guitars – to mastering the finished mixdown (the rendered audio file of the song, exported once the sound engineer has tweaked it to their liking).

This whole song-editing journey is filled with many hours of attentive listening and considered action. Because of the number of choices involved, an AI that makes technical suggestions can speed things up. Equalization is a crucial editing step, as much technical as artistic: an audio engineer balances the specific frequencies of a track’s sounds so they complement rather than conflict with each other. Using an AI to perform basic EQ functions can give the engineer an alternative starting point.
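
As a minimal illustration of one EQ move, the Python sketch below notches an unwanted resonance out of a synthetic signal using SciPy. The signal, the 1 kHz target frequency, and the Q value are all invented for the example; an AI assistant’s job would be suggesting where to cut.

```python
import numpy as np
from scipy import signal

sr = 44100                       # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)

# Synthetic "track": a 220 Hz tone plus an unwanted 1 kHz resonance.
audio = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Design a narrow notch filter at 1 kHz (Q controls how narrow the cut is).
b, a = signal.iirnotch(w0=1000, Q=30, fs=sr)
equalized = signal.lfilter(b, a, audio)

# The 1 kHz component is now strongly attenuated.
print("1 kHz energy before:", np.abs(np.fft.rfft(audio))[1000].round(1))
print("1 kHz energy after: ", np.abs(np.fft.rfft(equalized))[1000].round(1))
```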

Another example of fine-tuning music for consumption is the mastering process. Because published music must meet strict format requirements for radio, TV, or film, it needs to be mastered. This final step before release usually requires a mastering engineer, whose job is essentially to make the mix sound as good as possible, ready for playback on any platform.

Some of the technical changes mastering engineers make are universal. For example, they need to bring every mixdown up to the loudness standard of the music that’s already out there, or to match the other songs on an album. Universal techniques are exactly where AI can help, because there are established practices it can learn from. These practices can then be automatically applied and tailored to the song.
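
One of those universal practices, loudness normalization, can be sketched in a few lines with the open-source pyloudnorm library. The file names and the -14 LUFS target (a common streaming-platform reference) are assumptions for the example.

```python
import soundfile as sf
import pyloudnorm as pyln

# Load a finished mixdown (placeholder file name).
data, rate = sf.read("mixdown.wav")

# Measure integrated loudness in LUFS, per the ITU-R BS.1770 standard.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)

# Normalize to -14 LUFS, a common streaming loudness target.
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("mastered.wav", normalized, rate)
```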

Companies like LANDR and iZotope are already on board. LANDR offers an AI-powered mastering service that caters to a variety of styles, while iZotope developed a plugin that includes a “mastering assistant”. Once again, AI can act as a useful sidekick for those spending hours in the editing process.

AI-based Music Analysis

Analysis means breaking something down into smaller parts; in AI music terms, breaking a song down into its components. Let’s say you’ve got a library full of songs and you’d like to identify all the exciting orchestral music (maybe you’re making a trailer for the next Avengers movie). AI-based analysis can highlight the most relevant music for your trailer based on your selected criteria (exciting; orchestral).

There are two types of analysis that make this magic possible: symbolic analysis and audio analysis. Symbolic analysis gathers musical information about a song from the score – the rhythm, harmony, and chord progressions, for example – while audio or waveform analysis considers the entire recorded song. This means understanding what’s unique about the fully rendered wave (like those you see when you hit play on SoundCloud) and comparing it against other waves. Audio analysis enables the discovery of songs based on genre, timbre, or emotion.
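
A minimal sketch of symbolic analysis, using the open-source pretty_midi library and assuming a local MIDI file named “song.mid”: because the score itself is machine-readable, tempo and harmonic information can be read off directly.

```python
import pretty_midi

# Read the score from a MIDI file (placeholder file name).
midi = pretty_midi.PrettyMIDI("song.mid")

# Tempo can be estimated straight from the note timings.
print("tempo estimate:", round(midi.estimate_tempo(), 1), "BPM")

# Pitch-class histogram: how often each of the 12 notes occurs,
# a rough fingerprint of the song's harmony and key.
histogram = midi.get_pitch_class_histogram()
notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
for note, weight in zip(notes, histogram):
    print(f"{note}: {weight:.2f}")
```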

Both symbolic and audio analysis use feature extraction. Simply put, this is when you pull numbers out of a dataset. The better your data – meaning high-quality, well-organized, and clearly tagged – the easier it is to pick up on ‘features’ of your music. These could be ‘low-level’ features like loudness, how much bass is present, or the types of rhythm common in a genre. Or they could be ‘high-level’ features, referring more broadly to the artist’s style, based on lyrics and the combination of musical elements at play.
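
For the audio side, here is a hedged sketch of low-level feature extraction with the open-source librosa library, assuming a local file “song.wav”. Each feature is collapsed to a single number describing the track.

```python
import librosa
import numpy as np

# Load the audio as a mono waveform.
y, sr = librosa.load("song.wav", mono=True)

# Estimate tempo from the beat tracker.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

features = {
    # overall energy (a rough proxy for loudness)
    "rms": float(np.mean(librosa.feature.rms(y=y))),
    # spectral centroid: higher values sound "brighter"
    "brightness": float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
    # estimated tempo in beats per minute
    "tempo": float(tempo),
}
print(features)
```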

AI-based music analysis makes it easier to understand what’s unique about a group of songs. If your algorithm learns the rhythms unique to Drum and Bass, it can discover those songs by genre. And if it learns to spot the features that make a song “happy” or “sad”, you can search by emotion or mood. That allows for better sorting and for finding exactly what you pictured. Better sorting means faster, more reliable retrieval of the music you need, making your project process more efficient and fun.
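
To show how learned features turn into mood search, here is a toy sketch with scikit-learn: a classifier trained on two invented features (tempo and brightness) for a handful of hand-labeled tracks. A production system would use many more features and far more labeled examples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented training data: [tempo in BPM, brightness 0..1] per track.
X = np.array([
    [128, 0.80], [140, 0.90], [124, 0.70],   # upbeat, bright tracks
    [70, 0.20],  [62, 0.30],  [80, 0.25],    # slow, dark tracks
])
y = ["happy", "happy", "happy", "sad", "sad", "sad"]

# Train a small classifier on the labeled examples.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the mood of an unseen track: 118 BPM, fairly bright.
print(model.predict([[118, 0.75]]))  # -> ['happy']
```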

At Cyanite, we offer music analysis services via an API solution for tackling large music databases, or through our ready-to-use web app. Create a free account to test AI-based tagging and music recommendations.

5 Technology Trends for Catalog Owners – How Technology Is Changing the Music Industry

The music industry is technology-driven. As new technologies become mainstream, the way customers use them shapes how music industry players organize their catalogs. Traditional structures make it hard for music labels, publishing houses, and distribution companies to adapt quickly – yet to truly monetize the potential value of a music catalog, a continuously evolving market needs to be addressed.

This article explores the state of technology in the music industry and outlines 5 emerging technologies that are disrupting the field.

The Current State of Technology in the Music Industry

Digital technology has been affecting the music industry for many years. Nowadays, professional musicians can record music at home, and control over the distribution channels lies mainly in the hands of digital platforms. These developments, plus the proliferation of social media and video channels, mark the democratization of the music industry.

The pandemic made live performances impossible, which propelled digital technology to even more growth. TikTok reached peak popularity around the same time, and its easily discoverable, bite-sized music has been celebrated by younger music fans.

In 2022, the market continues to develop, with new music industry technology emerging and the center of entertainment shifting from live venues to the home and virtual reality.

Emerging Technologies in the Music Industry 

These five major technology trends affect the future of the music industry and are increasingly important for music catalog owners.

Trend 1 – New media production & consumption channels

Photo by Alexander Shatov on Unsplash

User-generated content (UGC) amplifies the amount of music content created these days. The delivery and consumption of music now often happen through UGC channels such as Instagram Reels, Facebook Watch, and TikTok. Big streaming platforms are under clear pressure as social media continues to gain musical ground. The proliferation of these channels means that everyone can be a creator and produce music.

This is not a new trend. Since the launch of Spotify, the amount of music content produced and consumed has skyrocketed, fueled by the freemium approach adopted by most streaming services: users sign up for free and get access to an endless catalog of content. As a result, artists and creators could potentially reach millions of listeners worldwide.

With this incentive, content creators have jumped on board, signing exclusive deals with these platforms. All these developments, plus the rise of UGC, have led to more music content than we could consume in a lifetime.

As further entry points appear for independent creators to offer content, the UGC floodgates open fully. AI-generated music will also be submitted by creators, multiplying release cadences exponentially. Trawling through all this data to categorize it becomes challenging. The music industry has responded with AI tagging and classification engines that can categorize a catalog and help create more targeted campaigns for music releases on various platforms. SoundCloud recently acquired Musiio, an automated tagging and playlisting engine, to help categorize its vast music library – proof of how important categorization is for these platforms.

Trend 2 – Using AI to evaluate and benchmark a catalog

Photo by Jeremy Bezanger on Unsplash

To cope with the constant increase in music content, AI is being used as the main tool for sorting and organizing a library. At its most basic, such an AI tags the music in the catalog automatically, so the classification stays consistent. It can also analyze the constant stream of new songs and tag them according to the catalog’s classification scheme. AI’s ability to categorize large amounts of music data, and to tag on the fly, keeps the catalog’s volume manageable.
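
As a hedged sketch of what consistent auto-tagging could look like in Python: predict_tags below is a hypothetical stand-in for any trained tagging model (an in-house classifier or a service like Cyanite’s API), and the fixed vocabulary is what keeps labels consistent across the catalog.

```python
# Fixed vocabulary: every track gets tags from the same label set.
TAG_VOCABULARY = ["energetic", "calm", "acoustic", "electronic"]

def predict_tags(audio_path: str) -> list[str]:
    # Hypothetical placeholder: a real model would analyze the audio.
    return ["energetic", "electronic"]

catalog = ["songs/track_01.wav", "songs/track_02.wav"]

# Tag every track, keeping only labels from the shared vocabulary.
tagged = {
    path: [t for t in predict_tags(path) if t in TAG_VOCABULARY]
    for path in catalog
}
print(tagged)  # every track now carries tags from the same vocabulary
```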

AI doesn’t just handle new content; it also helps music library owners get the most out of the library in terms of revenue. AI can bring the back catalog to light, where all the niche songs sit in the tail, and revive old music genres and subgenres. It addresses the so-called long-tail problem with a combination of tagging, which makes old and niche songs easier for search engines to discover, and similarity search algorithms that find tracks similar to popular artists based on metadata.
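
The similarity-search half of that can be sketched as a simple nearest-neighbor lookup with scikit-learn. The feature names and values below are invented; in practice each track’s vector would come from analysis or tagging, and features would be scaled so no single one dominates the distance.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each track is a vector of features: [energy, acousticness, bpm].
# Values are invented for illustration; scale features in practice.
catalog = {
    "Track A": [0.90, 0.10, 120],
    "Track B": [0.85, 0.15, 124],
    "Track C": [0.20, 0.90, 70],
    "Track D": [0.30, 0.80, 75],
}
names = list(catalog)
X = np.array([catalog[n] for n in names])

# Index the catalog, then query with a reference track's features.
index = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = index.kneighbors([[0.88, 0.12, 122]])
print([names[i] for i in idx[0]])  # -> tracks most similar to the reference
```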

A separate issue is the inability of search engines to respond to customers’ needs, which is one of the reasons behind the rise of user-generated content. Finding fitting songs is still a challenge, as most music remains uncategorized or only manually tagged. Using AI to improve a catalog’s search function is a new music technology that’s coming forward.

To read more about AI for tagging and benchmarking, see the article on the 4 Applications of AI in the Music Industry.

Trend 3 – The rise of AI-generated music

Photo by @marcelalaskoski on Unsplash

It is clear that AI presents manifold opportunities to music catalog owners. But what about the music itself and music creators? Although AI-generated music dates back to the Illiac Suite of 1957, it has attracted more interest over the last decade – as recently as 2019, the first music-making AI signed a deal with a major label.

While the quality of AI-generated music keeps improving, an algorithm that can generate Oscar-worthy film scores or emotionally riveting material remains a distant reality. Currently, AI is used more as a tool for assisting in music creation, generating ideas that producers or artists turn into tracks. Google’s Magenta, for example, provides such a tool.

That said, music catalog owners need to be aware that AI-generated music will continue to improve. Those looking for alternatives to score their projects may consider exploring it as an option. In the future, chances are high that AI-generated music will end up in your catalog along with other tracks, which brings us back to the question of proper classification and music search. While AI-generated music is definitely an opportunity for the music industry, it raises several problems, including copyright issues and classification.

Trend 4 – Music for Extended Reality

A new wave of technology trends brings new forms of media content. The two applications most relevant for music catalog owners are Augmented Reality (AR) and Virtual Reality (VR).

Both rely on immersion, which refers to how believable the experience is for the user. Music is used to increase this believability. Just like the movie score creates an emotional connection with the viewer, music in AR and VR can enhance and stimulate the effect of the virtual space you’re moving around in.

The emotional and situational contexts are therefore critical. It is likely that AR and VR will follow the game industry in providing immersive music experiences. For example, adaptive soundtracks are already used in games, where the music changes based on where the character is and what their perspective is. Apple is rumored to release an AR/VR headset at the end of 2022 in which music adapts to the environment.

For AR and VR, you’d need to identify songs that adapt to users’ positioning, movement, and changing emotional state. That would mean tagging songs for mood and other XR-related factors if you want to speed up finding the right song.
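
A minimal sketch of the adaptive idea in Python, with invented stem names and blending rules: per-stem volumes are derived from the user’s situation, so the same tagged stems can score many different moments.

```python
def adaptive_mix(tension: float, distance_to_event: float) -> dict:
    """Return per-stem volumes (0..1) given the user's situation."""
    tension = max(0.0, min(1.0, tension))
    # Proximity rises as the user approaches the event (50 m falloff).
    proximity = max(0.0, 1.0 - distance_to_event / 50.0)
    return {
        "ambient_pad": 1.0 - tension,    # calm bed fades under tension
        "percussion": tension,           # drums rise with tension
        "strings": tension * proximity,  # swell as the event approaches
    }

# Exploring calmly, far from anything:
print(adaptive_mix(tension=0.1, distance_to_event=60))
# Close to a dramatic moment:
print(adaptive_mix(tension=0.9, distance_to_event=5))
```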

Trend 5 – Music search will work more like Google

AI tagging has already lifted the quality of catalog search considerably, but the way music is searched for is going through a transformation. The future of music search looks similar to what Google offers now: results based on phrases or sentences the user types into the search bar. According to our research, AI’s ability to translate music into text-based descriptions is one of the most anticipated technologies of 2022.

Right now, you can only search music by its meta-information, such as artist or title, or by specific descriptors like mood or genre. In Cyanite, for example, keyword search by weights lets you select up to 10 keywords and then specify their weights from 0 to 1 to find the right-fitting track. You can also use Similarity Search, which takes a reference track and gives you a list of tracks that match. To see this use case in action, see the Video Interview – How Cinephonix Integrated AI Search into Their Music Library.
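
To illustrate the weighted-keyword idea (a hypothetical scoring sketch, not Cyanite’s actual API), each track carries per-keyword scores from tagging, and the query weights say how much each keyword matters:

```python
# Per-keyword tag scores per track, as a tagging engine might produce.
tracks = {
    "Track A": {"epic": 0.9, "orchestral": 0.8, "calm": 0.1},
    "Track B": {"epic": 0.2, "orchestral": 0.3, "calm": 0.9},
}

def score(track_tags: dict, query: dict) -> float:
    # Weighted sum: query weight (0..1) times the track's tag score.
    return sum(w * track_tags.get(k, 0.0) for k, w in query.items())

query = {"epic": 1.0, "orchestral": 0.7}  # keywords with weights 0..1
ranked = sorted(tracks, key=lambda t: score(tracks[t], query), reverse=True)
print(ranked)  # -> ['Track A', 'Track B']
```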

AI-based text descriptions take many characteristics of a song into account, so simply typing “richly textured grand orchestral anthem featuring a lusty tenor and mezzo soprano” will return a list of songs that correspond to the search query.
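
One plausible way to build such free-text search is with text embeddings, sketched below using the open-source sentence-transformers library: the query and each track’s textual description are embedded into the same vector space and ranked by cosine similarity. The model name and the track descriptions are assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical per-track descriptions produced by an analysis engine.
descriptions = {
    "Track A": "grand orchestral anthem with soaring tenor vocals",
    "Track B": "minimal lo-fi beat with mellow piano chords",
}

query = "richly textured grand orchestral anthem featuring a lusty tenor"
query_vec = model.encode(query, convert_to_tensor=True)

# Rank tracks by how close their description is to the query.
for name, text in descriptions.items():
    sim = util.cos_sim(query_vec, model.encode(text, convert_to_tensor=True))
    print(name, float(sim))  # higher = better match to the query
```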

How the music business will change in the next 5-10 years

The development of new technologies has always been challenging for the music industry. First, artists and labels lost their regular income from CD sales; then the pandemic brought live venues to a standstill.

AI is set to bring even more disruption. Users and AI generate an avalanche of new content, making music professionals worry about the quality of music and the loss of the human element attached to it. At the same time, the speed at which these technologies develop is overwhelming, as they produce an enormous amount of content that needs to be classified and sorted.

On the other hand, labels and managers use AI as a tool to automate repetitive tasks so they can focus on more complex goals. So these emerging technologies not only disrupt the industry but also help music players adapt to the ever-changing landscape. AI-assisted tagging, AI text descriptions for search, and new distribution channels such as AR and VR represent revenue drivers and new ways of monetization for everyone involved.

I want to try out Cyanite’s AI platform – how can I get started?

If you want to get a first feel for how Cyanite works, register for our free web app to analyze music and try out similarity searches – no coding needed.

Contact us with any questions about our frontend and API services via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.