How AI Empowers Employees in the Music Industry

According to a McKinsey report, 70% of companies are expected to adopt some kind of AI technology by 2030, and the music industry is no exception. Yet when it comes to AI, skepticism often overshadows the potential. AI is seen as a job killer and a force that will outperform humans in creativity and value production, a popular view that causes general anxiety across all industries. This article is intended to add a new point of view on AI in music and show how AI can be used to solve typical business problems such as employee motivation and making work more meaningful.

The music industry is special in its AI adoption journey, as it is one of the industries with very visible problems that AI can solve. The number of tracks uploaded to the internet reaches 300,000 a day, and the number of artists on Spotify could be 50 million by 2025. In this situation, the output exceeds the human capacity to manage it, so sometimes there is no other way but to use AI. Despite the benefits, the negative impact of AI is a common subject in the media – see the article here.

Image: Rise of Robots (Annie Spratt © Unsplash)

For a more balanced view on the topic, let’s explore how AI can actually help humans working in the music industry if the goal is not to maximize profits and productivity but to empower the workforce.

How AI can improve the employee experience

Speed up music tagging and search 

In their work, music companies often have to deal with music search. The essential part of a music search is a well-managed and consistently tagged music library. Such a library usually relies on a lot of metadata. Some of the metadata are “hard factors” such as release date, recording artist, or title. But “soft factors” such as genre, mood, energy level, and other music-describing attributes are becoming more and more important.

Obviously, assigning tags for those soft factors is a tedious and subjective task. AI helps by automatically detecting these attributes in a music track and categorizing the song. It can easily distinguish rock from pop, recognize emotions such as “sad” or “uplifting”, identify the instruments in a track, and tag thousands of songs in a very short time.
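To make this concrete, here is a minimal sketch of what an auto-tagging pipeline’s output might look like. The `analyze` function and the tag vocabulary are hypothetical stand-ins, not a real API:

```python
# A minimal, hypothetical sketch of auto-tagged track metadata.
# `analyze` stands in for a model call; it is not a real API.
from dataclasses import dataclass, field

@dataclass
class TrackTags:
    title: str                                   # "hard factor" known from the file
    genres: list = field(default_factory=list)   # the "soft factors" below are predicted
    moods: list = field(default_factory=list)
    energy_level: str = ""
    instruments: list = field(default_factory=list)

def analyze(audio_path: str) -> TrackTags:
    """Placeholder for a neural network that predicts soft-factor tags."""
    # A real system would decode the audio and run a model here.
    return TrackTags(
        title=audio_path,
        genres=["pop"],
        moods=["uplifting"],
        energy_level="high",
        instruments=["piano", "drums"],
    )

tags = analyze("song.mp3")
print(tags.moods, tags.energy_level)  # ['uplifting'] high
```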

That said, music-tagging AI doesn’t perform without errors. AI is not perfect, and it still needs human supervision.

Automatic tagging increases work speed, which gives companies the opportunity to quickly add songs to their catalog and pitch them to customers.

Clean up a huge amount of data 

Just as with AI, there is always a possibility of errors when a person manages the catalog. Especially when different team members have managed a catalog over the years, the number of errors can become overwhelming.

AI helps reduce human errors in existing catalogs as well as prevent them in the future. For existing catalogs, AI can analyze the data, discover oddly tagged songs, and then eliminate discrepancies. AI will also make some mistakes, but it will make them in a consistent way. You can see how it works step by step in this case study here.
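As an illustration, here is a minimal sketch (with invented data) of one such cleanup step: normalizing spelling variants so that inconsistently written genre tags surface for review.

```python
# A minimal sketch of one cleanup step: finding near-duplicate genre tags
# that different editors used for the same thing. The data is invented.
import pandas as pd

catalog = pd.DataFrame({
    "title": ["Song A", "Song B", "Song C", "Song D"],
    "genre": ["Hip Hop", "hip-hop", "HipHop", "Rock"],
})

def canonical(tag: str) -> str:
    # Normalize spelling variants to a canonical key (lowercase, alphanumeric).
    return "".join(ch for ch in tag.lower() if ch.isalnum())

catalog["genre_key"] = catalog["genre"].map(canonical)

# Any key with more than one surface spelling is a discrepancy to review.
variants = catalog.groupby("genre_key")["genre"].nunique()
print(variants[variants > 1])  # hiphop    3
```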

Enrich tedious music discussions with AI-generated data 

AI is great at visualizing data and making complex information digestible – and the same goes for music! In some visualizations, songs are grouped by emotion, giving you a very comprehensive view of the library. For single tracks, AI can analyze emotions across the whole track or within a custom segment. Another way to visualize a catalog is to group songs by similarity, so the most similar artists are bundled together.
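For illustration, here is a minimal sketch of such an emotion map. The coordinates are random stand-ins for what a model’s predictions (e.g. valence/arousal scores) might look like:

```python
# Sketch: plotting songs on a 2-D "emotion map". Coordinates here are random
# stand-ins for model predictions, clustered by mood for the demo.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
moods = {"sad": "tab:blue", "calm": "tab:green", "uplifting": "tab:orange"}

for mood, color in moods.items():
    # 20 fake songs per mood, scattered around a mood-specific center
    center = rng.uniform(-1, 1, size=2)
    points = center + 0.15 * rng.standard_normal((20, 2))
    plt.scatter(points[:, 0], points[:, 1], color=color, label=mood)

plt.xlabel("valence (negative to positive)")
plt.ylabel("arousal (calm to energetic)")
plt.legend()
plt.show()
```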

Ever tried to convince a data-driven CMO that this one song has a melancholic touch and doesn’t fit the campaign too well? Try backing up your expertise with some data next time!

Image: Instruments analysis in the Cyanite app

In any case, AI can complement marketing and sales efforts by giving companies tools to visualize catalog and song data and then use this data to sell. On a song-by-song basis, visualization provides an easily understood snapshot of the song. More broadly, data visualization emphasizes an innovative, data-centered positioning of the company and adds a bit of spice to the sales efforts.

At Cyanite, we even created several music analysis stories using visualization.

Reduce human bias and make data-based decisions 

AI in music can ensure every decision is based on data, not emotions. For example, when choosing tracks for a brand video, it is important that the music adheres to the brand guidelines. Too often, though, tracks are chosen simply because someone liked them.

To avoid human bias, checking in with AI can be built into the business strategy for more consistent and better branding efforts. For example, in the case of a branded video, AI will suggest songs that correspond to the brand profile, whether it is “sexy”, “chill”, or “confident”.
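A minimal sketch of the idea, with invented mood scores standing in for a tagging model’s output: songs are ranked by how closely their mood profile matches the brand’s.

```python
# Sketch: ranking catalog songs against a brand's mood profile.
# The mood vectors are invented; a tagging model would supply them.
import numpy as np

MOODS = ["sexy", "chill", "confident"]
brand_profile = np.array([0.2, 0.7, 0.9])   # target weight per mood

songs = {
    "Song A": np.array([0.1, 0.8, 0.7]),
    "Song B": np.array([0.9, 0.1, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction of the mood profile.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(songs.items(), key=lambda kv: cosine(brand_profile, kv[1]),
                reverse=True)
for title, vec in ranked:
    print(title, round(cosine(brand_profile, vec), 3))
```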

All these capabilities of AI drastically improve the quality of the end result for customers and free employees from tedious, boring tasks.

How AI can boost motivation

Spend more time on creative and meaningful tasks 

One of AI’s benefits is that employees can focus on creative solutions rather than on repetitive tasks. This frees up individual qualities such as empathy, communication, and problem-solving. Those qualities are then useful for customer acquisition and service, as studies confirm. If you work in sync and are answering a sync briefing, finding right-fitting songs from the catalog with AI’s help leaves you more time to craft a creative story about why a song is a great fit beyond its pure sound. In the end, AI has the potential to raise the level of customer service while contributing to higher employee satisfaction.

Image: Employee (LinkedIn Sales Solutions © Unsplash)

Speed up learning and training new employees 

In one of our case studies, we’ve shown the process of cleaning a music library. Some assets created during the project can be used by the company to teach and train new employees. For example, a visualization of song categories can serve as a guide for new staff in charge of tagging new songs. See here for more details.

Also, starting to work with a catalog of 10,000 songs represents a very high entry barrier, and it usually takes months to understand a catalog in depth. With a Similarity Search, like the one from Cyanite or from other services such as Musiio, AIMS, or MusiMap, a catalog search can start intuitively and easily with a reference track. It provides guidance and creates more opportunities for meaningful human work.

Overall, AI is characterized by ease of use. It is highly intuitive, doesn’t need much time to set up, and produces results instantly. The better UX helps not only employees but also customers, if they have access to the catalog. To see for yourself, you can try the Cyanite web app here.

Ensure a consistent and collaborative approach to work processes and policies

In general, AI follows one consistent tagging scheme and does so automatically, which means less control is needed from the human side to keep things going. Having clean metadata means that at any point in time the catalog can be repurposed, offered to a third party, or integrated elsewhere. And integration will become more and more important in the future: could you directly serve a new music-tech startup that wants to offer your catalog on their new licensing platform? How well are you equipped to seize business opportunities?

When a catalog includes many different music libraries and there is a need for a unified approach, AI will scan the catalog for keywords that are equal in meaning and eliminate the redundancies. When a catalog is being integrated into a larger audio library, the AI will draw parallels between the two tagging systems and then automatically retag every song in the style of the new catalog with little to no information loss.

In general, having clean metadata and the ability to repurpose catalogs allows music companies to experiment with their offers and be more agile and innovative.
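A toy sketch of the retagging step: here the synonym mapping is hand-written, whereas an AI system would propose it by comparing the two tag vocabularies.

```python
# Sketch: merging two tag vocabularies during a catalog integration.
# The mapping below is invented; a real system would learn or propose it.
SYNONYMS = {
    "happy": "uplifting",
    "joyful": "uplifting",
    "mellow": "calm",
    "chilled": "calm",
}

def retag(tags: list[str]) -> list[str]:
    """Map incoming tags onto the target catalog's vocabulary."""
    merged = {SYNONYMS.get(t.lower(), t.lower()) for t in tags}
    return sorted(merged)

# Two libraries that meant the same thing with different words:
print(retag(["Happy", "Joyful", "Guitar"]))  # ['guitar', 'uplifting']
```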

Summary 

There are many benefits of AI for music companies, but there are also quite a few risks. When looking at AI in the music industry, it is important to understand that AI isn’t replacing jobs; it is a tool to work with that helps employees improve. Of course, AI tools differ. In the case of Cyanite, the AI handles boring, repetitive tasks such as music analysis, tagging, and search. At the same time, it gives people the opportunity to work on something more meaningful and inspiring.

However, the introduction of AI, not only in music but in any industry, has the potential to bear a variety of risks. That is why we are advocates for empowering human work with AI. It is important to stay critical, question new technology, and help its creators make the right decisions.

The 4 Applications of AI in the Music Industry

A couple of weeks ago, Cyanite co-founder Jakob gave a lecture in a music publishing class at Berlin’s BIMM Institute. The topic was to give concrete examples of AI’s real use cases in today’s music industry. The goal was to get away from the overload of buzzwords surrounding AI and shed more light on its actual applications and benefits.

This lecture was well received by the students, so we decided to publish its main points on the Cyanite blog. We hope you enjoy the read!

Introduction

Many people, when they hear about “AI and music”, think of robots creating and composing music. This understandably comes together with a very fearful and critical perception of robots replacing human creators. But music created by algorithms merely represents a fraction of AI applications in the music industry. 

Picture 1. AI Robot Writing Its Own Music
This article is intended to explore:

1. Four different kinds of AI in music.

2. Practical applications of AI in the music industry. 

3. Problems that AI can solve for music companies.

4. Pros and cons of each AI application.

How does AI work? 

Before we dive into the four kinds of AI in the music industry, here are some basic concepts of how AI works. These concepts are not only valuable to understand, but they can also help you come up with new applications of AI in the future.

Just like humans, some AI methods like deep learning need data to learn from. In that regard, AI is like a child. Children absorb and learn to understand the world by trial and error. As a child, you point your finger at a cat and say “dog”. You then get corrected by your parents who say, “No, that’s a cat”. The brain stores all this information about the size, color, looks, and shape of the animal and identifies it as a cat from now on. 

AI is designed to follow the same learning principle. The difference is that AI is still not even close to the magical capacity of the human brain. A normal AI neural network has around 1,000 – 10,000 neurons in it, while the human brain contains 86 billion!

This means that AI can currently perform only a limited number of tasks and needs a lot of high-quality data to learn from.

One example of how data is used to train AI to detect objects in pictures is reCAPTCHA. This is the system that asks you to select traffic lights in a picture to “prove you are human”.

The system collects highly valuable training data that teaches neural networks what traffic lights look like.

Picture 2. AI Learning with reCAPTCHA
If you are interested in learning more about how this process works for detecting genres in music, you can check out this article.
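The learning principle itself can be sketched in a few lines. Below, a tiny neural network is trained on invented, labeled toy data – the code equivalent of the “No, that’s a cat” correction – using scikit-learn:

```python
# A tiny supervised-learning sketch with invented toy data (not real images).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# 200 toy "examples" with 2 features each: class 0 clusters low, class 1 high.
X = np.vstack([rng.normal(0.2, 0.1, (100, 2)), rng.normal(0.8, 0.1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)  # labels act as the parental correction

# A small network (64 hidden neurons) learns the boundary from the examples.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

print(model.predict([[0.15, 0.25], [0.9, 0.75]]))  # expected: [0 1]
```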

The 4 types of AI in music

Now that you understand the basic AI concept, here is an overview of the four main applications of AI in the music industry. Keep in mind that there are many more possible applications.

1. AI Music Creation

2. Search & Recommendation

3. Auto-tagging

4. AI Mastering

Let’s have a closer look at what problems each area addresses, how the solutions work, and also explore their pros and cons!

Application 1. AI-Generated Music

Problem

The problems that AI can solve in the field of music creation are not very apparent. AI-generated music is, first of all, a creative and artistic field. However, if we look at it in a business context, we can identify existing problems. When music needs to adapt to changing situations, for instance in video games or other interactive settings, AI-created music can adapt more natively to a changing environment.

Solution

AI can be trained to create custom music. For that, AI needs input data, and then it needs to be taught to make music. Just like a human.
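As a toy illustration of that “learn from data, then generate” loop, here is a Markov chain that learns which note tends to follow which in a training melody and then improvises a new sequence. Real systems use far larger models, but the principle is the same.

```python
# Toy sketch: a Markov chain "learns" note transitions from a melody,
# then generates a new one. The training melody is invented.
import random
from collections import defaultdict

training_melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

# Learn transition counts: note -> possible next notes.
transitions = defaultdict(list)
for a, b in zip(training_melody, training_melody[1:]):
    transitions[a].append(b)

def generate(start: str, length: int) -> list[str]:
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])  # sample a learned follow-up
        out.append(note)
    return out

random.seed(1)
print(generate("C", 8))  # e.g. ['C', 'E', 'G', 'E', 'C', 'E', 'C', 'E']
```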

To understand current AI creation capabilities here are a couple of real-world examples:

Yamaha analyzed many hours of Glenn Gould’s performances to create an AI system that can potentially reproduce the famous pianist’s style and maybe even create an entirely new Glenn Gould piece.

A team of Australian engineers won the AI “Eurovision Song Contest” by creating a song with samples of noises made by koalas and Tasmanian devils. The team trained a neural network on the animal noises to produce an original sound and lyrics.

Who is AI-generated music for?

  • Game Studios
  • Art Galleries
  • Brands  
  • Commercials  
  • Films  
  • YouTubers  
  • Social Media Influencers

Implementation Examples

Pros of this solution

  • Cheap to produce new content
  • Customizable
  • Great potential for creative human & AI collaboration
  • Creative tools for artists.

Cons of this solution

  • The quality of fully synthesized AI music is still very low
  • No concrete application in the traditional music industry
  • Legal issues over copyright, including rights to folklore music
  • Most AI creation models are trained on western music and can reproduce western sound only
  • Very high development cost.

Bottom line

It will take some time for AI-created music to sound adequate or have a straightforward use case. However, hybrid approaches that use AI to compose music from pre-recorded samples, loops, and one-shots show that the AI-generated future is not far away.

Application 2. Search & Recommendation

Problem

It can be hard to find that one song that fits the moment perfectly, whether it is a movie scene or a podcast. And the more music a catalog contains, the harder it is to search it efficiently. With 500 million songs online and 300,000 new songs uploaded to the internet every day (!!), this can easily be called an inhuman task. Platforms like Spotify develop great recommendation algorithms for seamless and enjoyable listening experiences for music consumers. However, if we look at sync, it gets a lot more difficult. Imagine a music publisher who administers around 50,000 copyrights. Effectively, they can oversee maybe 10% of that catalog, leaving a lot of potential unused.

Solution

AI can be trained to detect sonic similarities in songs.  
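Under the hood, such systems typically map each song to an embedding vector and look for nearest neighbors. A minimal sketch with invented vectors (a real model would compute them from the audio):

```python
# Sketch: similarity search over song embeddings. The vectors are invented;
# in practice a neural network derives them from the audio itself.
import numpy as np
from sklearn.neighbors import NearestNeighbors

titles = ["Song A", "Song B", "Song C", "Song D"]
X = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.2, 0.5],
              [0.1, 0.9, 0.2],
              [0.2, 0.8, 0.1]])

# Build a cosine-distance index and query with "Song A" as the reference.
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
_, idx = index.kneighbors(X[0:1])

# idx[0][0] is the reference itself; idx[0][1] is its closest neighbor.
print("Most similar to Song A:", titles[idx[0][1]])  # Song B
```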

Who are Similarity Searches for?

  • Music publishers: using reference songs to search their catalog
  • Production music libraries and beat platforms
  • DSPs that don’t have their own AI team
  • Radio apps
  • More use cases in A&R (artists and repertoire), etc.
  • DJs needing to hold the energy high after a particularly well-received track (in the post-Covid world)
  • Basically, anyone who starts sentences like “That totally sounds like…”
  • Managers targeting look-alike audiences. 

Implementation Examples

Pros of this solution

  • Finding hidden gems in a catalog, going far beyond the human capacity for search. Here both AI-tagging and AI search & recommendation are employed
  • Low entry barrier when working with big catalogs
  • Great and intuitive search experiences for non-professional music searchers.

Cons of this solution

  • Technical similarity vs. perceived similarity – there is still quite a difference in how humans and AI function. Human perception is highly subjective and may assign a higher or lower similarity to two songs than the AI does.

Bottom line

All positive. Everyone should use Similarity Search algorithms every day.

Application 3. Auto-tagging

Problem

To find and recommend music, you need a well-categorized library that delivers the tracks that exactly correspond to a search request. The artist and the song name are “descriptive metadata”, while genre, mood, energy, tempo, voice, and language are “discovery metadata”. More on this topic here. The problem is that tagging music manually is one of the most tedious and subjective tasks in the music industry. You have to listen to a song and then decide what mood it evokes in you. Doing that for one song might be OK, but forget about it at scale. At the same time, tagging requires extreme accuracy and precision. Inconsistent and wrong manual tagging leads to a poor search experience, which results in music that can’t be found and monetized. Imagine tagging the 300,000 new songs uploaded to the internet every day.

Solution

Tagging music is a task that can be done with the help of AI. Just like in the example in the first part of this article, where an algorithm detects traffic lights, neural networks can be trained to learn how, for example, rock music differs from pop or rap music.
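To give a feel for what such a network “listens” to, here is a minimal sketch using librosa to compute a mel spectrogram, the typical input representation for tagging models. “song.mp3” is a placeholder path, and the classifier itself is omitted:

```python
# Sketch: the kind of features a genre classifier works from. librosa turns
# raw audio into a mel spectrogram, a 2-D "image" of frequency content.
import librosa
import numpy as np

y, sr = librosa.load("song.mp3", sr=22050, mono=True)  # placeholder file

# Mel spectrogram in decibels, the usual input for tagging networks.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)  # (128, n_frames) -- what the network "looks" at
```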

Here is a Peggy Gou song, analyzed and tagged by Cyanite:

Video: AI-tagged song
Who is AI-tagging for? 

For every music company that knows the pain of manual tagging. If you work in music, chances are pretty high that you have had, or will have, to tag songs. If you pitch a song on Spotify for Artists, you have to tag it. If you have ever made a playlist, you most probably had to deal with its categorization and tagging. If you’re an A&R presenting a new artist to your team and you say something like, “This is my rap artist’s new party song,” you literally just tagged a song. In all these cases, it is good to have an objective AI companion to tag the song for you.

AI-tagging is a really powerful tool at scale. You just bought a new catalog with tons of untagged songs but want to utilize it for sync: AI-tagging is the way to go. You’re a distributor tired of clients uploading unfinished or false metadata: AI-tagging can help. You’re a production music library that picked up tons of legacy data from years of manual tagging: the answer is also AI-tagging.

Implementation Example

In the BPM Supreme library, you can see the different moods, energy levels, voice presence, and energy dynamics neatly tagged by an AI.

Picture 3. BPM Supreme Cyanite Search Interface
Pros of this solution

  • Speed 
  • Consistency across catalog
  • Objectivity / reproducibility
  • Flexibility. Whenever something changes in the music industry, you can re-tag songs with new metadata at lightning speed.

Cons of this solution

  • Development cost and time (luckily, Cyanite has a ready-to-go solution)
  • High energy consumption of deep learning models, but still less resource-heavy compared to manual tagging.

Bottom line

Tagging cannot replace human work completely. But it’s a powerful and practical tool that dramatically reduces the need for manual tagging. AI-based tagging can increase the searchability of a music catalog with little to no effort.

Application 4. AI Mastering

Problem

Having music professionally mastered can be very expensive, especially for DIY and bedroom producers. These musicians often resort to technology to create new music. But in order to distribute music to Spotify or similar platforms, the music needs to meet certain sound quality criteria.

Solution

AI can be used to turn a mediocre-sounding music file into a great-sounding one. For that, AI is trained on popular mastering techniques and on what humans have learned to recognize as good sound.
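One deliberately simplified mastering step can be sketched in a few lines: normalizing a track’s peak level. Real mastering, AI or human, combines EQ, compression, and perceptual loudness targets; “mixdown.wav” is a placeholder file.

```python
# Sketch of one universal mastering step: peak normalization to -1 dBFS.
import numpy as np
import soundfile as sf  # pip install soundfile

audio, sr = sf.read("mixdown.wav")       # placeholder input file

target_peak = 10 ** (-1.0 / 20)          # -1 dBFS ceiling as linear amplitude
gain = target_peak / np.max(np.abs(audio))
mastered = audio * gain                  # apply one uniform gain

sf.write("mastered.wav", mastered, sr)
print(f"applied gain: {20 * np.log10(gain):.1f} dB")
```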

Who is AI mastering for?

  • DIY and bedroom producers
  • Professional musicians
  • Digital distributors 

Implementation Example

One company leading the field of AI mastering is LANDR. The Canada-based company has a huge community of creators and has already mastered 19 million songs. Other players include eMastered and Moises.

Picture 4. LANDR AI Mastering
Pros of this solution

  • Very affordable ($48/year for unlimited mastering of LO-MP3 files plus $4.99/track for other formats, vs. professional mastering starting at $30/song)
  • Fast
  • Easy for non-professionals. 

Cons of this solution

  • A standardized process that doesn’t allow room for experiments and surprises
  • Some say AI mastering is “lower quality compared to human mastering”.

Bottom line

AI mastering is an affordable tool for musicians on low budgets. For up-and-coming artists, it’s a great way to get professionally edited music out to DSPs. For professional songwriters, it’s the perfect means to make demos sound reasonably good. Professional mastering experts usually serve a different target group, so the two fields complement each other rather than AI taking over human jobs.

Summary

To sum it up, we presented 4 concrete use cases for AI that cover almost every part of the value chain in the music industry. Still, the practical applications and quality differ. AI is far from having the same complex thinking and creativity as a professional music tagger, mastering expert, or musician. But it can already help creatives do their work or even completely take over some of the expensive and tedious tasks.

One of the biggest problems preventing us from embracing new technology is wrong expectations. There are often two extremes: on one side, people overestimate AI and expect more from it than it can currently deliver, e.g., tagging 1M songs without a single mistake or always being spot-on with music recommendations. The other camp has a lot of fear about AI taking over their jobs.

The answer may lie somewhere in between. We can embrace technology and at the same time remain critical and not blindly rely on algorithms, as there are still many facets of the human brain that AI cannot imitate.

We hope you enjoyed this read and learned more about the 4 different use cases of AI in music. If you have any feedback, questions, or contributions, you are more than welcome to reach out to jakob@cyanite.ai. You can also contact our content manager Rano if you are interested in collaborations. 

WISE Panel Video: AI – Musician’s Friend or Foe?

WISE hosted a virtual panel moderated by Kalam Ali (Co-Founder, Sound Obsessed) to connect music industry experts and hold an open discussion about the adoption of AI technologies by artists. The guests included Rania Kim (Creative Director, Sound Obsessed & Portrait XO), Harry Yeff/Reeps One (Director, Composer, and Artist, R1000 Studios), Heiko Hoffmann (VP Artist, Beatport), and Markus Schwarzer (CEO, Cyanite).

 

United by their interest in music and its future, the panelists shared their views on the different access points for embracing AI for what it is in the bigger picture: a way to improve performances, enhance the user experience, and provide inspiration in music production.

Education is the means to ensure a deeper understanding of this technology, which is still widely questioned as damaging to the connection people have with music. A realistic assessment of the opportunities for artists in implementing AI, and at the same time of the risks of improper use, can break down these fear barriers.

Finding a middle ground between humans and the autonomy of AI is key, especially in these days when a digital approach is often the only feasible way to make life feel as normal as it should.

The extended video of the talk is available on YouTube.


Electronic music experts reveal 4 essential factors on AI-tech adoption for SMEs

We are excited to publish a recent study by Laura Callegaro, a master’s researcher at the Berlin School of Economics and Law, a longtime electronic music expert, and co-founder of the Berlin-based techno label JTseries.

In this guest article, Laura shares 4 essential factors in the adoption of AI solutions within the industry, based on her research.

Originally written by Laura Callegaro

 

In electronic music, original scenes are being challenged: small-scale events burgeon into festivals, market growth and fan bases develop, DJ cultures become celebrity cultures – with luxury brands like Porsche signing up female DJs – and electronic music events become cultural experiences. For a market that has quickly turned from niche to mainstream, it may not be a surprise that the week’s top 100 most-played Spotify tracks are almost entirely dominated by electronic music production.

A recent survey presented at the International Music Summit in 2019 ranks electronic music as the world’s 3rd most popular genre, with an estimated 1.5 billion people typically listening to it.

Source: IFPI Music Consumer Insight Report 2018


This snapshot of contemporary popularity is another clear indicator of a new mainstream – one in which electronic music has become more central – within the global popular music market. AI is having a significant impact on the roles that are currently most systematic and routine in nature: search, audit, and elements of due diligence. The introduction of mature AI allows creatives and corporations alike to reimagine the creative process, target new fans, and identify the next set of musical stars with greater accuracy and precision than we ever imagined.

The research highlights the challenges and opportunities induced by AI in this booming industry, focusing on the “what and why” of SMEs’ managerial processes from a new angle. Many academic studies have analyzed the cultural, political, and social dynamics of this field, but very few have analyzed its economics. Through semi-structured interviews with both groups of actors – providers and users of AI music marketing tools – combined with qualitative analysis of primary data, the study relies on the so-called TOE framework. This is one of the most insightful frameworks for IT and system adoption research, and it helps identify the factors of adoption along three dimensions of enterprise context: technological, organizational, and environmental.

 

FACTOR 01: Trust the machine

The data analyzed shows an overall trust in AI systems – 90% positive sentiment – which are perceived as free of bias. In fact, just one out of four users pointed out that these machines are programmed by humans, and that it is therefore impossible for AI to be 100% free of bias – a point unfortunately already proven in practice. In relation to the research, this factor shows the essential need for tight collaboration between humans and AI-powered machines, which allows us to achieve ingenious results by analyzing vast amounts of data in a matter of seconds.

 

It’s essential that humans and AI-powered machines collaborate.

FACTOR 02: Agility wins

Firm size divided the respondents’ opinions widely. The variance between answers was based mainly on the agility of the decision-making process and the financial resources of the organizations analyzed. A common idea among all interviewees is that the failure of a new technology would have less impact on larger firms, which are normally the ones with larger financial resources. On the other side, 50% of them recognized that the agility of smaller companies simplifies the adoption process and makes it more efficient.

Based on the findings, it can be argued that technical skills and financial resources are connected. We can also notice variation between the replies concerning the role played by financial resources, where responses depended heavily on the cost of the exact system the respondents had operated.

Agility fosters efficiency

FACTOR 03: Tech-savviness is not key

Technical skills and financial resources in the organizational context are strictly connected, and in the adoption phase they can sometimes turn into constraints, especially for SME users. Surprisingly, 70% of providers and 50% of users don’t see tech skills or financial resources among personnel as an important factor in the adoption process. Background and expertise in AI technology are not as necessary as understanding how to employ it within the company, and the huge benefits of implementing such technology outweigh the costs.

Providers of AI music solutions likewise do not perceive tech skills as an essential factor, pointing out that sales directors normally have limited knowledge of marketing technology tools and have not used this type of innovative solution in the past. Consequently, digital tools are often not at the top of sales managers’ priority lists, but they recognize the value of adopting them.


FACTOR 04: Shift of power

Knowledge through data is more and more accessible

It is clear that there is a larger trend toward technologies that can analyze various industry data points on up-and-coming artists and predict who the next big stars may be. What this study has brought to light, and what the interviewees (especially providers) have confirmed, is that we are witnessing a big shift of power: from managers, booking agents, and label owners straight into the artists’ hands. Thanks to new technology applied to marketing and the manifold new ways of music consumption, potentially everyone can be their own manager.

However, at the moment, this could be a false hope since marketing and managerial skills are still required.

For the music business, AI may serve as one of the most influential tools for growth as we enter a new era in which humans – from artists and songwriters to A&Rs (artists and repertoire) and digital marketers at labels – will be complemented by AI in various forms and to different extents. This study, and the global challenges the industry is facing, are just additional proof of the essential need for AI in this ever-evolving industry.


About the author

Laura Callegaro conducted this study during her master’s at the Berlin School of Economics and Law. She is a longtime electronic music expert and a real marketing wizard. As co-founder of the Berlin-based techno label JTseries and the music-arts collective ENIGMA, Laura is actively contributing to revamping the music industry.

 

AI Music Now: 3 Ways how AI can be used in the Music Industry

Mention “AI music” and most people seem to think of AI-generated music. In other words, they picture a robot, machine or application composing, creating and possibly performing music by itself; essentially what musicians already do very well. First, let’s address every industry professional’s worst Terminator-induced fears (should they have any): AI will never replace musicians.

Even if music composed and generated by AI is currently riding a rising wave of hype, we’re far from a scenario where humans aren’t in the mix. The perception of AI infiltrating the industry comes from a lack of attention to what AI can actually do for music professionals. That’s why it’s important to cut through the noise and discuss the different use cases possible right now.

Let’s look at three ways to use AI in the music industry and why they should be embraced.

AI-based Music Generation

 

The most popular application of AI in music is in the field of AI-generated music. You might have heard about AIVA and Endel (which sound like the names of a pair of northern European fairy-tale characters). AIVA, the first AI to be recognized as a composer by the music world, writes entirely original compositions. Last year, Endel, an AI that creates ambient music, signed a distribution deal with Warner Music. Both these projects signal a shift towards AI music becoming mainstream.

Generative music systems are built on machine learning algorithms and data. The more data you have, the more examples an algorithm can learn from, leading to better results after it has completed the learning process – known in AI circles as ‘training’. Although AI generation doesn’t deliver supremely high quality yet, some of AIVA’s compositions stack up well against those of modern composers.

If anything, it’s the chance for co-creation that excites today’s musicians. Contemporary artists like Taryn Southern and Holly Herndon use AI technology to varying degrees, with drastically different results. Southern’s pop-ready album, I AM AI, released in 2018, was produced with the help of AI music-generating tools such as IBM’s Watson and Google’s Magenta.

Magenta is included in the latest Ableton Live release, a widely-used piece of music production software. As more artists begin to play with AI-music tools like these, the technology becomes an increasingly valuable creative partner.


AI-based Music Editing

Before the music arrives for your listening pleasure, it undergoes a lengthy editing process. This includes everything from mixing the stems – the different grouped elements of a song, like vocals and guitars – to mastering the finished mixdown (the rendered audio file of the song made by the sound engineer after they’ve tweaked it to their liking).

This whole song-editing journey is filled with many hours of attentive listening and considered action. Because of the number of choices involved, having an AI assist with technical suggestions can speed things up. Equalization is a crucial editing step, as much technical as artistic: an audio engineer balances out the specific frequencies of a track’s sounds so they complement rather than conflict with each other. Using an AI to perform these basic EQ functions can give the engineer an alternative starting point.
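As a concrete (and deliberately basic) example of such an EQ move, here is a sketch of a high-pass filter that clears low-frequency rumble from a vocal stem, using scipy. “vocals.wav” is a placeholder file, and a real assistant would pick the parameters by analyzing the audio.

```python
# Sketch: a basic EQ move, a high-pass filter to remove low-end rumble.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("vocals.wav")  # placeholder input file

# 4th-order Butterworth high-pass at 100 Hz: attenuates content below it.
sos = butter(4, 100, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, audio, axis=0)

sf.write("vocals_hp.wav", filtered, sr)
```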

Another example of fine-tuning music for consumption is the mastering process. Because published music must stick to strict formatting standards for radio, TV, or film, it needs to be mastered. This final step before release usually requires a mastering engineer. They basically make the mix sound as good as possible, so it’s ready for playback on any platform.

Some of the technical changes mastering engineers make are universal. For example, they need to make every mixdown louder to match the standard of music that’s out there; or even to match the other songs on an album. Using universal techniques means AI can help, because you’ve got practices it can learn from. These practices can then be automatically applied and tailored to the song.

Companies like LANDR and iZotope are already on board. LANDR offers an AI-powered mastering service that caters to a variety of styles, while iZotope developed a plugin that includes a “mastering assistant”. Once again, AI can act as a useful sidekick for those spending hours in the editing process.

AI-based Music Analysis

Analysis is what happens when you break something down into smaller parts. In AI music terms, analysis is the process of breaking down a song into parts. Let’s say you’ve got a library full of songs and you’d like to identify all the exciting orchestral music (maybe you’re making a trailer for the next Avengers-themed Marvel movie). Through AI, analysis can be performed to highlight the most relevant music for your trailer based on your selected criteria (exciting; orchestral).

There are two types of analysis that make this magic possible: symbolic analysis and audio analysis. While symbolic analysis gathers musical information about a song from the score – including the rhythm, harmony and chord progressions, for example – audio or waveform analysis considers the entire song. This means understanding what’s unique about the fully-rendered wave (like those you see when you hit play on SoundCloud) and comparing it against other waves. Audio analysis enables the discovery of songs based on genre, timbre or emotion.

Both symbolic and audio analysis use feature extraction. Simply put, this is when you pull numbers out of a dataset. The better your data – meaning high-quality, well-organized, and clearly tagged – the easier it is to pick up on ‘features’ of your music. These could be ‘low-level’ features like loudness, how much bass is present, or the types of rhythms common in a genre. Or they could be ‘high-level’ features, referring more broadly to the artist’s style, based on lyrics and the combination of musical elements at play.
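Here is a minimal sketch of low-level feature extraction with librosa (“song.wav” is a placeholder path): loudness via RMS, “brightness” via the spectral centroid, and a tempo estimate.

```python
# Sketch: pulling a few low-level features out of an audio file with librosa.
import librosa
import numpy as np

y, sr = librosa.load("song.wav", sr=22050, mono=True)  # placeholder file

rms = librosa.feature.rms(y=y)                            # frame-wise loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral "brightness"
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)            # rhythmic feature (BPM)

print(f"mean RMS: {float(np.mean(rms)):.4f}")
print(f"mean spectral centroid: {float(np.mean(centroid)):.1f} Hz")
print("estimated tempo (BPM):", tempo)
```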

AI-based music analysis makes it easier to understand what’s unique about a group of songs. If your algorithm learns the rhythms unique to Drum and Bass, it can discover those songs by genre. And if it learns how to spot the features that make a song “happy” or “sad”, then you can search by emotion or mood. This allows for better sorting and for finding exactly what you pictured. Better sorting means faster, more reliable retrieval of the music you need, making your project process more efficient and fun.

With Cyanite, we offer music analysis services via an API solution to tackle large music databases, or through the ready-to-use Cyanite web app. Create a free account to test AI-based tagging and music recommendations.

5 Technology Trends for Catalog Owners – How Technology is Changing the Music Industry?

The music industry is technology-driven. As new technologies become mainstream, how customers use them affects how music industry players organize their catalogs. Even though traditional structures make it a challenge for music labels, publishing houses, and distribution companies to adapt quickly, truly monetizing the potential value of a music catalog means addressing a continuously evolving market.

This article explores the state of technology in the music industry and outlines 5 emerging technologies that are disrupting the field.

The Current State of Technology in the Music Industry

Digital technology has been affecting the music industry for many years. Nowadays, professional musicians can record music at home, and control over the distribution channels is mainly in the hands of digital platforms. These developments, plus the proliferation of social media and video channels, mark the democratization of the music industry.

The pandemic made live performances impossible, which in turn propelled digital technology to even more growth. TikTok reached peak popularity around the same time, and its easily discoverable, bite-sized music has been celebrated by younger music fans.

In 2022, the market continues to develop, with new music technology emerging and the center of entertainment shifting from live venues to the home and virtual reality.

Emerging Technologies in the Music Industry 

These five major technology trends affect the future of the music industry and are increasingly important for music catalog owners.

Trend 1 – New media production & consumption channels

 

@Alexander Shatov from Unsplash

User-generated content (UGC) amplifies the amount of music content created these days. The delivery and consumption of music now often happen through UGC channels such as Instagram Reels, Facebook Watch, and TikTok. Big streaming platforms are under clear pressure as social media continues to gain further musical ground. The proliferation of these channels means that everyone can be a creator and produce music.

This is not a new trend. Since the launch of Spotify, the amount of music content produced and consumed has skyrocketed, fueled by the freemium approach adopted by most streaming services. Users sign up for free and get access to an endless catalog of content. As a result, artists and creators were potentially able to reach millions of listeners worldwide.

With this incentive, content creators have jumped on board, signing exclusive deals with these platforms.  All these developments plus the rise of UGC have led to more music content than we can consume in our lifetime. 

As further entry points continue to appear for independent creators to offer content, the UGC floodgates fully open. AI-generated music will also be submitted by creators, which multiplies release cadences exponentially. Trawling through all this data to categorize it becomes challenging. The music industry has responded to these challenges with AI tagging and classification engines that can categorize a catalog and help create more targeted campaigns for music releases on various platforms. Just recently, SoundCloud acquired Musiio, an automated tagging and playlisting engine, to help categorize SoundCloud’s vast music library – which proves how important categorization is for these platforms.

Trend 2 – Using AI to evaluate and benchmark a catalog

 

@Jeremy Bezanger from Unsplash

To respond to the constant increase in the amount of music content, AI is being used as the main tool for sorting and organizing the library. At its most basic, such an AI tags music in the catalog automatically, so the classification stays consistent. It can also analyze the constant stream of new songs and tag them according to the catalog’s classification. The ability of AI to categorize large amounts of music data, and to do the tagging on the fly, keeps the catalog’s volume manageable.

Not only does AI work with new content, but it also helps music library owners get the most out of the library in terms of revenue. AI is used to bring the back catalog – where all the niche songs are stored in the tail – to light and to revive old music genres and subgenres. It solves the so-called long-tail problem using a combination of tagging, which makes old and niche songs easier for search engines to discover, and similarity search algorithms that find tracks similar to popular artists based on metadata.

A separate issue is the inability of search engines to respond to the needs of customers, which is one of the reasons behind the rise of user-generated content. Finding fitting songs is still a challenge, as most music remains uncategorized or manually tagged. Using AI to improve the search function in the catalog is a new music technology that’s coming forward.

To read more about AI for tagging and benchmarking, see the article on the 4 Applications of AI in the Music Industry.

Trend 3 – The rise of AI-generated music

 

@marcelalaskoski from Unsplash

It is clear that AI presents manifold opportunities to music catalog owners. But what about the music itself and music creators? Although AI-generated music dates back to the Illiac Suite of 1957, it has attracted more interest during the last decade – just in 2019, the first music-making AI signed a deal with a major label.

While the quality of AI-generated music keeps improving, an algorithm that can generate Oscar-worthy film scores or emotionally riveting material is a distant reality. Currently, AI is used more as a tool for assisting in music creation, generating ideas that producers or artists turn into tracks. Google’s Magenta, for example, provides such a tool.

That said, music catalog owners need to be aware that AI-generated music will continue to improve. Those looking for alternatives to score their projects may consider exploring it as an option. In the future, chances are high that AI-generated music will end up in your catalog along with other tracks, which brings us back to the question of proper classification and music search. While AI-generated music is definitely an opportunity for the music industry, it raises several problems, including copyright issues and classification.

Trend 4 – Music for Extended Reality

 

A new wave of technology trends brings new forms of media content. The two applications most relevant for music catalog owners are Augmented Reality (AR) and Virtual Reality (VR).

Both rely on immersion, which refers to how believable the experience is for the user. Music is used to increase this believability. Just like the movie score creates an emotional connection with the viewer, music in AR and VR can enhance and stimulate the effect of the virtual space you’re moving around in.

The emotional and situational contexts are therefore critical. It is likely that AR and VR will follow the game industry to provide immersive music experiences. For example, adaptive soundtracks are already used in games where the music changes based on where the character is in the game and their perspective. Apple is rumored to release such an AR/VR set at the end of 2022 where music adapts to the environment. 

For AR and VR, you’d need to identify songs that adapt to the positioning, movement, and changing emotional state of users. This would mean tagging the songs for mood and other XR-related factors if you want to increase the speed of finding the right song.

Trend 5 – Music search will be assisted with technologies like Google 

The quality of the search function supported by AI tagging is already high, but the way music is searched for is going through a transformation. The future of music search looks similar to what Google offers now: search results based on the user’s input of phrases or sentences in the search bar. According to our research, the ability of AI to translate music into text-based descriptions is one of the most anticipated technologies of 2022.

Right now, you can only search music by its meta-information, such as the artist or title, or by specific descriptors, for example, mood or genre. In Cyanite, for instance, keyword search by weights allows you to select up to 10 keywords and then specify their weights from 0 to 1 to find the right-fitting track. You can also use Similarity Search, which takes a reference track and gives you a list of tracks that match. To see this use case in action, see the Video Interview – How Cinephonix Integrated AI Search into Their Music Library.
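To illustrate how weighted keyword search might score songs, here is a toy sketch. The weights and per-song tag confidences are invented, and the scoring is a plain weighted sum rather than Cyanite’s actual method.

```python
# Toy sketch: weighted keyword search over tagged songs. Weights (0-1)
# express how much each keyword matters; tag confidences are invented.
query = {"uplifting": 1.0, "orchestral": 0.7, "female vocals": 0.3}

catalog = {
    "Song A": {"uplifting": 0.9, "orchestral": 0.8},
    "Song B": {"uplifting": 0.4, "female vocals": 0.9},
}

def score(song_tags: dict) -> float:
    # Sum of query weight times the song's confidence for that tag.
    return sum(w * song_tags.get(k, 0.0) for k, w in query.items())

for title, tags in sorted(catalog.items(), key=lambda kv: score(kv[1]),
                          reverse=True):
    print(title, round(score(tags), 2))  # Song A 1.46, Song B 0.67
```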

AI-based text descriptions take many characteristics of a song into account, so simply typing “richly textured grand orchestral anthem featuring a lusty tenor and mezzo-soprano” will return a list of songs that correspond to the search query.

How the music business will change in the next 5-10 years

The development of technology has always been challenging for the music industry. First, artists and labels lost their regular sources of income from CD sales; then the pandemic brought about the destruction of live venues.

AI is set to bring even more disruption. Users and AI generate an avalanche of new content, which makes music professionals worried about the quality of music and the loss of the human element attached to it. At the same time, the speed at which these technologies develop is overwhelming, as they produce a huge amount of content that needs to be classified and sorted.

On the other hand, AI as a tool is used by labels and managers to automate repetitive tasks so they can focus on more complex goals. So these emerging technologies not only disrupt the industry but also help music players adapt to the ever-changing landscape. AI-assisted tagging, AI text descriptions for search, and new channels of distribution such as AR and VR represent revenue drivers and new ways of monetization for everyone involved.

I want to try out Cyanite’s AI platform – how can I get started?

If you want to get a first grip on how Cyanite works, you can register for our free web app to analyze music and try out similarity searches, no coding needed.

Contact us with any questions about our frontend and API services via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.