
AI Panel: Using AI Music Search in a Co-Creative Approach between Human and Machine

In September 2022, Cyanite co-founder Markus Schwarzer took part in a panel discussion at the Production Music Conference 2022 in Los Angeles.

The panel discussed the role of AI in a co-creative approach between humans and machines. The participants included Bruce Anderson (APM Music), Markus Schwarzer (Cyanite), Nick Venti (PlusMusic), Philippe Guillaud (MatchTune), and Einar M. Helde (AIMS API).

The panel raised pressing discussion points on the future of AI, so we decided to publish our takeaways here. To watch the full video of the panel, scroll down to the middle of the article. Enjoy the read!

Human-Machine Co-creativity

AI performs many tasks that are usually difficult for people, such as analyzing song data, extracting information, searching music, and creating completely new tracks. As AI usage increases, questions about AI's potential and its ability to create with humans or on its own have been raised. The possibility of AI replacing humans is perhaps one of the most contentious topics.

The PMC 2022 panel focused on the topic of co-creativity. Some AI systems can create on their own, but co-creativity means creativity shared between human and machine.

It is not the sum of individual creative contributions; rather, it is the emergence of new forms of interaction between humans and machines. To find out all the different ways AI music search can be co-creative, let's dive into the main takeaways from the panel:

Music industry challenges

The main music industry challenge that all participants agreed on was the overwhelming amount of music produced these days. Another challenge is reaching a shared understanding of music.

The way someone searches for music depends on their understanding of music, which can differ widely, and on their role in the music industry. Music supervisors, for example, use a different language to search for music than film producers.

We discussed this in detail on the Synchtank blog back in May 2022. AI can solve these issues, especially with new developments in the field.

Audience Question from Adam Taylor, APM Music: Where do we see AI going in the next 5 years?

So what's in store for music AI in the next 5 years? We're entering a post-tagging era marked by the combination of several developments in music search. Keyword search will no longer be the main way to search for or index music. Instead, the following developments will take place:

  • Similarity Search has shown that we can use complex inputs to find music. A similarity search pulls a list of songs that match a reference track. It is projected to become the primary way of searching for music in the future.

  • Free Search – a full-text search that lets you describe music in your own words, based on natural language processing technologies. With a Free Search, you enter whatever comes to mind into a search bar and the AI suggests a song. The technology is similar to DALL-E or Midjourney, which return an image based on text input. (A minimal sketch of this idea follows the list.)

  • Music services that already know what to do – further down the road, music services will emerge that recommend music depending on where you are in your role or personal development. These services will cater to all levels of search: from an amateur level that simply returns a requested song, to expert searches following an elaborate sync brief, including accompanying images and videos or even a stream of consciousness.
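
To make the Free Search idea concrete, here is a minimal sketch of how such a natural-language search can work, assuming every track in a catalog already carries a short text description. The catalog, the model choice, and the scoring below are illustrative assumptions, not Cyanite's actual pipeline.

# Minimal sketch of a free-text ("Free Search") music lookup via text
# embeddings; catalog, descriptions, and model name are made up.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

catalog = {
    "Track A": "uplifting indie pop with bright guitars and handclaps",
    "Track B": "dark, brooding cinematic score with heavy strings",
    "Track C": "laid-back lo-fi hip hop beat for studying",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
track_vecs = model.encode(list(catalog.values()), normalize_embeddings=True)

def free_search(query, top_k=2):
    """Return the top_k tracks whose descriptions best match the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = track_vecs @ q  # cosine similarity, since vectors are normalized
    best = np.argsort(-scores)[:top_k]
    titles = list(catalog.keys())
    return [(titles[i], float(scores[i])) for i in best]

print(free_search("happy feel-good song for a summer ad"))

The same pattern scales from three tracks to millions by swapping the brute-force dot product for an approximate-nearest-neighbor index.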

Audience Question from Alan Lazar, Luminary Scores: Can I decode which songs have the potential to be a hit?

While some AI companies have attempted to decode the hit potential of music, it is still unclear whether there is any way to determine if a song will become a hit.

The nature of pop culture and the many factors that make up a hit – from songwriting and production to elusive factors such as what the song is connected to – make it impossible to predict whether or not a song becomes a hit.

The vision for AI from Cyanite – where would we like to see it in the future?

AI curation in music is developing at lightning speed. We hope it will make the music space more exciting and diverse, which includes in particular:

  • Democratization and diversity of the field – more opportunities will become available for musicians and creators, including democratized access to sync opportunities and other ways to make a livelihood from music.

  • Creativity and surprising experiences – right now, AI is designed to do the same tasks at a rapid speed. We hope AI will be able to perform tasks co-creatively and produce surprising experiences based on music as well as other factors. As music can tap directly into people's emotions, it has the potential to be part of a greater narrative.

Video from the PMC 2022 panel: Using AI Music Search In A Co-Creative Approach Between Human and Machine

Bonus takeaway: Co-creativity between users and tech – supplying music data to technology

It seems that we should be able to pull all sorts of music data from environments such as video games and user-generated content. However, the diversity of music projects is quite astonishing.

So when it comes to co-creativity in the form of enhancing machine tagging with human tagging, personalization can be harmful in B2B. In B2B, AI mainly works with audio features, without the involvement of user-generated data.

Conclusion

To sum up, AI can co-create with humans and solve the challenges facing the music industry today. There is a lot in store for AI’s future development and there is a lot of potential.

Still, AI is far from replacing humans and should not replace them completely. Instead, it will improve in ways that make music searches more intuitive and co-creative, responding to human input in the form of a text search, an image, or a video.

As usual with AI, some people overestimate what it can do. Some tasks, such as identifying a song's hit potential, remain out of reach for AI.

On the other hand, it's not hard to envision a future where AI helps democratize access to opportunities for musicians and produces surprising projects in which music is part of a shared emotional experience.

We hope you enjoyed this read and learned more about AI co-creativity and the future of AI music search. If you're interested in learning more, you can also check out the article "The 4 Applications of AI in the Music Industry". If you have any feedback, questions, or contributions, please reach out to markus@cyanite.ai.

I want to integrate AI search into my library – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get a first feel for Cyanite's technology, you can also register for our free web app to analyze music and try similarity searches – no coding needed.

Music Ally Startup Files with Cyanite

The one and only Music Ally covered us in their latest Startup Files. In an interview, Cyanite co-founders Markus and Jakob explain how we want to "act as the Google Translate of music tagging" in an era of huge catalogue acquisitions.

For the first time, they give an insight into how we are building a second layer of music tagging that will be understood by all kinds of professionals working with music, such as in film, advertising, games, or UGC.

You’ll also find lots of different examples of how Cyanite.ai is being used by music companies worldwide and get an exclusive outlook on what’s coming next.

Read the article here.

Case Study: How SWR uses Cyanite's recommendation algorithms for their new radio app

About SWR

SWR is not only Germany's third-biggest radio station but is also investing heavily in the future of radio via its own innovation entity, the SWR audio lab. Their goal is to develop new radio experiences to attract audiences who no longer listen to traditional radio. For instance, the new SWR mobile app enables users to skip songs – a concept completely revolutionary for linear radio.

Weekly listeners of SWR-owned radio stations in Germany: 7.07 million

Visits to SWR online services in 2019: 218.3 million

“Our aspiration is to create the future of radio for tomorrow and beyond by combining media trends and technologies with the demands of our users.”

Challenge

By providing on-demand content, SWR steps outside its radio license and has to pay royalties for every song stream, just like Spotify or Apple Music. Every skip means additional costs.

Shall I skip or shall I go?

Solution: Bespoke recommendation system

SWR implemented Cyanite's music intelligence and user insight systems to automate song recommendations and personalized playlists based on listener preferences, significantly lowering skips and increasing session times. The technology is delivered via the Cyanite API and seamlessly integrated into SWR's new radio app.

This is a screenshot of Cyanite's library view, which easily turns a catalogue into a searchable database

Results

+ 60% lower skip rate

+ Annual cost saving of >60,000 €

+ 46% increase in session time

+ Valuable data on user behaviour and personal music preference

Christian Hufnagel

Co-Founder of the SWR audio lab

Christian Hufnagel is the co-founder of the SWR audio lab and drives SWR's innovation activities forward.

"Cyanite recommendation algorithms help us to develop a personalized radio station of the future that better attracts and engages listeners."

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get a first feel for Cyanite's technology, you can also register for our free web app to analyze music and try similarity searches – no coding needed.

How AI Empowers Employees in the Music Industry

According to a McKinsey report, 70% of companies are expected to adopt some kind of AI technology by 2030. The music industry is no exception. Yet, when it comes to AI, skepticism often overshadows the potential. AI is thought to be a job killer and a force that will outperform humans in creativity and value production. This popular view causes general anxiety across all industries. This article is intended to add a new point of view on AI in music and to show how AI can be used to solve typical business problems, such as employee motivation and getting work done in a more meaningful way.

The music industry is special in its AI adoption journey, as it is one of the industries with very vivid problems that AI can solve. The number of tracks uploaded to the internet reaches 300,000 a day, and the number of artists on Spotify could be 50 million by 2025. In this situation, the output exceeds the human capacity to manage it, so sometimes there is no other way but to use AI. Despite the benefits, the negative impact of AI is a common subject in the media – see the article here.

Rise of Robots

Annie Spratt © Unsplash

For a more balanced view on the topic, let's explore how AI can actually help humans working in the music industry if the goal is not to maximize profits and productivity but to empower the workforce.

How AI can improve the employee experience

Speed up music tagging and search 

In their work, music companies often have to deal with music search. The essential part of a music search is a well-managed and consistently tagged music library. Such a library usually relies on a lot of metadata. Some of the metadata are "hard factors" such as release date, recording artist, or title. But "soft factors" such as genre, mood, energy level, and other music-describing attributes are becoming more and more important.

Obviously, assigning tags for those soft factors is a tedious and subjective task. AI helps with tagging by automatically detecting the relevant data in a music track and categorizing the song. It can easily distinguish between rock and pop music, detect emotions such as "sad" or "uplifting", identify the instruments on a track, and tag thousands of songs in a very short time.
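
As a simple illustration of the hard/soft split described above, here is a sketch of what such a metadata record could look like. The field names and values are assumptions for the example, not a prescribed schema.

# Illustrative metadata record: "hard factors" are fixed facts about a
# recording, "soft factors" are the tags an auto-tagging model can fill in.
from dataclasses import dataclass, field

@dataclass
class TrackMetadata:
    # Hard factors: objective, set once at ingest
    title: str
    artist: str
    release_date: str
    # Soft factors: subjective, candidates for AI auto-tagging
    genres: list[str] = field(default_factory=list)
    moods: list[str] = field(default_factory=list)
    energy_level: float | None = None  # e.g. 0.0 (calm) to 1.0 (intense)

track = TrackMetadata("Example Song", "Example Artist", "2022-01-01")
track.genres, track.moods, track.energy_level = ["rock"], ["uplifting"], 0.8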

That said, music-tagging AI doesn't perform without errors. AI is not perfect, and it still needs human supervision.

Automatic tagging increases work speed, which gives companies the opportunity to quickly add songs to their catalog and pitch them to customers.

Clean up a huge amount of data 

As with AI, there is always a possibility of errors when a person manages a catalog. Especially when different team members have managed a catalog over the years, the number of errors can become overwhelming.

AI helps reduce human errors in existing catalogs and prevent them in the future. For existing catalogs, AI can provide an analysis of the data, discover oddly tagged songs, and then eliminate discrepancies. AI will also make some mistakes, but it will make them in a consistent way. You can see how this works step by step in this case study.

Enrich tedious music discussions with AI-generated data 

AI is great for visualizing data and making complex information digestible – and so it is for music! In some visualizations, songs are grouped by emotions, giving you a very comprehensive view of the library. For single tracks, AI can analyze emotions across the whole track or in a custom segment. Another way to visualize a catalog is to group songs by similarity so that the most similar artists are bundled together.
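
As a toy example of the "group songs by emotion" idea, the sketch below clusters made-up two-dimensional emotion vectors with k-means; the resulting cluster ids could then drive a plot. The vectors and their dimensions are assumptions.

# Cluster songs by (made-up) emotion vectors, e.g. [sadness, energy].
from sklearn.cluster import KMeans

songs = {"Song A": [0.9, 0.1], "Song B": [0.8, 0.2], "Song C": [0.1, 0.9]}
labels = KMeans(n_clusters=2, n_init=10).fit_predict(list(songs.values()))
print(dict(zip(songs, labels)))  # e.g. {'Song A': 0, 'Song B': 0, 'Song C': 1}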

Ever tried to convince a data-driven CMO that this one song has a melancholic touch and doesn't fit the campaign too well? Try backing up your expertise with some data next time!

Instrument analysis in the Cyanite app

In any case, AI can complement marketing and sales efforts by giving companies tools to visualize catalog and song data and then use that data to sell. On a song-by-song basis, visualization provides an easily understood snapshot of the song. More broadly, data visualization emphasizes an innovative, data-centered positioning of the company and adds a bit of spice to sales efforts.

At Cyanite, we even created several music analysis stories using visualization.

Reduce human bias and make data-based decisions 

AI in music can ensure every decision is based on data, not emotions. For example, when choosing tracks for a brand video, it is important that the music adheres to the brand guidelines. But more often, tracks are chosen simply because someone liked them.

To avoid human bias, checking in with AI can be built into the business strategy for more consistent and better branding efforts. For example, in the case of a branded video, AI will offer songs that correspond to the brand profile, whether it is "sexy", "chill", or "confident".

All these capabilities of AI drastically improve the quality of the end result for customers and rid employees of tedious and boring tasks.

How AI can boost motivation

Spend more time on creative and meaningful tasks 

One of AI's benefits is that employees can focus on creative solutions rather than repetitive tasks. This frees up individual qualities such as empathy, communication, and problem-solving. Those qualities are then useful for customer acquisition and service, as studies confirm. If you work in sync and answer a sync briefing, finding right-fitting songs from the catalog with AI's help leaves you more time to craft a creative story about why a song is a great fit beyond its pure sound. In the end, AI has the potential to raise the level of customer service while contributing to higher employee satisfaction.

LinkedIn Sales Solutions © Unsplash

Speed up learning and training new employees 

In one of our case studies, we showed the process of cleaning a music library. Some of the assets created during the project can be used by the company to teach and train new employees. For example, a visualization of song categories can serve as a guide for new staff in charge of tagging new songs. See here for more details.

Starting to work with a catalog of 10,000 songs also represents a very high entry barrier, and it usually takes months to understand a catalog in depth. With a Similarity Search, like the one from Cyanite or from other services such as Musiio, AIMS, or MusiMap, a catalog search can start intuitively and easily with a reference track. It provides guidance and creates more opportunities for meaningful human work.

Overall, AI is characterized by ease of use. It is highly intuitive, doesn't need much time to set up, and produces results in an instant. The better UX helps not only employees but also customers, if they have access to the catalog. To see for yourself, you can try the Cyanite web app here.

Ensure consistent and collaborative approach to work processes and policies

In general, AI follows one consistent tagging scheme and does so automatically, which means less control is needed from the human side to keep things going. Having clean metadata means that at any point in time the catalog can be repurposed, offered to a third party, or integrated elsewhere. And integration will become more and more important in the future: could you directly serve a new music-tech startup that wants to offer your catalog on their new licensing platform? How well are you equipped to seize business opportunities?

When a catalog includes many different music libraries and there is a need for a unified approach, AI can scan the catalog for keywords that are equal in meaning and eliminate the redundancies. When a catalog is being integrated into a larger audio library, the AI can draw parallels between the two tagging systems and then automatically retag every song in the style of the new catalog with little to no information loss.

In general, having clean metadata and the ability to repurpose catalogs allows music companies to experiment with their offers and be more agile and innovative.

Summary 

There are many benefits of AI for music companies, but also quite a few risks. When looking at AI in the music industry, it is important to understand that AI isn't replacing jobs; it is a tool to work with that helps employees improve. Of course, AI tools differ. In the case of Cyanite, the AI handles boring, repetitive tasks such as music analysis, tagging, and search. At the same time, it gives people the opportunity to work on something more meaningful and inspiring.

However, the introduction of AI, not only in music but in any industry, has the potential to bear a variety of risks. That is why we are advocates for empowering human work with AI. It is important to stay critical, question new technology, and help its creators make the right decisions.

The 4 Applications of AI in the Music Industry

A couple of weeks ago, Cyanite co-founder Jakob gave a lecture in a music publishing class at Berlin's BIMM Institute. The aim was to give concrete examples of AI's real use cases in today's music industry, to get away from the overload of buzzwords surrounding AI, and to shed more light on AI's actual applications and benefits.

This lecture was well received by the students, so we decided to publish its main points on the Cyanite blog. We hope you enjoy the read!

Introduction

Many people, when they hear about "AI and music", think of robots creating and composing music. This understandably comes with a very fearful and critical perception of robots replacing human creators. But music created by algorithms represents merely a fraction of AI applications in the music industry.

Picture 1. AI Robot Writing Its Own Music
This article is intended to explore:

1. Four different kinds of AI in music.

2. Practical applications of AI in the music industry. 

3. Problems that AI can solve for music companies.

4. Pros and cons of each AI application.

How does AI work? 

Before we dive into the four kinds of AI in the music industry, here are some basic concepts of how AI works. These concepts are not only valuable to understand, but they can also help you come up with new applications of AI in the future.

Just like humans, some AI methods like deep learning need data to learn from. In that regard, AI is like a child. Children absorb and learn to understand the world by trial and error. As a child, you point your finger at a cat and say “dog”. You then get corrected by your parents who say, “No, that’s a cat”. The brain stores all this information about the size, color, looks, and shape of the animal and identifies it as a cat from now on. 

AI is designed to follow the same learning principle. The difference is that AI is still not even close to the magical capacity of the human brain. A normal AI neural network has around 1,000 – 10,000 neurons in it, while the human brain contains 86 billion!

This means that AI can currently perform only a limited number of tasks and needs a lot of high-quality data to learn from.
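
The cat-and-dog analogy maps directly onto what is called supervised learning. Here is a toy sketch in which the model only becomes useful because a human supplied correct labels; the features and numbers are invented for the example.

# Toy supervised learning: human-labeled examples teach the model, just like
# the child being corrected ("no, that's a cat"). All data is made up.
from sklearn.linear_model import LogisticRegression

# Each animal described as [size_cm, ear_pointiness]; labels from a human.
X = [[25, 0.9], [30, 0.8], [60, 0.3], [70, 0.2]]
y = ["cat", "cat", "dog", "dog"]

model = LogisticRegression().fit(X, y)
print(model.predict([[28, 0.85]]))  # -> ['cat'], learned from the labels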

One example of how data is used to train AI to detect objects in pictures is reCAPTCHA, a system that asks you to select, say, all traffic lights in a picture to "prove you are human".

The system collects highly valuable training data that teaches neural networks what traffic lights look like.

Picture 2. AI Learning with reCAPTCHA
If you are interested in learning more about how this process works for detecting genres in music, you can check out this article.

The 4 types of AI in music

Now that you understand the basic AI concept, here is an overview of the four main applications of AI in the music industry. Keep in mind that there are many more possible applications.

1. AI Music Creation

2. Search & Recommendation

3. Auto-tagging

4. AI Mastering

Let’s have a closer look at what problems each area addresses, how the solutions work, and also explore their pros and cons!

Application 1: AI-Generated Music

Problem

Problems that AI can solve in the field of music creation are not immediately apparent, as AI-generated music is, first of all, a creative and artistic endeavor. If we look at it from a business context, however, we can identify existing problems: when music needs to adapt to changing situations, for instance in video games or other interactive settings, AI-created music can adapt more natively to changing environments.

Solution

AI can be trained to create custom music. For that, AI needs input data, and then it needs to be taught to make music – just like a human.

To understand current AI creation capabilities, here are a couple of real-world examples:

Yamaha analyzed many hours of Glenn Gould's performances to create an AI system that can potentially reproduce the famous pianist's style and maybe even create an entirely new piece in the manner of Glenn Gould.

A team of Australian engineers won the AI "Eurovision Song Contest" with a song built from samples of noises made by koalas and Tasmanian devils. The team trained a neural network on the animal noises to produce an original sound and lyrics.

Who is AI-generated music for?

  • Game Studios
  • Art Galleries
  • Brands  
  • Commercials  
  • Films  
  • YouTubers  
  • Social Media Influencers

Implementation Examples

Pros of this solution

  • Cheap to produce new content
  • Customizable
  • Great potential for creative human & AI collaboration
  • Creative tools for artists.

Cons of this solution

  • The quality of fully synthesized AI music is still very low
  • No concrete application in the traditional music industry
  • Legal issues over copyright, including rights to folklore music
  • Most AI creation models are trained on Western music and can reproduce a Western sound only
  • Very high development cost.

Bottom line

It will take some time for AI-created music to sound adequate or to have a clear use case. However, hybrid approaches that use AI to compose music from pre-recorded samples, loops, and one-shots show that the AI-generated future is not far away.

Application 2. Search & Recommendation

Problem

It can be hard to find that one song that fits the moment perfectly, whether it is a movie scene or a podcast. And the more music a catalog contains, the harder it is to search it efficiently. With 500 million songs online and 300,000 new songs uploaded to the internet every day (!!), this can easily be called an inhuman task. Platforms like Spotify develop great recommendation algorithms for seamless and enjoyable listening experiences for music consumers. However, if we look at sync, it gets a lot more difficult. Imagine a music publisher who administers around 50,000 copyrights. Effectively, they can oversee maybe 10% of that catalog, leaving a lot of potential unused.

Solution

AI can be trained to detect sonic similarities in songs.  
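
As a deliberately crude sketch of the idea (a production system would use learned audio embeddings), the snippet below reduces each track to a time-averaged MFCC vector and ranks the catalog by cosine similarity to a reference track. File paths are placeholders.

# Crude similarity search: one vector per track, ranked by cosine similarity.
import numpy as np
import librosa  # pip install librosa

def track_vector(path):
    y, sr = librosa.load(path, duration=60.0)        # first minute is enough here
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    v = mfcc.mean(axis=1)                            # time-averaged timbre summary
    return v / np.linalg.norm(v)

catalog_paths = ["song1.wav", "song2.wav", "song3.wav"]
catalog_vecs = np.stack([track_vector(p) for p in catalog_paths])

def most_similar(reference_path, top_k=2):
    ref = track_vector(reference_path)
    scores = catalog_vecs @ ref                      # cosine similarity
    order = np.argsort(-scores)[:top_k]
    return [(catalog_paths[i], float(scores[i])) for i in order]

print(most_similar("reference.wav"))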

Who are Similarity Searches for?

  • Music publishers: using reference songs to search their catalog
  • Production music libraries and beat platforms
  • DSPs that don’t have their own AI team
  • Radio apps
  • More use cases in A&R (artist and repertoire)
  • DJs needing to hold the energy high after a particularly well-received track (in the post-Covid world)
  • Basically, anyone who starts sentences like “That totally sounds like…”
  • Managers targeting look-alike audiences. 

Implementation Examples

Pros of this solution

  • Finding hidden gems in a catalog, which goes far beyond the human capacity for search. Here, both AI-tagging and AI search & recommendation are employed
  • Low entry barrier when working with big catalogs
  • Great and intuitive search experiences for non-professional music searchers.

Cons of this solution

  • Technical similarity vs. perceived similarity – there is still quite a difference in how humans and AI function. Human perception is highly subjective and may assign two songs a higher or lower similarity than the AI does.

Bottom line

All positive. Everyone should use Similarity Search algorithms every day.

Application 3. Auto-tagging

Problem

To find and recommend music, you need a well-categorized library that delivers the tracks that exactly correspond to a search request. The artist and the song name are "descriptive metadata", while genre, mood, energy, tempo, voice, and language are "discovery metadata". More on this topic here. The problem is that tagging music manually is one of the most tedious and subjective tasks in the music industry. You have to listen to a song and then decide which mood it evokes in you. Doing that for one song might be OK, but forget about it at scale. At the same time, tagging requires extreme accuracy and precision. Inconsistent and wrong manual tagging leads to a poor search experience, which results in music that can't be found and monetized. Imagine tagging the 300,000 new songs uploaded to the internet every day.

Solution

Tagging music is a task that can be done with the help of AI. Just like in the example in the first part of this article, where an algorithm detects traffic lights, neural networks can be trained to learn how, for example, rock music differs from pop or rap music.
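
For intuition, here is a minimal sketch of what such a tagger's final stage can look like: a small neural network that maps an audio feature vector to genre probabilities. The architecture, feature size, and genre list are illustrative assumptions; in practice the network is trained on large labeled datasets before its outputs mean anything.

# Minimal genre tagger head: feature vector in, genre probabilities out.
import torch
import torch.nn as nn

GENRES = ["rock", "pop", "rap"]

class GenreTagger(nn.Module):
    def __init__(self, n_features=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, len(GENRES)),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; softmax below yields probabilities

model = GenreTagger()
features = torch.randn(1, 128)  # stand-in for real audio features
probs = torch.softmax(model(features), dim=-1)
print(dict(zip(GENRES, probs[0].tolist())))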

Here is a Peggy Gou song, analyzed and tagged by Cyanite:


AI-tagged song
Who is AI-tagging for? 

For every music company that knows the pain of manual tagging. If you work in music, chances are pretty high that you have had or will have to tag songs. If you pitch a song on Spotify for Artists, you have to tag the song. If you have ever made a playlist, you most probably had to deal with its categorization and tagging. If you're an A&R and present a new artist to your team saying something like, "This is my rap artist's new party song," you have literally just tagged a song. In all these cases, it is good to have an objective AI companion tag the song for you.

AI-tagging is a really powerful tool at scale. You just bought a new catalog with tons of untagged songs and want to utilize it for sync: AI-tagging is the way to go. You're a distributor tired of clients uploading unfinished or false metadata: AI-tagging can help. You're a production music library that picked up tons of legacy from years of manual tagging: the answer is also AI-tagging.

Implementation Example

In the BPM Supreme library, you can see the different moods, energy levels, voice presence, and energy dynamics neatly tagged by an AI.

Picture 3. BPM Supreme Cyanite Search Interface
Pros of this solution

  • Speed 
  • Consistency across catalog
  • Objectivity / reproducibility
  • Flexibility. Whenever something changes in the music industry, you can re-tag songs with new metadata at lightning speed.

Cons of this solution

  • Development cost and time (luckily, Cyanite has a ready-to-go solution)
  • High energy consumption of deep learning models, though still less resource-heavy than manual tagging.

Bottom line

Tagging cannot replace human work completely. But it's a powerful and practical tool that dramatically reduces the need for manual tagging. AI-based tagging can increase the searchability of a music catalog with little to no effort.

Application 4. AI Mastering

Problem

Mastering your own music can be very expensive, especially for DIY and bedroom producers. These musicians often resort to technology to create new music. But in order to distribute music to Spotify or similar platforms, the music needs to meet certain sound-quality criteria.

Solution

AI can be used to turn a mediocre-sounding music file into a great-sounding one. For that, AI is trained on popular mastering techniques and on what humans have learned to recognize as good sound.
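
The learned part of AI mastering is proprietary, but one concrete, deterministic step that every mastering chain includes is loudness normalization. The sketch below normalizes a mix to a streaming-typical -14 LUFS; the target value and file names are assumptions.

# Loudness-normalize a mix to -14 LUFS (one small step of a mastering chain).
import soundfile as sf      # pip install soundfile
import pyloudnorm as pyln   # pip install pyloudnorm

data, rate = sf.read("demo_mix.wav")                      # placeholder input
meter = pyln.Meter(rate)                                  # ITU-R BS.1770 meter
loudness = meter.integrated_loudness(data)
mastered = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("demo_mastered.wav", mastered, rate)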

Who is AI mastering for?

  • DIY and bedroom producers
  • Professional musicians
  • Digital distributors 

Implementation Example

One company leading the field of AI mastering is LANDR. The Canada-based company has a huge community of creators and has already mastered 19 million songs. Other players include eMastered and Moises.

Picture 4. LANDR AI Mastering
Pros of this solution

  • Very affordable ($48/year for unlimited mastering of LO-MP3 files plus $4.99/track for other formats, vs. professional mastering starting at $30/song)
  • Fast
  • Easy for non-professionals. 

Cons of this solution

  • A standardized process that doesn’t allow room for experiments and surprises
  • Some say AI mastering is “lower quality compared to human mastering”.

Bottom line

AI mastering is an affordable tool for musicians with low budgets. For up-and-coming artists, it's a great way to get professionally edited music out to DSPs. For professional songwriters, it's the perfect means to make demos sound reasonably good. Professional mastering engineers usually serve a different target group, so these fields complement each other rather than AI taking over human jobs.

Summary

To sum it up, we presented four concrete use cases for AI that work for almost every part of the music industry's value chain. Still, the practical applications and quality differ. AI is far from having the same complex thinking and creativity as a professional music tagger, mastering engineer, or musician. But it can already help creatives do their work or even completely take over some of the expensive and tedious tasks.

One of the biggest problems that prevents us from embracing new technology is wrong expectations. There are often two extremes: on one side, people overestimate and expect more from AI than it can currently deliver, e.g., tagging 1M songs without a single mistake or always being spot-on with music recommendations. The other camp fears AI will take over their jobs.

The answer may lie somewhere in between. We can embrace technology and at the same time remain critical and not blindly rely on algorithms, as there are still many facets of the human brain that AI can not imitate. 

We hope you enjoyed this read and learned more about the 4 different use cases of AI in music. If you have any feedback, questions, or contributions, you are more than welcome to reach out to jakob@cyanite.ai. You can also contact our content manager, Rano, if you are interested in collaborations.

Introducing: Cyanite's Keyword Cleaning System for Music Libraries

In this article, we present the common challenge of inconsistent keyword tagging in music databases. We discuss what causes these problems and how Cyanite developed a Keyword Cleaning system to automatically overcome them. We will present five use cases for our Keyword Cleaning system and the potential impact it may have on music businesses.

Introduction of the problem

The way we perceive music is highly individual, and so is the way we describe it. What makes for good dinner conversation becomes important to be aware of when handling larger amounts of music professionally.

To leverage diverse monetization opportunities with musical assets, many music companies sort music catalogs by assigning keyword tags to all the audio files in their music database. These tags may describe the mood and genre of a song or categorize its instruments or tempo. This way music companies ensure accessibility and searchability of any musical asset even in very large music catalogs.

These tags follow the companies’ individual understanding of music – their catalog language. The specific nature of a catalog language may be understood under two aspects:

1. Objective catalog language (tagging): the entirety of keywords and tags, often described as a taxonomy or tag ontology (quantity, classes, and wording). "Which tags do I use?"

2. Subjective catalog language (understanding of tagging): the understanding of tags and their connection to certain sound qualities. "When do I assign a certain tag?"

Objective catalog language is inherent to the music catalog or the company that owns it. Subjective catalog language, however, is inherent to every individual person that tags the music.

Having a consistent catalog language leads to a brilliant search experience and is the perfect condition for the thorough exploitation of your assets. A lot of work can go into building and maintaining one's own catalog language. However, three main events can quickly erode it, and with it tagging quality and meaningfulness:

Event 1: Catalog acquisitions or integrations.

Event 2: Differences in the daily form of the tagging staff.

Event 3: The hiring of new tagging staff.

Not being aware of this can annihilate decades of work. Songs can't be found and revenue streams can't be realized as before, seriously harming a company's ability to execute its business model.

More importantly, music-searching staff no longer trust the music search, which leads them to build highly individual systems of workarounds for finding suitable music, or a very limited "go-to catalog" of songs they use over and over rather than drawing on the entire music catalog.

Aaron Chavez © Unsplash

Our solution

Addressing these issues, Cyanite developed a way to bring together (translate) two catalog languages – objective or subjective – with minimum information loss and maximum speed, using AI.

We base our approach on a measure we denote as keyword similarity, describing the degree of semantic similarity of a pair of tags. To give an example, the keywords “enthusiastic” and “euphoric” should have a rather similar meaning when used for the description of a musical mood. We would therefore expect a high degree of keyword similarity. On the contrary, “enthusiastic” and “gloomy” represent a quite contrary pair of descriptive attributes which should point towards a low degree of keyword similarity.

Most music catalogs use a multi-label tagging scheme, meaning a single piece of music can be assigned multiple tags. We make use of this fact and focus on the track-wise co-occurrence of tags, hypothesizing that the frequent joint attribution of a tag pair indicates a high degree of interrelation and, thus, keyword similarity.

We developed a natural language processing (NLP) AI system capable of learning the semantic interrelation of keywords in any library. With this, we are able to derive a quantitative measure for any combination of keywords contained in one or several music catalogs. This analysis is the basis for a variety of groundbreaking use cases to overcome challenges many music companies are struggling with.
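
For intuition, here is a minimal sketch of the co-occurrence idea: tags that are frequently assigned to the same tracks receive a high similarity score. The tagged catalog is made up, and Cyanite's production system uses an NLP model rather than these raw counts.

# Keyword similarity from tag co-occurrence in a multi-label catalog.
import numpy as np

tagged_tracks = [
    {"enthusiastic", "euphoric", "upbeat"},
    {"euphoric", "upbeat"},
    {"gloomy", "dark"},
    {"enthusiastic", "upbeat"},
    {"gloomy", "melancholic", "dark"},
]

tags = sorted(set().union(*tagged_tracks))
index = {t: i for i, t in enumerate(tags)}

# One row per tag: in which tracks does it occur?
occ = np.zeros((len(tags), len(tagged_tracks)))
for j, track in enumerate(tagged_tracks):
    for t in track:
        occ[index[t], j] = 1.0

def keyword_similarity(a, b):
    """Cosine similarity of the two tags' track-occurrence vectors."""
    va, vb = occ[index[a]], occ[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(keyword_similarity("enthusiastic", "euphoric"))  # 0.5: they co-occur
print(keyword_similarity("enthusiastic", "gloomy"))    # 0.0: they never do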


Use Case 1: Catalog language translation

This challenge arises when two (or more) differently tagged music catalogs are to be integrated into each other (potentially after a catalog acquisition or when choosing a different distribution outlet). Manually translating tags is tedious and may lead to significant information loss, as the same tags are sometimes not used in the same way (see "subjective catalog language" above).

Our system is able to understand and map every tag in relation to each other. It does this for both taxonomies, understanding each respective catalog language. In a second step, it maps both catalog languages onto each other, drawing direct relations between tags and how they are used. The third step is the translation of each song's tagging from one catalog language into the language of the catalog it is being integrated into. The system automatically re-tags every song in the new catalog language.
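
A minimal sketch of these steps might look as follows: every tag from the incoming catalog is mapped to its closest counterpart in the target taxonomy, and songs are then re-tagged through that mapping. The similarity function is pluggable (for instance, the co-occurrence measure sketched above); the toy character-overlap one here is only for demonstration.

# Map each source tag to the most similar target tag, then re-tag songs.
def translate_taxonomy(source_tags, target_tags, similarity):
    return {s: max(target_tags, key=lambda t: similarity(s, t))
            for s in source_tags}

def retag(song_tags, mapping):
    """Re-tag one song into the target catalog language."""
    return sorted({mapping[t] for t in song_tags})

# Toy similarity (shared characters) as a stand-in for a learned measure:
toy_sim = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
mapping = translate_taxonomy(["euphoric", "gloomy"],
                             ["enthusiastic", "melancholic"], toy_sim)
print(retag({"euphoric", "gloomy"}, mapping))  # -> ['enthusiastic', 'melancholic']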

Use Case 2: Keyword Cleaning of inconsistent keyword tagging

Companies with high fluctuation in tagging staff face this challenge – or it may be a company with a particularly large catalog (>100,000 songs) that picked up some legacy over the years: inconsistencies in keyword tagging. This is one of the biggest problems a catalog can face, as it seriously diminishes searchability and the search experience, leading to mistrust of the system, individual workarounds, and eventually losing the customer for good. Or it leads the customer to contact the library's sales and search staff directly, which harms your business's ability to scale.

After understanding your catalog's language, our Cyanite Keyword Cleaning system can detect tags with low keyword similarity that may contradict the other tags and flag the respective songs. To assess whether a tag was wrongfully assigned (or may be missing), we offer an audio-based tagging solution that checks these anomalies to determine whether a tag is suitable. If it is not, the tag is deleted.

Use Case 3: Taxonomy Cleaning. Detection of redundancies and blind spots.

Languages change over time – and with them, catalog languages. Some catalogs have 15,000+ different keywords in their taxonomy. It should come as no surprise that songs with older keyword tags are found less often. Choosing a slimmer taxonomy can elevate the searchability and overall search experience of a catalog.

This raises the question of whether all tags are necessary and meaningful. To test this, our Cyanite system can detect tags that are equal in meaning by scanning through your keyword tagging. It then consolidates redundancies, condensing the taxonomy to only meaningful, disjunct keyword classes.

Use Case 4: Open search

If you rely on customers handing in sync briefings and then search your catalog yourself, your business will lack scalability. So you might want to open up your catalog search to every potential client. For this, you want to make sure that you deliver the right music for every music search and every individual understanding of music – you need to speak the language of each of your customers.

To achieve this, our Cyanite Keyword system can translate a vast number of keywords into semantically related tags. This means that if you only tag the keyword "euphoric" on very upbeat, outgoing, and happy songs, but a client searches for "enthusiastic", our Cyanite Keyword system understands the request and presents the suitable songs from your catalog. This is especially important for keywords that were tagged significantly less often in your catalog, so the search can still show a good variety of music.
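
A sketch of that expansion step: a query keyword is expanded into all catalog tags above a similarity threshold, so a search for "enthusiastic" also surfaces songs tagged "euphoric". The similarity table, threshold, and song data below are toy stand-ins for a learned keyword-similarity measure.

# Open search via query expansion over semantically related tags.
TOY_SIMILARITY = {("enthusiastic", "euphoric"): 0.8,
                  ("enthusiastic", "gloomy"): 0.05}

def similarity(a, b):
    return TOY_SIMILARITY.get((a, b), TOY_SIMILARITY.get((b, a), 0.0))

def open_search(query_tag, songs, threshold=0.4):
    """songs: mapping of title -> set of tags. Returns matching titles."""
    all_tags = set().union(*songs.values())
    expanded = {t for t in all_tags
                if t == query_tag or similarity(query_tag, t) >= threshold}
    return [title for title, tags in songs.items() if tags & expanded]

songs = {"Song A": {"euphoric", "upbeat"}, "Song B": {"gloomy", "dark"}}
print(open_search("enthusiastic", songs))  # -> ['Song A']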

Use Case 5: Automatic tagging in your own catalog language.

Let's say your clients and customers have gotten used to your specific keyword tagging – your catalog language. Your catalog language is then an integral part of your platform's stickiness and helps retain customers. If you introduce automatic tagging through deep learning systems such as the Cyanite Tagging system, you want to keep the automatic tags in your catalog language so that your customers keep finding the right music.

To achieve this, our Cyanite Keyword system and the Cyanite Tagging system work together to translate our auto-tags into your catalog language. Your customers won't even notice that you switched to AI-tagging.

How to get started!

If the approach of Cyanite's Keyword Cleaning resonates with you, the first step is to have a look into your metadata. For that, please reach out to sales@cyanite.ai. Together, we will dive into your tagging scheme and assess the possibility of a Keyword Cleaning project.