
AI Panel: Using AI Music Search in a Co-Creative Approach between Human and Machine

In September 2022, Cyanite co-founder Markus Schwarzer took part in a panel discussion at the Production Music Conference (PMC) 2022 in Los Angeles.

The panel discussed the role of AI in a co-creative approach between humans and machines. Participants included Bruce Anderson (APM Music), Markus Schwarzer (Cyanite), Nick Venti (PlusMusic), Philippe Guillaud (MatchTune), and Einar M. Helde (AIMS API).

The panel raised pressing discussion points on the future of AI, so we decided to publish our takeaways here. To watch the full video of the panel, scroll down to the middle of the article. Enjoy the read!

Human-Machine Co-creativity

AI performs many tasks that are usually difficult for people, such as analyzing song data, extracting information, searching music, and creating completely new tracks. As AI usage increases, questions have been raised about AI’s potential and its ability to create with humans or on its own. The possibility of AI replacing humans is perhaps one of the most contentious topics.

The PMC 2022 panel focused on the topic of co-creativity. Some AI systems can create on their own, but co-creativity refers to creativity between the human and the machine.

It is not the sum of individual creativity; rather, it is the emergence of new forms of interaction between humans and machines. To find out all the different ways AI music search can be co-creative, let’s dive into the main takeaways from the panel:

Music industry challenges

The main music industry challenge that all participants agreed on was the overwhelming amount of music produced these days. Another challenge is reaching a shared understanding of music.

The way someone searches for music depends on their understanding of music, which can differ widely, and on their role in the music industry. Music supervisors, for example, use a different language to search for music than film producers.

We discussed this in detail on the Synchtank blog back in May 2022. AI can solve these issues, especially with the new developments in the field.

Audience Question from Adam Taylor, APM Music: Where do we see AI going in the next 5 years?

So what’s in store for music AI in the next 5 years? We’re entering a post-tagging era, marked by a combination of developments in music search. Keyword search will no longer be the main way to search for or index music. Instead, the following developments will take place:

  • Similarity Search has shown that we can use complex inputs to find music. A similarity search pulls a list of songs that match a reference track. It is projected to become the primary way of searching for music in the future.

  • Free Search – a full-text search, based on natural language processing technologies, that lets you look for music in your own words. You enter whatever comes to mind into a search bar and the AI suggests a song. The technology is similar to DALL-E or Midjourney, which return an image based on a text input.

  • Music services that already know what to do – in the longer term, music services will emerge that recommend music depending on where you are in your role or personal development. These services will cater to all levels of search: from an amateur level that simply returns a requested song, to expert searches following an elaborate sync brief, including images and videos that accompany the brief or even a stream of consciousness.
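To make the similarity-search idea concrete, here is a minimal sketch of how a reference track can be matched against a catalog. The embeddings, track names, and dimensionality are all invented for illustration; real systems learn high-dimensional vectors with neural networks, but the ranking step often boils down to a nearest-neighbor comparison like this:

```python
import math

# Hypothetical 4-dimensional "audio embeddings" -- real systems use
# hundreds of dimensions learned by a neural network.
catalog = {
    "track_a": [0.90, 0.10, 0.30, 0.70],
    "track_b": [0.20, 0.80, 0.50, 0.10],
    "track_c": [0.85, 0.15, 0.35, 0.65],
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means "same direction".
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def similar_tracks(reference, catalog, top_n=2):
    # Rank every catalog track by its similarity to the reference embedding.
    scores = {name: cosine_similarity(reference, vec) for name, vec in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

reference = [0.88, 0.12, 0.32, 0.68]  # embedding of the reference track
print(similar_tracks(reference, catalog))  # ['track_a', 'track_c']
```

A free-text search works on the same principle, except the query text and the audio are mapped into a shared embedding space before the comparison.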

Audience Question from Alan Lazar, Luminary Scores: Can I decode which songs have the potential to be a hit?

While some AI companies have attempted to decode the hit potential of music, it is still unclear whether there is any way to determine if a song will become a hit.

The nature of pop culture and the many factors that go into a hit – from songwriting and production to elusive factors such as what the song is associated with – make it impossible to predict whether or not a song becomes a hit.

The vision for AI from Cyanite – where would we like to see it in the future?

AI curation in music is developing at lightning speed. We’re hoping that it will make the music space more exciting and diverse, which includes in particular:

  • Democratization and diversity of the field – more opportunities will become available for musicians and creators, including democratized access to sync opportunities and other ways to make a livelihood from music.

  • Creativity and surprising experiences – right now, AI is designed to do the same tasks at rapid speed. We’re hoping AI will be able to perform tasks co-creatively and produce surprising experiences based on music as well as other factors. As music has the ability to touch people’s emotions directly, it has the potential to be part of a greater narrative.

Video from the PMC 2022 panel: Using AI Music Search In A Co-Creative Approach Between Human and Machine

Bonus takeaway: Co-creativity between users and tech – supplying music data to technology

It seems that we should be able to pull all sorts of music data from environments such as video games and user-generated content. However, the diversity of music projects is quite astonishing.

So when it comes to co-creativity in the form of enhancing machine tagging with human tagging, personalization can be harmful in B2B. In B2B, AI mainly works with audio features, without the involvement of user-generated data.

Conclusion

To sum up, AI can co-create with humans and solve the challenges facing the music industry today. There is a lot in store for AI’s future development and there is a lot of potential.

Still, AI is far from replacing humans and should not replace them completely. Instead, it will improve in ways that make music searches more intuitive and co-creative, responding to human input in the form of a text query, image, or video.

As usual with AI, some people overestimate what it can do. Some tasks, such as identifying a song’s hit potential, remain out of reach for AI.

On the other hand, it’s not hard to envision the future where AI can help democratize access to opportunities for musicians and produce surprising projects where music will be a part of a shared emotional experience.

We hope you enjoyed this read and learned more about AI co-creativity and the future of AI music search. If you’re interested in learning more, you can also check out the article “The 4 Applications of AI in the Music Industry”. If you have any feedback, questions, or contributions, please reach out to markus@cyanite.ai.

I want to integrate AI search into my library – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get a first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

AI looks into the Sound of Iconic Fabric Club Compilations

One year ago, we analyzed the sound of 9 iconic German clubs and tried to uncover representative elements behind the musical curation of each club using Cyanite’s music analysis algorithms.

Today we ask ourselves whether our AI can shed light on how electronic music has evolved over the last 20 years. Which club would be better suited for this than London’s Fabric? Its legendary club compilations, hand-picked by popular and emerging DJs, boast almost 20 years of history.

We look into all the main characteristics of Fabric compilations such as genre, mood, and energy level to show how the sound of the club progressed over the years.

Our Methodology
Fabric compilations comprise two series – fabric and Fabriclive. Friday nights at the club are known as Fabriclive; these albums feature artists such as James Lavelle, Tayo Popoola, and Daniel Avery (despite the name, Fabriclive albums were not recorded live). Saturday nights bear the name fabric; fabric albums feature artists such as Craig Richards, Omar-S, Shackleton, and many more.

Although the two series are clearly different from each other, we will try to find out if our AI can find common elements that could be characteristic and representative of Fabric’s sound and its development over time.

Our approach was to narrow the analysis down to the best-loved Fabric compilations. For this, we used best-of lists from media outlets such as DJ Mag and Mixmag, and from the Fabric team itself. In total, we selected 25 compilations and limited the analysis to them. You can find the full list at the end of the article.

Our findings include: 

  • fabric series progressed from house to techno
  • Fabriclive exhibits a strong tendency toward breakbeat/drum and bass
  • The Fabriclive series has more albums with uplifting vibes than fabric
  • fabric’s sound is robotic and bouncy, while Fabriclive’s is pulsing and driving
  • Common elements of fabric and Fabriclive compilations are high energy and a cool character.

And many more interesting insights, so keep reading to find them out.

Genre and Sub-genre

fabric compilations are dedicated to electronic dance as the main genre. Fabriclive is more diverse, featuring electronic dance alongside genres such as funk / soul, rap / hip hop, and rock.

The sub-genre feature in Cyanite provides 48 sub-genres, from abstract IDM / leftfield to trap. Drill and grime, popular within the UK scene, are likely to be classified as trap.

Each sub-genre has a score from 0 to 1, where 0 indicates that the track is unlikely – 0% – to represent the sub-genre, and 1 indicates that the track fully – 100% – represents the given sub-genre.
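Read this way, each track carries a vector of per-label scores, so finding a track’s dominant sub-genre is a one-liner. The sub-genre names and values below are invented for illustration:

```python
# Hypothetical sub-genre scores for a single track. Each score is read
# independently on a 0-1 (0-100%) scale, as described above.
scores = {"techno": 0.50, "house": 0.30, "disco": 0.15, "trap": 0.05}

# The dominant sub-genre is simply the highest-scoring label.
dominant = max(scores, key=scores.get)
print(f"{dominant}: {scores[dominant]:.0%}")  # techno: 50%
```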

Insights from the sub-genre analysis: from house to techno for fabric, and drum and bass for Fabriclive

The graphics below show the development of the main sub-genres over time for fabric and Fabriclive.

The first fabric compilations (fabric 01, fabric 10, fabric 11, and fabric 31) are heavily focused on house music. In fabric 36, Ricardo Villalobos delivers an album that is consistently minimal house. Finally, in recent years, fabric compilations have geared toward techno, with fabric 96 being the most techno album of all (50%).

In the Fabriclive series, the dominant sub-genre changes from one album to another, sometimes rather abruptly. Some albums have one dominant sub-genre; others mix various sub-genres in relatively similar proportions. The series starts with house as the sub-genre of Fabriclive 01 and Fabriclive 09, and then we practically don’t see another house album until Fabriclive 59 and Fabriclive 66.

Meanwhile, breakbeat / drum and bass takes over: Fabriclive 32 is 32% breakbeat / drum and bass, and Fabriclive 44 and Fabriclive 46 are predominantly breakbeat / drum and bass at 80% and 73% respectively. Fabriclive 75 restores a bit of balance with a combination of drum and bass, electro, and house.

The odd ones out are Fabriclive 07 by John Peel, which is indie / alternative at its core, Fabriclive 24 by Diplo, which is mainly electro, and Fabriclive 36 by LCD Soundsystem, which is 39% disco. There is definitely more variety and experimentation within the Fabriclive series.

Mood
Let’s see how the moods played out in the fabric and Fabriclive series. The moods work the same way as genre and sub-genre in Cyanite and represent the emotion of the track on a scale from 0 to 1 (0-100%). 

Insights from the mood analysis: fabric – dark, energetic, and ethereal, Fabriclive – energetic and uplifting

Both fabric and Fabriclive are quite energetic. The fabric series tends to be darker and more ethereal, while Fabriclive is uplifting.

A more detailed analysis reveals the differences between individual compilations:

The darkest compilation – fabric 36 featuring Ricardo Villalobos.

The most energetic and aggressive one – fabric 60 by Dave Clarke.

The most ethereal album – fabric 55 by Shackleton.

The most energetic Fabriclive compilations are Fabriclive 09 by Stuart Price and Fabriclive 24 by Diplo.

Most uplifting albums – Fabriclive 36 by LCD Soundsystem and Fabriclive 09 by Stuart Price.

The happiest album is Fabriclive 09 by Stuart Price.

Fabriclive 09 by Stuart Price is an album of extremes, ranking among the most energetic, most uplifting, and happiest albums.

Looking at the results, if you want to gain or expend some energy during the weekend, both fabric and Fabriclive nights are a great choice. If you want happier, more uplifting vibes, Friday Fabriclive nights are probably your best bet. By contrast, Saturday fabric nights tend to be on the dark side.

But the results vary across the compilations, with some outliers in between. So you might witness a Fabriclive night where dark, ethereal, and sad moods prevail, similar to Fabriclive 50 by dBridge and Instra:mental.

Character
Character describes qualities distinctive to a track and is one of the newer features in Cyanite. It contains classifiers such as warm, playful, heroic, and luxurious, which depict the expressive form of music – its appearance rather than its mood.

Insights from the character analysis: fabric – luxurious, cool, and mysterious, Fabriclive – cool, unpolished, and powerful. 

fabric compilations have a cool and luxurious character, but only at the start, in the fabric 01, fabric 10, and fabric 19 albums. In later compilations, the sound continues to be cool with a touch of mysterious and bold, which makes sense given that techno is more present in these albums. fabric 55 breaks through with an ethereal character, though it still maintains a bit of mystery. fabric 60 and fabric 91 introduce an unpolished character, while the last one, fabric 96, is mysterious and ethereal.

Fabriclive compilations also have a strong cool character across almost all albums. In some albums, the cool character is complemented by unpolished vibes – as in Fabriclive 32, Fabriclive 42, and Fabriclive 44.

In Fabriclive 24, Fabriclive 38, and Fabriclive 42, bold accompanies the cool character. Overall, our data shows bold, cool, unpolished, and powerful as overarching themes for Fabriclive, with no clear skew in one direction.

Movement
Movement is another new feature in Cyanite. It describes the overall manner of how the sound changes or “moves” across the track. Movement in music can be described as bouncy, driving, flowing, groovy, nonrhythmic, pulsing, robotic, running, steady, or stomping.  

Insights from the movement analysis: fabric – robotic and bouncy, Fabriclive – pulsing and driving

These are averages across the compilations; individual albums’ values may vary.

Energy Level
Insights from the energy analysis: both fabric and Fabriclive compilations are high energy overall

Out of all fabric albums, Dave Clarke’s fabric 60 has the most tracks with a high energy level. fabric 36 stands out with many medium-energy tracks, while low energy is not really characteristic of any of the fabric compilations. Out of 11 fabric albums, 7 are high energy.

dBridge and Instra:mental’s Fabriclive 50 is probably the lowest-energy album of all the Fabriclive compilations. Out of 14 Fabriclive albums, 10 have a majority of high-energy tracks, so the Fabriclive series is also high energy overall.

Conclusion
What does all this data mean? It shows the development of Fabric sound across the years and paints a picture of a club that pretty much remained true to its goals and mission from the start. While there are some variations across fabric and Fabriclive compilations, both are dedicated to the electronic dance genre, with house and techno as sub-genres for fabric, and breakbeat / drum and bass, techno, house, plus some rap/hip hop, rock, and soul for Fabriclive. 

The differences in mood and movement between fabric and Fabriclive are where the club brings some experimentation within the series as well as between the series. With fabric delivering the darkest vibes, it is hard not to appreciate the uplifting nature of Fabriclive sound. With movement also, the differences between the series are apparent. While fabric’s movement values are robotic and bouncy, Fabriclive is characterized by pulsing and driving vibes. 

As for character and energy levels, they are pretty consistent. The club maintained its cool character on Friday and Saturday nights throughout the years, additionally introducing a more mysterious sound for fabric and an unpolished sound for Fabriclive in recent years.

It appears that it might be possible to detect how the club sound changed over time as well as explore the differences between the club nights. For a legendary club such as Fabric, it is an opportunity to decide whether to stay on a well-known path or steer in a different direction in the future.

I want to analyze my music data with Cyanite’s AI – how can I get started?

If you want to get a first grip on how Cyanite works, you can also register for our free web app to analyze music and try out similarity searches without any coding needed.

Contact us with any questions about our frontend and API services via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

The Scariest Movie Soundtrack for Halloween

In light of the scariest night of the year, we present a data story on movies for Halloween. We identify the dominant emotion in each movie’s soundtrack and determine the scariest movie, music-wise. Read on to find out more.
Dracula
Bram Stoker’s Dracula is a classic vampire story. Directed by Francis Ford Coppola, the movie revives the long-forgotten character of Vlad Dracula and rescues it from its previous outlandish interpretations. In the movie, Dracula is looking to reunite with his lost love, which prompts him to seduce and terrorize a group of friends in London.

Dracula’s soundtrack is quite dark, sad, and scary, with dark as the dominant mood at 16% across the whole soundtrack.

28 Days Later
A haunting apocalyptic story, 28 Days Later tells of a mysterious virus that takes over the United Kingdom. Four survivors have to rebuild their lives while fighting everyone infected by the virus. In 2007, Stylus Magazine named 28 Days Later the second-best zombie movie of all time.

The 28 Days Later soundtrack is epic and sad rather than scary. It scored only 4% on scary, which makes it one of the least scary soundtracks on the list.

Corpse Bride
Not so much a scary movie as a dark fantasy film, Corpse Bride was the third stop-motion movie directed by Tim Burton. The plot features Victor, a troubled fiancé, who accidentally marries a skeleton-like creature called Emily. The newlyweds spend a lot of time in the Land of the Dead before returning to the Land of the Living.

Mostly sad, epic, and chilled, Corpse Bride’s soundtrack doesn’t set out to scare its viewers. It is actually also one of the least scary on the list, with only 4% of the scary mood.

The Shining
The Shining is a psychological horror film produced and directed by Stanley Kubrick. The film is based on Stephen King’s novel and tells the story of Jack and his family, who move into a remote hotel with a mysterious past. Jack’s psychological health deteriorates in the hotel as he starts dreaming about killing his family.

The Shining’s music is dark, scary, and sad, with some energetic notes across the soundtrack. It is one of the three scariest soundtracks on the list.

Halloween 2
A murderer who terrorized the people of a small hometown on the eve of Halloween has been imprisoned. As soon as he gets out, he resumes his murderous spree. Halloween 2 is a typical American slasher movie, co-produced by the original Halloween creator, John Carpenter.

The Halloween 2 soundtrack scores 18.6% on the scary mood, exceeded only by the dark mood at 21.8%. The soundtrack is also quite energetic and aggressive.

Midsommar
A fairly recent release, Midsommar features an ancestral commune in Sweden where strange things connected to a Scandinavian pagan cult bring uneasiness and fear. This is also a movie about a woman’s revenge against a man who could not meet her emotional needs.

Midsommar’s soundtrack is 16.2% dark, 15% sad, and 12.7% spherical. It is also mildly scary, with 10% of the scary mood.

Stranger Things
Extraordinary mysteries are explored in this science-fiction horror drama. A group of young friends witnesses supernatural events around their town, including the appearance of a girl with psychokinetic abilities. Stranger Things is the only Netflix series on this list, but it is worth a mention for its critically acclaimed atmosphere, plot, directing, writing, and soundtrack.

The series’ soundtrack is sad, spherical, and dark. It is also quite calm but not that scary. 

Genre Analysis

We also analyzed the genres of all the movie soundtracks. The results are as follows:

The most classical soundtrack is Dracula’s: it is 67.46% classical and only 7.11% ambient.

The most ambient soundtrack is Stranger Things. The series’ soundtrack has the most ambient and electronic dance profile of all.

Apart from that, Corpse Bride scored very high on the jazz genre at 23.89%, and 28 Days Later boasts some rock tunes at 21.34% for the rock genre.

Overall, it seems that filmmakers prefer classical, ambient, electronic dance, jazz, and rock compositions.

Emotional Profile and Energy Levels 

Our analysis uncovered the energy level and emotional profile of each soundtrack. The emotional profiles tend to be negative, with only two movies – 28 Days Later and Corpse Bride – being somewhat positive. For horror movies, this is quite a natural result.

The energy levels vary. For example, Midsommar and Stranger Things have low-energy soundtracks, whereas Dracula has a medium-energy soundtrack. All other movies scored high on energy level.

The Scariest Movie Soundtrack for Halloween is…? 

And the title of the scariest soundtrack for Halloween goes to Halloween 2, with 18.6% of the scary mood. John Carpenter not only co-wrote the movie but also composed its iconic soundtrack. This is not surprising considering that the previous movie’s soundtrack – the original Halloween – was named the greatest horror soundtrack by Rolling Stone in 2019 and served as an inspiration for many movies to come.

We hope you enjoyed this data story and wish you a Happy Halloween! If you’re interested in doing a joint data story or analysis, reach out to rano@cyanite.ai.

If you want to get a first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

The Sound of Traumprinz – AI Music Analysis

In the world of electronic music, shifting between different aliases allows artists to explore different sounds and demonstrate their versatility without transforming their existing musical identities too much. But is it really true that different aliases mean a different sound for every DJ? In this article, we analyze the music of one famous German DJ across all of their aliases to confirm or challenge that view.

Late last year, Dutch musician Afrojack, known for EDM, revived his house and techno alias Kapuchon, releasing the housey single ‘10 Years Later’. With the TESTPILOT alias, deadmau5 flaunts his techno chops.

But not all aliases signal a vastly different sound. Some reveal more subtle transformations, which makes them a little more challenging for listeners, fans, and reviewers to articulate, but no less pleasing to the ears.

In these cases, how can machine learning help to identify even the more granular differences in your music catalog? 

We put our Cyanite music intelligence tools to the test by tracking and analyzing the different aliases of quite possibly one of the most reclusive German producers in the underground electronic world: Traumprinz.

The Banksy of the underground music community, his presence is marked by sporadic SoundCloud releases across various identities, no announcements of live gigs, and definitely no hints at his real name. Having produced under 7 different aliases throughout his career, the elusive producer’s musical output spans techno, ambient, house, and everything in between, and yet somehow remains recognizably ‘Traumprinz-sounding’.

We analyzed EPs and albums from all 7 aliases, amounting to over 150 songs. Today, we share with you some interesting insights gleaned using our mood and genre algorithms.

Analyzing tracks from all his aliases, we obtained statistics on each track’s genre breakdown, emotion breakdown, BPM, and more. From there, we arrived at alias-level breakdowns, and at a combined, whole-of-Traumprinz breakdown of his ‘average’ sound across all aliases.
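As a rough sketch of that aggregation step – rolling track-level results up into an alias-level profile – here is a minimal example. All aliases shown match the article, but the per-track numbers are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Invented track-level results for illustration: (alias, genre, share).
tracks = [
    ("Traumprinz", "electronic dance", 0.90),
    ("Traumprinz", "electronic dance", 0.75),
    ("DJ Healer", "ambient", 0.80),
    ("DJ Healer", "ambient", 0.64),
]

# Group each genre's per-track shares by alias.
by_alias = defaultdict(lambda: defaultdict(list))
for alias, genre, share in tracks:
    by_alias[alias][genre].append(share)

# Average the shares to get one genre profile per alias.
profiles = {
    alias: {genre: mean(shares) for genre, shares in genres.items()}
    for alias, genres in by_alias.items()
}
print(profiles)
```

The whole-of-Traumprinz ‘average’ sound is the same computation run once more across all aliases combined.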

The emotions of Traumprinz’s many aliases

Mood Analysis Traumprinz

Sentiment analysis in the world of music AI goes beyond positive and negative. Our Cyanite models detect 13 different facets of emotion in the tracks they analyze.

Overall, our analysis shows that Traumprinz’s sound veers toward contemplative, melancholy territory, detected by our models as largely spherical, sad, and dark. Musical output from the DJ Healer era is detected as the saddest, calmest, most spherical, and most chilled of all 7. Occasional experimentation with the lighter side of things appears in his releases under the DJ Metatron and The Phantasy aliases.

The ‘benchmark’ Traumprinz sound

The Cyanite intelligence tools revealed that the Traumprinz sound can be largely summed up (if that were ever possible!) as electronic dance (techno- and house-oriented), with a significant touch of ambient and, in some parts, classical.

Traumprinz pie chart

And now, on to a deep-dive:

Genre Analysis Across Aliases Traumprinz

The most electronic dance era: Traumprinz

With more house-tinged tracks in releases such as Into the Sun, Mothercave, and Intrinity, and a more upbeat average BPM of 121, the Traumprinz era was detected as 82.6% electronic dance in its genre makeup, soaring above the average of 59.5%.

The most ambient-sounding era: DJ Healer 

We found that Traumprinz’s songs were most ambient-sounding under the DJ Healer alias, with the solemn, sophisticated release Nothing 2 Loose, moderately paced at an average BPM of 102. At 71.9% ambient, this is far above the average Traumprinz ambient level of 28.4%.

The most classical-sounding era: DJ Healer

At 8.0% classical in makeup, songs from the DJ Healer days were found to be the most classical, although the DJ Metatron alias, with releases like Loops of Infinity, was a very close runner-up at 7.8% classical.

Looking at the other aliases, ambient makes up slightly more than a quarter of DJ Metatron’s overall sound, with electronic dance dominating. Compared to DJ Metatron, Golden Baby has more of that electronic dance feel and less of the ambient. Musical output from The Phantasy era closely mirrors Golden Baby in genre profile, with just a little more ambient.

Even dancier than Golden Baby is the Prince of Denmark sound. Finally, the closest runner-up to Traumprinz for the most electronic dance-sounding era would be the Prime Minister of Doom alias.

Energy levels and emotional profiles

Apart from the data on genre and moods, our analysis uncovered the energy level and emotional profile of each song. Our general summary of the analysis can be described as follows: 

  • The emotional profile tends to be negative, with very few songs being neutral and even fewer tracks tagged as positive.

  • The energy level, for the most part, alternates between low and medium. But when it comes to Prince of Denmark and The Phantasy, there are definitely more high-energy songs compared to the other aliases. Are these two aliases a way for the producer to show a more energetic side?

Here is a detailed Excel sheet with each song’s data. Click on the links in the file to see the full albums. 

Our final thoughts: AI as a tool for discovering new ways for music curation

Through this music analysis experiment, we can once again see how music tagging and categorization software can complement human judgment and instinct when it comes to the appreciation of music. Moreover, this data can be used to select tracks for DJ sets so they transition smoothly. For music companies, it demonstrates the inner workings of AI, which can be used to sort music in a catalog and make similar-song recommendations.

To end off, here’s a mix of Traumprinz for your listening pleasure.

Club Sounds Analysis with Cyanite: Mexican Edition

Following the success of the article on German club sounds, published in March on the Cyanite blog, Terc0 – a group of creatives from Mexico who search for, empower, and promote artistic talent – contacted us with the idea of a similar project for Mexican clubs, and we instantly said yes!

The project was published on the Deefe platform and we are sharing it with you today. 

Deefe looked at seven music spaces in cities across Mexico to find out what the country sounds like. The clubs mentioned in the article include:

Bar Americas (Guadalajara)

Terminal Club (Mexico City)

YUYU (Mexico City)

Rhodesia (Mexico City)

M.N. Roy (Mexico City)

Topazdeluxe (Monterrey)

Hardpop (Ciudad Juarez)

Darkness, sensuality, and sadness turned out to be the dominant moods of the club sound. At the same time, the sound is uplifting and energetic. This comes in contrast to Germany, where most sound is dark and melancholic. 

Unlike Germany, where clubs prefer instrumentals over vocals, Mexican clubs are more diverse in their choices, and female and male vocals seem to be more pronounced. Still, 4 out of 7 clubs don’t feature female vocals at all. In Germany, all clubs had female vocals in their tracks to some degree, though some less than others.

These are just some of the findings revealed by the analysis. To see whether the Mexican audience prefers techno or house, and which Mexican club has the sexiest sound, head over to the Deefe blog.

To see how Germany compares, see this article on the blog. 

If you’d like to reproduce this analysis for your country, send an email to Rano at rano@cyanite.ai.

The 4 Applications of AI in the Music Industry

A couple of weeks ago, Cyanite co-founder Jakob gave a lecture in a music publishing class at Berlin’s BIMM Institute. The aim was to give concrete examples of AI’s real use cases in today’s music industry – to get away from the overload of buzzwords surrounding AI and shed more light on its actual applications and benefits.

This lecture was well received by the students, so we decided to publish its main points on the Cyanite blog. We hope you enjoy the read!

Introduction

Many people, when they hear about “AI and music”, think of robots creating and composing music. This understandably comes together with a very fearful and critical perception of robots replacing human creators. But music created by algorithms merely represents a fraction of AI applications in the music industry. 

AI Robot & Music
Picture 1. AI Robot Writing Its Own Music
This article is intended to explore:

1. Four different kinds of AI in music.

2. Practical applications of AI in the music industry. 

3. Problems that AI can solve for music companies.

4. Pros and cons of each AI application.

How does AI work? 

Before we dive into the four kinds of AI in the music industry, here are some basic concepts of how AI works. These concepts are not only worth understanding in themselves, they can also help you come up with new applications of AI in the future. 

Just like humans, some AI methods like deep learning need data to learn from. In that regard, AI is like a child. Children absorb and learn to understand the world by trial and error. As a child, you point your finger at a cat and say “dog”. You then get corrected by your parents who say, “No, that’s a cat”. The brain stores all this information about the size, color, looks, and shape of the animal and identifies it as a cat from now on. 

AI is designed to follow the same learning principle. The difference is that AI is still not even close to the magical capacity of the human brain. A typical artificial neural network has around 1,000 – 10,000 neurons in it, while the human brain contains 86 billion!

This means that AI can currently perform only a limited number of tasks and needs a lot of high-quality data to learn from.
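To make the child analogy concrete, here is a toy sketch of that trial-and-error loop: a single artificial neuron (a perceptron, far simpler than any real music model) guesses a label, gets corrected, and nudges its internal weights. The features and numbers are invented purely for illustration.

```python
# A toy illustration of trial-and-error learning: one artificial neuron
# learns to separate two classes (say, "cat" vs. "dog") from labeled
# examples, being "corrected" whenever its guess is wrong.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # samples: list of feature vectors; labels: 1 ("cat") or 0 ("dog")
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - guess  # the "No, that's a cat" correction signal
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Two invented features, e.g. body size and ear pointiness: class 1 = "cat"
samples = [[0.2, 0.9], [0.3, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)
```

A real deep learning model works on the same principle, just with thousands of such neurons stacked in layers and far more training examples.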

One example of how data is used to train AI to detect objects in pictures is reCAPTCHA, the system that asks you to select traffic lights in a picture to “prove you are human”.

In doing so, the system collects highly valuable training data from which neural networks learn what traffic lights look like.

Picture 2. AI Learning with reCAPTCHA
If you are interested in learning more about how this process works for detecting genres in music, check out this article.

The 4 types of AI in music

Now that you understand the basic AI concept, here is an overview of the four main applications of AI in the music industry. Keep in mind that there are many more possible applications.

1. AI Music Creation

2. Search & Recommendation

3. Auto-tagging

4. AI Mastering

Let’s have a closer look at what problems each area addresses, how the solutions work, and also explore their pros and cons!

Application 1: AI-Generated Music

Problem

The problems that AI solves in the creation field are not immediately apparent, since AI-generated music is first and foremost a creative and artistic endeavor. From a business perspective, however, clear use cases emerge. When music needs to adapt to changing situations, for instance in video games or other interactive settings, AI-created music can respond to the environment far more natively than a fixed recording. 

Solution

AI can be trained to create custom music. For that, it needs input data and then has to be taught to make music, just like a human.

To understand current AI creation capabilities here are a couple of real-world examples:

Yamaha analyzed many hours of Glenn Gould’s performances to create an AI system that can potentially reproduce the famous pianist’s style and maybe even create an entirely new piece in it.

A team of Australian engineers won the AI “Eurovision Song Contest” with a song built from samples of noises made by koalas and Tasmanian devils. The team trained a neural network on the animal noises to produce an original sound and lyrics. 

Who is AI-generated music for?

  • Game Studios
  • Art Galleries
  • Brands  
  • Commercials  
  • Films  
  • YouTubers  
  • Social Media Influencers

Implementation Examples

Pros of this solution

  • Cheap to produce new content
  • Customizable
  • Great potential for creative human & AI collaboration
  • New creative tools for artists

Cons of this solution

  • The quality of fully synthesized AI music is still very low
  • No concrete application in the traditional music industry yet
  • Legal issues over copyright, including rights to folklore music 
  • Most AI creation models are trained on Western music and can reproduce Western sound only
  • Very high development cost

Bottom line

It will take some time for AI-created music to sound adequate or find a straightforward use case. However, hybrid approaches that use AI to compose music from pre-recorded samples, loops, and one-shots show that the AI-generated future is not far away.

Application 2. Search & Recommendation

Problem

It can be hard to find the one song that fits the moment perfectly, whether it is a movie scene or a podcast. And the more music a catalog contains, the harder it is to search efficiently. With 500 million songs online and 300,000 new songs uploaded to the internet every day (!!), this can easily be called an inhuman task. Platforms like Spotify have developed great recommendation algorithms for seamless and enjoyable listening experiences. In sync licensing, however, it gets a lot more difficult. Imagine a music publisher who administers around 50,000 copyrights: effectively, they can oversee maybe 10% of that catalog, leaving a lot of potential unused. 

Solution

AI can be trained to detect sonic similarities in songs.  
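Under the hood, most similarity search works roughly like this: each track is condensed into a numeric fingerprint (an embedding vector), and the closeness of two fingerprints stands in for sonic similarity. The sketch below is a minimal illustration of one common approach; the track names and 3-dimensional vectors are invented, while real systems derive much larger embeddings from the audio itself.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means
    # "points in the same direction", i.e. maximally similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query, catalog):
    # Rank catalog tracks by similarity to the reference song's embedding.
    return sorted(catalog,
                  key=lambda item: cosine_similarity(query, item[1]),
                  reverse=True)

catalog = [
    ("mellow piano ballad", [0.9, 0.1, 0.2]),
    ("driving techno track", [0.1, 0.9, 0.8]),
    ("ambient pad texture",  [0.7, 0.2, 0.1]),
]
query = [0.2, 0.8, 0.9]  # a reference song that "totally sounds like" techno
ranking = most_similar(query, catalog)
```

The whole “that totally sounds like…” experience boils down to ranking a catalog by this kind of distance and returning the top matches.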

Who are Similarity Searches for?

  • Music publishers: using reference songs to search their catalog
  • Production music libraries and beat platforms
  • DSPs that don’t have their own AI team
  • Radio apps
  • More use cases in A&R (artists and repertoire), etc.
  • DJs needing to hold the energy high after a particularly well-received track (in the post-Covid world)
  • Basically, anyone who starts sentences like “That totally sounds like…”
  • Managers targeting look-alike audiences. 

Implementation Examples

Pros of this solution

  • Finding hidden gems in a catalog, going far beyond human search capacity; here both AI tagging and AI search & recommendation are employed
  • Low entry barrier when working with big catalogs
  • Great, intuitive search experiences for non-professional music searchers

Cons of this solution

  • Technical similarity vs. perceived similarity: humans and AI still function quite differently. Human perception is highly subjective and may judge two songs as more or less similar than the AI does. 

Bottom line

All positive. Everyone should use Similarity Search algorithms every day.

Application 3. Auto-tagging

Problem

To find and recommend music, you need a well-categorized library that delivers the tracks exactly corresponding to a search request. The artist and the song name are “descriptive metadata”, while genre, mood, energy, tempo, voice, and language are “discovery metadata”. More on this topic here. The problem is that tagging music manually is one of the most tedious and subjective tasks in the music industry. You have to listen to a song and then decide which mood it evokes in you. Doing that for one song might be OK, but forget about it at scale. At the same time, tagging requires extreme accuracy and precision: inconsistent or wrong manual tagging leads to a poor search experience, and music that can’t be found can’t be monetized. Now imagine tagging the 300,000 new songs uploaded to the internet every day. 

Solution

Tagging music is a task that can be done with the help of AI. Just like in the example in the first part of this article, where an algorithm detects traffic lights, neural networks can be trained to learn how, for example, rock music differs from pop or rap music.
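The final step of auto-tagging can be sketched simply: a trained model returns a confidence score per tag, and every tag above a chosen threshold gets attached to the track. The tag names and scores below are invented placeholders for what a real model would output for one song.

```python
def apply_tags(scores, threshold=0.5):
    # Keep every tag whose confidence clears the threshold,
    # strongest first, so the whole catalog is labeled consistently.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, score in ranked if score >= threshold]

# Hypothetical model output for a single track
model_output = {
    "electronic": 0.92,
    "uplifting": 0.81,
    "female vocals": 0.67,
    "rock": 0.08,
}
tags = apply_tags(model_output)
```

Because the threshold is just a parameter, the same model output can be re-cut stricter or looser at any time, which is what makes re-tagging an entire catalog so cheap compared to another round of manual listening.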

Here is a Peggy Gou song, analyzed and tagged by Cyanite: 

Video: AI-tagged song
Who is AI-tagging for? 

For every music company that knows the pain of manual tagging. If you work in music, chances are pretty high that you had or will have to tag songs. If you pitch a song on Spotify for Artists, you have to tag a song. If you ever made a playlist – you most probably had to deal with its categorization and tagging. If you’re an A&R and present a new artist to your team and say something like, “This is my rap artist’s new party song,” you literally just tagged a song. In all these cases it is good to have an objective AI companion to tag a song for you. 

AI-tagging is a really powerful tool at scale. You just bought a new catalog with tons of untagged songs and want to use it for sync: AI-tagging is the way to go. You’re a distributor tired of clients uploading unfinished or false metadata: AI-tagging can help. You’re a production music library that has accumulated tons of legacy tags from years of manual tagging: the answer is also AI-tagging.  

Implementation Example

In the BPM Supreme library, you can see the different moods, energy levels, voice presence, and energy dynamics neatly tagged by an AI.

BPM Supreme Interface
Picture 3. BPM Supreme Cyanite Search Interface
Pros of this solution

  • Speed 
  • Consistency across catalog
  • Objectivity / reproducibility
  • Flexibility. Whenever something changes in the music industry, you can re-tag songs with new metadata at lightning speed

Cons of this solution

  • Development cost and time (luckily, Cyanite has a ready-to-go solution)
  • High energy consumption of deep learning models, though still less resource-intensive than manual tagging

Bottom line

AI tagging cannot replace human work completely. But it is a powerful, practical tool that dramatically reduces the need for manual tagging and can increase the searchability of a music catalog with little to no effort.

Application 4. AI Mastering

Problem

Mastering your own music can be very expensive, especially for DIY and bedroom producers. These musicians often rely on technology to create new music, but to be distributed to Spotify and similar platforms, the music needs to meet certain sound-quality criteria. 

Solution

AI can be used to turn a mediocre-sounding music file into a great-sounding master. For that, AI is trained on popular mastering techniques and on what humans have learned to recognize as good sound. 
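As a deliberately simplified illustration of one step such a system automates, here is peak normalization: scaling all sample values so the track hits a target level. Real AI mastering adapts EQ, compression, and limiting to the material; this sketch only shows the normalization idea, and the sample values are invented.

```python
def normalize_peak(samples, target_peak=0.95):
    # Scale a list of audio sample values (in the range -1.0..1.0)
    # so the loudest sample sits exactly at the target peak level.
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples  # silence stays silent
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet_mix = [0.1, -0.2, 0.15, -0.05]  # an invented, too-quiet mix
mastered = normalize_peak(quiet_mix)
```

An actual mastering engine would measure perceived loudness rather than raw peaks and apply many more processing stages, but the principle of bringing a file up to a distribution-ready level is the same.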

Who is AI mastering for?

  • DIY and bedroom producers
  • Professional musicians
  • Digital distributors 

Implementation Example

One company leading the field of AI mastering is LANDR. The Canada-based company has a huge community of creators and has already mastered 19 million songs. Other players include eMastered and Moises.

Picture 4. LANDR AI Mastering
Pros of this solution

  • Very affordable ($48/year for unlimited mastering of LO-MP3 files plus $4.99/track for other formats, vs. professional mastering starting at $30/song)
  • Fast
  • Easy for non-professionals. 

Cons of this solution

  • A standardized process that doesn’t allow room for experiments and surprises
  • Some say AI mastering is “lower quality compared to human mastering”.

Bottom line

AI mastering is an affordable tool for musicians on low budgets. For up-and-coming artists, it’s a great way to get professionally polished music out to DSPs. For professional songwriters, it’s the perfect means to make demos sound reasonably good. Professional mastering engineers usually serve a different target group, so the two fields complement each other rather than AI taking over human jobs.

Summary

To sum up, we presented 4 concrete use cases for AI that cover almost every part of the music industry’s value chain. Still, their practical maturity and quality differ. AI is far from having the same complex thinking and creativity as a professional music tagger, mastering engineer, or musician. But it can already help creatives do their work, or even completely take over some of the expensive and tedious tasks. 

One of the biggest problems preventing us from embracing new technology is wrong expectations. There are two common extremes: on one side, people overestimate AI and expect more than it can currently deliver, e.g. tagging 1M songs without a single mistake or always being spot-on with music recommendations. On the other side, people fear AI will take over their jobs.

The answer may lie somewhere in between. We can embrace technology while remaining critical and not blindly relying on algorithms, as there are still many facets of the human brain that AI cannot imitate. 

We hope you enjoyed this read and learned more about the 4 different use cases of AI in music. If you have any feedback, questions, or contributions, you are more than welcome to reach out to jakob@cyanite.ai. You can also contact our content manager Rano if you are interested in collaborations.