How-To: Spotify Playlist Pitching Guide with AI in 2025

It’s no secret that your song’s playlist performance can make or break your release. Whether your song gets picked is significantly influenced by the quality of your pitch on Spotify for Artists. That’s why we decided to create this guide to Spotify playlist pitching with AI.

And let’s face it: Artists are great at making music, but not everyone is great at talking about it. Let Cyanite’s AI song analysis talk the talk while you walk the walk. 

Three Types of Playlists

There are several types of playlists, and your track can land on any of them. We distinguish between the following playlists:

  1. Algorithmic playlists (Spotify)
  2. Independent playlists (bloggers & curators)
  3. Editorial playlists (Spotify’s curators)

We’ll focus on the last two and provide a playlist pitch template using Cyanite. Independent playlists usually have their own websites, with contact details listed somewhere on the site. They host their playlists on a multitude of platforms, including Spotify.

Editorial playlists on Spotify are created by Spotify editors. These playlists can only be accessed via the portal Spotify for Artists. For those who do not yet know how the portal works, here is a quick guide by Ditto.

What is Spotify for Artists?

If you’re going to pitch on Spotify, Spotify for Artists is the tool for you. Any artist can submit a track to Spotify so that Spotify editors can review it and include it in one of the playlists. The editorial team at Spotify accepts only unreleased tracks, so if your song is already on Spotify you won’t be able to submit it. Therefore, before you set the release date on Spotify for Artists, make sure you use the pitching option first. Editors’ reviews also take time, so you need to submit a song well in advance.

Tip: Submit a track on Spotify for Artists at least seven days before the release (better two weeks) to ensure it can be included in the Release Radar of the artist’s followers.

How to Pitch a Track?

Spotify for Artists gives you a step-by-step guide on how to pitch the song. But as with every platform, some tips and tricks can increase your chances of getting onto the playlist. 

Here’s where Cyanite comes into play with its AI song analysis: it improves the quality of your pitch and makes the process smoother and more productive.

Our tips on how to pitch a playlist using Cyanite’s AI include:

  1. Identify the strongest emotions of the song.
  2. Find the right words for the Spotify song description.
  3. Find the most suitable playlists with Cyanite’s Playlist Matching.

Let’s explore these steps in detail.

Tip 1: Identify the strongest emotions of the song.

The Spotify for Artists portal lets you select the two emotions that classify your song best. Since you are limited to only two, it is very important to make the right choice here. The emotional and subjective nature of music makes this task particularly difficult.

Spotify for Artists pitching

Here is how you do it with Cyanite’s AI song analysis. Upload your song as an MP3 file or via a YouTube link to your library on Cyanite. The song will be analyzed, and tags such as genre, mood, energy, or instruments will become available. Also, in Cyanite’s Detail View, you will see how the moods, genres, energy levels, or instruments develop over the duration of the song.
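Since Spotify for Artists only lets you pick two emotions, a quick way to shortlist them is to rank the mood scores from the analysis. A minimal sketch in Python; the mood names and scores here are hypothetical, not real Cyanite output:

```python
# Hypothetical mood scores from an AI song analysis export.
# Keys and values are illustrative only.
mood_scores = {
    "energetic": 0.81,
    "uplifting": 0.64,
    "romantic": 0.33,
    "dark": 0.12,
    "calm": 0.05,
}

def top_moods(scores, n=2):
    """Return the n strongest moods, highest score first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [mood for mood, _ in ranked[:n]]

# The two emotions to enter in Spotify for Artists:
print(top_moods(mood_scores))  # ['energetic', 'uplifting']
```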

Here is a chart that shows how the emotions of Spotify for Artists map onto the emotions in Cyanite.

A simple chart showing how the moods on Spotify relate to the moods from Cyanite.

Spotify/Cyanite Moods Translation

Choosing emotions that correctly describe your track saves time for the editors and makes a good first impression. This is confirmed by professionals in the music industry, who often have to deal with tons of music releases.

Weston McGowen, artist manager at Equal Songs, used Cyanite when submitting songs to Spotify for Artists. Weston recalls that choosing emotions has always been one of the most difficult parts for him.

He says: “The objective view of Cyanite’s AI helps a lot”.

Additionally, some of Spotify’s playlists are mood-based, so mood match is the first criterion editors look at. Stephen Cirino emphasizes the relevance of emotion selection in his article on the pitching process:

“Choosing the right moods to match your song can help get your music in front of curators for mood-focused playlists such as Mood Booster, Dreamy Vibes, Sad Indie, and more”. So Cyanite’s mood tags might be the most important tags to pay attention to when playlist pitching with AI.

Additionally, you can choose and match genres, sub-genres, and instruments using Cyanite. Here is the screenshot of the song analysis with all the data:

Screenshot showing the Library in Cyanite's Web App interface with popular songs and tags.

Cyanite Analysis of genre & auto-descriptions

Tip 2: Find the right words for the song description.

According to the editors as well, the most important part of the Spotify playlist pitching guides out there is the song description. In 500 characters you need to describe what your song is about and why it is a good match for any of Spotify’s playlists.

Yes, it is all about context. Everything that gives the editors extra background information about the song should be packed into that big blank space where you describe the track. In the end, it makes their work easier and helps them build an emotional connection to the music.

For that, Cyanite’s state-of-the-art Auto-Descriptions and Augmented Keywords are a great choice. Elaborate full-text descriptions, plus a word pool of 1,500 music-describing terms featuring genres and moods as well as more abstract terms such as contexts, situations, use cases, and activities, solve the blank-page problem and make sure the description is bang on.

We give more detailed instructions and Spotify playlist pitching examples in the article: How to Write Press Releases and Music Pitches with Cyanite.

Spotify for Artists text description

The text pitch should present you as an artist and also include details about the song: your artistic approach, inspiration, collaborations, credits, and future plans can be included here. You can also mention which playlist might be a good fit for the track. 

AWAL, an artist service offered by Sony Music, writes: “It also requires self-classification, which might offer additional value to a DSP that hopes to match a listener’s mood with the appropriate soundtrack, as quickly and accurately as possible”.

A big part of how listeners experience a song is the way it develops and what turns it takes over the duration of the track. As the name suggests, the Dynamic Emotion Analysis does not only show you what moods a song is made of. It maps the most characteristic peaks and lows and all developments in between. This gives you the data-supported vocabulary to describe certain dynamics of your song and the fine little details that let it stand out. See the screenshot below.

Screenshot of Cyanite's Detailed Song View - showing the moods over the course of the song's duration.

Cyanite detail view with dynamic emotion analysis

Pro Tip: Cyanite Mood Analysis + LLM

Feed a screenshot of the mood analysis chart to an LLM of your choice and ask it to write a description of the song’s emotional dynamics over its duration, to get further inspired. Here’s an example for the song above:

Opening Section (0:00 – ~1:00):

The track kicks off with a strong energetic presence, immediately drawing listeners in with its vibrant intensity. This high-energy start is balanced with hints of an uplifting undertone, giving the introduction a bright and driving quality. The dynamic nature makes it an excellent opener or mid-playlist highlight.

Development and Contrast (~1:00 – ~2:30):

As the song progresses, the energy remains prominent but begins to interact with subtler emotional elements. Uplifting tones shift slightly to make space for a romantic and epic feel, adding depth and intrigue. These layers create a dynamic ebb and flow, ideal for keeping listeners engaged during transitions between more contrasting tracks in a playlist.

Peak and Groove (~2:30 – ~4:00):

In this section, the energy peaks, and the track’s balance of movement and intensity shines. There’s an underlying sensual and smooth vibe, which contrasts beautifully with its punchy rhythm. This moment is perfect for playlists centered on late-night energy or danceable grooves with a touch of sophistication.

Closing Section (~4:00 – End):

The final segment maintains its energetic drive while reintroducing uplifting tones, giving the track a satisfying resolution. The consistent rhythm ensures a strong finish, making it suitable as a climactic point in a playlist or as a segue into lighter, more reflective tracks.
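If you prefer working with exported data instead of a screenshot, the same idea can be scripted: summarize the mood timeline and hand it to the LLM as text. A rough sketch in Python, where the timeline values are made up for illustration and the prompt wording is just one possible phrasing:

```python
# Sketch: turn a (time, mood -> score) series into an LLM prompt.
# The data points below are illustrative, not real Cyanite output.
timeline = [
    (0,   {"energetic": 0.90, "uplifting": 0.60}),
    (60,  {"energetic": 0.80, "romantic": 0.50}),
    (150, {"energetic": 0.95, "smooth": 0.40}),
    (240, {"energetic": 0.85, "uplifting": 0.70}),
]

def dominant_mood(scores):
    """Mood with the highest score at a given point in time."""
    return max(scores, key=scores.get)

def build_prompt(timeline):
    lines = [f"{t}s: dominant mood '{dominant_mood(s)}'" for t, s in timeline]
    return (
        "Write a short description of this song's emotional dynamics "
        "over its duration, section by section:\n" + "\n".join(lines)
    )

print(build_prompt(timeline))  # paste this into an LLM of your choice
```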

Additionally, to write a text pitch you can use Cyanite’s Auto-Descriptions and Augmented Keywords. These are keywords that characterize a song in addition to the other data on moods, genre, energy level, etc.

Screenshot showing Cyanite's Augmented Keywords for the Song "Grow Old With Me" by Tom Odell

Tom Odell’s “Grow Old With Me” analysis – Augmented keywords from Cyanite

You can use these keywords to write a compelling text pitch, or just copy and paste them into an LLM of your choice. With some human editing, current LLMs can produce compelling song descriptions and pitches. We tried using a “product description” option, and here is the result for Tom Odell’s Grow Old with Me:

Tom Odell’s soothing new song is the perfect soundtrack for any emotional situation. It reminds you that beauty, love, and joy are always close by and will always be a part of your life. The acoustic guitar and piano melodies help create a calm and relaxed atmosphere where you can’t help but feel comfortable.
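The keyword route can be scripted the same way: join the keywords into a prompt and sanity-check the drafted pitch against the pitch field’s length limit. A sketch in Python; the keywords and the draft are illustrative, and the 500-character limit is an assumption worth confirming in Spotify for Artists:

```python
# Illustrative augmented keywords for a track (not real Cyanite output).
keywords = ["acoustic guitar", "piano", "calm", "hopeful", "wedding", "late-night drive"]

PITCH_LIMIT = 500  # characters; confirm the current limit in Spotify for Artists

prompt = (
    f"Write a song pitch of at most {PITCH_LIMIT} characters for a track "
    "described by these keywords: " + ", ".join(keywords)
)

# e.g. a draft pasted back from the LLM, then edited by a human:
draft = (
    "A soothing blend of acoustic guitar and piano that builds a calm, "
    "hopeful atmosphere, ideal for weddings and late-night drives."
)

assert len(draft) <= PITCH_LIMIT, "Trim the pitch before submitting"
print(f"{len(draft)} characters, fits the pitch field")
```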

Tip 3: Filter out the most suitable playlists.

When you click on “Playlist Matching” on the navigation bar and select your song, you will get instant Spotify playlist recommendations – both editorial as well as independent.

Screenshot of Cyanite's Playlist Matching feature in their Web App. The Screenshot shows matching playlists from Spotify for the song "Grow old With Me" by Tom Odell

Cyanite’s Playlist Matching Tab 

Browse through up to 20 playlists and find out which one matches the vibe of your song best. Editorial playlists are indicated by a little Spotify logo in the top left corner and list “Spotify” as the editor. To get onto those playlists, use the pitching tool in Spotify for Artists.

For everything else, you can often google the curator’s nickname and find their profiles on other social media platforms to get in touch about your release there.  

How to best approach these indie curators is well described here, and for more great tips on how to promote your music, check out this article by Studio Frequencies.

Will I Be Picked?

It is impossible to tell whether your track is going to be picked by Spotify. The waiting time to find out is usually two weeks to a month. If after that time nothing is happening, don’t worry. Sometimes a track is picked later, when it starts to gain traction and streams on Spotify.

That’s why it is important to continue your promotional efforts after the release and use other platforms including social media. We explain why using ads and social media outreach is so important for Spotify editors in the article: How to Create Custom Audiences for Pre-Release Music Campaigns in Facebook, Instagram, and Google.

What's Next?

Given the continuing streaming boom, mastering the art of playlist pitching seems essential.

Nevertheless, because playlists have such an influence on the music industry, it’s a topic that needs critical discussion. In addition to our guide, we recommend these readings on Spotify curatorial practices and playlisting on Musically and BestFriendsClub.

Ultimately, success with playlist pitching comes from finding the right fit and putting in the work to tag the song correctly and write a good song description. You can do that manually, or you can use tools like Cyanite if you’re tired of listening to the same track over and over again or have large volumes of music to pitch.

Use Cyanite for playlist pitching with AI

If you don’t have a web app account yet, you can also register for our free web app below to analyze music and try our playlist matching.

Ellen Allien, Roman Flügel and Dub Isotope: An AI Analysis of Techno and Drum ‘n’ Bass

As a team of music lovers, the Cyanite team has been tuning in regularly to Berlin’s lockdown livestream DJ sets over the last few months. 

Some of us might be of the opinion that recording technology should be kept firmly away from the dancefloor in order for party people to truly revel in the night’s atmosphere. Right now, it seems the opposite is true. 

These livestream recordings have made it possible for music fans in Berlin (and the world) to experience electronic music and to feel connected to the electronic music community. Technology has proven itself as very much needed and welcome in the music space, and in this case, instrumental in keeping club culture alive during the coronavirus restrictions.

Decoding Electronic Music with AI

In this spirit of club culture, we ran some of our favourite mixes through our analysis models.

Set #1: Ellen Allien’s Griessmuehle set

A heavyweight in the techno scene, Ellen Allien’s combination of classic techno with a side of experimental and IDM is one-of-a-kind, and a definite favorite at Cyanite. 

Ellen Allien’s take-no-prisoners set @ Griessmuehle Berlin

In this set, techno alternated between dancey and contemplative. Our genre analysis results revealed that the set was predictably profiled as consisting mostly of “electronic dance”. In moments where the electronic dance genre was detected at a low level, the ambient genre was inversely detected as the dominant genre.

Result of Cyanite’s AI analysis on the emotional dynamics of Ellen Allien’s set

Looking at our emotion analysis results, the top quality detected in her hour-long set was “dark”. This was followed closely by “tense”, and then “energetic”. Characteristically, we observed that her set opened with the level of darkness at a high point, hovered at a more or less consistent mid-to-high level, and ended high again.

Tenseness, however, was a different story altogether. In Ellen Allien’s set, techno is a tightrope walk. Listeners alternate between feeling almost about to tip over the edge and occasional moments of stability at the peak.

Sustained periods of high musical tension were found at the beginning, middle, and end of her set. Outside of those intervals, the level of tenseness peaked and plunged all throughout the set, often in sharp, spiky drops and rises. 

In her set, tension is also characterized by a frenetic level of energy: we saw that the level of energy detected very closely paralleled the pattern of tenseness.

Set #2: Roman Flügel’s Wilde Renate stream 

Roman Flügel’s Wilde Renate stream is another hot favorite. 

Roman Flügel’s never disappointing curation of eclectic sounds @ Wilde Renate

While the Ellen Allien set we listened to earlier veered towards the heavier side, this set takes us to a gentler side of techno. Flügel treated our ears to an hour of electro, techno and occasional ambient. 

A softer brand of techno does not mean happy techno, though (if there can ever be such a concept). While the previously discussed set was ruled by high-strung techno energy, Flügel’s set is more muted.

Result of Cyanite’s AI analysis on the emotional journey of Roman Flügel’s set

Topmost of the qualities detected was ‘melancholia’. The atmosphere of melancholy hovered at a consistently high level throughout the entire set, with brief dips. In those moments where melancholy dropped, tenseness, which sat at a base level throughout most of the set, climbed up slightly.

Almost as much as his set was melancholic, it was calm. The smooth melodies and synth swells in the set gave it an air of sereneness. Calmness was the second highest quality detected. The level of calm closely mirrored the level of melancholy throughout the set, although it had more well-defined plateaus during the most calm moments. 

Flügel’s set was also comfortingly brooding (exactly how we love our techno). Underscoring the calmness and melancholy was darkness, which was profiled as the third top quality in the set. 

The haunting, sad effect of minor keys seems to be well favored in techno.

Both of these techno sets were detected to be mostly in minor keys: Ellen Allien’s predominantly in B-flat minor, and Roman Flügel’s in F minor.

Set #3: Dub-Isotope’s VOID mix 

Pivoting away from techno, the third set we analyzed was a Drum N Bass one: Dub Isotope’s set at VOID Berlin, a stellar venue for non-techno and techno music alike.

Dub Isotope’s stellar 171 BPM Drum n’ Bass journey @ VOID Berlin

While the two sets above were in a minor key, our analysis results showed that this set favored F major. Also, compared to the 105-130 BPM range of the techno sets, Dub Isotope’s set sat firmly at 171 BPM, a tempo characteristic of Drum N Bass music. 

Listening to Drum N Bass is quite a diverse emotional journey. The qualities detected in this set were at more moderate levels, compared to the earlier two techno sets. 

Cyanite’s AI mood analysis on Dub Isotope’s Drum n’ Bass trip

Our results signaled that this set was definitely more upbeat. While the techno sets had certain emotions at a distinctively high level and others at a significantly lower range (e.g. ‘relaxing’ at near rock-bottom levels for both), the various emotions detected for Dub Isotope’s set mostly occupied the mid-range. Among these, the top few to note were ‘calm’, ‘dark’ and ‘relaxing’, followed very closely by ‘happy’.

Looking at our genre analysis, Dub Isotope’s set was similarly detected to be electronic dance, with an interesting spurt of hip hop just after the halfway mark of the mix.

Analyze your own music

We built Cyanite in a way that everyone can use it to analyze their own music with AI. If you want to get insights on mood, genre, bpm, and key for your music, you can register here for free and try it out yourself. Contact us if you have feedback, ideas, or want to use our API to integrate Cyanite into your database.

3 Ways to Display and Integrate AI Search Results in Your Music Platform

Artificial intelligence is an innovative technology. Pair it with a music library, and you get innovative results. That’s largely thanks to an AI music approach called MIR – Music Information Retrieval. What’s great about approaches like MIR is that they give you the power to find the exact song you’re looking for.

There are many ways you can integrate AI into your catalogue. We’ve identified three that are both easy on the eyes and rich in information. Let’s dive into the three most effective tools for presenting AI-generated results in your library or online platform.

Mood/Colour Visualisation

This map from UC Berkeley matches colors to emotional responses from music

The world is a colourful place. You’ll find different shades and hues everywhere you look. And that’s great, because colours are intuitive to understand. We’re used to making sense of the world through them. Traffic lights and street signs work that way. So does fashion.

There’s also a clear connection between psychology and colours. Whether natural or learned behaviour, we attach moods to colours. If someone mentions sad, what’s the first colour you think of? What about happy, angry or excited? You probably guessed right, and you didn’t have to think about it for very long either.

Because music is an art form that’s all about mood and emotion, it makes sense to match songs to colours. That’s exactly what the University of California, Berkeley did. They surveyed 2,500 people from the United States and China. The aim was to test their emotional responses to thousands of songs from genres such as classical, rock, jazz, folk, experimental and heavy metal. Researchers then determined 13 feelings to map out the subjective experience of music: “Amusement, joy, eroticism, beauty, relaxation, sadness, dreaminess, triumph, anxiety, scariness, annoyance, defiance, and feeling pumped up.”

Colour-based visualisation is great because it’s easy to navigate and provides analysis-driven results. Songs are grouped by emotion, and you see exactly how many fill each group. You also get a thorough first impression about the general structure of the song.

If you want to see (and hear) for yourself, check out this interactive audio map created from the data. Listeners can switch tunes to try out specific moods, and see how much of an emotion is present in a song. (50% romantic, 25% dreamy and 4% nauseating sounds like a rock-solid combination.)
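The grouping behind such a visualisation is straightforward: assign each song to its dominant emotion, then count the group sizes. A small sketch in Python with made-up scores (not the Berkeley dataset itself):

```python
from collections import Counter

# Illustrative emotion scores per song (values are made up).
songs = {
    "Song A": {"joy": 0.7, "sadness": 0.1, "dreaminess": 0.2},
    "Song B": {"sadness": 0.8, "dreaminess": 0.5, "joy": 0.1},
    "Song C": {"joy": 0.6, "triumph": 0.3, "sadness": 0.2},
}

# Group each song under its strongest emotion, then count group sizes.
groups = Counter(max(scores, key=scores.get) for scores in songs.values())
print(groups)  # Counter({'joy': 2, 'sadness': 1})
```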

Song Maps

Gnoosic’s Song Map helps you to discover similar music through an interactive map.

While visual maps present a clear picture of your library, song maps connect the dots. Generally, we use maps for navigation; to plot paths from one point to another. This helps us see where things are located in relation to each other. Once you find where you are on a map, you can determine exactly where you’re going and what route to take.

Song maps imitate this process of discovery. A popular service like Gnoosic allows users to enter the name of an artist and discover those that are similar. Whatever you type in will bring up a tree of new artists to look at.

This makes browsing easy, because you already have a clear starting point. The more similar an artist, the closer they’ll appear on the map. Type in Eminem and you’ll see Tupac right next to him. Michael Jackson, however, is right at the edge of the map. Interestingly, Eminem has said Tupac influenced his songwriting, and song structure is one of the components AI can look at when performing a search.

As we’ve seen during the coronavirus quarantine, embracing novelty, whether in technology or content, is both healthy and progressive. AI-powered song maps are useful, intuitive tools to discover new music. If you’re up for adventure, you’ll try what’s on the edges. And if you want something closer to your favourite Jazz musician, you’ve got that too. It even groups the results into clusters. That means if something is a bit different, and you like that, you can find a group of artists that are similarly different.

Similarity Search

Cyanite’s Similarity Search uses AI to recommend similar songs

Similarity search takes a reference track and gives you a list of songs that match. (This works by pulling metadata and other relevant information from audio files). It does this quickly – we’re talking a matter of seconds.

It’s also more accurate than other methods, because the results are narrowed down to a small selection. Still too many matches? With Cyanite’s Similarity Search, you can filter the AI’s results based on the level of similarity.
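Under the hood, a similarity search of this kind typically compares per-track feature vectors, for example with cosine similarity, and keeps only matches above a chosen threshold. A minimal sketch in Python; the vectors and the 0.95 threshold are hypothetical, not Cyanite’s actual model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical per-track feature vectors (e.g. mood/energy embeddings).
catalogue = {
    "Track 1": [0.9, 0.1, 0.3],
    "Track 2": [0.8, 0.2, 0.4],
    "Track 3": [0.1, 0.9, 0.7],
}
reference = [0.85, 0.15, 0.35]

# Rank the catalogue by similarity to the reference track.
matches = sorted(
    ((name, cosine(reference, vec)) for name, vec in catalogue.items()),
    key=lambda kv: kv[1],
    reverse=True,
)

# Keep only close matches, analogous to a similarity-level filter.
close = [name for name, score in matches if score >= 0.95]
print(close)  # ['Track 1', 'Track 2']
```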

This discovery-driven approach emphasises context; the search spans your entire catalogue. This could be helpful to catalogue owners who want to see how different songs within their database relate. You could check if there are more happy than sad songs, for example, and how best to update your library.

It also helps music publishers answer synch briefings faster. They can go from a reference track to the required music quickly, even if they’ve never used the database. You can try out Cyanite’s Similarity Search with a limited database to get a better feel for this application.

With a clear overview of the data, you can prepare your catalogue accordingly. A similarity search approach is functional, specific and visually simple. Users only discover what they’re looking for: the most similar tracks.

If you want better results – whether in your library, catalogue or user experience – delivered by innovative AI technology, give us a shout.

You can schedule a free 15-minute call with our CEO, Markus.

Cyanite Talks #1 with Karolina Namyslowska from AMP Sound Branding

Sound and music are probably the most universal languages in the world. The increasing popularity of audio through music streaming, podcasts, Siri, Alexa and Co. makes the creation of a sound identity an essential component of a brand personality. “If” a brand uses audio in the right way, it is 96% more likely to be remembered and can build a lasting relationship with its customers. This is especially true for young adults, 74% of whom understand a brand personality better through music.

The “If” exists because one of the biggest challenges for a brand is to find its sound. That’s why we sat down with Karolina Namyslowska, Senior Creative at AMP, one of the world’s leading sound branding agencies, for our new Cyanite interview series.

We wanted to know more about the challenges of finding the right music for a customer, how subjective music really is, and especially if and how the use of AI benefits her work.

Cyanite: Hi Karo, your LinkedIn profile says: “You are only given a little spark of madness, you mustn’t lose it” – which spark of madness has led you to the music industry?

Karolina Namyslowska: You need to be slightly “mad” to decide against a career path in economics, law, medicine or engineering. The creative path is much riskier and requires courage. Many of my classmates continued their careers at universities and other academic institutions. I personally never saw myself doing this.

After my bachelor’s I wanted to put out feelers, and in 2013 I came across amp, where I started as an intern. That was 7 years ago, and I’m proud to say that I was their first employee. I’m very happy to have followed this creative path and to apply my knowledge and creativity to my job.

You have studied music from both a technological (Music Informatics) and a social/cultural (Musicology/Cultural Studies) perspective. To what extent do these two perspectives help you in your daily work, and when does which one come into play?

I’m a Senior Creative at amp and lead the entire Creative Team.

The job requires creative input, as well as quality control for all of our creative output. One of my main tasks is the translation of brands into sound.

For the conceptual part of my job, I rely on the vocabulary and analytical techniques I picked up in my musicology studies and my musical background (piano). I also have technical proficiency, which is equally important and useful.

I’m a little disheartened by the women in the industry who lack technical skills. I’m not talking about a highly technical specialization, but rather, just common sense and the ability to use basic software tools to your advantage. I hope we buck this trend going forward.

My music technology curriculum also provided me with the basic audio-technical know-how to contribute to all types of media productions. My day-to-day involves a constant exchange with our internal production team and external teams (bands, producers, sound designers, etc.). So knowing how to navigate tools like Final Cut, Logic, Pro Tools, etc. provides me with valuable insight throughout the different phases of a production. Even in a creative/conceptual phase I profit from those tools, for instance when making mood videos to better imagine or explain an idea.

“I think the key to knowing or being familiar with a large musical repertoire is not shying away from certain genres or styles or artists” 

At AMP’s website you are named as their in-house Spotify. How do you maintain a good overview of musical trends and new artists when 300,000 new songs are being uploaded to the internet every day?

Listening to music is a core responsibility that comes with the job. Whether I’m looking for songs to better explain a concept to our clients – or if I’m digging for reference tracks to determine the creative direction for a new composition or production – my ultimate source is always Spotify.

I think the key to knowing or being familiar with a large musical repertoire is not shying away from certain genres or styles or artists. I was blessed with musical parents and a musical home – and we didn’t just use music as background noise, we actively listened to music.

I never lost interest or stopped enjoying music, despite my constant listening habits (professional and private). When I get on the train I put on my headphones and when I wrap my working day I listen to some more to relax. I’m always happy for new leads and new music, whether it’s from my friends, colleagues or through Spotify.

Your job is basically to understand the language of non-music experts (brands) and music experts (songwriters or publishers), and to bridge the gap between them. What do you consider to be the biggest challenge in this process?

Many of the stakeholders and client partners that we work with consider music a purely subjective art form. Our greatest challenge and mission is to settle on a common language (with our clients) and define parameters to better understand, discuss and evaluate music.

For this reason, our process always includes an “educational” part. We develop a common understanding of how the brand should and should not sound. To do this we derive and translate brand values (e.g. “edgy”, “urban”, or “innovative”) into basic musical characterizations. These parameters or criteria serve as the basic description of the sound of the brand and help to evaluate any and all Sonic Assets (incl. tracks, sonic logos, etc.).

Our goal is to provide the client with more than just a “gut feeling” for what sounds on-brand and what doesn’t. We teach our clients to develop the skills necessary to judge and understand music themselves. Because ultimately, the client stakeholders are determining the current and future sound of their brand – not their personal playlists.

AMP’s recent campaign with Mercedes is one of many examples that show the power of sound branding

When music is used in a commercial, but also when algorithm-based music recommendations come into play, the emotional effect of music becomes more or less generalised. How subjective is music really, and how do you measure the emotional effect of a song?

I agree! Music can create a deeply emotional and personal experience.

But there are parameters that can influence or steer the experience in specific directions. Let’s take a basic example:

We have a song with a dragging tempo and melancholy vocals. If we were to play the song for 100 people and survey them, only a fraction would consider the song driving, bright and uplifting. Of course there are unpredictable and personal factors, such as an individual’s past experience or relationship with the song. However, the overwhelming consensus will always be that the song is “introverted” and “melancholic”.

We trust the expert-team at amp to track and define this relationship between musical parameters and their effect on the emotional listening experience. But we also regularly rely on market research (implicit, explicit and emotion-based) for our projects. An important element of our evaluation process is the AI-Testing Tool Veritonic. We use it to quickly and regularly test Sonic Assets along a set of standard attributes and give us an indication of Brand Fit, Uniqueness and Recall.

We don’t, however, use market research and AI tools as a replacement for creativity. All they do is help us and our clients verify observations and decisions.


“Our greatest challenge and mission is to settle on a common language (with our clients) and define parameters to better understand, discuss and evaluate music”


What has been the biggest technological revolution since you started working in Sound Branding?

I think the age of voice is no revolution, but an evolution. I’m very impressed with Amazon Alexa and how everyday interactions have been so seamlessly integrated into people’s lives. I’m excited to see what happens with autonomous driving in the next few years and how sound will help facilitate the human-machine interaction.

How do you look at AI in music? Do you think there is a place for AI in the space of Sound Branding?

Like I said earlier, I think that AI has a place in Sonic Branding, whether it’s used for the cataloguing of music (in databases, through search algorithms, etc.) or in the evaluation of certain aspects of music. We at amp recognized this potential early and have developed a platform for our clients, which they can use as their own brand-specific Spotify to browse and search for Sonic Assets. Our clients greatly appreciate this tool, and it’s become one of amp’s USPs. We prioritize giving our clients the necessary implementation tools to use their Sonic Identity in the best and easiest possible way.

Ok, last question: imagine you’re sitting in the English Garden in Munich in the summer of 2021, the corona crisis is hopefully over, and you’re looking back on the previous year. What do you hope to be able to say then? Do you think the job of sound branding experts will have changed?

Primarily, I hope that the crisis will be over by then and that my family, friends and colleagues are healthy. I hope that the average 8-hour workday / 5-day workweek becomes a thing of the past. I’m currently having a very good experience in my home office with amp.

I’m not sure if the job itself will change much. The significance of music and sonic branding will not level off – the opposite is true. The coronavirus is proving just how important and invaluable music is to people, especially in difficult times. It would be nice, especially here in Germany, to observe the same passion for music as we do in Italy and Spain.

 

Thanks a lot for taking the time, Karo. Shout-outs to AMP and stay safe.

 

AI Music Now: 3 Ways how AI can be used in the Music Industry


Mention “AI music” and most people seem to think of AI-generated music. In other words, they picture a robot, machine or application composing, creating and possibly performing music by itself; essentially what musicians already do very well. First, let’s address every industry professional’s worst Terminator-induced fears (should they have any): AI will never replace musicians.

Even if music composed and generated by AI is currently riding a rising wave of hype, we’re far from a scenario where humans aren’t in the mix. The perception of AI infiltrating the industry comes from a lack of attention to what AI can actually do for music professionals. That’s why it’s important to cut through the noise and discuss the different use cases possible right now.

Let’s look at three ways to use AI in the music industry and why they should be embraced.

AI-based Music Generation

 

The most popular application of AI in music is in the field of AI-generated music. You might have heard about AIVA and Endel (which sound like the names of a pair of northern European fairy-tale characters). AIVA, the first AI to be recognized as a composer by the music world, writes entirely original compositions. Last year, Endel, an AI that creates ambient music, signed a distribution deal with Warner Music. Both these projects signal a shift towards AI music becoming mainstream.

Generative music systems are built on machine learning algorithms and data. The more data you have, the more examples an algorithm can learn from, leading to better results once it has completed the learning process – known in AI circles as ‘training’. Although AI generation doesn’t deliver supremely high quality yet, some of AIVA’s compositions stack up well against those of modern composers.

If anything, it’s the chance for co-creation that excites today’s musicians. Contemporary artists like Taryn Southern and Holly Herndon use AI technology to varying degrees, with drastically different results. Southern’s pop-ready album, I AM AI, released in 2018, was produced with the help of AI music-generating tools such as IBM’s Watson and Google’s Magenta.

Magenta is included in the latest Ableton Live release, a widely-used piece of music production software. As more artists begin to play with AI-music tools like these, the technology becomes an increasingly valuable creative partner.


AI-based Music Editing

Before the music arrives for your listening pleasure, it undergoes a lengthy editing process. This includes everything from mixing the stems – the different grouped elements of a song, like vocals and guitars – to mastering the finished mixdown (the rendered audio file of the song made by the sound engineer after they’ve tweaked it to their liking).

This whole song-editing journey is filled with many hours of attentive listening and considered action. Because of the number of choices involved, having an AI assist with technical suggestions can speed things up. Equalization is a crucial editing step, which is as much technical as it is artistic: an audio engineer balances out the specific frequencies of a track’s sounds so they complement rather than conflict with each other. Using an AI to perform these basic EQ functions can provide an alternative starting point for the engineer.
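To make the idea concrete, here is a minimal sketch of a single EQ move such an assistant might suggest – a peaking filter that cuts a chosen frequency – using the standard biquad formulas from Robert Bristow-Johnson’s widely used “Audio EQ Cookbook”. The function name and parameters are illustrative, not any particular product’s API:

```python
import math

def peaking_eq(samples, fs, f0, gain_db, q=1.0):
    """Apply a peaking EQ biquad (Audio EQ Cookbook formulas) to a sample list."""
    a_lin = 10 ** (gain_db / 40)            # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a_lin
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a_lin
    a0 = 1 + alpha / a_lin
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a_lin
    out = []
    x1 = x2 = y1 = y2 = 0.0                 # filter state (previous in/out samples)
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Cut 6 dB at 1 kHz: a 1 kHz test tone should come out roughly half as loud.
fs = 48000
sine = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
filtered = peaking_eq(sine, fs, f0=1000, gain_db=-6.0)
peak = max(abs(s) for s in filtered[-4800:])  # steady-state peak, approx. 0.5
```

An AI assistant would choose `f0`, `gain_db` and `q` from its analysis of the mix; the filtering itself is the easy part.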

Another example of fine-tuning music for consumption is the mastering process. Because published music must meet strict formatting standards for radio, TV, or film, it needs to be mastered. This final step before release usually requires a mastering engineer, who makes the mix sound as good as possible so it’s ready for playback on any platform.

Some of the technical changes mastering engineers make are universal. For example, they need to make every mixdown louder to match the loudness standard of released music, or even to match the other songs on an album. Because these techniques are universal, AI can help: there are established practices it can learn from, which can then be applied automatically and tailored to the song.
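One of those universal practices can be sketched in a few lines: peak-normalizing a mixdown so its loudest sample sits at a chosen level below full scale. This is a deliberately simplified illustration – real mastering tools work with perceptual loudness, not just peaks – and all names are made up for the example:

```python
import math

def peak_normalize(samples, target_dbfs=-1.0):
    """Scale a mixdown so its loudest sample sits at target_dbfs
    (decibels relative to digital full scale)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)                # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)
    gain = target_linear / peak
    return [s * gain for s in samples]

# A quiet mixdown (peak amplitude 0.25) brought up to -1 dBFS (~0.891 linear).
quiet_mix = [0.25 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
mastered = peak_normalize(quiet_mix)
new_peak = max(abs(s) for s in mastered)
```

The part an AI actually learns is the target and the style-dependent processing around it; the gain change itself is just arithmetic.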

Companies like LANDR and iZotope are already on board. LANDR offers an AI-powered mastering service that caters to a variety of styles, while iZotope developed a plugin that includes a “mastering assistant”. Once again, AI can act as a useful sidekick for those spending hours in the editing process.

AI-based Music Analysis

Analysis is what happens when you break something down into smaller parts – in AI music terms, breaking a song down into its components. Let’s say you’ve got a library full of songs and you’d like to identify all the exciting orchestral music (maybe you’re making a trailer for the next Avengers-themed Marvel movie). Through AI, analysis can surface the most relevant music for your trailer based on your selected criteria (exciting; orchestral).
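Assuming the analysis step has already tagged each track, that trailer search boils down to a simple filter over the catalog. The catalog, titles and tags below are invented purely for illustration:

```python
# Hypothetical catalog: each track carries tags produced by an analysis step.
catalog = [
    {"title": "Rise of Heroes", "tags": {"orchestral", "exciting", "epic"}},
    {"title": "Rainy Cafe",     "tags": {"jazz", "calm"}},
    {"title": "Final Charge",   "tags": {"orchestral", "exciting"}},
    {"title": "Night Drive",    "tags": {"electronic", "dark"}},
]

def find_tracks(catalog, required_tags):
    """Return the titles of tracks whose tag set contains all required tags."""
    required = set(required_tags)
    return [t["title"] for t in catalog if required <= t["tags"]]

trailer_picks = find_tracks(catalog, {"exciting", "orchestral"})
# trailer_picks -> ["Rise of Heroes", "Final Charge"]
```

The hard problem, of course, is producing those tags reliably in the first place – which is exactly what the analysis described below does.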

There are two types of analysis that make this magic possible: symbolic analysis and audio analysis. While symbolic analysis gathers musical information about a song from the score – including the rhythm, harmony and chord progressions, for example – audio or waveform analysis considers the entire song. This means understanding what’s unique about the fully-rendered wave (like those you see when you hit play on SoundCloud) and comparing it against other waves. Audio analysis enables the discovery of songs based on genre, timbre or emotion.

Both symbolic and audio analysis use feature extraction. Simply put, this is when you pull numbers out of a dataset. The better your data – meaning high-quality, well-organized and clearly tagged – the easier it is to pick up on ‘features’ of your music. These could be ‘low-level’ features like loudness, how much bass is present or the type of rhythms common in a genre. Or they could be ‘high-level’ features, referring more broadly to the artist’s style, based on lyrics and the combination of musical elements at play.

AI-based music analysis makes it easier to understand what’s unique about a group of songs. If your algorithm learns the rhythms unique to Drum and Bass music, it can discover those songs by genre. And if it learns how to spot the features that make a song “happy” or “sad”, then you can search by emotion or mood. This allows for better sorting and finding exactly what you pictured. Better sorting means faster, more reliable retrieval of the music you need, making your project process more efficient and fun.
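Once every song is reduced to a feature vector, searching by mood becomes a nearest-neighbour lookup. The three-number “mood vectors” and track names below are hand-made stand-ins for real model output, just to show the mechanics:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical feature vectors: (valence, energy, tempo) per track, each 0..1.
library = {
    "Sunny Morning":  (0.9, 0.7, 0.6),
    "Lonely Streets": (0.1, 0.2, 0.3),
    "Club Banger":    (0.7, 0.95, 0.9),
}

def search_by_mood(library, query_vector):
    """Rank track titles by cosine similarity to a query mood vector."""
    return sorted(
        library,
        key=lambda title: cosine_similarity(library[title], query_vector),
        reverse=True,
    )

happy_query = (1.0, 0.8, 0.7)   # "happy": high valence, high energy
ranking = search_by_mood(library, happy_query)
```

Swap the hand-made vectors for learned embeddings and the same lookup scales to catalogs of millions of tracks.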

With Cyanite, we offer music analysis services via an API solution to tackle large music databases, or through the ready-to-use web app CYANITE. Create a free account to test AI-based tagging and music recommendations.