
How to Create Mood- and Context-Based Playlists With Dynamic Keyword Search


In the last article on the blog, we covered how Cyanite’s Similarity Search can be used in music catalogs. In this article, we explore another way to search for songs using Dynamic Keyword Search and how to leverage it to create mood- and context-based playlists.

Rather than relying on a reference track, Dynamic Keyword Search allows you to select and combine from a list of 1,500 keywords and adjust the impact of these keywords on the search. This is especially helpful to create playlists where songs match in mood, activity, or other characteristics. 

But before we explain how this feature works, let’s explore how playlists are created. What makes a perfect playlist? Why are playlists so essential when utilizing a music catalog? And how can the Dynamic Keyword Search help with that?

How are playlists created?

There are three techniques for playlist creation:

  1. Manual creation (individually picking songs)
  2. Automatic generation and recommendation
  3. Assisted playlist creation

Historically, manual creation is the most basic approach: picking songs for a playlist one by one. It might be the simplest technique, but the time and effort involved can be overwhelming. Imagine working with 100,000 tracks in a catalog and having to create an “Energetic Workout” and a “Beach Party” playlist.

Automatic generation uses various algorithms to create playlists with no human intervention. One of the most famous ones is, for example, “Discover Weekly” by Spotify. 

Assisted playlist creation uses music technology to guide and support manual playlist creation. 

In the research by Dias, Goncalves, and Fonseca, manual playlist creation was found to be the most effective in terms of control, engagement, and trustworthiness. This means that people trust handmade playlists. Manual creation also provides the most control over the outcome and engages editors in the creation process.

Automatic creation was found to be the most effective in adapting to the listeners’ needs. There is no manual control involved, so automatic tools can adapt and change playlists in no time. 

Assisted techniques were found to be the most effective in terms of engagement and trustworthiness while remaining quick to use. They also performed well on song selection, which the study identifies as the most critical factor in the playlist creation process. However, while song selection is considered very important, the question of what makes a song right for a particular playlist remains open. Beyond that, assisted techniques proved strong in control and serendipity, and they adapt to listening preferences fairly easily.

To give it away already: Dynamic Keyword Search is exactly such an assisted playlist creation technique.

Why are search tools for playlist creation important in a catalog?

Playlists have been known to be the ultimate tool for promoting music. We already covered the ways artists can get on Spotify and other people’s playlists in other articles on the blog. But creating playlists can also be beneficial for catalog owners and catalog users, be it professional musicians or labels. Here is why: 

  • You can realize new and passive modes to exploit and monetize your catalog. If you make it easier for your users and/or customers to explore your catalog, you directly increase its value.
  • Playlists are used as a promotional tool to showcase the works of an artist or the inspirations behind the artist. This article recommends creating two playlists: a vibe playlist and a catalog playlist for brand engagement and streams. 
  • Playlists help organize music by theme or context.
  • With playlist creation features, users save time finding the right songs.
  • Playlists can be indexed separately in search results. This helps music get discovered. 

So playlist creation tools in a catalog are pretty important. Similarity Search is one of these tools. Another one, which we focus on in this article, is Dynamic Keyword Search.

How does Dynamic Keyword Search Work?

Cyanite’s Dynamic Keyword Search allows you to search for tracks based on multiple keywords simultaneously, with each keyword weighted for its impact on the search. This leads to more relevant search results with less time and effort spent searching.

Usually, the keywords you choose represent your idea of what you’re searching for, but you don’t have full control over the search. With Dynamic Keyword Search, you can increase the precision of the search results by adjusting the impact each keyword has on the search, so you can express exactly what you’re looking for. There are 1,500 keywords to choose from, representing song characteristics such as mood, genre, situation, brand values, and style. Each keyword’s impact can then be adjusted on a scale from -1 to 1, from strong negative impact (pushing a characteristic out of the results) through no impact to strong positive impact.

Cyanite Dynamic Keyword Search interface
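As a rough illustration of how weighted keywords can shape a ranking (a conceptual sketch, not Cyanite’s actual scoring): assume each track carries per-keyword confidence values between 0 and 1, and the search score is the weight-adjusted sum.

```python
# Conceptual sketch of weighted keyword search. All tag values below are
# invented; a real system would produce them via audio analysis.

def keyword_score(track_tags: dict, weights: dict) -> float:
    """Sum each keyword's confidence, scaled by its weight (-1 to 1)."""
    return sum(weights[kw] * track_tags.get(kw, 0.0) for kw in weights)

tracks = {
    "Track A": {"energetic": 0.9, "happy": 0.8, "sad": 0.0},
    "Track B": {"energetic": 0.2, "happy": 0.3, "sad": 0.9},
}
weights = {"energetic": 1.0, "happy": 0.5, "sad": -1.0}

ranked = sorted(tracks, key=lambda t: keyword_score(tracks[t], weights),
                reverse=True)
print(ranked)  # Track A ranks first: energetic and happy, not sad
```

Note how the negative weight on “sad” actively penalizes sad-sounding tracks rather than merely ignoring them.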

What playlist features can be improved with Dynamic Keyword Search?

Not all playlists are created equal. Some are better than others. This study outlines 5 characteristics that can indicate a good or bad playlist. The authors assumed that user-generated playlists could serve as examples from which algorithms learn to create good playlists. Here are the 5 playlist characteristics they outlined:

  • Popularity – most user-generated playlists feature popular tracks first. This pattern is not an obvious rule, but grabbing listeners’ attention from the start is important.
  • Freshness – playlists should contain recently released tracks. Most playlists in the study contain tracks released on average in the last 5 years.
  • Homogeneity and diversity – playlists on average cover a very limited number of genres, so playlists should be rather homogeneous. However, diversity plays a significant part in listeners’ satisfaction, so some of it should be incorporated into the playlist as well.
  • Musical Features – in terms of energy, playlists with a narrow energy spectrum with a low average energy level are preferred, but there can be some high-energy tracks in the list. 
  • Transition and Coherence – the similarity between tracks defines the smoothness of transitions and the coherence of the playlist. Usually, user-generated playlists have better similarity between tracks in the first half and less in the second half.

As the study deals with a variety of user-generated playlists, it can’t be said that all of them were equally good playlists. But the criteria outlined above can help improve playlists by understanding the character of the playlist. With Dynamic Keyword Search, you can control criteria such as homogeneity and diversity, musical features such as energy level, and the similarity between tracks to ensure transition and coherence.
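The transition-and-coherence criterion above can be made concrete with a small sketch: treat each track as a feature vector (the vectors below are invented, e.g. energy, valence, tempo normalized to 0–1) and measure coherence as the average similarity of consecutive pairs.

```python
# Illustrative sketch: playlist coherence as the mean cosine similarity
# of consecutive tracks. Feature vectors are made up for the example.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def coherence(playlist):
    """Mean similarity of each consecutive pair (1.0 = identical)."""
    sims = [cosine(a, b) for a, b in zip(playlist, playlist[1:])]
    return sum(sims) / len(sims)

smooth = [[0.8, 0.6, 0.5], [0.7, 0.6, 0.5], [0.7, 0.5, 0.4]]
jumpy  = [[0.9, 0.1, 0.2], [0.1, 0.9, 0.9], [0.9, 0.1, 0.1]]
print(coherence(smooth) > coherence(jumpy))  # True
```

A curator could use such a score to reorder tracks until consecutive transitions stay above a chosen threshold.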

PRO TIP: To improve a playlist’s transition and coherence you can combine the Dynamic Keyword Search with our Similarity Search to further filter music on Camelot Wheel. The Camelot Wheel indicates which songs transition harmonically well giving you an extremely powerful tool to perfect the song order. You can find a deeper explanation of that in this article.
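The Camelot Wheel rule itself is simple enough to sketch: keys are written as a number (1–12) plus a letter (A for minor, B for major), and two keys mix well if they are identical, adjacent on the wheel in the same mode, or relative major/minor of each other.

```python
# Sketch of the Camelot Wheel compatibility rule referenced in the PRO TIP.

def camelot_compatible(key1: str, key2: str) -> bool:
    n1, l1 = int(key1[:-1]), key1[-1]
    n2, l2 = int(key2[:-1]), key2[-1]
    if l1 == l2:
        # Same mode: identical or adjacent number (the wheel wraps 12 -> 1).
        return n1 == n2 or (n1 % 12) + 1 == n2 or (n2 % 12) + 1 == n1
    # Relative major/minor: same number, different letter.
    return n1 == n2

print(camelot_compatible("8A", "9A"))   # True  (adjacent on the wheel)
print(camelot_compatible("8A", "8B"))   # True  (relative major)
print(camelot_compatible("12A", "1A"))  # True  (wheel wraps around)
print(camelot_compatible("8A", "3B"))   # False
```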

Creating Playlists with Dynamic Keyword Search – Step-by-step

Here is how to access Dynamic Keyword Search in the Cyanite app. This feature is also available through our API.

  1. Go to Search in the menu and select the Keyword Search tab. Choose whether to display results from the Library or Spotify. 
  2. Select keywords from the Augmented Keywords set. For example, these are some of the keywords in the list: joy, travel, summer, motivating, pleasant, happy, energetic, electro, bliss, gladness, auspicious, pleasure, forceful, determined, confident, positive, optimistic, agile, animated, journey, party, driving, kicking, impelling, upbeat. We recommend selecting up to 7 keywords out of 1,500. 
  3. Adjust the weight of each keyword from -1 to 1 to define its impact on the search. For example, let’s set the search input as sparkling: 0.5, sad: -1, rock: 1, dreamy: 1.
  4. Scroll down for search results. The search results will return tracks from the library that are dreamy, slightly sparkling, and not at all sad. They will also all be rock songs.
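Step 3’s example weights can be read as simple arithmetic (the per-track confidence values below are invented for illustration): each confidence is multiplied by its weight, so sad: -1 actively pushes sad-sounding tracks down the ranking.

```python
# Reading step 3's example weights as arithmetic. The confidence values
# per track are hypothetical, not actual Cyanite analysis output.

weights = {"sparkling": 0.5, "sad": -1.0, "rock": 1.0, "dreamy": 1.0}

candidates = {
    "dreamy rock ballad": {"sparkling": 0.4, "sad": 0.1, "rock": 0.9, "dreamy": 0.9},
    "sad acoustic song":  {"sparkling": 0.1, "sad": 0.9, "rock": 0.2, "dreamy": 0.5},
}

scores = {
    name: sum(w * tags[kw] for kw, w in weights.items())
    for name, tags in candidates.items()
}
for name, score in scores.items():
    print(f"{name}: {score:.2f}")
# The dreamy rock ballad scores far higher; the sad song is penalized.
```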

Dynamic Keyword Search can be requested from our support team.

Conclusion

There are various ways to create playlists, from manual creation to automatic and assisted techniques. An assisted approach that combines automatic and manual creation has proven to be the most effective. It meets almost all of an editor’s needs: providing control over the process, maintaining a high level of engagement and trustworthiness, and offering a good selection of songs. However, the automatic approach is developing fast, and algorithms might substitute human work completely in the future.

Our Dynamic Keyword Search feature can help you create playlists as one of the assisted techniques. It provides search results that take the search intent into account, both the keywords themselves and the impact of each keyword on the search. This doesn’t mean Dynamic Keyword Search replaces manual work completely; rather, it helps artists, labels, and catalog owners do the creative work and engage fans and listeners with the right tools, saving time, money, and effort. This is what we’re striving for at Cyanite – to help you fully unlock your catalog’s potential.

Let us know if this article has been helpful and stay tuned for more on the Cyanite blog! 

I want to try Dynamic Keyword Search – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Best of Music Similarity Search: Find Similar Songs With the Help of AI


In the past, music search was limited to basics like artist name and song title. Today’s vast and diverse music landscape calls for better ways of discovering music. Studies show that people connect with music for emotional and social reasons, making style, mood, genre, and similarity crucial to music discovery.

In this article, we explore how AI-driven music similarity search works and practical ways to find songs that sound alike using Cyanite’s Similarity Search tool.

 

How does Cyanite’s Music Similarity Search Work?

Our AI-powered music Similarity Search uses a reference track to pull a list of matching songs from a library. First, the AI analyzes the entire catalog, comparing the audio features of each song to enable accurate similarity searches. You can also filter results, for example, by BPM or genre to refine your search.

These algorithms compute the distance between songs based on their audio features. The smaller the distance, the more similar the tracks are. As music libraries expand, Similarity Search makes finding music easier and more efficient. Unlike platforms like Spotify that recommend songs based on user behavior, Cyanite focuses purely on sound, making our matches more accurate.
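The distance idea can be sketched in a few lines (a simplified illustration with made-up feature vectors, not Cyanite’s actual model): each song is a vector of audio features, and the catalog is sorted by distance to the reference track.

```python
# Sketch of similarity search as nearest-neighbor ranking. The feature
# vectors (e.g. energy, valence, danceability) are hypothetical.
import math

def distance(a, b):
    """Euclidean distance: smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

catalog = {
    "song1": (0.9, 0.8, 0.9),
    "song2": (0.2, 0.3, 0.1),
    "song3": (0.7, 0.6, 0.8),
}
reference = (0.75, 0.65, 0.85)

ranking = sorted(catalog, key=lambda s: distance(catalog[s], reference))
print(ranking)  # ['song3', 'song1', 'song2']
```

Filters such as BPM or genre would simply restrict the catalog before this ranking step.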

Find Similar Songs by Audio – 9 Best Practices

Here we outline 9 best ways to use music Similarity Search in a music catalog.

1. Finding similar songs using audio references for sync and music briefs

Music supervisors often work under tight deadlines. Our research with the University of Utrecht shows that 75% of music searches are done in a rush. Using a reference track within music Similarity Search can speed up this process and boost the chances of licensing tracks that otherwise get overlooked. Unlike Spotify’s “Similar Artist” feature, Cyanite analyzes sound characteristics, making it perfect for precise sync projects.

With the help of Cyanite’s AI tags and the outstanding search results, we were able to find forgotten gems and give them a new life in movie productions. Without Cyanite, this might never have happened.

Miriam Rech

Sync Manager, Meisel Music


2. Finding duplicates

Music libraries often have duplicates, which can clutter your catalog. Similarity Search easily identifies and removes these duplicates, saving time and effort.
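One way such duplicate detection could work, sketched here with invented features and an arbitrary threshold: flag every pair of tracks whose feature distance falls below a small cutoff.

```python
# Illustrative duplicate detection via pairwise feature distance.
# Features and threshold are made up; a real system would tune them
# against known duplicate pairs.
import itertools
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

library = {
    "track_v1":    (0.80, 0.70, 0.90),
    "track_v1(2)": (0.80, 0.71, 0.90),  # re-uploaded copy, near-identical
    "other_song":  (0.20, 0.30, 0.10),
}

THRESHOLD = 0.05
duplicates = [
    (a, b)
    for a, b in itertools.combinations(library, 2)
    if distance(library[a], library[b]) < THRESHOLD
]
print(duplicates)  # [('track_v1', 'track_v1(2)')]
```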

3. Social media campaigns

Want to promote a new artist? Use Similarity Search to find songs by popular artists with similar sounds. This data helps target fans on platforms like Facebook, Instagram, and Google, increasing campaign effectiveness.

Read more about this use case in our article on Custom Audiences for Pre-Release Music Campaigns.


4. Determining type beats

Beat producers often create “type beats” to mimic the style of popular artists. With Similarity Search, they can compare their beats to the intended style and refine them. Catalog users can also find unique, niche matches to avoid oversaturation.

5. Playlist pitching

Use music Similarity Search to target your pitches to Spotify editors and playlist curators. Ingest full playlists and find the closest match for a more personalized approach. Providing references, like “Fans of Max Richter and Dustin O’Halloran,” makes your pitch stronger and more relatable.

Learn more in our article on Playlist pitching with Cyanite.

Ready to try it out? Register for our free web app and start using Similarity Search here.

6. Playlist optimization

Similarity Search helps generate playlists automatically based on a reference track, inspiring playlist curators to create cohesive collections for study sessions or specific moods.

7. DJ Mixing and DJ Crates Optimization

DJs can use Similarity Search to find tracks that match in key and vibe, creating smoother transitions. The Camelot Wheel filter ensures harmonic mixing for an optimal DJ set.

Discover more in our article on Optimizing Playlists and DJ Sets.

 


Cyanite’s music Similarity Search interface

8. Uncovering Catalog Blind Spots 

Older or niche songs often get lost in catalogs. Similarity Search reveals hidden gems, expanding your options and keeping users engaged with more variety.

9. Finding Samples

Instead of wasting hours searching for samples, Similarity Search pulls up similar sounds instantly. Refine results by key or BPM to quickly build your ideal sample stack.

Why use music Similarity Search in a Catalog?

Similarity Search doesn’t just find similar tracks. It helps clean up your catalog, surface hidden songs, and optimize playlist curation. It’s also invaluable for strategic playlist pitching and social media targeting. As the music industry evolves, tools like these will be essential for staying competitive.

Cyanite provides Similarity Search via an API or web app. Our tool uses audio and metadata to deliver results, reducing search time by up to 86% and simplifying tedious tasks. Check out our Cinephonix integration video for a real-world example.

FAQs

Q: How accurate is Cyanite’s Similarity Search compared to Spotify’s recommendations?
A: Unlike Spotify, which relies on user behavior, Cyanite focuses on the actual sound. This makes our matches more sonically accurate for use cases where the song’s tonality is crucial.

Q: Can I use Similarity Search without coding skills?
A: Yes! Our free web app lets you analyze music and run similarity searches without any coding knowledge.

Q: How does Similarity Search help in marketing campaigns?
A: By finding songs with similar sounds to popular artists, you can target fans of those artists on social media, making your campaigns more effective.

Q: Can DJs benefit from Similarity Search?
A: Absolutely. DJs can use it to find tracks that blend well for seamless transitions and harmonic mixing.

Q: How can I try Similarity Search for free?
A: Simply register for our free web app here to start using Similarity Search today!

From Data to Decision – How to Use Music Data and Analytics for Intelligent Decision Making


We continue writing about the Data Pyramid, and in this article we finalize the series with an overview of the fourth level of the pyramid – Intelligence: the supreme discipline of data utilization and, done right, a path to success.

Other articles in the series include: 

How to Turn Music Data into Actionable Insights: This is an overview of the Data Pyramid and how it can be used in the music industry. 

An Overview of Data in The Music Industry: This article gives a list of all types of metadata in the music industry.

Making Sense of Music Data – Data Visualizations: This article explores data visualizations as the second step of the pyramid and gives examples of visualizations in the music industry. 

Benchmarking in the Music Industry – Knowledge Layer of the Data Pyramid: This article deals with Knowledge and how it is used to benchmark performance and set expectations.

Data Pyramid and the Intelligence Layer
The Intelligence layer of the pyramid deals with the future and answers the questions “So what?” and “Now what?”. When this level is reached, company stakeholders usually already have an organized and structured dataset as well as information about the past outcomes of their decisions. They also need access to real-time data to learn and adjust on the fly. Having all this information at hand enables them to anticipate the outcomes of future decisions and choose the most suitable course of action.

Intelligence can be described as the ability to choose one decision out of a million other decisions based on knowledge of how these decisions might affect the outcome. 

Intelligence can be generated by the machine, for example, a self-driving car is a form of intelligence that scans the environment and can predict the course of action for the next section of the road. In the music industry, intelligent decisions are still, for the most part, made by humans by examining information, reading graphs and charts, memorizing past outcomes, and monitoring real-time data. In this article, we’ll explore some of the emerging intelligence technology in the music field so keep reading to find out more.

Prescriptive, not Predictive Analytics
Intelligence in data science is produced by the use of prescriptive analytics, which is the process of using data to determine the best possible course of action. Prescriptive analytics often employ machine learning algorithms to analyze data and consider all the “if” and “else” scenarios. Multiple datasets over different periods of time can be combined in prescriptive analytics to account for various scenarios and model complex situations. 
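A minimal sketch of the prescriptive step (the candidate actions and the “model” below are invented stand-ins for a trained system): enumerate scenarios, predict an outcome for each, and prescribe the action with the best predicted result.

```python
# Toy prescriptive-analytics loop: score each "if/else" scenario with a
# stand-in model and pick the best. A real system would use a trained
# predictor instead of this hard-coded lookup.

def predicted_streams(action: str) -> float:
    model = {
        "release_friday": 12_000.0,
        "release_monday": 8_000.0,
        "delay_two_weeks": 10_500.0,
    }
    return model[action]

actions = ["release_friday", "release_monday", "delay_two_weeks"]
best = max(actions, key=predicted_streams)
print(best)  # 'release_friday'
```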
Intelligence Layer – Examples in the Music Industry

1. Recommendation systems that learn and adapt effectively to individual users’ preferences

Recommendation systems already use some sort of prescriptive analytics when they make a selection of songs based on past user behavior. Recommendation systems can also take into account the sequence of songs and context that affect the enjoyment level of the playlist as a whole. As previously played songs influence the perception of the next song, the playlist can be adjusted accordingly. The ability to prescribe a listening experience by recommendation systems is, perhaps, the most common and well-developed example of intelligence in the music industry.

Additionally, recommendation systems can prescribe music that directly affects user behavior. This project, for example, uses data from running exercises, predicts the future running performance, and recommends songs that maximize running results. It does so continuously, as the system stores and learns from each updated running exercise record.

To learn more about different types of recommendation systems, check out the article How Do AI Music Recommendation Systems Work. 


2. Automatic playlist generation based on context

Generating music or suggesting existing music based on context is the music industry’s analog of a self-driving car. The music adapts to the listening situation to amplify the current experience – for instance, in video games, where music adjusts to the plot as the user progresses through the levels. More on that in our article on the Omniphony engine, which explores adaptive soundtracks and music context in game development.

Such systems are also used as location-aware music recommendations for travel destinations (when music is chosen based on the sightseeing place you visit), or computer vision systems for museum experiences (when the artwork dictates the audio choice). In these cases, the constantly changing environment serves as the basis for recommendations. 

Another example of intelligence in this field is generating music in the metaverse, a virtual environment that includes augmented reality. The metaverse can be accessed through devices such as Oculus headsets or a smartphone. Virtual streams and concerts are already being held in the metaverse, so it is only a matter of time until curated immersive experiences that adjust to the audience’s needs are delivered using intelligence.

3. Prescriptive curatorship – What’s going to be hot next? 

Prescriptive curatorship entails an understanding of how up-and-coming artists and tracks will perform and who is more likely to break in the near future. In the past, platforms like Hype Machine indexed music sites and helped find the best new music. 

Nowadays, there are systems that can predict future hits and breaking artists automatically. For example, Spotify is developing algorithms that can predict future-breaking artists. The algorithm takes into account the preferences of the early adopters and then determines whether the artist can be considered breaking. This data can then be used to sign deals with the artist at a very early stage.


4. Tracking changes in music preference distribution  – making music that hits the current preferences or even future preferences

Unlike prescriptive curatorship, which relies on a group of experts, music preference distribution numbers show artists how their chosen genres and formats fit audience demographics and how music could be changed to match current or future preferences. The general consensus in the music industry is that music preference algorithms should come after the music is produced; otherwise, all music would end up sounding the same to mimic popular artists.

There is not yet a system that would automatically recommend changing the content of the song based on what users prefer. Nevertheless, attempts to use the numbers to create songs people will like are still being made.

5. Royalty Advances

Royalty advances are a complex task that requires comprehensive tracking of music consumption across all platforms. Distributors such as Amuse and iGroove offer a royalty advance service that is able to predict upcoming payout amounts so that artists can invest in their music long before the actual royalties kick in. These systems analyze streaming data to calculate upcoming earnings. 
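As a back-of-the-envelope illustration of such a forecast (all figures are hypothetical, not any distributor’s actual formula): project next month’s streams from recent growth and multiply by an assumed per-stream rate.

```python
# Hypothetical royalty forecast: linear projection of streams times an
# assumed average per-stream payout. Numbers are invented for the example.

monthly_streams = [90_000, 100_000, 110_000]  # last three months
per_stream_rate = 0.004                        # assumed payout per stream, USD

growth = monthly_streams[-1] - monthly_streams[-2]
projected_streams = monthly_streams[-1] + growth
projected_payout = projected_streams * per_stream_rate
print(f"${projected_payout:,.2f}")  # $480.00
```

Real advance services track consumption across many platforms and use far richer models, but the core step is the same: estimate future earnings, then size the advance against them.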

Recently the topic got even more attention through the hype of NFTs. Crypto-investors want to predict future royalty payouts and the value of their asset. 

Future platforms most likely will be able to prescribe a course of action regarding which distribution platform to focus on based on the predicted royalty amounts. 

Conclusion
True intelligence in music is still hard to come by. Most of the technology described in this article falls in the space between Knowledge engines, which make predictions, and Intelligence machines, which prescribe the most appropriate course of action out of a million other possible actions.

The main concern in the industry is how far one can go with technological intelligence, considering that music is a creative activity and the human element still largely prevails. An intelligence machine that dictates which music to produce based on a prediction of future user preferences generally prompts an adverse reaction in the industry.

Nevertheless, intelligent decisions to adjust the content of songs or to sign future-breaking artists identified by the AI can already be made by the artists and labels based on available data. 

At Cyanite, we provide our API for access to data and the development of any kind of intelligence engines. As always, at each level of the pyramid, the quality of data plays a vital role. Cyanite generates data about each music track such as bpm, dominant key, predominant voice gender, voice presence profile, genre, mood, energy level, emotional profile, energy dynamics, emotional dynamics, instruments, and more.

Cyanite Library view

Each parameter is provided with its respective weight across the duration of the track. Based on different audio parameters, the system determines the similarity between the items and lists similar songs based on a reference track. These capabilities can be used for the development of intelligent products and tools as well as making intelligent decisions based on data within the company.

I want to analyze my music data with Cyanite – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Free Data Sheet: Full Tagging Analysis of Spotify’s New Music Friday


This analysis example gives you a comprehensive overview of Cyanite’s auto-tagging scope. Whether you are thinking about integrating Cyanite into your platform or just want to get a general idea of AI tagging in action, this data sheet is for you!

To show you how it works, we analyzed the New Music Friday playlist by Spotify (Germany). To get the data sheet, leave your email here.

 

Benchmarking in the Music Industry – Knowledge Layer of the Data Pyramid


This article is a continuation of the series on the Data Pyramid, which allows you to turn music data into actionable insights. In this part of the series, we review the third step of the Data Pyramid – Knowledge. For details on all articles in the series, see below:

How to Turn Music Data into Actionable Insights: This article is the first one in the series. It reviews all layers of the Data Pyramid and shows how to turn raw data into actionable insights. 

An Overview of Data in The Music Industry: This article presents all types of metadata in the industry from factual (objective) metadata to performance metadata. It is an essential guide for music professionals to understand all the various data sources available for analysis. 

Making Sense of Music Data – Data Visualizations: This article discusses the second step of the Data Pyramid – Information and how information can be presented in the form of data visualizations, making it easier to comprehend large data sets. 

Data Pyramid and the Knowledge Layer
Once you collect data and analyze it to get information (organized, structured, and visualized data), it can be turned into knowledge. Knowledge puts information into context. This context can be KPIs after a significant change, or performance against the competition.

At this step, attempts are made to look into the future and predict outcomes. The more specific the problem or context you’re observing, the more precise your findings will be. The Knowledge layer produces analytics that help benchmark performance. As Liv Buli from Berklee Online University puts it, at the Knowledge layer you can tell that an artist of a certain size sells well after performing on a TV show, and use this information to guide strategy for other artists of the same size. As a result, knowledge makes it possible to look at data in an industry-specific context and understand how you compare to past successes and to the competition. In that regard, benchmarking and setting expectations are the final outcome of the Knowledge step.
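The TV-show example can be sketched as a simple benchmark comparison (the cohort numbers are invented for illustration): compute the artist’s post-event stream lift and compare it to the average lift of similar-sized artists.

```python
# Toy benchmark: relative stream lift after an event versus the average
# lift of a cohort of similar-sized artists. All figures are hypothetical.

def lift(before: float, after: float) -> float:
    """Relative change, e.g. 0.4 means a 40% increase."""
    return (after - before) / before

cohort_lifts = [0.20, 0.25, 0.30]  # similar artists after TV appearances
benchmark = sum(cohort_lifts) / len(cohort_lifts)

artist_lift = lift(before=50_000, after=70_000)  # +40% after the show
print(artist_lift > benchmark)  # True: outperformed the benchmark
```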

Benchmarking can take different forms within the music industry: 

Types of benchmarking 

 

  • Process benchmarking

This type of benchmarking deals with processes and aims to optimize internal and external processes in the company. You can improve the process by looking at what competitors are doing or setting one process against another. Processes relate to how things are done in the company, for example, the process of uploading songs to the catalog. 

  • Strategic benchmarking

Strategic benchmarking focuses on a business’s best practices. It is often more complex than the other types of benchmarking and includes competitive priorities, organizational strengths, and managerial priorities. For example, an assessment of how fans responded to a brand’s sound in the past can help devise a long-term sound branding strategy.

  • Performance benchmarking

Performance benchmarking compares product lines, marketing, and sales figures to determine how to increase revenues. For example, as a marketing campaign for a music release develops over time, it can reveal the most vital money channels for exposure. Sales figures can indicate how artists compare to one another in terms of profitability. 

The Music Industry Benchmarks
In this part of the article, we review some of the ways you can derive knowledge from data and how this knowledge can be used for benchmarking. This list is not exhaustive, as there are many types of data in the music industry that can be analyzed and turned into knowledge.

1. Analyzing hit songs and popular artists to discover new talent

You can utilize Cyanite’s technology to analyze what’s currently working in the music industry across different markets. In particular, you can analyze popular songs and understand what makes them successful in terms of audio-based features such as genre, mood, and energy level. Further, you can use the Similarity Search to find tracks with a similar vibe and feel. This helps you discover and identify new talent along the same lines as current successes. Of course, that is not the whole story of making a hit, but it gives you a pretty solid foundation for hitting the current zeitgeist.

Cyanite Similarity Search interface
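The hit-analysis idea can be sketched as follows (features and values are hypothetical): average the feature profiles of current hits, then rank unsigned demos by closeness to that profile.

```python
# Illustrative talent-discovery sketch: build a mean feature profile from
# hit songs and find the demo closest to it. All vectors are invented.
import math

hits = [(0.9, 0.8, 0.7), (0.8, 0.9, 0.6), (0.85, 0.85, 0.65)]
profile = tuple(sum(col) / len(col) for col in zip(*hits))

demos = {
    "demo_a": (0.84, 0.86, 0.66),  # close to the hit profile
    "demo_b": (0.20, 0.10, 0.95),
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

closest = min(demos, key=lambda d: distance(demos[d], profile))
print(closest)  # 'demo_a'
```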

2. Analyzing popular playlists to predict matches

We described this use case in detail in The 3 Best Ways to Improve your Playlist Pitching with CYANITE article. You can analyze existing popular playlists such as Spotify’s New Music Friday or Peaceful Piano and see what songs are usually featured. This information helps you understand the profile of the playlist, find the perfect fit, and increase the chances of getting into the playlist. It also helps you describe the songs to editors in a way that increases the chance of acceptance.
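A rough sketch of profiling a playlist by tag frequency (the tags below are invented): count how often each tag appears across the playlist’s tracks, then check a candidate song against the dominant tags before pitching.

```python
# Toy playlist profiling: dominant tags are those appearing on at least
# two tracks; a candidate is scored by its overlap with them.
from collections import Counter

playlist_tags = [
    {"calm", "piano", "instrumental"},
    {"calm", "piano", "melancholic"},
    {"calm", "strings", "instrumental"},
]

profile = Counter(tag for tags in playlist_tags for tag in tags)
dominant = {tag for tag, n in profile.items() if n >= 2}

candidate = {"calm", "piano", "instrumental", "vocal"}
overlap = len(candidate & dominant) / len(dominant)
print(f"{overlap:.0%} of dominant tags matched")  # 100%
```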

Algorithmic performance increasingly determines whether or not strangers hear my song. More strangers hearing my songs is how my fanbase grows—and that cannot happen unless I understand how algorithms are likely to analyze and classify my song by genre, mood, and other relevant characteristics.
Brick Blair

Independent singer-songwriter, brick@brickblair.com, @brickblair

Photo at Unsplash @heidijfin

3. Analyzing marketing and sales numbers to allocate marketing budgets and support event planning

For example, record labels can analyze the performance of their artists and adjust marketing campaigns accordingly. These insights can help change the direction of a campaign, choose appropriate channels, and allocate marketing budgets more effectively. In event planning, you can analyze past events to identify the most relevant cities and venues. 

4. Analyzing the artists’ styles to identify opportunities for song-plugging

If you’re a music publisher looking to get a placement on a new release of a successful artist, analyzing their previous style and matching it to your catalog of demos would be the way to go. 

In this case, some audio qualities of the song are less important, as it will eventually be re-recorded. To analyze songs, regular Cyanite functionality can be used, including the Keyword Search by Weights: you search your demo catalog using the analysis results of the successful artist as weighted keywords to get the most relevant results. 
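To make the idea of a weighted keyword search concrete, here is a minimal sketch in plain Python. It does not use Cyanite's actual API; the catalog entries, keyword names, and the `rank_demos` helper are hypothetical, and the scoring is a simple weighted sum over keyword scores.

```python
# Hypothetical sketch of a weighted keyword search over a demo catalog.
# Each demo carries keyword scores in [0, 1]; the query assigns a weight
# to each keyword, and demos are ranked by the weighted sum of scores.

def rank_demos(catalog, query_weights, top_n=3):
    """Rank demos by weighted keyword match, highest score first."""
    def score(demo):
        return sum(demo["keywords"].get(kw, 0.0) * w
                   for kw, w in query_weights.items())
    return sorted(catalog, key=score, reverse=True)[:top_n]

# Toy catalog: keyword scores as an analysis service might return them.
catalog = [
    {"title": "Demo A", "keywords": {"energetic": 0.9, "dark": 0.2}},
    {"title": "Demo B", "keywords": {"energetic": 0.3, "dark": 0.8}},
    {"title": "Demo C", "keywords": {"energetic": 0.7, "dark": 0.6}},
]

# Profile derived from the target artist: mostly energetic, slightly dark.
query = {"energetic": 1.0, "dark": 0.3}

for demo in rank_demos(catalog, query):
    print(demo["title"])  # Demo A, then Demo C, then Demo B
```

Raising or lowering a keyword's weight in `query` shifts the ranking, which mirrors how adjusting keyword impact changes the search results.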

5. Analyzing fan engagement to identify audience segments

You can also analyze artists’ performance by looking into fan engagement on social media and music platforms. By understanding the fans’ demographics, interests, and lifestyles, you can create custom audiences for new artists or deepen engagement with an existing artist’s fanbase based on past campaigns. This use case has been thoroughly described in the article How to Create Custom Audiences for Pre-Release Music Campaigns in Facebook, Instagram, and Google.

Photo at Unsplash @luukski

6. Analyzing trending music in advertising to find the most syncable tracks in your own catalog

Sync licensing, which includes finding sync opportunities and pitching specific songs, can benefit from data analysis and benchmarking. Trending music in brand advertising can be analyzed to reveal the brand’s sound. This sound can then be matched to specific songs in your catalog, making a strong case for sound-brand fit when pitching to the brand. If you are interested in this use case and how data can be used in sound branding and sync licensing, check out the interview we did with Vincent Raciti from TRO – About AI in sound branding.

Potential Issues and Questions
Finding reliable data, specifying the problem and context, and analyzing information can be difficult. Deriving knowledge and benchmarking involve first asking a specific question or forming a specific hypothesis and then assembling a proper data set to answer or verify it. If the data set is faulty, the resulting knowledge will be wrong and potentially even harmful to your business outcome. Additionally, at this stage of the Data Pyramid, it is easy to ignore the previous steps, skip the deeper layers of data, and miss out on the details.

Knowledge is about the past, not the future. At the knowledge layer, you only have information about what happened before. Information about the present (though some tools provide access to real-time data) or the future is usually not taken into account. It is important to remember this limitation, as past performance is no guarantee of future results.

Conclusion
Raw data is usually useless unless organized and interpreted. Only then does data become information. But before that, decisions on types of metadata and the methods to extract data have to be made. This process of data accumulation and filtering can be very time-consuming. 

At the stage of Information, data is structured and organized so it can be interpreted. Information requires less time to find relevant data but there is still a lot of effort involved. 

At the Knowledge level, information is put into context and can be used for benchmarking and setting expectations. This context can be historical and involve past successes, or it can relate to the position of others in the market. What kind of knowledge is derived from information depends on the initial data set and on the ability to store the memories of past successes. This process takes a whole new form when machine learning and deep learning techniques are used, as they significantly speed up data collection and can memorize vast amounts of data. However, a lot of knowledge in the music industry is still derived manually, by looking at past outcomes and trying to apply them to the present.

I want to analyze my music data with Cyanite – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Making Sense of Music Data – Data Visualizations


This article is a continuation of the series “How to Turn Music Data into Actionable Insights”. We will explore the second step of the Data Pyramid – Information and Visualizations. For details on the first step – Data – you can see the article here.

Making Sense of Music Data

The Information stage of the Data Pyramid involves organizing and visualizing data. Organizing is often done with the use of business intelligence, while visualizations are presented in the form of pie charts, line graphs, tables, etc. This stage of the pyramid answers the question “What happened?” (or What is happening?) in the business. 

Data becomes information when it is structured and organized. Some argue that structure alone is not enough and that data can only be considered information once it is useful and meaningful. To combine these two views, we will say that data becomes information when it is structured and meaningful. The final goal of the information stage is to produce data visualizations that combine data and information so that conclusions can be drawn.

Music Data Visualizations

Data visualization is the process of translating large data sets and metrics into charts, graphs, and other visuals. Visualizations combine both design and computing skills. There is also psychology involved in how people read and interpret visuals.  

Data visualizations thus pursue two goals: to communicate information effectively and to present it in a visually appealing way. Some people even say that data should look sexy and that an ideal visualization should stimulate viewer engagement. A great example of this is the work of the London-based Italian information designer Tiziana Alocci, who applies visualization to many different use cases: album covers, corporate visualizations, and editorial infographics.

 

Data-driven album cover by Tiziana Alocci, 2019.

EGOT Club, by Tiziana Alocci for La Lettura, Corriere della Sera, 2019.

As an information designer, my job is to visualize data and represent information visually. My most traditional data visualization works involve the design of insight dashboards, thought-provoking data visualizations, and immersive data experiences. For me, the entire process of researching, sorting, organising, connecting, feeling, shaping, and acting is the highest form of human representation through data.

See Tiziana’s works on Instagram. 

Tiziana Alocci

Information Designer and Art Director at Tiziana Alocci Ltd, Associate Lecturer at University of the Arts London

Data visualizations used for business decisions are expected to be visually appealing, clear, and effective at communicating information. So design, data science, and computing knowledge really have to work together to create effective data visualizations. 

Data Visualization Tools

Before data can be analyzed and visualized, it is important to decide which data to use and to ensure its quality. For the different types of metadata used in the music industry, see the article here. Once the data set is chosen and analyzed, it can be visualized either with the help of a designer or using one of the three types of music data visualization tools:

1. Music Analysis and Discovery Tools

These tools show major characteristics of music such as genre, mood, emotional profile, energy level, etc., as well as relations between artists and tracks. The idea is that similar tracks can be analyzed in detail and combined into playlists or recommended to customers. Cyanite falls into this category of music data visualization tools. 

2. Music Visualization Tools for Researchers

These tools are used for research purposes to prove a thesis or provide an overview of the field. For example, Ishkur’s Guide to Electronic Music was originally created as a genealogy of electronic music over the course of 80 years. It consists of 153 subgenres and 818 sound files.

3. Marketing Tools

These tools present data visualizations that can be used for sales, advertising, and marketing purposes. They visualize data about consumer preferences, artists’ popularity, track consumption, industry trends, etc. – data about the market rather than the audio itself. 

Examples of marketing visualization tools include web applications such as Pandora AMP and Soundcharts that provide data and visualizations to derive information. Tools like Tableau and Plot.ly let you upload raw data and build custom charts and reports. 

Data Visualization Techniques

The general visualization techniques that help represent data effectively are trend charts, comparison charts, pie charts, connection charts, maps, etc. These can be constructed manually or programmatically.

For example, Cyanite uses spectrograms to train its models to identify patterns in audio.

Spectrograms used by Cyanite from left to right: Christina Aguilera, Fleetwood Mac, Pantera
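As a rough illustration of what a spectrogram is, the sketch below computes one from scratch with NumPy: the signal is sliced into overlapping frames, each frame is tapered with a Hann window, and the magnitude of its FFT becomes one column of the time-frequency image. This is a generic textbook construction, not Cyanite's actual pipeline, and the frame and hop sizes are arbitrary choices.

```python
# Minimal magnitude spectrogram with NumPy only (a generic sketch;
# real audio pipelines typically add log scaling, mel filters, etc.).
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Return a (frames, n_fft // 2 + 1) magnitude spectrogram."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# One second of a 440 Hz sine tone at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (number of time frames, number of frequency bins)
```

For a pure tone like this, the energy concentrates in the frequency bin closest to 440 Hz in every frame, which is exactly the kind of pattern a model can learn to read from spectrogram images.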

Cyanite’s moods are presented in a comparison chart where overarching mood and the least present mood can be easily identified.

Comparison chart in Cyanite mood analysis project
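The logic behind such a comparison chart can be sketched in a few lines: given a set of mood scores (the values below are made up for illustration, not real Cyanite output), the overarching mood is simply the highest-scoring one and the least present mood the lowest-scoring one.

```python
# Hypothetical mood scores for one track (illustrative values only).
moods = {
    "happy": 0.71,
    "aggressive": 0.08,
    "calm": 0.34,
    "sad": 0.12,
}

# The comparison chart highlights the extremes of this mapping.
overarching = max(moods, key=moods.get)
least_present = min(moods, key=moods.get)
print(overarching, least_present)  # happy aggressive
```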

Genre is represented in a trend chart that shows how the track’s mean genre values change over the duration of the track. 

Genre trend chart in Cyanite detail view

In this project representing the history of rock, a connection chart shows how different artists relate to each other in a linear or hierarchical way. 

To learn more about data visualization techniques, see the article here.

Examples from the World of Sound Branding

We asked companies specializing in data analysis and visualization how data visualizations are used in their work. 

AMP Sound Branding works with data visualization experts and depending on where the company plans on using the data, they visualize it in different ways. 

We try to use whatever technique that fits the data and the story we are telling best. Often we use Polar Area charts and spider-graphs as we find them a good fit for the Cyanite data.

Bjorn Thorleifsson

Head of Strategy & Research at amp sound branding

In its research on automotive industry sound, for example, AMP used a combination of polar area charts and line charts to visualize and compare brand moods.

Hyundai Genre by AMP

Overall Moods by AMP

At TAMBR sonic branding, a big chunk of work is to create a shared understanding of musical parameters that surround a brand. 

They say music is a universal language, but more often than not, talking about music is like dancing about architecture. As such, we only start composing once we have agreed on a solid sonic moodboard. For this to happen, we always start with a Cyanite-powered music search based on the brand associations of our client. For each track we present, we also visualize how it scores on the required associations.

Niels de Jong

Sonic strategist at TAMBR Sonic Branding

TAMBR visualizations take some of the subjectivity away when choosing the right music for a brand. However, these visualizations are merely guidelines, not strict pointers. According to the company, magic happens where data and creativity meet.

Data Visualizations by TAMBR
Conclusion

Data visualizations are a powerful way to present the results of data analysis and gain additional insights. Visualizations can really improve the process of decision-making. They can also be used on their own in the sales process to impress customers.

However, this stage of the Data Pyramid also comes with a range of problems. For example, misinterpretations often occur when the output is treated as the only truth, disregarding the input data set and its limitations. Sometimes people rely on visuals so much that they do not explore the deeper layers of data, missing out on important information. The human element in algorithms is also an issue: in some algorithms, a human marks which data the machine should consider important, and this affects how the algorithm learns and develops. Nevertheless, data visualizations are widely used to present some version of the truth in a clean and digestible format. 

You really can’t ignore the simplicity of data visualizations and their ability to direct the viewer’s attention to the key information. Yet, behind a simple graph there is usually a ton of data mining and data analysis work. 

I want to visualize my music data with Cyanite – how can I get started?

Please contact us with any questions about our Cyanite AI via sales@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.