From Data to Decision – How to Use Music Data and Analytics for Intelligent Decision Making

We continue our series on the Data Pyramid and, in this article, conclude it with an overview of the fourth level of the pyramid – Intelligence: the supreme discipline of data utilization and, when done right, a path to success.

Other articles in the series include: 

How to Turn Music Data into Actionable Insights: This is an overview of the Data Pyramid and how it can be used in the music industry. 

An Overview of Data in The Music Industry: This article gives a list of all types of metadata in the music industry.

Making Sense of Music Data – Data Visualizations: This article explores data visualizations as the second step of the pyramid and gives examples of visualizations in the music industry. 

Benchmarking in the Music Industry – Knowledge Layer of the Data Pyramid: This article deals with Knowledge and how it is used to benchmark performance and set expectations.

Data Pyramid and the Intelligence Layer
The Intelligence layer of the pyramid deals with the future and answers the questions "So what?" and "Now what?". By the time this level is reached, company stakeholders usually have an organized, structured dataset as well as information about the outcomes of past decisions. They also need access to real-time data to learn and adjust on the fly. Having all this information at hand enables them to anticipate the outcomes of future decisions and choose the most suitable course of action.

Intelligence can be described as the ability to choose one decision out of a million other decisions based on knowledge of how these decisions might affect the outcome. 

Intelligence can be generated by a machine: a self-driving car, for example, is a form of intelligence that scans the environment and can predict the course of action for the next section of the road. In the music industry, intelligent decisions are still, for the most part, made by humans by examining information, reading graphs and charts, memorizing past outcomes, and monitoring real-time data. In this article, we'll explore some of the emerging intelligence technology in the music field, so keep reading to find out more.

Prescriptive, not Predictive Analytics
Intelligence in data science is produced through prescriptive analytics: the process of using data to determine the best possible course of action. Prescriptive analytics often employs machine learning algorithms to analyze data and consider all the "if" and "else" scenarios. Multiple datasets over different periods of time can be combined in prescriptive analytics to account for various scenarios and model complex situations.
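
As a toy illustration of the idea (not a description of any real product), a prescriptive step can be sketched as scoring each candidate action against a set of weighted scenarios and prescribing the one with the best expected outcome. All action names and numbers below are hypothetical:

```python
# Toy sketch of prescriptive analytics: evaluate candidate actions
# against weighted scenarios and prescribe the best expected one.
# All names and figures here are invented for illustration.

scenarios = [
    {"market": "up", "weight": 0.6},    # observed 60% of the time
    {"market": "down", "weight": 0.4},  # observed 40% of the time
]

# Hypothetical payoff of each action under each scenario
payoffs = {
    "release_single": {"up": 120, "down": 40},
    "release_album":  {"up": 200, "down": -50},
    "wait":           {"up": 0,   "down": 0},
}

def prescribe(payoffs, scenarios):
    """Return the action with the highest probability-weighted payoff."""
    def expected(action):
        return sum(s["weight"] * payoffs[action][s["market"]] for s in scenarios)
    return max(payoffs, key=expected)

print(prescribe(payoffs, scenarios))  # -> release_album (0.6*200 + 0.4*-50 = 100)
```

Real prescriptive systems replace the hand-written payoff table with models learned from historical data, but the decision structure is the same.
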
Intelligence Layer – Examples in the Music Industry

1. Recommendation systems that learn and adapt effectively to individual users’ preferences

Recommendation systems already use some sort of prescriptive analytics when they make a selection of songs based on past user behavior. Recommendation systems can also take into account the sequence of songs and context that affect the enjoyment level of the playlist as a whole. As previously played songs influence the perception of the next song, the playlist can be adjusted accordingly. The ability to prescribe a listening experience by recommendation systems is, perhaps, the most common and well-developed example of intelligence in the music industry.
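
To illustrate the sequencing idea, here is a minimal sketch (our own simplification, not any actual recommender) in which the next song is chosen by balancing the user's long-term taste profile against a smooth transition from the previously played track. All feature values are made up:

```python
# Hypothetical sketch: sequence-aware next-song selection.
# Each song is a small feature vector (e.g. energy, valence); the next
# pick balances closeness to the user's taste profile against a smooth
# transition from the previously played track.

def score(candidate, taste, previous, transition_weight=0.5):
    """Higher is better: fit to taste, penalized for abrupt jumps."""
    taste_fit = -sum((c - t) ** 2 for c, t in zip(candidate, taste))
    smoothness = -sum((c - p) ** 2 for c, p in zip(candidate, previous))
    return taste_fit + transition_weight * smoothness

taste = (0.7, 0.6)            # user's long-term preference profile
previous = (0.9, 0.8)         # the song that just played
candidates = {
    "track_a": (0.8, 0.7),    # close to both taste and the previous song
    "track_b": (0.2, 0.1),    # matches neither
}

best = max(candidates, key=lambda name: score(candidates[name], taste, previous))
print(best)  # -> track_a
```
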

Additionally, recommendation systems can prescribe music that directly affects user behavior. This project, for example, uses data from running exercises, predicts future running performance, and recommends songs that maximize running results. It does so continuously, as the system stores and learns from each updated running exercise record.

To learn more about different types of recommendation systems, check out the article How Do AI Music Recommendation Systems Work. 


2. Automatic playlist generation based on context

Generating music or suggesting existing music based on context is the music industry's analog of a self-driving car. The music adapts to the listening situation to amplify the current experience. For instance, in video games, the music adjusts to the plot as the user progresses through the levels. More on that in our article on the Omniphony engine, which explores adaptive soundtracks and musical context in game development.

Such systems are also used for location-aware music recommendations for travel destinations (where music is chosen based on the sight you are visiting) or computer vision systems for museum experiences (where the artwork dictates the audio choice). In these cases, the constantly changing environment serves as the basis for recommendations.

Another example of intelligence in this field is generating music in the metaverse, a virtual environment that includes augmented reality and can be accessed through devices such as Oculus headsets or a smartphone. Virtual streams and concerts are already being held in the metaverse, so it is only a matter of time until curated immersive experiences that adjust to the audience's needs are delivered using intelligence.

3. Prescriptive curatorship – What’s going to be hot next? 

Prescriptive curatorship entails an understanding of how up-and-coming artists and tracks will perform and who is more likely to break in the near future. In the past, platforms like Hype Machine indexed music sites and helped find the best new music. 

Nowadays, there are systems that can predict future hits and breaking artists automatically. For example, Spotify is developing algorithms that can predict future-breaking artists. The algorithm takes into account the preferences of the early adopters and then determines whether the artist can be considered breaking. This data can then be used to sign deals with the artist at a very early stage.


4. Tracking changes in music preference distribution – making music that hits current or even future preferences

Unlike prescriptive curatorship, which relies on a group of experts, music preference distribution numbers show artists how their chosen genres and formats fit audience demographics and how the music could be changed to match current or future preferences. The general consensus in the music industry is that music preference algorithms should come after the music is produced; otherwise, all music would end up sounding the same in an attempt to mimic popular artists.

There is not yet a system that would automatically recommend changing the content of the song based on what users prefer. Nevertheless, attempts to use the numbers to create songs people will like are still being made.

5. Royalty Advances

Royalty advances are a complex task that requires comprehensive tracking of music consumption across all platforms. Distributors such as Amuse and iGroove offer a royalty advance service that is able to predict upcoming payout amounts so that artists can invest in their music long before the actual royalties kick in. These systems analyze streaming data to calculate upcoming earnings. 

Recently, the topic has received even more attention through the hype around NFTs. Crypto-investors want to predict future royalty payouts and the value of their assets.

Future platforms will most likely be able to prescribe a course of action regarding which distribution platform to focus on, based on the predicted royalty amounts.

Conclusion
True intelligence in music is still hard to come by. Most of the technology described in this article falls in the space between Knowledge engines, which make predictions, and Intelligence machines, which prescribe the most appropriate course of action out of a million other possible actions.

The main concern in the industry is how far one can go with technological intelligence, considering that music is a creative activity in which the human element still largely prevails. An intelligence machine that can tell which music to produce based on a prediction of future user preferences generally prompts an adverse reaction in the industry.

Nevertheless, intelligent decisions to adjust the content of songs or to sign future-breaking artists identified by the AI can already be made by the artists and labels based on available data. 

At Cyanite, we provide our API for access to data and the development of any kind of intelligence engines. As always, at each level of the pyramid, the quality of data plays a vital role. Cyanite generates data about each music track such as bpm, dominant key, predominant voice gender, voice presence profile, genre, mood, energy level, emotional profile, energy dynamics, emotional dynamics, instruments, and more.

Cyanite Library view

Each parameter is provided with its respective weight across the duration of the track. Based on these audio parameters, the system determines the similarity between items and lists similar songs based on a reference track. These capabilities can be used to develop intelligent products and tools, as well as to make intelligent decisions based on data within the company.
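
As a rough illustration of how such similarity listing can work in general (this is not Cyanite's actual algorithm), each track's analysis results can be treated as a feature vector and the catalog ranked by similarity to a reference track. The feature names and values below are made up:

```python
import math

# Sketch of feature-vector similarity search: rank a catalog by cosine
# similarity to a reference track. Features and values are hypothetical.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

reference = [0.9, 0.1, 0.7]   # e.g. [energy, aggressiveness, valence]
catalog = {
    "song_1": [0.8, 0.2, 0.6],  # close to the reference in feature space
    "song_2": [0.1, 0.9, 0.2],  # far from it
}

ranked = sorted(catalog, key=lambda s: cosine_similarity(catalog[s], reference),
                reverse=True)
print(ranked)  # most similar song first
```
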

I want to analyze my music data with Cyanite – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Benchmarking in the Music Industry – Knowledge Layer of the Data Pyramid

This article is a continuation of the series on the Data Pyramid, which allows you to turn music data into actionable insights. In this part of the series, we review the third step of the Data Pyramid – Knowledge. For details on all articles in the series, see below:

How to Turn Music Data into Actionable Insights: This article is the first one in the series. It reviews all layers of the Data Pyramid and shows how to turn raw data into actionable insights. 

An Overview of Data in The Music Industry: This article presents all types of metadata in the industry from factual (objective) metadata to performance metadata. It is an essential guide for music professionals to understand all the various data sources available for analysis. 

Making Sense of Music Data – Data Visualizations: This article discusses the second step of the Data Pyramid – Information and how information can be presented in the form of data visualizations, making it easier to comprehend large data sets. 

Data Pyramid and the Knowledge Layer
Once you collect data and analyze it to get information (organized, structured, and visualized data), it can be turned into knowledge. Knowledge puts information into context. This context can be KPIs after a significant change or performance against the competition.

At this step, attempts are made to look into the future and predict outcomes. The more specific the problem or context you're observing, the more precise your findings will be at this step. The Knowledge layer produces analytics that help benchmark performance. As Liv Buli from Berklee Online puts it, at the Knowledge layer you can tell that an artist of a certain size sells well after performing on a TV show and use this information to guide strategy for other artists of the same size. As a result, knowledge makes it possible to look at data in an industry-specific context and understand how you compare to past successes and to the competition. In that regard, benchmarking and setting expectations are the final outcome of the knowledge step.
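
The TV-show example can be sketched as a small calculation (all figures below are invented): compute the average sales lift observed across peer artists of similar size and use it as the benchmark expectation for the next artist:

```python
# Toy benchmarking sketch: estimate the post-TV-show sales lift an
# artist can expect from peers of similar size. All figures invented.

peers = [
    {"followers": 50_000, "sales_before": 1000, "sales_after": 1800},
    {"followers": 55_000, "sales_before": 1200, "sales_after": 2100},
    {"followers": 48_000, "sales_before": 900,  "sales_after": 1500},
]

def average_lift(peers):
    """Average multiplicative sales lift observed across peer artists."""
    lifts = [p["sales_after"] / p["sales_before"] for p in peers]
    return sum(lifts) / len(lifts)

benchmark = average_lift(peers)
# An artist of this size could expect roughly `benchmark` x their
# current sales after a comparable TV appearance.
print(round(benchmark, 2))  # -> 1.74
```
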

Benchmarking can take different forms within the music industry: 

Types of benchmarking

  • Process benchmarking

This type of benchmarking deals with processes and aims to optimize internal and external processes in the company. You can improve the process by looking at what competitors are doing or setting one process against another. Processes relate to how things are done in the company, for example, the process of uploading songs to the catalog. 

  • Strategic benchmarking

Strategic benchmarking focuses on a business's best practices. It is often more complex than the other two types of benchmarking and covers competitive priorities, organizational strengths, and managerial priorities. For example, an assessment of how fans responded to a brand's sound in the past can help devise a long-term sound branding strategy.

  • Performance benchmarking

Performance benchmarking compares product lines, marketing, and sales figures to determine how to increase revenues. For example, as a marketing campaign for a music release develops over time, it can reveal the most valuable channels for exposure. Sales figures can indicate how artists compare to one another in terms of profitability.

The Music Industry Benchmarks
In this part of the article, we review some of the ways you can derive knowledge from data and how this knowledge can be used for benchmarking. This list is not exhaustive, as there are many types of data in the music industry that can be analyzed and turned into knowledge.

1. Analyzing hit songs and popular artists to discover new talent

You can utilize Cyanite's technology to analyze what's currently working in the music industry in different markets. In particular, you can analyze popular songs and understand what makes them successful in terms of audio-based features such as genre, mood, energy level, etc. Further, you can use the Similarity Search to find tracks with a similar vibe and feel. This helps you discover and identify new talent along the same lines as current successes. Of course, that is not the whole story of making a hit, but it gives you a pretty solid foundation for hitting the current zeitgeist.

Cyanite Similarity Search interface

2. Analyzing popular playlists to predict matches

We described this use case in detail in The 3 Best Ways to Improve your Playlist Pitching with CYANITE article. You can analyze existing popular playlists such as Spotify's New Music Friday or Peaceful Piano and see what songs are usually featured. This information helps you understand the profile of the playlist, which allows you to find the perfect fit and increase the chances of getting into the playlist. It also helps you describe the songs to the editors in a way that gets them accepted.

Algorithmic performance increasingly determines whether or not strangers hear my song. More strangers hearing my songs is how my fanbase grows—and that cannot happen unless I understand how algorithms are likely to analyze and classify my song by genre, mood, and other relevant characteristics.
Brick Blair

Independent singer-songwriter, brick@brickblair.com, @brickblair


3. Analyzing marketing and sales numbers to allocate marketing budgets and support event planning

For example, record labels can analyze the performance of their artists and adjust marketing campaigns accordingly. Insights can help change the direction of a marketing campaign, choose appropriate channels, and allocate marketing budgets more effectively. In event planning, you can analyze event venues and identify the most relevant cities and venues based on past events.

4. Analyzing the artists’ styles to identify opportunities for song-plugging

If you’re a music publisher looking to get a placement on a new release of a successful artist, analyzing their previous style and matching it to your catalog of demos would be the way to go. 

In this case, some audio qualities of the song are not important, as it will eventually be re-recorded. To analyze songs, regular Cyanite functionality can be used, including the Keyword Search by Weights, where you can search your demo catalog for weight-specific keywords taken from the analysis results of the successful artist to get the most relevant matches.

5. Analyzing fan engagement to identify audience segments

You can also analyze artists' performance by looking into fan engagement on social media and music platforms. By understanding fans' demographics, interests, and lifestyles, you can create custom audiences for new artists or deepen fan engagement for an existing artist based on past campaigns. This use case is described thoroughly in the article How to Create Custom Audiences for Pre-Release Music Campaigns in Facebook, Instagram, and Google.


6. Analyzing trending music in advertising to find the most syncable tracks in your own catalog

Sync licensing, which includes finding sync opportunities and pitching specific songs, can benefit from data analysis and benchmarking. Trending music in brand advertising can be analyzed to reveal a brand's sound. This sound can then be matched to specific songs in your catalog, making a strong case for sound-brand fit in the pitch to the brand. If you are interested in this use case and how data can be used in sound branding and sync licensing, check out the interview we did with Vincent Raciti from TRO – About AI in sound branding.

Potential Issues and Questions
Finding reliable data, specifying the problem or context, and analyzing information can be difficult. Deriving knowledge and benchmarking involve first asking a specific question or making a specific hypothesis and then getting a proper set of data to answer or verify it. If the data set is faulty, the knowledge will be wrong and potentially even harmful to your business outcome. Additionally, at this stage of the Data Pyramid, it is easy to ignore the previous steps and not explore the deeper layers of data, missing out on the details.

Knowledge is about the past, not the future. At the Knowledge layer, you only have information about what happened before. Information about the present (though some tools provide access to real-time data) or the future is usually not taken into account. It is important to remember this limitation, as past performance is no guarantee of future results.

Conclusion
Raw data is usually useless unless organized and interpreted. Only then does data become information. But before that, decisions on types of metadata and the methods to extract data have to be made. This process of data accumulation and filtering can be very time-consuming. 

At the stage of Information, data is structured and organized so it can be interpreted. Information requires less time to find relevant data but there is still a lot of effort involved. 

At the Knowledge level, information is put into context and can be used for benchmarking and setting expectations. This context can be historical and involve past successes or it can relate to the position of others in the market. What kind of knowledge is derived from information depends on the initial data set and on the ability to store the memories of past successes. This process of turning data into knowledge takes a whole new form when machine learning and deep learning techniques are used as they significantly speed up the process of data collection and can memorize tons of data. However, a lot of knowledge in the music industry is still derived manually by looking at the past outcomes and trying to apply them somehow in the present.  

I want to analyze my music data with Cyanite – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

#1 Case Study Video Interview – How did MySphera integrate Cyanite’s API into their platform?

In our first case study video interview, with MySphera, we explore how the artist-to-tastemaker matchmaking platform integrated Cyanite's API and what kind of results they were able to get. We discuss the selection process for the music AI as well as the challenges the company faced along the way.

Our CMO Jakob Höflich led the discussion with MySphera's co-founders Netta Tzin and Nimrod Azoulai. The video lays out the whole journey, from identifying the business problem to finding the AI solution and getting positive results for MySphera's customers. And especially: what did they learn along the way?

Find out how it all worked out in the video below.

We met MySphera in the startup program Marathon Labs by the London-based Marathon Music Group.

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via business@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

How to Write Press Releases and Music Pitches with Cyanite

Press and blog pitching is one of the many ways to promote an artist in addition to Spotify playlists. Spotify even asks you how you're going to drive traffic to Spotify before you can pitch a song. So if you're doing influencer or Facebook marketing, not only will it help distribute your song, but it can also get you on that coveted Spotify editorial. We already covered Facebook and Google marketing on the blog. In this article, we look into press release and pitch writing with Cyanite.

The problem is that media outlets get so many pitches every day that it is hard to break through the clutter. That being said, there is a right way to reach your audience.

Why the right keywords are important in press releases and pitches

It is useful to understand how the music blog industry works before pitching. Blogs receive thousands of emails every day, and all of them describe music. Additionally, the blog industry has developed into a niche sphere catering to specific audiences: nowadays, there is a standalone blog for every music taste. So pitching usually starts with identifying the right media channels and blogs.

Additionally, because editors receive so many pitches, some blogs request that pitches be submitted through a third-party platform such as SubmitHub, Breakr, MySphera, or Soundcampaign.

The standard pitch includes an introduction, the purpose of writing (whether it is to secure a playlist placement, a blog feature, or a spot in a monthly album roundup), the artist, and the song description. To write the latter part, you will have to tell a story, provide evidence, and reference facts that might be familiar to the editor. If you want to get on the editor's list, the pitch should sound interesting and engaging and express the essence of the song. Here is where Cyanite can help, so you can infuse your pitches with the best possible descriptions and also support them with data.

This innovative way of using Cyanite for press release writing was brought to us by mü-nest, a music label from Kuala Lumpur, Malaysia. Here is why and how they use it, which we also describe step by step below:

For press release and pitch writing, Cyanite’s Augmented Keywords and Mood Tags allow us to accurately pinpoint certain algorithm-friendly and easy-to-relate keywords to be used in the texts so that the readers (i.e. magazine editors/radio DJs / music reviewers / playlisters) can instantly have a better sense about the track even before they listen to it, and that will further help them to decide whether this track can fit into their music review column/radio show/playlist. Furthermore, these keywords can also be very useful as hashtags in places like Bandcamp, SoundCloud, or social media.

Wei

Founder and Owner, Mü-nest

1. Analyze the track in Cyanite

First, register for free at https://app.cyanite.ai/register. Then drag and drop your music into the library view.

The library view will show data about the song such as mood, genre, energy level, emotional profile, and more. You will see it in the columns, but also in the detail view.

Cyanite library view

2. Pick the right mood tags and keywords

In the detail view, you will see the genre identified by the AI, as well as mood, emotional profile, instruments, and voices, represented in interactive graphics that can be customized. The final part, Augmented Keywords, shows additional words to describe the song.

For the press release, you can use any information suggested by Cyanite, for example: calm, chill, gentle, dreamy, pensive, piano, synth, guitars.

Augmented keywords, especially, can be of great help when you can’t find words or synonyms to describe the song. 

Augmented Keywords

Augmented keywords in Cyanite detail view

You can also try one of the AI writers and feed the Cyanite keywords into it to see what the output would be. We recommend checking and editing the AI-written text for accuracy, but the results of AI writers using Cyanite data for song descriptions are already very impressive.

This is an example of a text written simply by uploading the Cyanite augmented keywords from the analysis of Billie Eilish's song Bury a Friend:

This track is very laid back, with a deep and dark sound. It has a slow tempo and easy-going rhythm, mixed with some electronic sounds. The track is perfect for a long evening of relaxation.

3. Use Similarity Search to identify similar artists 

Another trick is to use Similarity Search to find similar tracks and artists. You can access Similarity Search from the library to see the full list of suggested similar songs.

This information can be used to provide familiar references that help the editor understand the profile of the song. For example: "For the fans of Max Richter and Dustin O'Halloran." or "Much like artists such as Vangelis, Ennio Morricone, and Nils Frahm, Dae Kim produces a similar ambiance with his use of orchestration and synthesizers."

Cyanite similarity search

4. Write a press release or a pitch 

The last step is to write a press release or a pitch using all the information you’ve gathered. As we mentioned earlier, you can use an AI writer or create an email to the blog or a press release manually. Here is an excerpt from a press release by mü-nest written using Cyanite data:

This gentle rework is entitled, “Our Home”, in which he takes a short and chill motive that according to Dae, he could not stop playing on the piano and restructures the song with it being used as a foundation with an additional chord progression. What stayed, obviously, is the picturesque synth drones and reverberating guitars, while pensive lyrics and dreamy vocals were added to convey the story better.

How to write a quality press release or pitch using Cyanite
According to TuneCore, it's better to avoid hype in your press releases. Stick to important, useful, and objective data. The writing should be balanced and contain the story, a song description, an artist description, and, perhaps, your artistic approach and inspiration. You can also include your previous successes and future plans. If you collaborated with someone in the past, got featured in a media outlet or blog, or had any other success with the track, this information is worth mentioning.

A 200-word limit seems to be the industry standard for pitches. Cyanite data will help you not get stuck for words, providing ideas, inspiration, and even the exact words to use in the press release. For labels that send hundreds of pitches every day, it is also a way to save time and effort.

The exact same method can be used for pitching Spotify editorial playlists. To see how you can improve your Spotify playlist pitching with Cyanite, see the article here. Additionally, on services like Bandcamp and SoundCloud, you can use Cyanite keywords as hashtags.

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via sales@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

An Overview of Data in The Music Industry

This article is a continuation of the series "How to Turn Music Data into Actionable Insights". We're diving deeper into the first layer of the Data Pyramid and exploring different kinds of metadata in the industry.

Metadata represents a form of knowledge in the music industry. Any information you have about a song is considered metadata: information about performance, sound, ownership, culture, etc. Metadata is already used in every step of the value chain: for music recommendation algorithms, release planning, advance payments, marketing budgeting, royalty payouts, artist collaborations, etc.

Nonetheless, the music industry has an ambivalent relationship with musical metadata. On the one hand, the data is necessary, as millions of songs circulate in the industry every day. On the other hand, music is quite varied and individual (pop music is very different from ambient sounds, for example), so the metadata that describes music can take different forms and meanings, making it a quite complex field.

This article explores all kinds of metadata used in the industry, from basic descriptions of acoustic properties to proprietary company data.

The article was created with helpful input from Music Tomorrow:

Music data is a multi-faceted thing. The real challenge for any music business looking to turn data into powerful insights is connecting the dots across all the various types of music data and aggregating it at the right level. This process starts with figuring out what the ideal dataset would look like — and a well-rounded understanding of all the various data sources available on the market is key.

Dmitry Pastukhov

Analyst at Music Tomorrow

Types of Metadata

There are various classifications of music metadata. The most basic is Public vs. Private metadata:

  • Public metadata is easily available and visible to the public.  
  • Private metadata is kept behind closed doors for legal, security, or economic reasons; maintaining competitiveness is one reason metadata is kept private. Typically, performance-related metadata such as sales numbers is private. 

Another very basic classification of metadata is Manual vs Automatic annotations: 

  • Manual metadata is entered into the system by humans. These could be annotations from the editors or users. 
  • Automatic metadata is obtained through automatic systems, for example, AI.
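
For illustration, the two classifications above could be captured in a simple data model. The field names below are our own assumptions, not an industry-standard schema:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch of the two classifications as a data model.
# Field names and example values are hypothetical.

class Visibility(Enum):
    PUBLIC = "public"
    PRIVATE = "private"

class Origin(Enum):
    MANUAL = "manual"        # entered by editors or users
    AUTOMATIC = "automatic"  # produced by a system such as an AI tagger

@dataclass
class MetadataField:
    name: str
    value: object
    visibility: Visibility
    origin: Origin

track_metadata = [
    MetadataField("artist", "Dae Kim", Visibility.PUBLIC, Origin.MANUAL),
    MetadataField("mood", "dreamy", Visibility.PUBLIC, Origin.AUTOMATIC),
    MetadataField("sales", 120_000, Visibility.PRIVATE, Origin.AUTOMATIC),
]

public_fields = [f.name for f in track_metadata if f.visibility is Visibility.PUBLIC]
print(public_fields)  # -> ['artist', 'mood']
```
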

1. Factual (Objective)

Factual metadata is objective: data such as artist, album, year of publication, song duration, etc. Factual metadata is usually assigned by an editor or administrator and describes information that is historically true and cannot be contested.

Usually, factual metadata doesn’t describe the acoustic features of the song. 

Besides the big streaming services, platforms like Discogs are great sources to find and double-check objective metadata. 

Discogs provides a full account of factual metadata

2. Descriptive (Subjective)

Descriptive metadata (often also referred to as creative or subjective metadata) provides information about the acoustic qualities and artistic nature of a song: data such as mood, energy, genre, voice, etc.

Descriptive metadata is usually defined subjectively by a human or a machine, based on previous experience or a training dataset. BPM, key, and time signature are the exception to this rule: they are objective properties of a song, but they are conventionally counted as descriptive metadata.  

Major platforms like Spotify and Apple Music have strict requirements for submitted files. Having incomplete metadata can result in a file being rejected for further distribution. For music libraries, the main concern is user search experience as categorization and organization of songs in the library rely almost entirely on metadata.

Companies such as Cyanite, Musiio, Musimap, or FeedForward are able to extract descriptive metadata from the audio file.
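A descriptive record typically mixes subjective tags with the objective BPM/key/time-signature fields, and the kind of completeness check that determines whether a submission gets through can be sketched in a few lines. The field names and the list of required fields here are assumptions, not any platform's actual requirements:

```python
descriptive = {
    # Subjective annotations, assigned by a human or an AI model
    "genre": ["electronic", "house"],
    "mood": ["energetic", "uplifting"],
    "voice": "instrumental",
    # Objective fields conventionally grouped with descriptive metadata
    "bpm": 124,
    "key": "A minor",
    "time_signature": "4/4",
}

def is_complete(record, required=("genre", "mood", "bpm", "key")):
    """Check that every required field is present and non-empty."""
    return all(record.get(field) not in (None, "", []) for field in required)

print(is_complete(descriptive))   # True
print(is_complete({"genre": []})) # False
```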

3. Ownership/Performing Rights Metadata

Ownership metadata defines the people or entities who own the rights to a track: artists, songwriters, labels, producers, and others. All of these parties have a stake in the royalty split, so ownership metadata ensures that everyone involved gets paid. Allocating royalties can get quite complicated when multiple songwriters are involved or a song samples another song, which makes accurate ownership metadata essential.

Companies such as Blòkur, Jaxta, Exectuals, Trqk, and Verifi Media provide access to ownership metadata with the goal of managing and tracking changes to the ownership rights of a song over time – and ensuring correct payouts for the rights holders.
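To illustrate why accurate ownership metadata matters for payouts, here is a minimal royalty-allocation sketch. The split percentages are hypothetical, and real-world splits distinguish recording and publishing rights, which this toy function does not:

```python
def allocate_royalties(amount_cents, splits):
    """Distribute an amount (in cents) according to fractional ownership shares.

    splits: mapping of rights holder -> share; shares must sum to 1.
    Cents lost to rounding go to the largest shareholder.
    """
    if abs(sum(splits.values()) - 1.0) > 1e-9:
        raise ValueError("ownership shares must sum to 100%")
    payouts = {holder: int(amount_cents * share) for holder, share in splits.items()}
    payouts[max(splits, key=splits.get)] += amount_cents - sum(payouts.values())
    return payouts

# Hypothetical split between a songwriter, a producer, and a label
print(allocate_royalties(10_000, {"songwriter": 0.5, "producer": 0.2, "label": 0.3}))
# {'songwriter': 5000, 'producer': 2000, 'label': 3000}
```

If the recorded splits are wrong or incomplete, the function has no way to know — which is exactly the problem the ownership-metadata services above exist to solve.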

4. Performance – Cultural Metadata

Performance or cultural metadata is produced by the environment or culture, usually through users interacting with a song. These interactions are registered in the system and analyzed for patterns, categories, and associations. Such metadata includes likes, ratings, social media plays, streaming performance, chart positions, and so on. 

The performance category can be divided into two parts: 

  • Consumption or Sales data deals with the consumption and use of the item/track and usually needs to be acquired from partners. For example, Spotify shares data with distributors, distributors pass it down to labels, and so forth. 
  • Social or Audience data indicates how well a track or artist performs within a particular platform and who the audience is. It can be accessed through either first-party or third-party tools. 

First-party tools are powerful but disconnected: they require you to harmonize data from different platforms to get the full picture, and they are limited in scope, covering only proprietary data. Third-party tools are broader in coverage: they provide access to data across the market, including performance data for artists you are not directly connected to. In this case, the data is already harmonized, but the level of detail is lower. 
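The harmonization step for first-party data can be sketched as normalizing per-platform export rows into one common shape before aggregating. The field names and numbers below are invented and do not reflect any platform's actual export format:

```python
# Hypothetical per-platform exports with differing field names
spotify_rows = [{"track": "Song A", "streams": 1200}]
apple_rows = [{"song_title": "Song A", "plays": 800}]

def harmonize(rows, title_field, count_field, platform):
    """Map platform-specific records into a common shape."""
    return [
        {"title": row[title_field], "plays": row[count_field], "platform": platform}
        for row in rows
    ]

combined = (harmonize(spotify_rows, "track", "streams", "spotify")
            + harmonize(apple_rows, "song_title", "plays", "apple"))

# Aggregate plays per title across platforms
totals = {}
for record in combined:
    totals[record["title"]] = totals.get(record["title"], 0) + record["plays"]

print(totals)  # {'Song A': 2000}
```

In practice, matching the same track across platforms is the hard part — which is why third-party tools that deliver pre-harmonized data are attractive despite their lower level of detail.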

Another way to acquire performance data is through tracking solutions (movie syncs, radio, etc.) that produce somewhat original data. These can either be integrated into third-party solutions (radio tracking on Soundcharts or Chartmetric, for example) or operate as standalone tools (Radiomonitor, WARM, BDS). It is still consumption data, but it is accessed by bypassing the entire data chain.

5. Proprietary Metadata

Proprietary data is data that remains in the hands of a company and rarely gets disclosed. Some data used by recommender systems is proprietary: for example, a song’s similarity score is proprietary data that can be based on performance data. This type of data also includes insights from ad campaigns, sales, merch, ticketing, and so on. 

Some proprietary data belongs to more than one company: with sales, merch, and ticketing, a number of parties are usually involved.
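As a toy illustration of how a proprietary similarity score might be computed, the sketch below compares two tracks as feature vectors using cosine similarity. The features and values are invented, and production recommender systems are far more sophisticated:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented feature vectors: [energy, danceability, valence]
track_a = [0.80, 0.70, 0.60]
track_b = [0.82, 0.68, 0.55]
score = cosine_similarity(track_a, track_b)
print(round(score, 3))  # 0.999
```

The value of a real similarity score lies less in the formula than in the underlying features and listening data — which is precisely why it stays proprietary.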

Outlook

Today, processes in the music industry are rather one-dimensional when it comes to data. For instance, marketing budgets are often planned based merely on the past performance of an artist’s recordings – and so are the multiples paid on acquisitions of their rights. 

Let’s look at the financial sector: In order to estimate the value of a company, one has to look at company-internal factors such as inventory, assets, or human resources as well as outside factors such as past performances, political situation, or market trends. Here we look at proprietary data (company assets), semi-proprietary data (performance), and public data (market trends). The art of connecting those to make accurate predictions will be the topic of future research on the Cyanite blog.

Cyanite library provides a range of descriptive metadata

Conclusion

Musical metadata is needed to manage large music libraries. We have reviewed all the main metadata types in the industry, though these types can intersect and produce new kinds of metadata. This metadata can then be used to derive information, build knowledge, and deliver business insights, which constitute the layers of the Data Pyramid – a framework we presented earlier that helps make data-based decisions. 

In 2021, every company should see itself as a data company. Future success is inherently dependent on how well you can connect your various data sources.

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via sales@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.