Context is what separates music from AI slop

Add structured context to your discovery workflows with Cyanite’s Advanced Search.

Every week, thousands of new tracks enter music libraries. There’s no real limit to how many can be uploaded. As catalogs expand, it becomes harder to tell why one piece deserves attention over another.

At the same time, generative AI tools make it possible to produce a lot of music quickly and cheaply. This means catalogs can get flooded with “AI slop,” a term used to describe mass-produced generative content created for volume rather than quality.

Context is key to making the distinction between music and AI slop. Knowing who created a track, what shaped it, and why it exists roots it in human experience and creative intent.  Without that layer of insight, music becomes interchangeable audio, reduced to tags and search terms.

What makes music human?

The intention behind music and the social connection it creates are what make it human.

That humanity is visible in the decisions that shape a track. No matter how minimal or elaborate a composition is, every musical choice reflects human knowledge and experience. The key, rhythm, production, and instruments used are all guided by cultural exposure, emotional memory, and learned musical language.

And then there’s the risk. When someone releases music, they also release control over how it will be heard and judged. That exposure is vulnerable, and recognizing the risk and context behind a piece makes the connection to it stronger.

The AI limitation

When intention and personal stakes are missing, the difference is noticeable.

AI-generated music can sound like human-made tracks. It can replicate style, structure, and production detail with striking accuracy. In many contexts, it even meets professional standards.

However, it doesn’t come from lived experience and instead reconstructs patterns it has learned from existing music. There’s no vulnerability behind the track. There’s no social stake. And there’s no personal history shaping the decision to produce it. The output is coherent because the sequence fits statistically, not because something needed to be expressed.

Why context matters more than ever

Given the sheer volume of modern catalogs, many tracks can sound interchangeable. You can work on a brief and find 10 pieces that technically meet the requirements.

What actually helps you choose is knowing where the music comes from and who made it. That extra layer of information changes how you hear it. In a space this crowded, context is what keeps everything from blending into the same background noise.

The potential of contextual metadata

If context gives music meaning, it needs to be structured as metadata so it can be searched and filtered at scale.

Custom tagging makes that possible. Catalogs can include fields for artist origin, geography, creative background, cultural context, and editorial positioning. When that information can be filtered, it starts shaping decisions. Context moves from description to action.

The same principle applies to one of the clearest distinctions in modern catalogs: whether a track is human-created or AI-generated. When that difference is structured as metadata, it becomes searchable inside existing discovery systems.

Melodie Music puts this into practice to spotlight original Australian artists. They combine Cyanite’s sound-based AI search with their own editorial and contextual metadata.

  • Cyanite analyzes the sound of a reference track and generates a shortlist based on emotional profile and sonic character.
  • Melodie layers contextual filters, such as artist origin, on top of those results.
  • Users refine further using editorial tags aligned with cultural or strategic priorities.
  • The final selection satisfies both the creative brief and the mandate to support specific artist communities.

What this means for music discovery

Algorithmic recommendations alone aren’t enough. Teams want clarity about origin, authorship, and AI involvement before committing to a track. 

In our joint study with MediaTracks and Marmoset, we found that contextual metadata plays a central role in how professionals work through briefs. Respondents described relying on origin details and creator background to avoid misalignment and explain their choices to clients. 

Clearly labeling AI involvement was part of that same expectation. Professionals are open to working with AI-generated music, but they want to know explicitly whether a track is AI-generated or human-made. Context, including transparency around authorship, informs decisions.

Read more: Why AI labels and metadata now matter in licensing

Cyanite’s Advanced Search, available via API integration, allows teams to upload their own custom metadata fields and use them for filtering. Fields can include artist origin, cultural background, clearance information, and editorial categories.

Search queries then run within that defined subset, so sound analysis operates inside contextual boundaries set by the catalog owner, as implemented by Melodie Music.
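As a rough illustration of that flow (a minimal sketch in plain Python, not the actual Cyanite API), the idea is that custom metadata narrows the catalog first, and sound-based ranking then runs only inside that subset. The field names artist_origin and ai_generated are illustrative assumptions, not fixed Cyanite fields.

from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    similarity: float                             # sound-based score against a reference track (0..1)
    metadata: dict = field(default_factory=dict)  # custom contextual fields supplied by the catalog owner

def contextual_search(catalog, filters, top_n=10):
    """Filter by contextual metadata first, then rank the remaining tracks by sound similarity."""
    subset = [t for t in catalog
              if all(t.metadata.get(key) == value for key, value in filters.items())]
    return sorted(subset, key=lambda t: t.similarity, reverse=True)[:top_n]

catalog = [
    Track("Track A", 0.91, {"artist_origin": "AU", "ai_generated": False}),
    Track("Track B", 0.95, {"artist_origin": "US", "ai_generated": False}),
    Track("Track C", 0.88, {"artist_origin": "AU", "ai_generated": True}),
]

# Only human-made tracks by Australian artists are ranked, mirroring the Melodie Music workflow.
results = contextual_search(catalog, {"artist_origin": "AU", "ai_generated": False})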

For platforms embedding Cyanite’s search algorithms into their own systems, this enables structured transparency at scale. Context becomes part of the discovery logic itself.

Choosing meaning over noise

“We have always connected to music because it carries intention, experience, and emotion – not just sound. A song means something because it was created in a specific moment, for a reason, by someone responding to their world. Today, we are surrounded by more music than ever, inevitably making it harder to feel that connection. Delivering context to a song gives a glimpse into what went into it, and with it a chance to understand the people and feelings behind the music.

Even though AI-generated music can sound pleasant, it is fundamentally an imitation – a reconstruction of patterns it has seen before. It lacks intention, situation, risk, and personal stake.

That’s why context matters more than ever. Knowing why a piece of music exists, where it comes from, and what went into it is what turns sound into something meaningful.”

Simon Timm

Music Producer and Marketing Expert, Cyanite

Context can double as infrastructure in catalogs. As AI-generated music becomes easier to produce and distribute, what will separate human-made tracks from AI slop is whether a track’s origin is visible and understood. Catalogs that structure and surface contextual metadata can ensure music is selected based on where it comes from and why it exists, not just how it sounds.

Ready to add context to your discovery workflows?

FAQs

Q: How can contextual metadata help distinguish human-created music from AI-generated tracks?

A: Contextual metadata adds information beyond sound analysis, such as artist background, origin, editorial positioning, and authorship labeling. It can allow teams to filter catalogs based on transparency and creative intent, helping distinguish human-created music from generative content produced at scale.

 

Q: Does Cyanite detect whether music is AI-generated or human-made?

A: Cyanite is developing AI music detection capabilities designed to support transparent catalog workflows. Early implementations allow teams to label and filter tracks based on AI involvement, helping licensing professionals and curators make informed decisions during discovery.

Q: Can Cyanite’s Advanced Search filter music using custom metadata fields?

A: Yes. Advanced Search allows catalog owners to include their own metadata fields as filters within search queries. These filters narrow the searchable catalog before sound similarity or text-based matching is applied, helping teams surface results that fit their creative and business requirements.

Q: How can music platforms integrate contextual discovery into existing workflows?

A: Music catalogs can integrate Cyanite’s Advanced Search through the API, making it possible to combine sound analysis with custom metadata filters inside their existing workflows.

Making Sense of Music Data – Data Visualizations

Generate consistent music metadata from audio. Sign up for Cyanite.

Music data exists at every level of the industry: in catalogs, streaming platforms, research databases, and brand strategy decks. But raw data on its own doesn’t communicate much. To extract insight, support decisions, or align teams, that data needs to become visible.

Visualization makes music data readable at scale. It transforms analysis results into formats people can interpret, compare, and act on. When done well, it gives fragmented or overwhelming data clarity.

This article explores how music companies use visualization in practice, which approaches work for different goals, and what makes visualization reliable.

Learn more: This article focuses on the visualization layer of Liv Buli’s Data Pyramid model. For context on how raw music data becomes structured and analyzable in the first place, see An overview of data in the music industry.

How can we make sense of music data?

When you’re managing thousands of tracks, visualization answers questions metadata alone can’t resolve. Which moods dominate your catalog? Where are the gaps? How does a single track evolve over its duration?

Charts and graphs make these patterns visible. A comparison chart might show that 60% of your catalog is tagged as “energetic” while only 15% is tagged as “calm”. A trend chart may reveal how a track shifts from ambient to electronic as it progresses. This is information you can use to review metadata quality, understand catalog composition, and pitch music with confidence.

However, if tags are inconsistent or incomplete, the patterns you see won’t reflect what’s actually in your catalog. Structured music data should be consistent, and it starts with reliable tagging at scale.

Music data visualization techniques Cyanite uses

Once music data is structured, the next step is choosing the right format to make that data readable. Different chart types serve different purposes in catalog work, whether you’re evaluating a single track, comparing options, or understanding patterns across thousands of files.

Cyanite provides these visualizations in the “Detail” view for each track, covering genre, mood, energy level, instrument presence, and voice presence. Each format is designed to surface specific insights quickly.

Horizontal bar charts display individual attribute scores in a simple, scannable format. They show mood scores like “Energetic,” “Sexy,” and “Happy,” making it easy to compare strengths at a glance.

  • Use case: Quickly assess which attributes dominate a track before adding it to a playlist or pitching it for a specific brief.

Radar charts by Cyanite visualize a track’s mood profiles in a circular format. Each axis represents a different mood attribute, and the resulting shape reveals the track’s overall emotional signature. This makes it easy to see which emotions dominate and how they balance against each other. When comparing multiple tracks, radar charts can overlay several mood profiles at once, revealing which tracks share similar emotional characteristics and which diverge.

  • Use case: Evaluate a single track’s emotional profile before pitching, or compare multiple tracks side by side to find the best match for a specific brief. Useful when a music supervisor asks for “uplifting but not aggressive,” or when you’re building emotionally cohesive playlists.
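To show what this can look like when teams build their own charts from analysis results, here is a small matplotlib sketch that draws a radar (polar) chart from a set of mood scores. The mood labels and values are invented for the example.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical mood scores for one track (0..1), e.g. taken from an analysis result
moods = {"Energetic": 0.8, "Happy": 0.6, "Sexy": 0.3, "Calm": 0.2, "Dark": 0.1, "Epic": 0.5}

labels = list(moods)
values = list(moods.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]   # close the polygon by repeating the first point
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
ax.set_title("Mood profile (radar chart)")
plt.show()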

Trend charts reveal how attributes change over time within a track. They show how genre shifts throughout a track’s duration, segment by segment. For example, a trend chart might show a track that starts as electronic dance, briefly touches pop and rock elements, then returns to its electronic dance foundation.

  • Use case: Find tracks that transition between moods or energy levels. Useful for scene changes, dynamic playlist sequencing, or identifying tracks with intro/outro sections that differ from the main body.

Representative segments identify the 30-second portion of a track that best captures its overall character. Cyanite highlights this segment in the waveform below each visualization, making it easy to preview the essence of a track without listening to the full duration.

  • Use case: Create teaser clips for social media, quickly evaluate tracks during pitching, or provide samples for music supervisors who need fast decision-making tools.

Most relevant segments in Cyanite visualize which moments within a track best match a search query. The system analyzes tracks in short segments and highlights the sections that correspond most closely to the intent of a Similarity Search or Free Text Search.

  • Use case: Jump directly to the section of a track that fits a brief. This is useful when searching for specific moments like intros, breakdowns, or choruses, and when reviewing many search results in a limited time.

Together, these visualization formats make track composition visible at different scales. With a foundation of consistent tagging, they turn raw data into actionable insight that supports confident decisions about what to pitch, license, or prioritize.

How music companies use data visualization today

Music companies use data visualization to understand their catalogs. Instead of working through raw metadata, teams can use visuals to understand what the catalog includes, how tracks are distributed across genres and moods, and where there are limitations or opportunities.

In practice, it’s helpful in several scenarios:

  • Catalog analysis: By reviewing visualizations across multiple tracks, teams can identify patterns, such as which moods dominate their catalog and where gaps exist.
  • Brief and pitch preparation: Visualization helps coordinate decisions across people by providing a shared frame of reference when commercial pressure is involved.
  • Programming and curation: Visual cues help teams avoid sonic repetition and maintain contrast between neighboring tracks when building playlists or radio schedules.
  • Catalog development: Teams can check how new releases sit alongside existing music before they are added or promoted.

Visualization needs to be embedded inside operational tools, where catalog work already takes place, because that’s where decisions are made. 

Platforms like Reprtoir and MusicMaster have integrated Cyanite’s visualizations into their products for this reason. By offering sound-based visuals directly within existing workflows, they reflect how central visual analysis has become to modern catalog management.

Why visual appeal matters in music data

In day-to-day work, visualizations help people make decisions. But good visuals need to do more than organize information. They should catch the eye and invite attention to make people want to spend time with the data.

This becomes especially important when data is meant to be shared. As streaming metrics became part of how musical success is discussed, labels began looking for ways to turn abstract performance data into something tangible and visible beyond internal tools.

Visual Cinnamon’s work is one example. They created data art posters for Sony Music based on streaming releases such as “Adore You” by Harry Styles. These posters translate audio structure into circular spectrograms and combine it with streaming context, turning listening data into visual objects people want to look at and share.

Poster design by Visual Cinnamon for the song “Adore You” by Harry Styles

London-based Italian information designer Tiziana Alocci shows us a more expansive take on visual engagement. She uses visualizations for many different use cases: album covers, corporate visualizations, and editorial infographics.

Data-driven album cover by Tiziana Alocci, 2019.

EGOT Club, by Tiziana Alocci for La Lettura, Corriere della Sera, 2019.

As an information designer, my job is to visualize data and represent information visually. My most traditional data visualization works involve the design of insight dashboards, thought-provoking data visualizations, and immersive data experiences. For me, the entire process of researching, sorting, organising, connecting, feeling, shaping, and acting is the highest form of human representation through data.

See Tiziana’s works on Instagram. 

Tiziana Alocci

Information Designer and Art Director, Tiziana Alocci Ltd, Associate Lecturer at University of the Arts London

In business settings, visualizations are still expected to be clear and easy to interpret. But these examples show why visual appeal also matters. When data is readable and visually compelling, people engage with it for longer and trust it more.

Learn more: Benchmarking in the music industry—knowledge layer of the Data Pyramid

How sound branding teams use music data visualization

The balance between design and data science becomes especially important in sound branding, where visualizing music helps teams align on abstract qualities before creative work begins.

We asked companies specializing in sound branding and data analysis how they use data visualizations in their work.

amp Sound Branding works with data visualization experts. Depending on where the company plans to use the data, they visualize it in different ways.

We try to use whatever technique fits the data and the story we are telling best. Often we use polar area charts and spider-graphs as we find them a good fit for the Cyanite data.

Bjorn Thorleifsson

Head of Strategy & Research, amp Sound Branding

In their research on automotive industry sound, for example, amp used a combination of polar area charts and line charts to visualize and compare brand moods.

Hyundai Genre by AMP

Overall Moods by AMP

At TAMBR sonic branding, a large portion of their work is creating a shared understanding of the musical parameters that surround a brand. 

They say music is a universal language, but more often than not, talking about music is like dancing about architecture. As such, we only start composing once we have agreed on a solid sonic moodboard. For this to happen, we always start with a Cyanite-powered music search based on the brand associations of our client. For each track we present, we also visualize how it scores on the required associations.

Niels de Jong

Sonic Strategist, TAMBR Sonic Branding

TAMBR’s visualizations remove some of the subjectivity from choosing the right music for a brand. However, these visualizations are guidelines rather than strict rules. TAMBR believes that magic happens where data and creativity meet.

Data Visualizations by TAMBR

These examples show how visualization supports real creative and commercial decisions. But what tools make this kind of work possible?

Music data visualization tools

Before music data can be analyzed and visualized, teams need to decide which data is relevant and ensure it’s reliable. Once a dataset has been analyzed, visualization becomes a vehicle for that information to be used in practice. Different tools support this at different points in a music workflow, depending on who is using them and for what purpose.

1. Music analysis and discovery tools

Music analysis and discovery tools consistently categorize and tag tracks, so teams can easily find what they are looking for. They show core musical characteristics, such as genre, mood, emotional profile, and energy level, and make sound-based relationships between tracks readable at scale.

Cyanite falls into this category. It analyzes the audio itself, then applies Auto-Tagging to generate consistent metadata and Auto-Descriptions to provide quick, neutral summaries of how tracks sound. For music search, Cyanite’s Similarity Search (sound-based search), Free Text Search (prompt-based search), and Advanced Search (an add-on to both searches, allowing for custom contextual metadata) help teams locate relevant music efficiently across large catalogs.

Alongside tagging and search, Cyanite visualizes analyzed metadata directly within the web app on each track through graphs. These visualizations show a song’s key characteristics, with a strong focus on mood, so it’s easier to compare tracks without relying on text labels alone. When using Cyanite via the API, the integrating team builds its own visualizations from the analyzed data.

2. Music visualization tools for researchers

These tools are used in research contexts to study music at a broader level. The focus is on analysis and documentation.

Researchers also use these visualization tools to prove a thesis or provide an overview of a musical field. For example, Ishkur’s Guide to Electronic Music was originally created as a genealogy of electronic music over 80 years. It consists of 153 subgenres and 818 sound files.

Through Cyanite for Innovators, we support research and creative projects built on our sound-based music analysis.

3. Music marketing tools

Music marketing tools use data visualization to track how music performs once released. Unlike analysis and discovery tools, they don’t visualize sound itself. Instead, they focus on audience behavior, platform performance, and market response, helping teams understand reach, traction, and growth over time.

Many platforms have native analytics tools, such as Spotify for Artists, Apple Music for Artists, Bandcamp for Artists, and YouTube Studio. These provide first-party data limited to a single platform, including listener counts, streams, saves, playlist additions, and audience location. They can help users understand performance in detail, but only reflect activity within that specific ecosystem.

Tools like Pandora AMP and Soundcharts are often used to provide performance insights beyond a single streaming platform, especially for tracking discovery and audience response at a market level.

In marketing and pitching, Cyanite describes and positions music based on how it sounds. This helps teams explain fit and intent when presenting tracks to clients or partners.

Read more: For a concrete example of how sound-based analysis supports music marketing and pitching workflows, see how Chromatic Talents uses Cyanite in practice.

The promise and limits of data visualization

Music data visualization helps teams make sense of large catalogs by turning structured sound data into something that’s readable and comparable. At its best, it supports clearer decisions and shared understanding around music. But it also has limitations.

A graph is not meant to replace judgment. When the underlying metadata is inconsistent, incomplete, or treated as an absolute truth rather than a point of reference, visuals can be misleading. This is why visualization only works when paired with domain knowledge and active listening.

The quality of a visualization always reflects the quality of the data beneath it. Clean, consistent tagging is what makes patterns meaningful and comparisons reliable. Without that foundation, visuals become surface-level representations with little value.

Cyanite is built with the benefits of data visualization in mind, as well as its challenges. By combining sound-based analysis, structured tagging, search, and visualization in one place, it helps teams compare tracks, spot patterns, and make decisions without disrupting their workflow. 

If you want to explore how structured music data supports clearer visualization, try Cyanite for free and see how it works in practice.

FAQs

Q: What is music data analytics?

A: Music data analytics is the process of collecting and organizing information to uncover what’s in a music catalog and how it’s structured. It helps teams understand the whole content of a catalog, not just individual tracks.

 

Q: Why isn’t metadata alone enough for large catalogs?

A: Metadata is essential, but it has limits at scale. Tags can be inconsistent, incomplete, or too broad to capture nuance. As catalogs grow, it becomes harder to spot patterns, gaps, or overlaps using text alone. Visualization and sound-based analysis make those patterns visible, helping teams compare tracks and make decisions with greater clarity.

Q: What kinds of graphs are used to visualize music data?

A: Common music data visualization graphs include comparison charts, trend charts, similarity clusters, and catalog distribution views.

Q: Who uses music data visualizations in practice?

A: Music data visualizations are used by catalog managers, music supervisors, sound branding teams, researchers, and analysts.

Q: How does Cyanite support music data visualization?

A: Cyanite analyzes audio directly and turns it into structured data that can be visualized consistently at catalog level. 

Q: What are the limits of data visualization in music?

A: Visuals help guide attention, but they don’t provide context or replace human judgment. Charts can be misread if the underlying data or its limitations are ignored, which is why music data visualization graphs work best when paired with listening and domain knowledge.

How To Prompt: The Guide to Using Cyanite’s Free Text Search

Ready to search your catalog in natural language? Try Free Text Search.

Do you have trouble translating your vision for music into precise keywords? If so, this guide on how to prompt using Cyanite’s Free Text Search is for you.

Free Text Search is a more natural way to search your music catalog and discover tracks. You can use complete sentences to describe soundscapes, film scenes, daily situations, activities, or environments. Prompts can be written in different languages and can include cultural references, so you’re not forced to reduce your idea to a fixed set of tags.

Before you explore what Free Text Search can do, keep in mind that prompt-based search works best when your input is specific. The clearer you are, the easier it is to find what you’re looking for. 

Read more: What is music prompt search?

Why music catalogs struggle with discovery

Most large catalogs contain inconsistent metadata. Many were built before modern tagging standards, then expanded over time through different workflows. New music arrives faster than metadata teams can standardize it, especially with the volume from UGC and AI-generated releases, while older tracks remain described in ways that don’t always support how music is searched for today.

Traditional search relies on tags and keyword logic. This approach can be effective for many searches, but it has limits when ideas are already highly specific, like with a detailed creative brief or a particular scene description. Translating concrete, nuanced needs into tags often loses critical details and context.

That’s where natural language search makes a difference. Instead of defining a specific vision in terms of available tags, you can describe what you need directly or even paste a brief into the search bar. The system interprets intent, mood, and context in ways that complement tag-based discovery.

This helps sync and licensing teams work faster with detailed requests, and gives catalog teams another tool to surface relevant music, especially from underused parts of the catalog.

Read more: How to use AI music search for your music catalog

How Free Text Search amplifies music discovery

Free Text Search lets you look for music in the way you would naturally describe it. Write detailed prompts in full sentences, and Cyanite’s AI interprets the meaning behind your words to match intent with how tracks actually sound in your catalog.

This type of search is designed for situations where intent doesn’t translate cleanly into keywords. Tag-based searches work well when attributes are fixed and clearly defined, and Similarity Search is useful when you already have a reference track and want to find music that sounds close to it. Teams often get good results when they search in their own words first, then move into other search modes to refine the selection.

How to use Free Text Search effectively

In real-life workflows, searches rarely begin from the same place. Sometimes you’ll start with sound, sometimes with a scene, and sometimes with context. 

Not every idea can be reduced to tags or tied to a specific track. Choosing music is a creative process, so the way people search is often creative too. Free Text Search meets users where they are, allowing them to describe intent in natural language and shape discovery around how they think. 

1. Describing sound

With Free Text Search, you can add context and even cultural references to your search, making it possible to find the perfect soundtrack for your project and get the most out of your music catalog. 

This approach is commonly used when responding to sync briefs that describe musical detail and tone.

Sound-focused prompts should name what musical elements are present, then add how those elements are played or arranged. An extra cue about character or attitude can be included when it helps clarify intent.

[Instruments or sound sources] + [how they are played or arranged] + [optional: character or stylistic cue]

  • “Trailer with sparse repetitive piano and dramatic drum hits with Star-Wars-style orchestra themes”
  • “Laid-back future bass with defiant female vocal”
  • “Staccato strings with a piano playing only single notes”
  • “Solo double bass played dramatically with a bow”

These prompts work because they are specific, but not rigid. That level of detail helps surface relevant tracks faster and reduces reliance on perfectly maintained tags, which is especially valuable in large or uneven catalogs.

Common mistakes to avoid

  • Staying too abstract: Words like “cinematic” or “emotional” on their own don’t give enough information to form a clear sound.
  • Listing elements without context: Naming instruments or genres without describing how they are played or arranged often leads to broad results.
  • Overloading the prompt: Packing too many ideas into one sentence can blur intent and pull results in different directions.
  • Writing like a tag list: Free Text Search works best when the prompt reads like a description, not a stack of keywords.

Read more: AI search tool for music publishing: best 3 ways

2. Describing film scenes

Film scenes can evoke a wide range of emotions and visuals. When using Free Text Search for this purpose, consider whether your prompt captures objective elements of the scene or your own interpretation of it.

Publishers often use scene-based prompts to explore deeper parts of their catalog and surface music suited to narrative use cases beyond obvious genre labels.

You can reference popular movies or shows like Pirates of the Caribbean or Stranger Things in your search prompts.

It helps to think like a director. Focus on the action or moment in the scene and what the viewer is experiencing. The clearer the image you describe, the easier it is for the search to interpret what kind of music belongs there, without needing a list of musical traits.

[Action or moment] + [optional: setting or situation] + [optional: stylistic cue]

  • “Riding a bike through Paris”
  • “Thriller score with Stranger-Things-style synths”
  • “Tailing the suspect through a Middle Eastern bazaar”
  • “The football team is getting ready for the game”

An example result for the prompt: “Riding a bike through Paris”

These prompts work because they describe a cinematic moment rather than a list of musical characteristics. A scene like “riding a bike through Paris” suggests a certain musical style and progression, which helps frame how the music should unfold. That context gives Free Text Search a clearer sense of what the track needs to communicate.

To fine-tune your search, add different keywords, like “orchestral,” “industrial rock,” or “hip-hop,” to steer it in the direction you want.

Common mistakes to avoid

  • Writing scenes that only make sense to you personally: Prompts should be interpretable without extra explanation.
  • Dropping the visual context: Turning a scene into a genre description removes what makes this approach effective.
  • Using obscure references: If the reference is not widely known, it may not clarify the scene.

3. Describing activities, situations, and moods

Free Text Search empowers you to be as specific as your project demands. You can describe when and where music will be heard, and what it should communicate. Combining activity, situation, and mood helps direct discovery toward abstract or niche ideas that don’t translate cleanly into tags, making it easier to surface music that fits its intended use.

When writing the prompts, focus on how the music will be used and what it needs to communicate in that situation. Providing clear usage context helps the search narrow results without requiring detailed musical instruction.

[Style or sound] + [intended use or context] + [optional: tone or functional role]

  • “Latin trap for fitness streaming catalog”
  • “Mellow California rock for sports highlight content”
  • “Colorful pop music for lifestyle brand campaign”
  • “Subtle ambient textures for background use”

Example result for the prompt: “Mellow California rock for a road trip”

Common mistakes to avoid

  • Leaving out the use case: Mood alone often leads to broad results without direction.
  • Mixing conflicting contexts: Background use and high-impact language can work against each other.
  • Lack of clarity: When the prompt doesn’t include enough context, results stay generic.

Free Text Search is available in the Cyanite web app. You can test prompts, explore results, and refine searches in minutes.

Using prompts to improve discovery

With Free Text Search, you can explore your music catalog using detailed descriptions. This lets you search based on how music is described in real projects, making it easier to find tracks that fit a specific brief, scene, or use case.

Whether you’re pitching music for sync, artists, or labels, looking to underscore a film scene, or setting the mood for an activity, Free Text Search empowers you to explore music in a whole new way.

As you craft your prompts, try to be specific and objective, as this will return better results. Use concrete details like instruments, playing styles, and specific scenes or activities. 

You already have the resources in your catalog. Free Text Search helps you access them more effectively.

How Do AI Music Recommendation Systems Work

Upgrade your music discovery. Try Similarity Search in Cyanite.

Music recommendation systems support discovery in large music libraries and applications. As access to digital music has expanded, the volume of available tracks has grown beyond what users can navigate through a simple search or browsing alone.

Music services address this by relying on algorithmic recommendation systems to guide listeners and surface relevant tracks. These systems differ in how they generate recommendations and in the types of data they use, which leads to different results and tradeoffs depending on the use case.

In this article, we’ll go through how music-suggestion systems work and introduce the main approaches behind them, outlining how they are applied in practice.

Why music catalogs struggle

As music catalogs grow, manual search slows down. Results become less reliable and predictable. This is reinforced by inconsistent metadata, often caused by missing tags or legacy catalogs, which makes it difficult to surface the right tracks at the right time.

The result is lost opportunities:

  • Pitching and licensing take longer because relevant tracks are harder to find.
  • Monetization suffers when parts of the catalog remain unseen.
  • In streaming services and music-tech platforms, weak discovery limits engagement and narrows what users actually explore.

The three different music recommendation approaches

A music recommendation system suggests tracks by analyzing information such as audio similarity, metadata, user behavior, and context. Based on this analysis, the system surfaces music that fits a specific intent or situation.

In practice, this supports catalog workflows like finding tracks for sync projects, building playlists, and generating personalized recommendations within large music libraries.

1. Collaborative filtering

The collaborative filtering approach predicts what users might like based on their similarity to other users. To determine similar users, music-suggestion algorithms collect historical user activity such as track ratings, likes, and listening time.

People used to discover music through recommendations from friends with similar tastes, and the collaborative filtering approach recreates that. Only user information is relevant, since collaborative filtering doesn’t take into account any of the information about the music or sound itself. Instead, it analyzes user preferences and behavior and predicts the likelihood of a user liking a song by matching one user to another. 

This approach’s most prominent problem is filter bubbles, which can arise when collaborative filtering algorithms reinforce existing user preferences, potentially narrowing musical exploration. Despite being designed to personalize experiences, these systems may inadvertently create echo chambers by prioritizing content similar to what users have already engaged with.

Another problem with this approach is the cold start. The system doesn’t have enough information at the beginning to provide accurate recommendations. This applies to new users whose listening behavior has not yet been tracked. New songs and artists are also affected, as the system needs to wait before users interact with them.

Collaborative filtering approaches

Collaborative filtering can be implemented by comparing users or items:

  • User-based filtering establishes user similarity. User A is similar to user B, so they might like the same music.
  • Item-based filtering establishes the similarity between items based on how users have interacted with them. Item A can be considered similar to item B because users rated them both 5/10. 

Collaborative filtering also relies on different forms of user feedback:

  • Explicit ratings are direct feedback users give on items, such as likes or shares. However, not all items receive ratings, and users sometimes interact with an item without rating it. In that case, implicit ratings can be used.
  • Implicit ratings are inferred from user activity. When a user doesn’t rate an item but listens to it 20 times, it is assumed that the user likes the song. A minimal sketch of this approach follows below.
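Here is the minimal sketch mentioned above: a toy user-based collaborative filtering example in Python. Play counts act as implicit ratings, user similarity is measured with cosine similarity, and recommendations for one user come from what the most similar user played. All data is invented for illustration.

import numpy as np

# Implicit ratings: play counts per user (rows) and track (columns), toy data
users = ["anna", "ben", "cleo"]
tracks = ["track_1", "track_2", "track_3", "track_4"]
plays = np.array([
    [20, 0, 5, 0],   # anna
    [18, 1, 4, 2],   # ben
    [0, 15, 0, 9],   # cleo
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(user_idx, top_n=2):
    # Find the most similar other user, then suggest their tracks that the target user hasn't played yet
    sims = [(cosine(plays[user_idx], plays[j]), j) for j in range(len(users)) if j != user_idx]
    _, nearest = max(sims)
    unheard = [t for t in range(len(tracks)) if plays[user_idx, t] == 0]
    ranked = sorted(unheard, key=lambda t: plays[nearest, t], reverse=True)
    return [tracks[t] for t in ranked[:top_n]]

print(recommend(users.index("anna")))  # ben is the most similar listener, so his tracks are suggested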

2. Context-aware recommendation approach

Context-aware recommendation focuses on how music is used in a given setting. This involves factors like the listener’s activity and circumstances. These things can influence music choice but are not captured by collaborative filtering or content-based approaches.

Research by the Technical University of Berlin links music listening choices to the listener context. This could be environment-related or user-related.

Environment-related context

In the past, recommender systems were developed that established a link between the user’s geographical location and music. For example, when visiting Venice, you could listen to a Vivaldi concert. When walking the streets of New York, you could blast Billy Joel’s “New York State of Mind.” Emotion-indicating tags and knowledge about musicians were used to recommend music that fit a geographical place.

User-related context

User-related context describes the listener’s current situation, including what they are doing, how they are feeling, where they are, the time of day, and whether they are alone or with others.

These factors can significantly influence music choice. For example, when working out, you might want to listen to more energetic music than your usual listening habits and musical preferences would suggest.
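A simplified sketch of that idea, assuming tracks already carry an energy score from content analysis: the listening context simply re-weights candidates before ranking. Real context-aware systems model far more signals; the weights and data below are illustrative only.

# Toy candidates: (title, base_score, energy), where base_score comes from another recommender
candidates = [
    ("Slow Piano", 0.90, 0.2),
    ("Synth Run", 0.80, 0.9),
    ("Indie Walk", 0.85, 0.5),
]

# How strongly each context rewards energetic tracks (illustrative values)
context_energy_weight = {"workout": 0.5, "studying": -0.4, "commute": 0.1}

def rerank(candidates, context):
    weight = context_energy_weight.get(context, 0.0)
    scored = [(title, base + weight * energy) for title, base, energy in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)

print(rerank(candidates, "workout"))   # the energetic "Synth Run" moves to the top
print(rerank(candidates, "studying"))  # the calm "Slow Piano" stays first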

3. Content-based filtering

Content-based filtering uses metadata attached to the audio, such as descriptions or keywords (tags), as the basis of the recommendation. When a user likes an item, the system determines that they are likely to enjoy other items with similar metadata.

There are two common ways to assign metadata to content items: through a human-based or automated approach.

The human-based approach can take two forms: professional curation by library editors who characterize content with genre, mood, and other classes, or crowdsourced metadata assignment where a community manually tags content. The more people participate in crowdsourcing, the more accurate and less subjectively biased the metadata becomes.

Human-based approaches require significant resources, particularly crowdsourcing. As Alex Paguirian, Product Manager at Marmoset, comments: “When it comes down to calculating the BPM and key of any given song, you would have to put someone behind a piano with a metronome, which is completely unsustainable and a strange use of labor.” This illustrates why automated systems are increasingly used to characterize music at scale.

The automated approach is where algorithmic systems automatically characterize content. This is what we’re doing at Cyanite. We use AI to understand music and assign relevant tags to the songs in our system.

Musical Metadata

Musical metadata is information that is adjacent to the audio file. It can be objectively factual or subjectively descriptive. In the music industry, the latter is also often referred to as creative metadata.

For example, artist, album, and year of publication are factual metadata. Creative metadata describes the actual content of a musical piece; for example, the mood, energy, and genre. Understanding the types of metadata and organizing the library’s taxonomy in a consistent way is very important, as the content-based recommender uses this metadata to select music. If the metadata is flawed, the recommender might pull out the wrong track.

Content-based recommender systems can use factual metadata, descriptive metadata, or a combination of both. They allow for more objective evaluation of music and can increase access to long-tail content, improving search and discovery in large catalogs.
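As a minimal sketch of content-based filtering on descriptive metadata, the example below ranks tracks by Jaccard similarity of their tag sets against a track the user liked. The tags and track names are invented.

# Creative metadata per track (toy data)
tags = {
    "liked_track": {"indie rock", "uplifting", "guitar", "mid tempo"},
    "track_a": {"indie rock", "guitar", "melancholic"},
    "track_b": {"uplifting", "edm", "synth"},
    "track_c": {"indie rock", "uplifting", "guitar", "driving"},
}

def jaccard(a, b):
    # Share of tags two tracks have in common relative to all tags they carry
    return len(a & b) / len(a | b)

reference = tags["liked_track"]
ranked = sorted(
    ((name, jaccard(reference, track_tags)) for name, track_tags in tags.items() if name != "liked_track"),
    key=lambda item: item[1],
    reverse=True,
)
print(ranked)  # track_c shares the most tags with the liked track and is ranked first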

When it comes to automating this process, companies like Cyanite step in.

Music Metadata extraction through MIR 

Music information retrieval (MIR) refers to the techniques used to extract descriptive metadata from music. It’s an interdisciplinary research field combining digital signal processing, machine learning, artificial intelligence, and musicology. In music analysis, its scope ranges from BPM and key detection to higher-level tasks such as automatic genre and mood classification. It also involves research on musical audio similarity and related music search algorithms.

At Cyanite, we apply a combination of MIR techniques and neural network models to analyze full audio tracks and generate structured, sound-based metadata, such as genre, mood, energy, tempo, and instrumentation, at catalog scale.
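For a feel of what low-level MIR extraction looks like in code, here is a toy sketch using the open-source librosa library to estimate tempo and make a crude key guess from chroma features. This is an illustration only and not Cyanite's production pipeline, which relies on neural network models that go well beyond these features.

import numpy as np
import librosa

# Load the audio as mono; "track.mp3" is a placeholder path
y, sr = librosa.load("track.mp3", mono=True)

# Tempo (BPM) estimation via onset strength and beat tracking
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Very crude key guess: the pitch class with the most chroma energy on average
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
pitch_classes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
key_guess = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]

print(f"Estimated tempo: {float(tempo):.1f} BPM, dominant pitch class: {key_guess}")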

How Cyanite powers AI recommendations

Most consumer music platforms rely on behavior-based recommendation systems. Spotify is one example of a platform that uses collaborative filtering, now likely supported by AI. These systems learn from listening behavior and user similarity, which can lead to filter bubbles. For artists who already have a lot of listeners, this can give them a consistent advantage.

Cyanite’s AI recommendations are based purely on sound. Each track is analyzed to capture its audible musical characteristics through MIR. These characteristics are translated into embeddings, which represent how a track sounds in a form that can be compared at scale.

The algorithms we have built through MIR are considered industry standard and are all developed entirely in-house.

The embeddings serve two purposes. 

  1. They are used to generate musical metadata, also called Auto-Tagging. This produces structured, sound-based metadata such as genre, mood, energy, tempo, key, instrumentation, and voice presence. Auto-Tagging analyzes the full audio of each track and applies these labels consistently across the catalog.

  2. The same embeddings enable sound-based comparison. When teams work with search and recommendations, Similarity Search compares a reference track with the rest of the catalog by measuring the similarity of embeddings. Tracks that are most alike in sound are returned as a ranked recommendation list. The same embeddings also power Free Text Search, where teams can describe a desired sound in natural language and find tracks that fit that description. In both cases, artist size and popularity don’t influence the result, which helps democratize the search process.
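Stripped of infrastructure, the comparison itself is straightforward: each track is a vector, and similarity is a distance between vectors. Below is a minimal numpy sketch, with random placeholder embeddings standing in for real analysis output.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings; in practice these come from the audio analysis model
catalog = {f"track_{i}": rng.normal(size=128) for i in range(1000)}
reference = rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the whole catalog by how close each track sounds to the reference
ranking = sorted(catalog, key=lambda name: cosine(reference, catalog[name]), reverse=True)
print(ranking[:10])  # the ten closest-sounding tracks, regardless of artist popularity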

You can try our search algorithms via our Web App, with five free monthly analyses. For more advanced discovery workflows, Advanced Search is available through the API. It builds on Similarity Search and Free Text Search by adding similarity scores, multiple reference tracks, and the option to upload custom tags, which can be used as filters. This allows teams to refine results against their own taxonomy or brief requirements.

The API lets teams run Auto-Tagging and search directly inside their own tools or platforms. They don’t need to work in a separate interface. Auto-Tagging can be used on its own, or teams can combine it with Music Search to find the right tracks for sync, playlists, marketing, and similar day-to-day use cases.

AI recommendation use cases

The following use cases highlight where AI recommendations add practical value in professional music discovery.

  • Finding alternatives that are musically similar to a known reference track: Sometimes, a desired sound is easier to point to than describe. At Melodie Music, Marmoset, and Chromatic Talents, reference tracks are used in these situations as concrete starting points. Teams upload or link a reference track, then use Similarity Search to explore alternatives that share comparable musical characteristics.

  • Turning vague or subjective descriptions into usable search results: At Melodie Music, users often struggled to translate creative intent into fixed keywords, even in a well-curated catalog. Free Text Search allows them to describe a desired sound in their own words, while Similarity Search lets them move from a reference track to close matches that are alike in feel and structure. This reduces the need to guess the “right” tags and shortens the trial-and-error loop between searching and listening.

  • Reducing time spent browsing large music catalogs: Similarity Search and Free Text Search guide users to a smaller, relevant set of tracks. This means teams working with large catalogs spend less time browsing. Instead of scanning hundreds of options, users begin with a reference or written description and listen with clear intent, helping them reach confident decisions faster while retaining creative control.

Finding what your catalog needs

Choosing a music recommendation approach depends on your specific needs and the data you have available. A trend we’re seeing is a hybrid approach that combines features of collaborative filtering, content-based filtering, and context-aware recommendations. However, all fields are under constant development, and innovations make each approach unique. What works for one music library might not be applicable to another.

Common challenges across the field include access to sufficiently large data sets and a clear understanding of how different musical characteristics influence people’s perception and use of music. These challenges become especially visible in large or underutilized catalogs, where discovery can’t rely on user behavior alone.

To try out Cyanite’s technology, register for our free web app to analyze music and try similarity searches without the need for any coding.

FAQs

Q: What is an AI music recommendation system?

A: An AI music recommendation system suggests tracks by analyzing data such as audio characteristics, metadata, user behavior, or listening context. The goal is to surface music that fits a specific intent, use case, or situation within a large catalog. These systems are commonly used in music recommendation apps, professional catalogs, and music-tech platforms.

Q: What are the main types of music recommendation approaches?

A: The three most common approaches are collaborative filtering, content-based filtering, and context-aware recommendation. Many systems combine elements of all three to balance accuracy, scale, and flexibility.


Cyanite uses a content-based, sound-driven approach, generating recommendations by analyzing the audio itself rather than relying on user behavior or listening history. This means our sound-based music recommender system is suited to large and professional catalogs.

 

Q: How do music companies use AI recommendations today?

A: Music companies use AI song recommendations to speed up sync pitching, build playlists, surface underused catalog assets, support personalization in music-tech products, and select music for branding or retail projects. These workflows rely on music recommendation engines to reduce manual search and improve discovery.

Q: How does Cyanite approach music recommendations?

A: Cyanite analyzes the sound of full tracks to generate structured, audio-based metadata and embeddings. These embeddings are used for Auto-Tagging and Similarity Search and Free Text Search in music catalogs, allowing tracks to be compared and recommended based on how they sound rather than on user interaction data.

How to smoothly migrate from Musiio to Cyanite (Search Edition)

With Musiio closing its API service soon, many music platforms are facing a time-sensitive challenge: keeping their search and discovery workflows operational without disruption.

If your product, internal tools, or customer-facing experience rely on similarity search, replacing your search provider is more than a backend adjustment. Search directly impacts user trust, discovery quality, and product performance.

This guide outlines a practical way to migrate similarity workflows from Musiio to Cyanite, and how to use the transition as a product upgrade.

Don’t miss the first part of this series, which focuses on migrating from Musiio’s Auto-Tagging to Cyanite’s. Check it out below.

What changes when switching a music search provider?

Replacing a similarity search provider is not just a technical endpoint swap. Even if two systems both offer similarity search, ranking behavior, reference handling, and filtering capabilities can differ.

A smooth migration therefore focuses on:

  • replacing the API endpoints
  • validating search results internally
  • ensuring the product experience remains consistent

Cyanite Search in one paragraph

Cyanite provides audio-based search via API for music libraries, streaming services, sync platforms, and music-tech companies.

Search workflows can be built using Similarity Search, Free Text Search, and Advanced Search.

Similarity can be performed using:

  • Your own track IDs
  • MP3 uploads
  • Spotify links
  • YouTube links
  • any of the above combined (Advanced Search only)

Step-by-step migration plan

Step 1: Start testing immediately (Spotify-based evaluation or test environment)

Before replacing your production similarity workflows, the first step is to test Cyanite’s search capabilities in isolation.

You can begin immediately by testing similarity search against Cyanite’s Spotify-based showcase database. This allows your team to:

  • evaluate similarity quality
  • compare ranking behavior
  • test reference workflows (track IDs, Spotify links, etc.)

No full catalog setup is required for this initial evaluation.

If you want to test similarity search against your own full catalog, we can set up a dedicated test environment together. 

To get started, create an API integration here:
https://api-docs.cyanite.ai/docs/create-integration/

Similarity Search documentation:
https://api-docs.cyanite.ai/docs/similarity-search

You can then:

  • run similarity searches using track IDs
  • test Spotify and YouTube links
  • explore multi-track similarity
  • combine similarity with filters via Advanced Search (per request)

If you would like to test Advanced Search (multi-track similarity, similarity scores, and metadata filtering), simply contact us at business@cyanite.ai and we’ll enable it for your evaluation.

Step 2: Identify your similarity search inputs

Most Musiio customers use similarity search in one of these ways:

  • Searching similar tracks using a track ID from their own catalog
  • Searching similar tracks using an external MP3 upload
  • Searching similar tracks using a YouTube link

Cyanite supports all of these workflows and additionally supports Spotify links.

Your first step is to map your existing Musiio workflow to one of these Cyanite input types:

  • Track ID (fastest and most stable)
  • Audio upload (MP3)
  • External links (Spotify or YouTube)

Step 3: Replace similarity workflows (real-time vs external references)

Track ID-based similarity (instant results – recommended for real-time use cases)

Using your own track IDs is the most stable and fastest approach.

This is ideal for:

  • “Show similar” features
  • user-facing discovery modules
  • recommendation systems
  • internal sync tools

With track IDs, similarity search operates in real time. Cyanite supports up to 10 search requests per second, making it suitable for production-grade discovery experiences.
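As a rough sketch of what an integration might look like on the client side, the snippet below wraps a similarity request with simple client-side throttling to stay under the 10-requests-per-second ceiling. The endpoint URL and request body shape are placeholders and assumptions, not Cyanite's actual schema; the real request format is defined in the Similarity Search documentation linked above.

import time
import requests

API_URL = "<similarity-search-endpoint-from-the-api-docs>"  # placeholder, see the official documentation
API_TOKEN = "YOUR_ACCESS_TOKEN"
MIN_INTERVAL = 1 / 10   # stay at or below 10 requests per second

_last_call = 0.0

def similar_tracks(track_id, limit=20):
    """Request tracks similar to one of your own track IDs (illustrative payload, not the real schema)."""
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)   # simple client-side rate limiting
    _last_call = time.monotonic()

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"trackId": track_id, "limit": limit},   # hypothetical body
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

results = similar_tracks("your-track-id-123")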

External reference workflows (results after analysis – MP3, Spotify, YouTube)

External references are useful for:

  • searching your catalog using a client reference track
  • brief matching
  • creative mood board discovery

Cyanite supports similarity search using MP3 uploads, Spotify links, and YouTube links as external references.

Track ID-based searches return results in real time. External references typically require a few seconds up to around a minute for analysis before results are returned.

Before switching production endpoints, we recommend validating ranking quality and relevance with a representative sample of your catalog.

Step 4: Upgrade with Advanced Search (instant results – multi-track similarity + filtering)

Once single-track similarity is stable, many teams extend their setup using Advanced Search, which acts as an add-on to Similarity Search.

Advanced Search extends similarity from a simple reference match to a controllable discovery layer:

  • Multi-track similarity (up to 50 reference tracks)
  • Similarity scores, quantifying how close results are in percentage terms
  • Most Relevant Segments
  • Custom Metadata Filters
  • Up to 500 search results

Multi-track similarity is particularly powerful for:

  • playlist generation
  • “Discover Weekly” style workflows
  • brief-based search where multiple references define a sound

Importantly, Advanced Search also allows you to combine similarity with your own metadata.

You can:

  • search for tracks similar to a reference
  • while filtering by internal tags
  • or by metadata such as release date, territory, clearance status, new releases, or priority tracks (anything that you attach as a custom tag to your tracks)

This enables highly controlled discovery workflows that go beyond simple similarity replacement.
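To make that combination concrete, here is a hypothetical request body showing how several reference tracks and custom metadata filters could travel in a single Advanced Search query. All field names are assumptions for illustration; the actual parameters are defined in the API documentation.

# Hypothetical Advanced Search payload (illustrative field names, not the actual schema)
advanced_search_request = {
    "referenceTrackIds": ["track-101", "track-205", "track-318"],   # up to 50 reference tracks
    "filters": {
        "territory": "DE",                        # custom tags you attached to your tracks
        "clearanceStatus": "pre-cleared",
        "releaseDate": {"after": "2023-01-01"},
    },
    "limit": 500,                                 # Advanced Search returns up to 500 results
    "includeSimilarityScores": True,              # percentage scores per result
}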

    Step 5: Add Free Text Search (instant results – optional but high-impact upgrade)

    While Musiio did not offer Free Text Search, Cyanite offers this feature, complementing Similarity Search.

    Free Text Search allows users to search using natural language queries such as:

    • “uplifting acoustic pop with female vocals”
    • “dark cinematic tension build”
    • “minimal piano with emotional atmosphere”
    • “lofi beats for studying”

    For music libraries and sync platforms, this can significantly improve:

    • discovery speed
    • usability for non-expert users
    • onboarding experience
    • catalog accessibility

    Many teams migrate similarity first, then introduce Free Text Search as a second-phase upgrade.

    Example migration timeline

    Day 1:
    Create an integration and test similarity with track IDs.

    Day 2–3:
    Replace similarity endpoints in staging and review results.

    Week 1:
    Go live with single-track similarity replacement.

    Week 2+:
    Add Advanced Search and optionally introduce Free Text Search as a product upgrade.

    A note on migration

    Although both Musiio and Cyanite offer similarity search via API, the underlying concepts and implementation details differ.

    This means migration is not just a technical endpoint replacement. It requires a short evaluation phase to ensure alignment with your existing product logic and user experience.

    In practice, most teams complete this evaluation within days, but it should not be skipped.

    Final thought: replace or improve

    Many teams use this moment to:

    • strengthen their discovery experience
    • introduce multi-track similarity
    • enable Free Text Search
    • modernize search workflows without building a large data science team

    If your team is affected by Musiio’s shutdown, we’re happy to support you with migration guidance.

    Get migration support

If you want support migrating from Musiio to Cyanite, you can book a migration call via our Typeform or contact us directly at business@cyanite.ai.

    FAQs

    Q: Which similarity search API can replace Musiio?

    A: Cyanite offers audio-based Similarity Search via API for track IDs, MP3 uploads, Spotify links, and YouTube links. Advanced Search and Free Text Search provide additional capabilities beyond Musiio’s feature set.

    Q: Can I migrate similarity search from Musiio quickly?

    A: Yes. Many teams begin by replacing track-ID-based similarity workflows first, as this allows real-time continuity with minimal product disruption.

    Q: Does Cyanite support multi-track similarity search?

    A: Yes. Multi-track similarity (up to 50 reference tracks) is available via Advanced Search. This is especially useful for playlist generation, brief-based search, and recommendation workflows.

    Q: How can I test Advanced Search?

    A: Advanced Search can be enabled for evaluation upon request. Simply contact business@cyanite.ai and we’ll activate it for your integration, typically within one business day.

    Q: Can I filter similarity results using my own metadata?

    A: Yes. Advanced Search allows you to combine similarity with filters based on your internal metadata, such as release date, territory, clearance status, or anything you attach as custom tags.

    Q: Does Cyanite offer a Prompt-based Search?

    A: Yes. Cyanite supports natural language search, enabling users to search for music using descriptive queries. Musiio does not offer Free Text Search.

    Q: What are Cyanite’s rate limits for similarity search?

    A: Cyanite supports up to 10 search requests per second by default, enabling real-time similarity workflows for user-facing discovery features.

    Q: How is Cyanite priced for teams migrating from Musiio?

    A: API access typically includes a base fee. Search usage and advanced features are volume-based. For larger volumes and enterprise use cases, bulk discounts are available.

    Q: Is retagging my full catalog required?

    A: No. You can migrate incrementally by tagging only new uploads. However, if tagging is central to your search and discovery experience, retagging the full catalog provides a cleaner and more consistent metadata foundation.

    Q: Will migrating affect my search and recommendation systems?

    A: Tagging changes can affect any downstream system that relies on metadata, including search filters, playlists, and recommendation logic. That’s why we recommend testing with a representative batch and reviewing dependencies before switching fully.

    Q: How do I get support for migration?

    A: You can book a migration call via our Typeform or contact us directly at business@cyanite.ai. Our team can support integration guidance, taxonomy alignment, and back catalog processing.

How to smoothly migrate from Musiio to Cyanite (Tagging Edition)

    How to smoothly migrate from Musiio to Cyanite (Tagging Edition)

    With Musiio announcing the shutdown of its API service by the end of February, many music platforms and libraries are currently facing a time-sensitive challenge: ensuring continuity in their tagging workflows without breaking downstream systems.

    If your team relies on automated tagging to power discovery, search filters, recommendations, or internal music workflows, switching providers is not just a technical change. It’s also a conceptual one.

    This guide outlines a practical, low-risk way to migrate from Musiio to Cyanite’s tagging infrastructure. The goal is simple: keep your systems running, avoid surprises, and improve your metadata foundation over time.

    Why switching tagging providers is not a simple “API swap”

    When a tagging provider changes, most teams underestimate how many things depend on the output. Tagging sits at the base layer of many product experiences, including:

    • search and filtering
    • playlisting and discovery
    • internal recommendation systems
    • catalog curation workflows
    • editorial tooling
    • analytics and reporting

    Even if two providers both offer “mood”, “genre”, or “energy”, they often differ in:

    • taxonomy structure and granularity
    • multi-label behavior (how many tags are returned)
    • naming conventions
    • tag distributions across your catalog

    A smooth migration means planning for both:

    1. the technical integration
    2. the conceptual differences in metadata

    Cyanite tagging in one paragraph

    Cyanite provides scalable, audio-based music tagging via API, designed for enterprise catalogs and production-grade workflows. Instead of relying on user behavior, tags are generated directly from the sound of each track, creating a consistent and reusable metadata layer that can support search, discovery, recommendations, and catalog intelligence.

    For teams that want to go deeper, Cyanite’s full API documentation is publicly available: https://api-docs.cyanite.ai/

    The two migration paths (choose your strategy first)

    Before touching code, your team should make one key decision:

    Do you want a fast continuity migration, or a clean long-term metadata foundation?

    Option A: Fast continuity (quickest path to stay operational)

    This approach is ideal if you need to migrate quickly and avoid any immediate impact on your product.

    You will:

    • integrate Cyanite tagging for all new uploads going forward
    • keep existing Musiio tags for your back catalog (for now)
    • avoid a large back-catalog processing project
    • gradually transition systems to Cyanite taxonomy over time

    This is typically the fastest way to stay operational. However, it’s important to note that new tracks will be tagged using a different taxonomy, which may require adjustments in downstream systems (e.g. filters, dashboards, or recommendation logic).

    Option B: Clean long-term foundation (recommended for search and discovery)

    This approach is ideal if tagging plays a central role in your product and you want a consistent metadata layer across your full catalog.

    You will:

    • re-tag your full back catalog with Cyanite
    • unify your taxonomy across all tracks
    • avoid mixing metadata systems long-term
    • improve consistency for search, recommendations, and analytics

    This path requires more work upfront but typically results in better long-term product quality.

    Step-by-step migration plan

    Step 1: Set up a quick test integration (free evaluation)

    Before migrating production workflows, we recommend starting with a small, representative test batch. This allows your team to validate both the tagging output and the end-to-end workflow (upload → tagging → results) before switching anything in production.

    A good test batch includes:

    • different genres and regions
    • older and newer tracks
    • high-performing tracks and long-tail tracks
    • tracks with vocals and instrumentals
    • if relevant: Arabic, Turkish, and other regional repertoires

    You can create a Cyanite API integration and run your first tests for free:

    • By default, testing can be done with 5 songs
    • For teams that need a slightly larger evaluation, we can unlock up to 100 free credits

    Cyanite provides a step-by-step guide to creating an integration here:
    https://api-docs.cyanite.ai/docs/create-integration

    To speed up your first tests, our query builder helps you quickly generate and validate API requests:
    https://api-docs.cyanite.ai/docs/library-track-query-builder

    Once your integration is set up, you can:

    • upload tracks via API
    • request tagging results
    • store the output in your system
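A rough end-to-end sketch of those steps is shown below. The endpoint, query, and tag field names are placeholders for this illustration; the query builder linked above generates the real requests for your integration.

    import json
    import requests

    API_URL = "https://api.cyanite.ai/graphql"   # assumed endpoint
    HEADERS = {"Authorization": "Bearer YOUR_INTEGRATION_TOKEN"}

    # Hypothetical query for the tagging output of an already-analyzed library track.
    TAGS_QUERY = """
    query TrackTags($id: ID!) {
      libraryTrack(id: $id) {
        ... on LibraryTrack {
          title
          audioAnalysisV6 {
            ... on AudioAnalysisV6Finished {
              result { genreTags moodTags energyLevel }
            }
          }
        }
      }
    }
    """

    def fetch_and_store_tags(track_id: str, out_path: str) -> None:
        resp = requests.post(
            API_URL,
            json={"query": TAGS_QUERY, "variables": {"id": track_id}},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        # Store the raw output so you can compare it against your current metadata.
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(resp.json(), f, indent=2)

    fetch_and_store_tags("your-track-id", "cyanite_tags.json")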

    Approach 1: Keep your existing tags and migrate incrementally

    If you choose the fast continuity path, you can start tagging all new uploads with Cyanite while keeping your back catalog unchanged.

    This approach works well if:

    • you need to migrate quickly
    • your product relies on existing tags
    • you want to avoid a full catalog reprocessing project initially

    Over time, you can gradually transition downstream systems to Cyanite’s taxonomy.

    Approach 2: Retag for consistency (recommended)

    If your platform relies heavily on search, filtering, or discovery, a clean long-term foundation is usually worth it.

    Retagging your catalog with Cyanite gives you:

    • a consistent metadata layer across the full catalog
    • simpler downstream logic
    • better analytics and reporting
    • improved search and recommendation quality

    Cyanite’s full tagging taxonomy can be reviewed in detail here:

    Step 5: Review and go live

    Once your integration is complete, you can switch your tagging workflow to Cyanite for new uploads and, if applicable, begin your back-catalog migration.

    Many teams choose to review a representative sample of tagged tracks internally before going fully live, especially if tagging feeds directly into search, filtering, or recommendation features.

    The exact validation process depends on your product setup and internal workflows.

    Common migration pitfalls (and how to avoid them)

    Pitfall 1: Treating it like a simple API swap

    Tagging sits at the base layer of many systems. Plan for downstream dependencies early.

    Pitfall 2: Trying to force a perfect 1:1 taxonomy mapping

Most teams waste time trying to recreate their old tag system exactly. We highly recommend adopting a consistent taxonomy and updating downstream logic accordingly.

    Pitfall 3: Mixing two tag systems in the UI for too long

    If you run two taxonomies in parallel, set a clear timeline for consolidation. Otherwise, editorial teams and users can get confused.

    Pitfall 4: Migrating without a clear back catalog strategy

    If you retag your full catalog, consider a phased rollout:

    • start with the most used tracks
    • then cover the long tail

    Example migration timeline (realistic and low-risk)

    A typical migration can look like this:

    Day 1:
    Create an integration and run a test batch (5 to 100 songs).

    Day 2 to 3:
    Integrate Cyanite tagging in parallel and store results separately.

    Week 1:
    Switch tagging for all new uploads.

    Week 2+:
    Optional back catalog retagging via S3 ingestion.

    This approach ensures continuity while giving your team time to validate quality and adjust downstream systems.
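During the parallel phase (Day 2 to 3 above), a small comparison script helps make taxonomy differences visible before anything changes in production. A minimal sketch, assuming you have exported both tag sets per track (the example data is made up):

    # Compare legacy tags with Cyanite tags per track to surface taxonomy differences
    # during the parallel run. The dictionaries are placeholders for your own exports.
    legacy_tags = {
        "track-001": {"electronic", "energetic", "club"},
        "track-002": {"acoustic", "calm"},
    }
    cyanite_tags = {
        "track-001": {"electro", "energetic", "dance"},
        "track-002": {"singer-songwriter", "chilled"},
    }

    def jaccard(a: set, b: set) -> float:
        """Tag overlap between two sets (1.0 = identical, 0.0 = disjoint)."""
        return len(a & b) / len(a | b) if (a or b) else 1.0

    for track_id in sorted(legacy_tags.keys() & cyanite_tags.keys()):
        old, new = legacy_tags[track_id], cyanite_tags[track_id]
        print(f"{track_id}: overlap={jaccard(old, new):.2f} "
              f"only_legacy={sorted(old - new)} only_cyanite={sorted(new - old)}")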

    Final thoughts: a migration can be an upgrade

    A forced migration is never ideal. But it can also be an opportunity to improve your metadata foundation.

    Many teams use this moment to:

    • modernize their tagging workflows
    • improve consistency across catalogs
    • strengthen search and discovery experiences
    • reduce dependency on behavior-driven signals

    If your team is impacted by Musiio’s API shutdown, we’re happy to support you with a smooth transition, taxonomy alignment, and optional back-catalog retagging.

    Looking to migrate search workflows as well? We’re currently preparing a Search Edition of this guide.

    Get migration support

If you want support migrating from Musiio to Cyanite, you can book a migration call via our Typeform or contact us directly at business@cyanite.ai.

    FAQs

    Q: How do I migrate from Musiio’s tagging API to Cyanite?

    A: Migrating from Musiio to Cyanite typically involves three steps:

    1. Create a Cyanite API integration and test with a representative batch

    2. Run Cyanite in parallel with your current system

    3. Decide whether to tag only new uploads or retag your full catalog

    Many teams complete initial integration within days, depending on system complexity.

    Q: Can I test Cyanite before fully replacing Musiio?

    A: Yes. You can test Cyanite’s tagging API with up to 5 songs for free. Up to 100 credits can be unlocked for evaluation, allowing you to validate tagging output, taxonomy structure, and system compatibility before switching production workflows.

    Q: Do I need to retag my entire catalog when switching from Musiio?

    A: No. You can migrate incrementally by tagging only new uploads with Cyanite while keeping existing Musiio tags for legacy tracks. However, if tagging plays a central role in search, filtering, or recommendations, many teams choose to retag their full catalog for long-term consistency.

    Q: How does Cyanite handle large catalog migrations compared to Musiio?

    A: For ongoing uploads, Cyanite processes up to 10 songs per minute via API. For large back catalogs, Cyanite provides an S3 bucket ingestion workflow. Full catalog processing is typically completed within 5 to 10 working days, depending on volume.

    Q: Will replacing Musiio affect my search and recommendation systems?

    A: Tagging changes can impact any system relying on metadata, including search filters and recommendation logic. That’s why we recommend testing with a representative batch and reviewing downstream dependencies before fully switching providers.

    Q: Is Cyanite’s taxonomy identical to Musiio’s taxonomy?

A: No. No two tagging taxonomies are identical. While both providers offer categories like mood, genre, energy, and instrumentation, structure and granularity may differ. Teams can either map existing tags temporarily or use migration as an opportunity to consolidate on a single, consistent taxonomy. Review Cyanite’s taxonomy here.

    Q: Can I run Musiio and Cyanite in parallel during migration?

    A: Yes. Running both systems in parallel for a short validation period is a common and low-risk migration strategy. This allows your team to compare outputs and adjust downstream systems before completing the switch.

    Q: How is Cyanite priced for teams migrating from Musiio?

    A: Cyanite’s pricing model will feel familiar to many Musiio customers. API access is structured with a base fee, while tagging is usage-based. For catalog processing, teams can either pay as they go or purchase credits in advance. Bulk discounts are available for larger volumes and back-catalog migrations.

    Q: How do I get support for migration?

    A: You can book a migration call via our Typeform or contact us directly at business@cyanite.ai. Our team can support integration guidance, taxonomy alignment, and back catalog processing.

    Everything you’ve ever wanted to know about Cyanite (answering your FAQs)

    Everything you’ve ever wanted to know about Cyanite (answering your FAQs)

    Ready to explore your catalog? Sign up for Cyanite.

    As music catalogs grow, finding the right track gets harder. Metadata doesn’t always keep up, but teams are still expected to deliver fast, reliable results.

    Libraries, publishers, sync teams, and the technical leads supporting them need systems that make large catalogs easier to understand and search. Cyanite is designed to support that work.

    This guide provides a clear, high-level introduction to how Cyanite works and how it’s used in practice, giving teams a simple starting point before diving deeper into specific topics.

    Learn more: Explore our FAQs to dig deeper into how Cyanite works.

    The problem of scaling modern music catalogs

    Once a catalog reaches a certain size, searching it becomes an inconsistent process. Music is described through tags and metadata that were added by different people, at different times, often for different needs. As the catalog grows, those descriptions stop lining up, which makes tracks harder to compare and surface reliably.

    Over time, the same song can become discoverable in one context and invisible in another. Familiar tracks tend to show up first, while large parts of the catalog stay beneath the surface simply because their sound isn’t clearly represented in the data.

    Scaling a modern music catalog means creating a shared, consistent way to describe sound, so music can be worked with confidently across teams and workflows, no matter how large the catalog becomes.

    What Cyanite is (and what it is not)

    Cyanite is an intelligent music system that works directly with sound. It analyzes each track and translates what can be heard into structured information that stays consistent across the catalog. That information is used both to tag music automatically and support sound-based search.

    Teams can use Cyanite through the web app, integrate it into their own systems via an API, or access it directly within supported music CMS environments.

    Cyanite is not a replacement for listening or creative judgment. It doesn’t decide what should be used, pitched, or licensed. It provides a consistent, sound-based foundation that helps teams work with music at scale while keeping human decision-making at the center.

    How Cyanite analyzes music

    Cyanite analyzes music through sound, not user behavior. Instead of relying on plays, clicks, or listening history, it focuses on the audio itself and produces a consistent, reliable sound description. This means each piece of music enters the system under the same logic, regardless of when it was added or who uploaded it.

    Read more: How do music recommendation systems work?

    Core capabilities

    At its core, Cyanite helps teams organize and work with large music catalogs through music tagging and search. The same audio-based logic applied to every track creates consistent descriptions and keeps music easy to find, compare, and explore, even as catalogs grow.

    A table showing Cyanite's AI-Tagging Taxonomy

    To make large catalogs easier to work with, Cyanite applies consistent labeling based on each track’s full audio.

    • Auto-Tagging analyzes the audio to generate metadata like genre, mood, and tempo.
    • Auto-Descriptions generate concise, neutral descriptions that highlight how a track sounds and give teams quick context without having to listen first.

    Sound-based search: Similarity, Free Text, and Advanced Search

    To help teams find music, Cyanite offers multiple ways to search a catalog. 

    • Similarity Search finds tracks with a similar sound to a reference song, whether it’s from your catalog, an uploaded file, or a YouTube preview. It’s often a good fit when a brief starts with a musical reference rather than a written description.
    • Free Text Search allows teams to describe music in natural language, including full sentences and prompts in different languages. It then matches that intent to sound in the catalog.
    • Advanced Search, available through the API as an add-on for Similarity and Free Text Search, adds more control as searches become more specific. It enables filters and visibility into why tracks appear in the results, making it easier to refine and compare matches.

    Privacy-first, IP-safe audio analysis

    Cyanite is built for professional music catalogs, with all data processed and stored on servers in the EU in line with GDPR. Audio files are stored securely, can be deleted at any time on request, and are not shared with third parties. All analysis and search algorithms are developed in-house. For additional protection, Cyanite also supports spectrogram-based uploads, allowing audio to be analyzed without being reconstructable into playable sound.

    How teams combine AI and human expertise

    Cyanite is used for organizing, pitching, searching, and curating a catalog. Automation applies a consistent, sound-based foundation across every track, while teams add context, intent, and custom metadata where it matters. 

    Because there are clear limits to what can be inferred from audio alone, most teams adopt a hybrid approach to their work. They use Cyanite to keep catalogs structured and searchable at scale, while human input shapes how the music is ultimately used.

    How Cyanite fits into existing catalog systems

    Cyanite is used at the point where teams need to explore a catalog for a pitch, brief, or curation task. It applies a consistent, sound-based foundation across all tracks, so decisions can be informed by reliable discovery results. With technology supporting the process, teams can confidently listen, compare, and narrow options, applying human judgment to make the selection.

    Where to go deeper

    Now that we’ve covered the basics, you can explore specific parts of Cyanite in more detail in the following articles:

    Getting started with Cyanite

    To evaluate Cyanite, the simplest starting point is a track sample analysis. Many teams begin with a small set of tracks to review tagging results and search behavior before deciding whether to scale further. This makes it easy to validate fit without committing a full catalog upfront.

    For teams building products or integrating search into their own tools, integrating our API is a hands-on way to explore analysis, tagging, and similarity search in a live environment. You can create an API integration for free after registering via the web app.

When preparing for a larger evaluation, a bit of structure helps. Audio should be provided in MP3 and grouped into clear folders or batches that reflect how the catalog is organized. Most teams start with a representative subset and expand in phases once results and timelines are clear. If you are not able to deliver your music as MP3 files, reach out to support@cyanite.ai.

    Can Meta’s audio aesthetic model actually rate the quality of music?

    Can Meta’s audio aesthetic model actually rate the quality of music?

    Last year, Meta released Audiobox Aesthetics (AES), a research model that proposes scoring audio based on how people would rate it. The model outputs four scores: Production Quality (PQ), Production Complexity (PC), Content Enjoyment (CE), and Content Usefulness (CU). 

    The study suggests that audio aesthetics can be broken into these axes, and that a reference-free model can predict these scores directly from audio. If that holds, the scores could start informing decisions and become signals people lean on when judging music at scale.

    I took a closer look to understand how the model frames aesthetic judgment and what this means in practice. I ran Audiobox Aesthetics myself and examined how its scores behave with real music.

    What Meta’s Audiobox Aesthetics paper claims

    Before jumping into my evaluation, let’s take a closer look at what Meta’s Audiobox Aesthetics paper set out to do.

    The paper introduces a research model intended to automate how audio is evaluated when no reference version exists. The authors present this as a way to automate listening judgments. They describe human evaluations as costly and inconsistent, leading them to seek an automated alternative.

    To address this need, the authors propose breaking audio evaluation into four separate axes and predicting a separate score for each:

    • Production Quality (PQ) looks at technical execution, focusing on clarity and fidelity, dynamics, frequency balance, and spatialization.
    • Production Complexity (PC) reflects how many sound elements are present in the audio.
    • Content Enjoyment (CE) reflects how much listeners enjoy the audio, including their perception of artistic skill and overall listening experience.
    • Content Usefulness (CU) considers whether the audio feels usable for creating content.

    The model is trained using ratings from human listeners who follow the same guidelines across speech, music, and sound effects. It analyzes audio in short segments of around 10 seconds. For longer tracks, the model scores each segment independently and provides an average. 

    Beyond the audio itself, the model has no additional context. It does not know how a track is meant to be used or how it relates to other music. According to the paper, the scores tend to align with human ratings and could help sort audio when it’s not possible to listen to it all. In that way, the model is presented as a proxy for listener judgment.

    Why I decided to evaluate the model

I wasn’t the only one curious about this model. Jeffrey Anthony’s “Can AI Measure Beauty? A Deep Dive into Meta’s Audio Aesthetics Model,” for instance, offers a philosophical examination of what it means to quantify aesthetic judgment, including questions of ontology. I decided to take a more hands-on approach, testing the model on real-world examples to see whether clear patterns emerge in its predictions.

    What caught my attention most was how these scores are meant to be used. Once aesthetic judgments are turned into numbers, they start to feel reliable. They look like something you can sort by, filter on, or use to decide what gets heard and what gets ignored.

    This matters in music workflows. Scores like these could influence how catalogs are cleaned up, how tracks are ranked for sync, and how large libraries of music are evaluated without listening. With a skeptical but open mindset, I set out to discover how these scores behave with real-world data.

     

    What I found when testing the model

    A) Individual-track sanity checks

    I began with a qualitative sanity check using individual songs whose perceptual differences are unambiguous to human listeners. The tracks I selected represent distinct production conditions, stylistic intentions, and levels of artistic ambition.

I included four songs:

• “Funky Town” (as a low-quality MP3)
• “Giorgio by Moroder” by Daft Punk (audiophile-grade disco-funk)
• “Blue Calx” by Aphex Twin (experimental electronic)
• “The Schumacher Song” by DJ Visage (late-90s pop-trance)

    The motivation for this test was straightforward. A model claiming to predict Production Quality should assign a lower PQ to “Funky Town” (low-quality MP3) than to “Giorgio by Moroder.” A model claiming to estimate production or musical complexity should recognize “Blue Calx” by Aphex Twin as more complex than formulaic late-90s pop-trance such as DJ Visage’s “Schumacher Song.” Likewise, enjoyment and usefulness scores should not collapse across experimental electronic music, audiophile-grade disco-funk, old-school pop-trance, and degraded consumer audio.

    You can see that the resulting scores, shown in the individual-track comparison plot above, contradict these expectations. “Funky Town” receives a PQ score only slightly lower than “Giorgio by Moroder,” indicating near insensitivity to codec degradation and mastering fidelity. Even more strikingly, “Blue Calx” is assigned the lowest Production Complexity among the four tracks, while “The Schumacher Song” and “Funky Town” receive higher PC scores. This directly inverts what most listeners would consider to be structural or compositional complexity.

    Content Enjoyment is highest for “Funky Town” and lowest for “Blue Calx,” suggesting that the CE dimension aligns more closely with catchiness or familiarity than with artistic merit or aesthetic depth.

    Taken together, these results indicate that AES is largely insensitive to audio fidelity. It fails to reflect musical or structural complexity, and instead appears to reward constant spectral activity and conventional pop characteristics. Even at the individual track level, the semantics of Production Quality and Production Complexity don’t match their labels.

    B) Artist-level distribution analysis

    Next, I tested whether AES produces distinct aesthetic profiles for artists with musical identities, production aesthetics, and historical contexts that are clearly different. I analyzed distributions of Production Quality, Production Complexity, Content Enjoyment, and Content Usefulness for Johann Sebastian Bach, Skrillex, Dream Theater, The Clash, and Hans Zimmer.

    If AES captures musically meaningful aesthetics, we would expect to see systematic separation between these artists. For example, Hans Zimmer and Dream Theater might have a higher complexity score than The Clash. Skrillex’s modern electronic productions might have a higher quality score than early punk recordings. Bach’s works might show high complexity but variable enjoyment or usefulness depending on the recording and interpretation.

    Instead, the plotted distributions show strong overlap across artists for CE, CU, and PQ, with only minor shifts in means. Most scores cluster tightly within a narrow band between approximately 7 and 8, regardless of artist. PC exhibits slightly more variation, but still fails to form clear stylistic groupings. Bach, Skrillex, Dream Theater, and Hans Zimmer largely occupy overlapping regions, while The Clash is not consistently separate.

    This suggests that AES doesn’t meaningfully encode artist-level aesthetic or production differences. Despite extreme stylistic diversity, the model assigns broadly similar aesthetic profiles, reinforcing the interpretation that AES functions as a coarse estimator of acceptability or pleasantness rather than a representation of musical aesthetics.

    C) Bias analysis using a balanced gender-controlled dataset

    Scoring models are designed to rank, filter, and curate songs in large music catalogs. If these models encode demographic-correlated priors, they can silently amplify existing biases at scale. To test this risk, I analyzed whether AES exhibits systematic differences between tracks with female lead vocals and tracks without female lead vocals.

    In our 2025 ISMIR paper, we showed that common music embedding models pick up non-musical singer traits, such as gender and language, and exhibit significant bias as a result. Because AES is intended to judge quality, aesthetics, and usefulness, it would be particularly problematic if it had similar biases. They could directly influence which music is considered “better” or more desirable.

    I constructed a balanced dataset using the same methodology used in our 2025 paper, equalizing genre distribution and singer language across groups.

    For each group, I computed score distributions for Content Enjoyment, Content Usefulness, Production Complexity, and Production Quality, visualized them, and performed statistical testing using Welch’s t-test alongside Cohen’s d effect sizes. For context, Welch’s t-test is a statistical test that compares whether the average scores between two groups are significantly different. Cohen’s d is a measure of effect size that quantifies how large that difference is in standardized units.
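For readers who want to run the same check on their own scores, the statistics reduce to a few lines (the arrays below are placeholders for per-track scores of the two groups):

    import numpy as np
    from scipy import stats

    # Placeholder arrays: per-track scores (e.g. Content Enjoyment) for the two groups.
    scores_female_vocals = np.array([7.8, 7.6, 7.9, 8.0, 7.7, 7.5])
    scores_other = np.array([7.4, 7.3, 7.6, 7.5, 7.2, 7.4])

    # Welch's t-test: compares group means without assuming equal variances.
    t_stat, p_value = stats.ttest_ind(scores_female_vocals, scores_other, equal_var=False)

    # Cohen's d with a pooled standard deviation: effect size in standardized units.
    n1, n2 = len(scores_female_vocals), len(scores_other)
    pooled_std = np.sqrt(((n1 - 1) * scores_female_vocals.var(ddof=1) +
                          (n2 - 1) * scores_other.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (scores_female_vocals.mean() - scores_other.mean()) / pooled_std

    print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")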

    The results show consistent upward shifts for female-led tracks in CE, CU, and PQ. All three differences are statistically significant with small-to-moderate effect sizes. In contrast, there is virtually no difference in Production Complexity score between groups.

    This pattern indicates that the model systematically assigns higher enjoyment, usefulness, and quality scores to material with female vocals, even under controlled conditions. Because complexity remains unaffected, the effect doesn’t appear to stem from structural musical differences. Instead, it likely reflects correlations in training data and human annotations, or the model treating certain vocal timbres and production styles associated with female vocals as implicit quality indicators.

    These findings suggest that AES encodes demographic-correlated aesthetic priors, which is problematic for a model intended to judge musical quality, aesthetics, and usefulness.

    When a measure becomes a target, it ceases to be a good measure.

    Charles Goodhart

    Economist

    Why this matters for the industry

    Economist Charles Goodhart famously observed that “when a measure becomes a target, it ceases to be a good measure.” He was describing what happens when a metric starts to drive decisions rather than just being an indicator. Once a number is relied on, it begins to shape how people think and choose.

    That idea applies directly to aesthetic scoring. A score, once it exists, carries weight. It gets used as a shortcut in decisions, even when its meaning is incomplete. This matters in music workflows because aesthetic judgment depends on context and purpose. 

    When a simplified score is treated as reliable, systems can start favoring what scores well rather than what actually sounds better or serves a creative goal. Over time, that can quietly steer decisions away from how audio is perceived and used in practice.

    How we approach audio intelligence at Cyanite

    At Cyanite, music isn’t judged in a vacuum, and neither are the decisions built on top of it. That’s why we don’t rely on single aesthetic scores. Instead, we focus on making audio describable and searchable in ways that stay transparent and grounded in context.

    Aesthetic scoring can give the illusion of precision, but it often lumps together different technical qualities, genres, and styles. In music search and discovery, a single score doesn’t explain why a track is surfaced or excluded. That reasoning matters to us. Not to decide what’s “good,” but to give teams tools they can understand and trust.

    We see audio intelligence as a way to expose structure, not replace judgment. Our systems surface identifiable musical attributes and relationships, knowing that the same track can be the right or wrong fit depending on how it’s used. The goal is to support human decision-making, not substitute it with scores.

    Experimentation has a place, but in music, automation works best when it’s explainable and limit-aware.

    What responsible progress in music AI should look like

    Progress in music and AI is underpinned by transparency. Teams should be able to understand how a model was trained and how its outputs relate to the audio. When results are interpretable, people can see why a track surfaces and judge for themselves whether the signal makes sense in their own context.

    That transparency depends on data choices. Music spans styles, cultures, eras, and uses, and models reflect whatever they are fed. Developers need to work with broad, representative data and be clear about where coverage is thin. Being open about what a model sees, and what it does not, makes its behavior more predictable and its limits easier to manage.

    Clear communication matters just as much once tools are in use. For scores and labels to be applied responsibly, teams need a shared understanding of what those signals reflect and where their limits are. Otherwise, even well-intentioned metrics can be stretched beyond what they are able to support.

    This kind of openness helps the industry build tools people can understand and trust in real workflows. 

    We explored how these expectations show up in practice in “The state of AI transparency in music 2025,” a report developed with MediaTracks and Marmoset on how music licensing professionals make decisions around AI, creator background, and context. You can read the full report here.

    So… does Meta’s model provide meaningful ratings for music?

    Based on these tests, the answer is no. The model produces stable scores, but they don’t map cleanly to how musical quality or complexity are assessed in real catalog work. Instead, the model appears to align more with easily detectable production traits than with the distinctions people consistently make when judging music in context.

    That doesn’t make Audiobox Aesthetics insignificant. It can support research by defining a clear scoring framework, showing how reference-free predictors can be trained across speech, music, and sound, and making its models and data available for inspection and comparison. It also illustrates where AES scores can be useful, particularly when large volumes of audio need to be filtered or monitored but full listening is impractical.

    Problems emerge when scores like these begin shaping decisions. When a score is presented as a measure of quality, people need to know what it’s actually measuring so they can judge whether it applies to their use case. Without that clarity, it becomes easy to trust the number even when it’s not a good fit.

    At Cyanite, we see this as a reminder of the importance of responsibility in music and AI. Progress is driven by systems that stay grounded in real listening behavior and make their assumptions visible.

    How Melodie Music combines sound-based AI search and contextual metadata to spotlight original Australian artists

    How Melodie Music combines sound-based AI search and contextual metadata to spotlight original Australian artists

    Ready to improve your music discovery workflows? Try Similarity Search in Cyanite.

    Cyanite aligns with our philosophy because it doesn’t use AI to generate content; it uses AI to uncover it. It solves a genuine pain point for our users: the time-consuming nature of music search. We immediately saw that Cyanite could amplify our existing search system rather than overwrite it. It wasn’t a case of ‘AI versus humans’; it was AI empowering humans to find better music, faster.

    Evan Buist

Managing Director, Melodie Music

    Melodie is a music licensing platform that provides pre-cleared music for film, TV, advertising, and content creation. All artists and tracks on the platform are carefully curated and hand-selected for quality, originality, and emotional resonance. Ethics are at the core of Melodie’s company philosophy. It operates under a 50/50 revenue and royalty split, meaning Melodie doesn’t earn money on downloads until the artist does.

    To make it easier to discover artists at scale, Melodie continues to refine how users navigate its catalog. AI helps users explore more quickly—but it doesn’t replace the human element behind editorial curation.

    The rising tension between depth and speed

    As Melodie’s catalog grew, a familiar tradeoff emerged: depth versus speed.

    Despite thoughtful editorial tagging, the reality was that users often struggled to translate nuanced creative briefs into static keywords. “Describing music is inherently subjective; what sounds ‘uplifting’ to one person might sound ‘intense’ to another. As the saying goes, talking about music is like dancing about architecture,” explains Evan.

    By relying solely on tags, users often found themselves in an experimental searching-listening-refining-repeating loop—a time-consuming effort that most editors and producers simply don’t have the bandwidth for.

    Melodie recognized this problem early on and set out to improve the user experience in their library. As Evan puts it, “bridging the gap between ‘hearing it in your head’ and ‘finding it on the screen’ is the holy grail of music licensing.”

    AI as an enabler, not a generator

    Human curation is central to how Melodie operates. Tracks are not scraped or auto-generated. Over time, it became clear that tags on their own couldn’t support the kind of discovery users needed, so AI was added to help surface music intuitively and improve navigation.

    Cyanite aligned naturally with that philosophy.

    Rather than positioning AI as a substitute for curation, Cyanite’s AI search treats sound as data that can be understood, compared, and explored. What clicked for Melodie in their search for AI music analysis software was Cyanite’s approach: “The technology felt musical rather than just mathematical. The analysis is intuitive and forgiving, respecting the nuances of the tracks,” says Evan.

    Thanks to this shared understanding, Cyanite became part of Melodie’s day-to-day music discovery process.

    How Cyanite fits into Melodie’s workflow

    Today, Melodie users move fluidly between different music discovery pathways depending on their working process.

    Sound-based Similarity Search

    Users can use Cyanite’s Similarity Search to analyze a reference song and instantly explore tracks with a comparable emotional arc, energy, and sonic character. The reference can come from Spotify, YouTube, or a temporary edit.

    This closes the gap between intuition and results in seconds.

    A gif showing the similarity search interface of melodie music

    Prompt-based Free Text Search

    Some users prefer to express what they are looking for in their own words. Prompt-based search allows them to describe mood, pacing, or instrumentation, even with spelling errors or mixed languages. Evan believes natural language search has done for music libraries what Google did for information in the late 90s: democratized access.

    Regardless of how a user describes music, AI provides a laser-accurate shortlist in seconds. It turns discovery into exploration, allowing users to combine the speed of AI with Melodie’s human-tagged editorial filters to find the perfect track.

    Evan Buist

Managing Director, Melodie Music

A screen recording showing a music similarity search and highlighting music tags

    Cyanite has become a vital part of our ecosystem, helping us prove that technology can support culture, not replace it.

    Evan Buist

Managing Director, Melodie Music

    From upload to output: how Cyanite turns audio into reliable metadata at scale

    From upload to output: how Cyanite turns audio into reliable metadata at scale

    Explore how Cyanite turns sound into structured metadata: Just upload a couple of songs to our web app.

    Managing a music catalog involves more than just storing files. As catalogs grow, teams start running into a different kind of challenge: music becomes harder to find, metadata becomes inconsistent, and strong tracks remain invisible simply because they are described differently than newer material.

    Many teams still rely on manual tagging or have inherited metadata systems that were never designed for scale. Over time, this leads to uneven descriptions, slower search, and workflows that depend more on individual knowledge than on shared systems. Creative teams spend valuable time navigating the catalog instead of working with the music itself.

    Cyanite’s end-to-end tagging workflow was built to address this challenge. It gives teams a stable, shared foundation they can build on, supporting human judgement—not replacing it. It complements subjective, manual labeling with a consistent, audio-based process that works the same way for every track, whether you’re onboarding new releases or making a legacy catalog more organized.

    This article walks through how that workflow functions in practice—from the moment audio enters the system to the point where structured metadata becomes usable across teams and tools.

    Why tagging workflows tend to break down as catalogs grow

    Most tagging workflows start with care and intention. A small team listens closely, applies descriptive terms, and builds a shared understanding of the catalog. But as volume increases and more people get involved, the system begins to stretch.

    As catalogs scale, the same patterns tend to appear across organizations:

    • Different editors describe the same sound in different ways.
    • Older metadata no longer aligns with newer releases.
    • Genre and mood definitions shift over time.
    • Search results reflect wording more than sound.

    When this happens, teams increasingly rely on memory instead of the systems in place. This leads to strong tracks getting overlooked, response times increasing, and trust in the metadata eroding.

    Cyanite’s workflow addresses this fragility by grounding metadata in the audio itself and applying the same logic across the entire catalog.

    Preparing your catalog for audio-based tagging

    Teams can adopt Cyanite very quickly, as there’s very little preparation involved. The system doesn’t require existing metadata, spreadsheets, or reference information. It listens to the audio file and derives all tags from the sound alone.

    Getting started requires very little setup:

    • MP3 files up to 15 minutes in length
    • No pre-existing metadata
    • No manual pre-labeling
    • No changes to your current file structure

    Even 128 kbit/s MP3s are usually sufficient, which means older archive files can be analyzed as they are—no need for additional audio preparation. Teams can then choose how they want to bring audio into Cyanite based on volume and workflow. Once that’s decided, tagging can begin immediately.

    If you’re unsure about uploading copyrighted audio to Cyanite, you can explore our security standards and privacy-first workflows, including options to process audio in a copyright-safe way using encrypted or abstracted data.

    Bringing audio into Cyanite in a way that fits your workflow

    Different organizations manage music in different ways, so Cyanite supports several ingestion paths that all lead to the same analysis results.

    Teams working with smaller batches often start in the web app. This is common for sync teams reviewing submissions, catalog managers auditing older libraries, or teams testing Cyanite before deeper integration. Audio can be uploaded directly, selected from disk, or referenced via a YouTube link, with analysis starting automatically once the file is added.

    Platforms and larger catalogs usually integrate via the API. In this setup, tagging runs inside the organization’s own systems. Audio is uploaded programmatically, and results are delivered automatically via webhook as structured JSON as soon as processing is complete. This approach supports continuous ingestion without manual steps and fits naturally into existing pipelines.
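On the receiving side, this usually amounts to a small webhook endpoint that persists the delivered JSON. A minimal sketch, with the payload field names as assumptions rather than the documented webhook format:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/cyanite-webhook", methods=["POST"])
    def cyanite_webhook():
        # The payload shape below is an assumption for illustration; consult the API
        # docs for the actual webhook format. The idea: persist the structured JSON
        # as soon as the analysis result arrives.
        event = request.get_json(force=True)
        track_id = event.get("trackId")      # hypothetical field name
        print(f"Received analysis result for track {track_id}")
        # store_metadata(track_id, event)    # hand off to your own pipeline
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)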

    For very large catalogs, Cyanite can also provide a dedicated S3 bucket with CLI credentials. This allows high-throughput ingestion without relying on browser-based uploads. It’s often used during initial onboarding of catalogs containing thousands of tracks.

    Some teams prefer not to upload files themselves at all. In those cases, audio can be shared via common transfer tools before the material is processed and delivered in the agreed format.

    What happens once the analysis is complete?

    Cyanite produces a structured, consistent description of how each track sounds, independent of who uploaded it or when it entered the catalog.

    Metadata becomes available either in the web app library or directly inside your system via the API. We can also deliver an additional CSV and Google Spreadsheet export on request.

    Each track receives a stable set of static tags and values, including:

    • Genres and free-genre descriptors
    • Moods and emotional dynamics
    • Energy and movement
    • Instrumentation and instrument presence
    • Valence–arousal values
    • The most representative part of the track
    • An Auto-Description summarizing key characteristics

    All tags are generated through audio-only analysis, which ensures that legacy tracks and new releases follow the same logic. Over time, this consistency becomes the foundation for faster search, clearer filtering, and more reliable collaboration across teams.

    The full tagging taxonomy is available for teams that want deeper insight into how attributes are defined and structured. Explore Cyanite’s tagging taxonomy here.

    Curious how the Google Spreadsheet export looks? Check out this sample.

How long does tagging take at different catalog sizes?

    Cyanite processes audio quickly. A typical analysis time is around 10 seconds per track. Because processing runs in parallel, turnaround time depends more on workflow setup than on catalog size.

    In practice, teams can expect:

    • Small batches to be ready almost instantly
    • Medium-sized libraries to complete within hours
    • Enterprise-scale catalogs to be onboarded within 5–10 business days, regardless of size

    For day-to-day use via the API, results arrive in near real time via webhook as soon as processing finishes. This makes the workflow suitable both for large one-time onboarding projects and continuous ingestion as new music arrives.

    Understanding scores, tags, and why both matter

    Cyanite’s models produce two complementary layers of information.

    Numerical scores describe how strongly an attribute is present, both across the full track and within time-based segments. These values range from zero to one, with 0.5 representing a meaningful threshold.

    Cyanite creates final tags by using an additional decision layer that considers how different attributes relate to one another. It doesn’t just apply a simple cutoff. This approach helps resolve ambiguities, stabilize hybrid sounds, and produce tags that make musical sense in context.

    This means you get metadata that remains robust even for tracks that blend genres, moods, or production styles—a common challenge in modern catalogs.
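As a simplified illustration of the difference, a plain cutoff would turn scores into tags like the naive sketch below; Cyanite’s actual tags come from the additional decision layer described above, not from this cutoff alone.

    # Naive illustration only: applying a flat 0.5 cutoff to per-attribute scores
    # (values between 0 and 1). Cyanite's final tags additionally weigh how
    # attributes relate to one another, so real output can differ from this.
    scores = {"electro": 0.62, "techno": 0.48, "ambient": 0.12, "energetic": 0.71}

    naive_tags = [name for name, value in scores.items() if value >= 0.5]
    print(naive_tags)  # ['electro', 'energetic']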

    Exporting metadata into your existing systems

    Once tags are available, your team can export them in the format that best fits your workflow.

    API users typically work with structured JSON, delivered automatically via webhook and accessible through authenticated requests. Cyanite’s Query Builder allows teams to explore available fields and preview real outputs before integration.

    For one-time projects or larger deliveries, metadata can also be provided as CSV files. Web app users can request CSV export through Cyanite’s internal tools, which is especially useful during catalog cleanups or migrations.

    Because the structure remains consistent across formats, metadata can be reused across systems without rework.
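If you prefer to build your own CSV from the JSON output, a small flattening step is usually enough. A minimal sketch, assuming each record exposes a track ID and a flat set of tag fields (the file and field names are placeholders):

    import csv
    import json

    # Placeholder input: one JSON file with a list of analyzed tracks, structured as
    # exported from your webhook handler or API queries (field names are illustrative).
    with open("cyanite_results.json", encoding="utf-8") as f:
        tracks = json.load(f)

    with open("cyanite_results.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "title", "genres", "moods", "energy"])
        writer.writeheader()
        for track in tracks:
            writer.writerow({
                "id": track["id"],
                "title": track["title"],
                # Join multi-value tags so each track stays on one CSV row.
                "genres": "; ".join(track.get("genreTags", [])),
                "moods": "; ".join(track.get("moodTags", [])),
                "energy": track.get("energyLevel", ""),
            })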

    Learn how to quickly build your queries for the Cyanite API with our Query Builder.

    How teams use tagged metadata in practice

    Once audio-based tagging is in place, teams tend to notice changes quickly. Search becomes faster and more predictable. Creative teams can filter by sound instead of guessing keywords. Catalog managers spend less time fixing metadata and more time shaping the catalog strategically.

    In practice, tagged metadata supports workflows such as:

    • Catalog management and cleanup
    • Creative search and curation
    • Ingestion pipelines
    • Licensing and rights
    • Sync briefs and pitching
    • Internal discovery tools
    • Audits and reporting

    Over time, consistent metadata reduces friction between departments and makes catalog operations more resilient as libraries continue to grow.

    Best practices from real-world usage

    Teams see the smoothest results when they work with clean audio sources, batch large uploads, manage API credentials carefully, and switch to S3-based ingestion as catalogs become larger. Thinking about export formats early also helps avoid rework during onboarding projects.

    None of this changes the outcome of the analysis itself, but it does make the overall process more predictable and easier to manage at scale.

    With Cyanite, we have a partner whose technology truly matches the scale and diversity of our catalog. Their tagging is fast and reliable, and Similarity Search unlocks a whole new way to discover music, not just through filters, but through feeling. It’s a huge step forward in how we help creators connect with the right tracks.

    Stan McLeod

    Head of Product, Lickd

    Final thoughts

    Cyanite’s tagging workflow is designed to scale with your catalog without making your day-to-day work more complex. Whether you upload a handful of tracks through the web app or process tens of thousands via the API, the result will be the same: structured, consistent metadata that reflects how your music actually sounds.

    If you’re ready to move away from manual tagging and toward a more stable foundation for search and discovery, explore the different ways to work with Cyanite and choose the setup that fits your workflow.

    Want to work with Cyanite? Explore your options, and get in touch with our business team, who can provide guidance if you’re unsure how to start.

    FAQs

    Q: Do I need to send existing metadata to use Cyanite’s tagging workflow?

    A: No. Cyanite analyzes the audio itself. It doesn’t rely on existing tags or descriptions.

    Q: Can Cyanite handle both legacy catalogs and new releases?

    A: Yes, it can. The same analysis logic applies to all tracks, which helps unify older and newer material under a single metadata structure.

    Q: How are results delivered when using the API?

    A: Results are sent automatically via webhook as structured JSON as soon as processing is complete.

    Q: Is the tagging output consistent across export formats?

    A: Yes. JSON and CSV exports use the same underlying structure and values.

    Q: Who typically uses this workflow?

    A: Music publishers, production libraries, sync teams, music-tech platforms, and catalog managers use Cyanite’s tagging workflow to support search, licensing, onboarding, and catalog maintenance.

    Q: How long will it take to tag my music?

    A: Small batches are tagged almost immediately. For larger catalogs, we usually need 5–10 business days for the complete setup.