Can Meta’s audio aesthetic model actually rate the quality of music?

Last year, Meta released Audiobox Aesthetics (AES), a research model designed to score audio the way human listeners would rate it. The model outputs four scores: Production Quality (PQ), Production Complexity (PC), Content Enjoyment (CE), and Content Usefulness (CU).

The study suggests that audio aesthetics can be broken into these axes, and that a reference-free model can predict these scores directly from audio. If that holds, the scores could start informing decisions and become signals people lean on when judging music at scale.

I took a closer look to understand how the model frames aesthetic judgment and what this means in practice. I ran Audiobox Aesthetics myself and examined how its scores behave with real music.

What Meta’s Audiobox Aesthetics paper claims

Before jumping into my evaluation, let’s take a closer look at what Meta’s Audiobox Aesthetics paper set out to do.

The paper introduces a research model intended to automate audio evaluation when no reference version exists. The authors describe human listening evaluations as costly and inconsistent, which led them to seek an automated alternative.

To address this need, the authors propose breaking audio evaluation into four separate axes and predicting a separate score for each:

  • Production Quality (PQ) looks at technical execution, focusing on clarity and fidelity, dynamics, frequency balance, and spatialization.
  • Production Complexity (PC) reflects how many sound elements are present in the audio.
  • Content Enjoyment (CE) reflects how much listeners enjoy the audio, including their perception of artistic skill and overall listening experience.
  • Content Usefulness (CU) considers whether the audio feels usable for creating content.

The model is trained using ratings from human listeners who follow the same guidelines across speech, music, and sound effects. It analyzes audio in short segments of around 10 seconds. For longer tracks, the model scores each segment independently and provides an average. 
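To make the segment-and-average behavior concrete, here is a minimal sketch of that scheme. It is not Meta’s code: the `score_segment` function is a dummy stand-in for the actual AES predictor, and the segment length is only approximate.

```python
import numpy as np

SEGMENT_SECONDS = 10             # approximate window length described in the paper
AXES = ("PQ", "PC", "CE", "CU")  # the four AES axes

def score_segment(segment: np.ndarray, sample_rate: int) -> dict:
    """Dummy stand-in for the AES predictor on one ~10-second segment.
    The real model returns the four axes; here we return placeholder values."""
    rng = np.random.default_rng(len(segment))
    return dict(zip(AXES, rng.uniform(1, 10, size=4)))

def score_track(samples: np.ndarray, sample_rate: int) -> dict:
    """Score each ~10-second segment independently, then average each axis over the track."""
    hop = SEGMENT_SECONDS * sample_rate
    per_segment = [score_segment(samples[i:i + hop], sample_rate)
                   for i in range(0, len(samples), hop)]
    return {axis: float(np.mean([seg[axis] for seg in per_segment])) for axis in AXES}

# Placeholder input: 45 seconds of noise standing in for a decoded track.
sr = 44100
print(score_track(np.random.randn(45 * sr), sr))
```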

Beyond the audio itself, the model has no additional context. It does not know how a track is meant to be used or how it relates to other music. According to the paper, the scores tend to align with human ratings and could help sort audio when it’s not possible to listen to it all. In that way, the model is presented as a proxy for listener judgment.

Why I decided to evaluate the model

I wasn’t the only one curious about this model. Jeffrey Anthony’s “Can AI Measure Beauty? A Deep Dive into Meta’s Audio Aesthetics Model,” for instance, offers a deep, philosophical examination of what it means to quantify aesthetic judgment, including questions of ontology. I decided to take a more hands-on approach, testing the model on real-world examples to see whether interesting patterns emerge in its predictions.

What caught my attention most was how these scores are meant to be used. Once aesthetic judgments are turned into numbers, they start to feel reliable. They look like something you can sort by, filter on, or use to decide what gets heard and what gets ignored.

This matters in music workflows. Scores like these could influence how catalogs are cleaned up, how tracks are ranked for sync, and how large libraries of music are evaluated without listening. With a skeptical but open mindset, I set out to discover how these scores behave with real-world data.


What I found when testing the model

A) Individual-track sanity checks

I began with a qualitative sanity check using individual songs whose perceptual differences are unambiguous to human listeners. The tracks I selected represent distinct production conditions, stylistic intentions, and levels of artistic ambition.

I included four songs:

  • “Funky Town” – a low-quality MP3 (degraded consumer audio)
  • “Giorgio by Moroder” – audiophile-grade disco-funk
  • “Blue Calx” by Aphex Twin – experimental electronic music
  • “The Schumacher Song” by DJ Visage – formulaic late-90s pop-trance

The motivation for this test was straightforward. A model claiming to predict Production Quality should assign a lower PQ to “Funky Town” (low-quality MP3) than to “Giorgio by Moroder.” A model claiming to estimate production or musical complexity should recognize “Blue Calx” by Aphex Twin as more complex than formulaic late-90s pop-trance such as DJ Visage’s “Schumacher Song.” Likewise, enjoyment and usefulness scores should not collapse across experimental electronic music, audiophile-grade disco-funk, old-school pop-trance, and degraded consumer audio.

You can see that the resulting scores, shown in the individual-track comparison plot above, contradict these expectations. “Funky Town” receives a PQ score only slightly lower than “Giorgio by Moroder,” indicating near insensitivity to codec degradation and mastering fidelity. Even more strikingly, “Blue Calx” is assigned the lowest Production Complexity among the four tracks, while “The Schumacher Song” and “Funky Town” receive higher PC scores. This directly inverts what most listeners would consider to be structural or compositional complexity.

Content Enjoyment is highest for “Funky Town” and lowest for “Blue Calx,” suggesting that the CE dimension aligns more closely with catchiness or familiarity than with artistic merit or aesthetic depth.

Taken together, these results indicate that AES is largely insensitive to audio fidelity. It fails to reflect musical or structural complexity, and instead appears to reward constant spectral activity and conventional pop characteristics. Even at the individual track level, the semantics of Production Quality and Production Complexity don’t match their labels.

B) Artist-level distribution analysis

Next, I tested whether AES produces distinct aesthetic profiles for artists whose musical identities, production aesthetics, and historical contexts are clearly different. I analyzed distributions of Production Quality, Production Complexity, Content Enjoyment, and Content Usefulness for Johann Sebastian Bach, Skrillex, Dream Theater, The Clash, and Hans Zimmer.
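In practical terms, this analysis boils down to grouping per-track AES scores by artist and comparing the distributions. The sketch below uses pandas with a few placeholder rows standing in for the real score table:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder rows: in practice, this table holds one row per track with its AES scores.
df = pd.DataFrame({
    "artist": ["Bach", "Bach", "Skrillex", "Skrillex", "The Clash", "The Clash"],
    "PQ": [7.6, 7.4, 7.8, 7.7, 7.2, 7.3],
    "PC": [6.9, 7.1, 7.0, 6.8, 6.5, 6.7],
    "CE": [7.3, 7.2, 7.5, 7.4, 7.1, 7.0],
    "CU": [7.4, 7.3, 7.6, 7.5, 7.2, 7.1],
})

# Per-artist distribution summaries for each axis.
print(df.groupby("artist")[["PQ", "PC", "CE", "CU"]].describe())

# Boxplots per axis make any overlap between artists visible at a glance.
df.boxplot(column="PC", by="artist", rot=45)
plt.show()
```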

If AES captures musically meaningful aesthetics, we would expect to see systematic separation between these artists. For example, Hans Zimmer and Dream Theater might have a higher complexity score than The Clash. Skrillex’s modern electronic productions might have a higher quality score than early punk recordings. Bach’s works might show high complexity but variable enjoyment or usefulness depending on the recording and interpretation.

Instead, the plotted distributions show strong overlap across artists for CE, CU, and PQ, with only minor shifts in means. Most scores cluster tightly within a narrow band between approximately 7 and 8, regardless of artist. PC exhibits slightly more variation, but still fails to form clear stylistic groupings. Bach, Skrillex, Dream Theater, and Hans Zimmer largely occupy overlapping regions, while The Clash is not consistently separate.

This suggests that AES doesn’t meaningfully encode artist-level aesthetic or production differences. Despite extreme stylistic diversity, the model assigns broadly similar aesthetic profiles, reinforcing the interpretation that AES functions as a coarse estimator of acceptability or pleasantness rather than a representation of musical aesthetics.

C) Bias analysis using a balanced gender-controlled dataset

Scoring models are designed to rank, filter, and curate songs in large music catalogs. If these models encode demographic-correlated priors, they can silently amplify existing biases at scale. To test this risk, I analyzed whether AES exhibits systematic differences between tracks with female lead vocals and tracks without female lead vocals.

In our 2025 ISMIR paper, we showed that common music embedding models pick up non-musical singer traits, such as gender and language, and exhibit significant bias as a result. Because AES is intended to judge quality, aesthetics, and usefulness, it would be particularly problematic if it had similar biases. They could directly influence which music is considered “better” or more desirable.

I constructed a balanced dataset using the same methodology used in our 2025 paper, equalizing genre distribution and singer language across groups.

For each group, I computed score distributions for Content Enjoyment, Content Usefulness, Production Complexity, and Production Quality, visualized them, and performed statistical testing using Welch’s t-test alongside Cohen’s d effect sizes. For context, Welch’s t-test is a statistical test that compares whether the average scores between two groups are significantly different. Cohen’s d is a measure of effect size that quantifies how large that difference is in standardized units.
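For readers who want the mechanics, here is a small, self-contained sketch of that testing step. The arrays are placeholder values standing in for the per-track scores of the two groups, not the actual experimental data:

```python
import numpy as np
from scipy import stats

def welch_and_cohens_d(group_a, group_b):
    """Welch's t-test (unequal variances) plus Cohen's d based on the pooled standard deviation."""
    group_a, group_b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
    pooled_sd = np.sqrt(
        ((len(group_a) - 1) * group_a.var(ddof=1) + (len(group_b) - 1) * group_b.var(ddof=1))
        / (len(group_a) + len(group_b) - 2)
    )
    cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd
    return t_stat, p_value, cohens_d

# Placeholder score arrays standing in for per-track CE scores of the two balanced groups.
ce_female_led = [7.4, 7.6, 7.2, 7.8, 7.5, 7.7]
ce_other = [7.1, 7.3, 7.0, 7.4, 7.2, 7.3]
print(welch_and_cohens_d(ce_female_led, ce_other))
```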

The results show consistent upward shifts for female-led tracks in CE, CU, and PQ. All three differences are statistically significant with small-to-moderate effect sizes. In contrast, there is virtually no difference in Production Complexity score between groups.

This pattern indicates that the model systematically assigns higher enjoyment, usefulness, and quality scores to material with female vocals, even under controlled conditions. Because complexity remains unaffected, the effect doesn’t appear to stem from structural musical differences. Instead, it likely reflects correlations in training data and human annotations, or the model treating certain vocal timbres and production styles associated with female vocals as implicit quality indicators.

These findings suggest that AES encodes demographic-correlated aesthetic priors, which is problematic for a model intended to judge musical quality, aesthetics, and usefulness.

When a measure becomes a target, it ceases to be a good measure.

Charles Goodhart

Economist

Why this matters for the industry

Economist Charles Goodhart famously observed that “when a measure becomes a target, it ceases to be a good measure.” He was describing what happens when a metric starts to drive decisions rather than just being an indicator. Once a number is relied on, it begins to shape how people think and choose.

That idea applies directly to aesthetic scoring. A score, once it exists, carries weight. It gets used as a shortcut in decisions, even when its meaning is incomplete. This matters in music workflows because aesthetic judgment depends on context and purpose. 

When a simplified score is treated as reliable, systems can start favoring what scores well rather than what actually sounds better or serves a creative goal. Over time, that can quietly steer decisions away from how audio is perceived and used in practice.

How we approach audio intelligence at Cyanite

At Cyanite, music isn’t judged in a vacuum, and neither are the decisions built on top of it. That’s why we don’t rely on single aesthetic scores. Instead, we focus on making audio describable and searchable in ways that stay transparent and grounded in context.

Aesthetic scoring can give the illusion of precision, but it often lumps together different technical qualities, genres, and styles. In music search and discovery, a single score doesn’t explain why a track is surfaced or excluded. That reasoning matters to us. Not to decide what’s “good,” but to give teams tools they can understand and trust.

We see audio intelligence as a way to expose structure, not replace judgment. Our systems surface identifiable musical attributes and relationships, knowing that the same track can be the right or wrong fit depending on how it’s used. The goal is to support human decision-making, not substitute it with scores.

Experimentation has a place, but in music, automation works best when it’s explainable and limit-aware.

What responsible progress in music AI should look like

Progress in music and AI is underpinned by transparency. Teams should be able to understand how a model was trained and how its outputs relate to the audio. When results are interpretable, people can see why a track surfaces and judge for themselves whether the signal makes sense in their own context.

That transparency depends on data choices. Music spans styles, cultures, eras, and uses, and models reflect whatever they are fed. Developers need to work with broad, representative data and be clear about where coverage is thin. Being open about what a model sees, and what it does not, makes its behavior more predictable and its limits easier to manage.

Clear communication matters just as much once tools are in use. For scores and labels to be applied responsibly, teams need a shared understanding of what those signals reflect and where their limits are. Otherwise, even well-intentioned metrics can be stretched beyond what they are able to support.

This kind of openness helps the industry build tools people can understand and trust in real workflows. 

We explored how these expectations show up in practice in “The state of AI transparency in music 2025,” a report developed with MediaTracks and Marmoset on how music licensing professionals make decisions around AI, creator background, and context. You can read the full report here.

So… does Meta’s model provide meaningful ratings for music?

Based on these tests, the answer is no. The model produces stable scores, but they don’t map cleanly to how musical quality or complexity are assessed in real catalog work. Instead, the model appears to align more with easily detectable production traits than with the distinctions people consistently make when judging music in context.

That doesn’t make Audiobox Aesthetics insignificant. It can support research by defining a clear scoring framework, showing how reference-free predictors can be trained across speech, music, and sound, and making its models and data available for inspection and comparison. It also illustrates where AES scores can be useful, particularly when large volumes of audio need to be filtered or monitored but full listening is impractical.

Problems emerge when scores like these begin shaping decisions. When a score is presented as a measure of quality, people need to know what it’s actually measuring so they can judge whether it applies to their use case. Without that clarity, it becomes easy to trust the number even when it’s not a good fit.

At Cyanite, we see this as a reminder of the importance of responsibility in music and AI. Progress is driven by systems that stay grounded in real listening behavior and make their assumptions visible.

How Cyanite protects your sensitive audio: privacy-first workflows for every catalog

Looking for secure AI music analysis? Discover Cyanite’s integration options. 

For many music teams, a significant hesitation about AI analysis is not about its capability or quality. It’s about trust. When teams explore AI-driven tagging or search, the conversation almost always leads to the same question: What happens to our audio once it leaves our system?

At Cyanite, we’ve built our technology around that concern from the very beginning. Rather than offering a single security promise, we provide multiple privacy-first workflows designed to meet different levels of sensitivity and compliance. This gives teams the flexibility to choose how their audio is handled, without compromising on tagging quality or metadata depth.

This article outlines the three privacy models Cyanite offers, explains how each one works in practice, and helps you decide which setup best fits your catalog and internal requirements.

Why audio privacy matters in modern music workflows

For those who manage it, audio represents creative identity, contractual responsibility, and, often, years of human effort. It’s not just another data type. Sending that material outside an organization can feel risky, even when the technical safeguards are strong and the operational benefits are clear.

Teams that evaluate our services often raise concerns about protecting unreleased material, complying with licensing agreements, and maintaining long-term control over how their catalogs are used. They look for assurances around:

  • Safeguarding confidential or unreleased content
  • Complying with NDAs and contractual obligations
  • Meeting internal legal or security standards
  • Maintaining full ownership and control

These are not edge cases. They reflect everyday realities for publishers, film studios, broadcasters, and music-tech platforms alike. That’s why Cyanite treats privacy as a core design principle.

Security option 1: GDPR-compliant processing on secure EU servers

For many organizations, strong data protection combined with minimal operational complexity is the right balance. In Cyanite’s standard setup, all audio is processed on secure servers located in the EU and handled in full compliance with GDPR.

In practical terms, this means:

  • Audio files are never shared with third parties.
  • Songs can be deleted anytime.
  • Ownership and control of the music always remain with the customer.

This model works well for publishers, production libraries, sync platforms, and music-tech companies that want to scale tagging and search workflows without maintaining their own infrastructure. For most catalogs, this level of protection is both robust and sufficient.

That said, not every organization is able to send audio outside its own environment, even under GDPR. For those cases, Cyanite offers additional options.

Learn more: See how AI music tagging works in Cyanite and how it supports large catalogs.

Security option 2: zero-audio pipeline—tagging without transferring audio

Some teams manage catalogs that cannot be transferred externally at all. These include confidential film productions, enterprise music departments, and archives operating under strict internal compliance rules. For these situations, Cyanite provides a spectrogram-based workflow that enables full tagging without the audio files ever being sent.

Spectrograms from left to right: Christina Aguilera, Fleetwood Mac, Pantera

Instead of uploading MP3s, audio is converted locally on the client side into spectrograms using a small Docker container provided by Cyanite. A spectrogram is a visual representation of frequency patterns over time. It contains no playable audio, cannot be converted back into a waveform without significant quality loss, and does not expose the original performance in any usable form.
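To illustrate what such a representation looks like, here is a minimal example of computing a mel spectrogram with the open-source librosa library. This is purely illustrative and is not Cyanite’s conversion container:

```python
import librosa
import numpy as np

# Placeholder audio: a synthetic 440 Hz tone stands in for the locally stored track.
sr = 44100
samples = librosa.tone(440, sr=sr, duration=5.0)

# A mel spectrogram is a time-frequency energy map, not a playable waveform.
mel = librosa.feature.melspectrogram(y=samples, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

# Only this abstract 2-D array representation would be shared for tagging;
# the original audio never leaves the local environment.
print(mel_db.shape)  # (n_mels, n_frames)
```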

From a metadata perspective, the results are identical to audio-based processing. From a privacy perspective, the original audio never leaves the customer’s environment. This makes the zero-audio pipeline a strong middle ground for teams that want AI-powered tagging while maintaining strict control over their content.

From a product perspective, all Cyanite features can be fully leveraged.

For us at Synchtank, the spectrogram-based upload was key. Many of our clients are cautious about where their audio goes, and this approach lets us use high-quality AI tagging and search without transferring any copyrighted audio. That balance, confidence for our customers without compromising on quality, is what made the difference for us.

Amy Hegarty

CEO, Synchtank

Learn more: What are spectrograms, and how can they be applied to music?

Security option 3: pseudo-on-premise deployment via the Cyanite Audio Analyzer on the AWS Marketplace

For organizations with the highest security and compliance requirements, Cyanite also offers a pseudo-on-premises deployment option via the AWS Marketplace. In this setup, Cyanite’s tagging engine runs entirely inside the customer’s own AWS cloud infrastructure via the Cyanite Audio Analyzer.

This approach provides:

  • Complete pseudo-on-premise processing
  • Zero data transfer outside your AWS cloud environment
  • Full control over storage, access, and compliance
  • Tagging accuracy identical to cloud-based workflows

This option is typically chosen by film studios, broadcasters, public institutions, and organizations working with unreleased or highly sensitive material that must pass strict internal or external audits.

Because the pseudo-on-premise container operates in complete isolation (no internet connection), search-based features—including Similarity Search, Free Text Search, and Advanced Search—are not available in this setup. In pseudo-on-premise environments, Cyanite therefore focuses exclusively on audio tagging and metadata generation.

Important note: The rates on the AWS Marketplace are intentionally high to deter fraudulent activity. Please contact us for our enterprise rates and find the best plan for your needs.

Choosing the right privacy model for your catalog

Selecting the right setup depends less on catalog size and more on how tightly you need to control where your audio lives. A useful way to frame the decision is to consider how much data movement your internal policies allow.

In practice, teams tend to choose based on the following considerations:

  • GDPR cloud processing works well when secure external processing is acceptable.
  • Zero-audio pipelines suit teams that cannot transfer audio but can share abstract representations.
  • Pseudo-on-premise deployment is best for environments requiring complete isolation.

All three options deliver the same tagging depth, consistency, and accuracy. The difference lies entirely in how data moves, or doesn’t move, between systems.

Final thoughts

Using AI with music requires trust—trust that audio is handled responsibly, that ownership is respected, and that workflows adapt to real-world constraints rather than forcing compromises. Cyanite’s privacy-first architecture is designed to uphold that trust, whether you prefer cloud-based processing, a zero-audio pipeline, or a fully isolated pseudo-on-premise deployment.

If you’d like to explore which setup best fits your catalog, workflow, and compliance needs, you can review the available integration options.

FAQs

Q: Where is my audio processed when using Cyanite’s cloud setup?

A: In the standard setup, audio is processed on secure servers located in the EU and handled in full compliance with GDPR. Audio is not shared with third parties and remains your property at all times.

Q: Can I use Cyanite without sending audio files at all?

A: Yes. With the zero-audio pipeline, you convert audio locally into spectrograms and send only those abstract frequency representations to Cyanite. The original audio never leaves your environment, while full tagging results are still generated.

Q: What is the difference between the zero-audio pipeline and pseudo-on-premise deployment?

A: The zero-audio pipeline sends spectrograms to Cyanite’s cloud for analysis. The pseudo-on-premise deployment runs the Cyanite Audio Analyzer entirely inside your own AWS cloud infrastructure, which is cut off from the internet and only connected to your system. Pseudo-on-premises offers maximum isolation but only supports tagging, without search features.

Q: Are Similarity Search and Free Text Search available in all privacy setups?

A: Similarity Search, Free Text Search, and Advanced Search are available in cloud-based and zero-audio pipeline workflows. In fully pseudo-on-premise deployments, Cyanite focuses exclusively on tagging and metadata generation due to the isolated environment.

Q: Which privacy option is right for my catalog?

A: That depends on your internal security, legal, and compliance requirements. Teams with standard protection needs often use GDPR cloud processing. Those with higher sensitivity choose the zero-audio pipeline. Organizations requiring full isolation opt for on-premise deployment. Cyanite supports all three.

What is Music Prompt Search? ChatGPT for music?

How Music Prompt Search Works & Why It’s Only Part of the Puzzle

Alongside our Similarity Search, which recommends songs that are similar to one or many reference tracks, we’ve built an alternative to traditional keyword searches. We call it Free Text Search – our prompt-based music search. 

Imagine describing a song before you’ve even heard it:

Dreamy, with soft piano, a subtle build-up, and a bittersweet undertone. Think rainy day reflection.

This is the kind of prompt that Cyanite can turn into music suggestions – not based on genre or mood tags, but on the actual sound of the music. 

Music Prompt Search Example with Cyanite’s Free Text Search

What Is Music Prompt Search?

Prompt search allows you to enter a natural language description (e.g. uplifting indie with driving percussion and a nostalgic feel) and get back music that matches that idea sonically. 

We developed this idea in 2021 and, in 2022, were the first to launch a music search based on pure text input. Since then, we’ve been improving and refining this kind of AI-powered search so that it can accurately translate text into sound. That way, you will get the closest result to the prompt that your catalog allows for.

We are not searching for certain keywords that appear in a search. We directly map text to music. We make the system understand which text description fits a song. This is what we call Free Text Search.

Roman Gebhardt

CAIO & Founder, Cyanite

Built with ChatGPT? Not All Prompts Are Created Equal

More recently, several companies have entered the field of prompt-based music search, using large language models like ChatGPT as a foundation. These models are strong at interpreting natural language, but they cannot understand music the way we do.

They generate tags from the text input and then search those tags. In reality, these algorithms work like a traditional keyword search, merely translating natural language prompts into keywords.
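To illustrate the difference, a direct text-to-music search embeds the prompt and the audio into a shared space and ranks tracks by similarity, instead of matching keywords against tags. The sketch below uses generic placeholder encoders (publicly documented joint text–audio embedding models work this way); it is not Cyanite’s actual model:

```python
import numpy as np

def embed_text(prompt: str) -> np.ndarray:
    """Placeholder: a text encoder trained jointly with the audio encoder."""
    raise NotImplementedError

def embed_audio(path: str) -> np.ndarray:
    """Placeholder: an audio encoder that maps sound into the same embedding space."""
    raise NotImplementedError

def search(prompt: str, catalog: dict, top_k: int = 10) -> list:
    """Rank catalog tracks by cosine similarity between the prompt embedding and each
    track's precomputed audio embedding (catalog maps track IDs to embeddings)."""
    query = embed_text(prompt)
    query = query / np.linalg.norm(query)
    scored = [
        (track_id, float(np.dot(query, emb / np.linalg.norm(emb))))
        for track_id, emb in catalog.items()
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```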

When Prompt Search Shines

Prompt search is a game-changer when:

  • You have a specific scene or mood in mind
  • You’re working with briefs from film, games, or advertising
  • You want to match the energy or emotional arc of a moment

This is ideal for music supervisors, marketers, and creative producers.

Note: Our Free Text Search just got better!

With our latest update, Free Text Search is now:

✅ Multilingual – use prompts in nearly any language

✅ Culturally aware – understand references like “Harry Potter” or “Mario Kart”

✅ Significantly more accurate and intuitive

It’s available free for all API users on V7 and for all web app accounts created after March 15. Older accounts can request access via email.

Why We Build Our Own Models

We chose to develop every model in-house, not only for data security and IP protection, but because music deserves a dedicated algorithm.

Few things are as complex and deep as the world of music. General-purpose AI doesn’t understand the nuance of tempo shifts, the subtle timbre of analog synths, or the emotional trajectory of a song.

Our models are trained on the sound itself. That means:

    • More precise results
    • Higher musical integrity
    • More confidence when recommending or licensing tracks

If you want to learn more about how our models work, check out this blog article and interview with our CAIO Roman Gebhardt.

Want to try our Free Text Search on your own music catalog?

Sync Music Matching with AI-powered Metadata | A Case Study with SyncMyMusic

The Problem

The sync licensing industry faces a fundamental information asymmetry problem. With hundreds of production music libraries operating globally, producers struggle to identify which companies are actively placing their style of music. Jesse Josefsson, veteran of 10,000+ sync placements, identified this gap as a core market inefficiency.

Genres were wrong, moods were wrong. Just not even close to what I would think as acceptable answers for an auto tagging model.

Jesse Josefsson

Founder, SyncMyMusic

Key Challenges:

    • Producers pitching to inappropriate libraries for years without results
    • Manual research taking days or weeks per opportunity
    • Inaccurate tagging solutions create more problems than they solve
    • Industry professionals “flying blind” when making strategic decisions

The Solution

“One of the members said it was so accurate, it was almost spooky because it got things and it labeled things that even they wouldn’t have probably thought of themselves.” – Jesse Josefsson

After evaluating multiple auto-tagging solutions, SyncMyMusic selected Cyanite based on accuracy standards and industry reputation. The platform architecture combines TV placement data with AI-powered music metadata analysis to deliver targeted recommendations.

Why Cyanite:

    • Industry-leading accuracy in genre and mood classification
    • Partnership credibility through SourceAudio integration
    • Responsive customer support with sub-2-hour response times
    • Seamless API integration capabilities

The Implementation

I’m what they would probably call a “vibe coder”. I don’t have coding skills, but if I can do this, you can do this.

Jesse Josefsson

Founder, SyncMyMusic

Jesse built the entire SyncMatch platform using AI tutoring (ChatGPT/Grok) and automation tools (make.com) without traditional coding experience. The implementation took 2.5 months from concept to MVP, demonstrating how modern no-code approaches can deliver enterprise-grade solutions.

AI Music Discovery: How Marmoset Uses Cyanite | A Case Study

Founded in 2010, Marmoset is a full-service music licensing agency representing hundreds of independent artists and labels. At the heart of it, their core experience involves browsing for music. They offer music discovery for any moving visual media. From sync (movies...

AI-Powered Music Marketing feat. Chromatic Talents

Chromatic Talents acts like a music brand consultancy providing a comprehensive range of services in artist management, development, digital branding, and business development. Find out how they use AI-Powered Music Marketing powered by Cyanite. The goal of the...

Cyanite Advanced Search (API only)

Ready to supercharge your discovery workflows? Try out the Advanced Search API.

We’re excited to introduce Advanced Search, the biggest upgrade to Similarity and Free Text Search since we launched. With this release, we’re offering a sneak preview into the power of the new Cyanite system.

Advanced Search brings next-level precision, scalability, and usability, all designed to supercharge your discovery workflows. From advanced filtering to more nuanced query controls, this feature is built for music teams ready to move faster and smarter.

Note: Advanced Search is an API-only feature intended for teams with developer resources who want to integrate Cyanite’s intelligence directly into their own systems.

Advanced Search Feature Overview

Click on a bullet point to jump to each feature directly:

  • Multi-Track Search – multiple search inputs for playlist magic
  • Similarity Scores – total clarity, total control
  • Most Relevant Segments – zoom in on the best parts
  • Custom Metadata Filters – smarter searches start with smarter filters
  • Up to 500 Search Results – sometimes more is more

Similarity Scores: Total Clarity, Total Control

Now each result comes with a clear percentage score, helping you quickly evaluate how close a match really is—both for the overall track and for each top scoring segment. It’s a critical UX improvement that helps users better understand and trust the search results at a glance.
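As a simple illustration of what a percentage match can express (a hypothetical sketch, not Cyanite’s scoring formula), cosine similarity between a query embedding and a track or segment embedding can be rescaled to a 0–100% score:

```python
import numpy as np

def match_percentage(query_emb: np.ndarray, candidate_emb: np.ndarray) -> float:
    """Hypothetical mapping of cosine similarity (-1..1) onto a 0-100% match score."""
    cos = float(np.dot(query_emb, candidate_emb)
                / (np.linalg.norm(query_emb) * np.linalg.norm(candidate_emb)))
    return round((cos + 1) / 2 * 100, 1)

# Toy vectors; a per-segment score applies the same measure to each segment's embedding.
print(match_percentage(np.array([0.2, 0.9, 0.1]), np.array([0.25, 0.85, 0.05])))
```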

Most Relevant Segments – zoom in on the best parts

We’re not just showing you results, we’re showing you their strongest moments. Each track now highlights its Most Relevant Segments for both Similarity and Free Text queries. It’s an instant way to jump to the most relevant slice of content without scrubbing through an entire track. 

Custom Metadata Filters – smarter searches start with smarter filters

Upload your own metadata to filter results before the search even begins. Want only pre-cleared tracks? Looking for music released after 2020? With Custom Metadata Filtering, you can target exactly what you need, making your search dramatically more efficient.
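As an illustration of the idea, pre-filtering means narrowing the candidate pool with your own metadata before any search runs. The field names below are hypothetical, not Cyanite’s API schema:

```python
# Hypothetical pre-filter: narrow the candidate pool with customer-supplied metadata
# before running Similarity or Free Text Search. Field names are illustrative only.
catalog_metadata = {
    "track_001": {"pre_cleared": True, "release_year": 2022},
    "track_002": {"pre_cleared": False, "release_year": 2018},
    "track_003": {"pre_cleared": True, "release_year": 2021},
}

def pre_filter(metadata: dict, *, pre_cleared: bool, min_year: int) -> list:
    """Return track IDs matching the clearance flag and minimum release year."""
    return [
        track_id for track_id, fields in metadata.items()
        if fields["pre_cleared"] == pre_cleared and fields["release_year"] >= min_year
    ]

candidates = pre_filter(catalog_metadata, pre_cleared=True, min_year=2020)
print(candidates)  # only these tracks would then be passed to the search call
```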

Up to 500 Search Results – sometimes more is more

Tired of hitting a ceiling with limited search returns? Now, Similarity Search and Free Text Search deliver up to 500 results, giving you a much broader snapshot of what’s out there. Whether you’re refining a vibe or exploring diverse sonic textures, you’ll have a fuller landscape to work with.

Testing Advanced Search free for a month gave us the confidence we needed to update our search and tagging systems. The integration was smooth, and we were able to ship several exciting features right away – but we’ve only scratched the surface of its full capabilities!

Jack Whitis

CEO, Wavmaker

Ready to level up your catalog search?

Advanced Search introduces a more powerful way to work with your catalog. It is most useful for teams who already understand our core music discovery tools. If you have not yet tried Similarity Search or Free Text Search, sign up to Cyanite and start finding tracks that match the musical references or creative direction you’re working with. 

When you’re ready to take it a step further, explore a track’s strongest moments or enhance your metadata with custom tags using Advanced Search. Make sure you are operating on Cyanite’s v7 architecture, since it enables the full capabilities of the new system.