Explore how Cyanite turns sound into structured metadata: Just upload a couple of songs to our web app.
Managing a music catalog involves more than just storing files. As catalogs grow, teams start running into a different kind of challenge: music becomes harder to find, metadata becomes inconsistent, and strong tracks remain invisible simply because they are described differently than newer material.
Many teams still rely on manual tagging or have inherited metadata systems that were never designed for scale. Over time, this leads to uneven descriptions, slower search, and workflows that depend more on individual knowledge than on shared systems. Creative teams spend valuable time navigating the catalog instead of working with the music itself.
Cyanite’s end-to-end tagging workflow was built to address this challenge. It gives teams a stable, shared foundation they can build on, supporting human judgment rather than replacing it. It complements subjective, manual labeling with a consistent, audio-based process that works the same way for every track, whether you’re onboarding new releases or bringing structure to a legacy catalog.
This article walks through how that workflow functions in practice—from the moment audio enters the system to the point where structured metadata becomes usable across teams and tools.
Why tagging workflows tend to break down as catalogs grow
Most tagging workflows start with care and intention. A small team listens closely, applies descriptive terms, and builds a shared understanding of the catalog. But as volume increases and more people get involved, the system begins to stretch.
As catalogs scale, the same patterns tend to appear across organizations:
- Different editors describe the same sound in different ways.
- Older metadata no longer aligns with newer releases.
- Genre and mood definitions shift over time.
- Search results reflect wording more than sound.
When this happens, teams increasingly rely on memory instead of the systems in place. Strong tracks get overlooked, response times grow, and trust in the metadata erodes.
Cyanite’s workflow addresses this fragility by grounding metadata in the audio itself and applying the same logic across the entire catalog.
Preparing your catalog for audio-based tagging
Teams can adopt Cyanite quickly because there is little preparation involved. The system doesn’t require existing metadata, spreadsheets, or reference information. It listens to the audio file and derives all tags from the sound alone.
Getting started requires minimal setup:
- MP3 files up to 15 minutes in length
- No pre-existing metadata
- No manual pre-labeling
- No changes to your current file structure
Even 128 kbit/s MP3s are usually sufficient, which means older archive files can be analyzed as they are—no need for additional audio preparation. Teams can then choose how they want to bring audio into Cyanite based on volume and workflow. Once that’s decided, tagging can begin immediately.
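If you want to sanity-check an archive before uploading, a short script can flag files that exceed the 15-minute limit. Below is a minimal sketch using the open-source mutagen library; it is not part of Cyanite’s tooling, and the folder path is a placeholder.

```python
# Pre-upload sanity check: flag MP3s that exceed the 15-minute limit.
# Minimal sketch using the open-source "mutagen" library (pip install mutagen);
# not part of Cyanite's tooling. "catalog/" is a placeholder path.
from pathlib import Path
from mutagen.mp3 import MP3

MAX_SECONDS = 15 * 60  # per-file limit mentioned above

for path in Path("catalog/").rglob("*.mp3"):
    info = MP3(str(path)).info
    status = "SKIP" if info.length > MAX_SECONDS else "OK"
    print(f"{status} {path.name}: {info.length / 60:.1f} min at {info.bitrate // 1000} kbit/s")
```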
If you’re unsure about uploading copyrighted audio to Cyanite, you can explore our security standards and privacy-first workflows, including options to process audio in a copyright-safe way using encrypted or abstracted data.
Bringing audio into Cyanite in a way that fits your workflow
Different organizations manage music in different ways, so Cyanite supports several ingestion paths that all lead to the same analysis results.
Teams working with smaller batches often start in the web app. This is common for sync teams reviewing submissions, catalog managers auditing older libraries, or teams testing Cyanite before deeper integration. Audio can be uploaded directly, selected from disk, or referenced via a YouTube link, with analysis starting automatically once the file is added.
Platforms and larger catalogs usually integrate via the API. In this setup, tagging runs inside the organization’s own systems. Audio is uploaded programmatically, and results are delivered automatically via webhook as structured JSON as soon as processing is complete. This approach supports continuous ingestion without manual steps and fits naturally into existing pipelines.
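To illustrate, a webhook consumer can be as small as a single HTTP endpoint that accepts the JSON payload and stores it. The sketch below uses Flask, and the payload field names (trackId, tags) are invented placeholders rather than Cyanite’s actual schema; the API documentation defines the real payload shape.

```python
# Minimal webhook receiver sketch (Flask). The payload field names used here
# ("trackId", "tags") are illustrative placeholders, not Cyanite's actual
# schema; see the API documentation for the real payload shape.
from flask import Flask, request

app = Flask(__name__)

@app.route("/cyanite-webhook", methods=["POST"])
def receive_tags():
    payload = request.get_json(force=True)
    track_id = payload.get("trackId")  # hypothetical field name
    tags = payload.get("tags", {})     # hypothetical field name
    # In a real integration you would persist this into your catalog DB here.
    print(f"Received metadata for track {track_id}: {tags}")
    return "", 204  # acknowledge receipt so the delivery isn't retried

if __name__ == "__main__":
    app.run(port=8000)
```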
For very large catalogs, Cyanite can also provide a dedicated S3 bucket with CLI credentials. This allows high-throughput ingestion without relying on browser-based uploads. It’s often used during initial onboarding of catalogs containing thousands of tracks.
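As an illustration, bulk ingestion into such a bucket could look like the boto3 sketch below; the bucket name and credential profile are placeholders for values provided during onboarding, not ones you choose yourself.

```python
# Bulk-upload sketch using boto3. The bucket name and AWS profile are
# placeholders; Cyanite provides the actual bucket and credentials
# during onboarding.
from pathlib import Path
import boto3

session = boto3.Session(profile_name="cyanite-ingest")  # hypothetical profile
s3 = session.client("s3")
BUCKET = "your-dedicated-cyanite-bucket"                # placeholder name

for path in Path("catalog/").rglob("*.mp3"):
    s3.upload_file(str(path), BUCKET, path.name)
    print(f"uploaded {path.name}")
```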
Some teams prefer not to upload files themselves at all. In those cases, audio can be shared via common transfer tools before the material is processed and delivered in the agreed format.
What happens once the analysis is complete?
Cyanite produces a structured, consistent description of how each track sounds, independent of who uploaded it or when it entered the catalog.
Metadata becomes available either in the web app library or directly inside your system via the API. We can also deliver an additional CSV and Google Spreadsheet export on request.
Each track receives a stable set of static tags and values, including the following (a hypothetical example is sketched after the list):
- Genres and free-genre descriptors
- Moods and emotional dynamics
- Energy and movement
- Instrumentation and instrument presence
- Valence–arousal values
- The most representative part of the track
- An Auto-Description summarizing key characteristics
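To make this concrete, a per-track result might look roughly like the sketch below, written as a Python dict. Every field name and value here is invented for illustration; the actual schema is defined by Cyanite’s taxonomy and visible in the Query Builder.

```python
# Illustrative shape of a per-track result, written as a Python dict.
# All field names and values are invented for readability; the real schema
# is documented in Cyanite's tagging taxonomy and Query Builder.
example_track_metadata = {
    "genres": ["electronic", "house"],       # plus free-genre descriptors
    "moods": ["energetic", "uplifting"],
    "energy": "high",
    "instruments": ["synth", "electric drums"],
    "valence": 0.74,                         # 0..1 scale
    "arousal": 0.81,                         # 0..1 scale
    "representative_section": {"start_s": 62, "end_s": 92},
    "auto_description": "An energetic electronic track with driving synths.",
}
```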
All tags are generated through audio-only analysis, which ensures that legacy tracks and new releases follow the same logic. Over time, this consistency becomes the foundation for faster search, clearer filtering, and more reliable collaboration across teams.
For deeper insight into how attributes are defined and structured, explore Cyanite’s full tagging taxonomy here.
Curious how the Google Spreadsheet export looks? Check out this sample.
How long does tagging take at different catalog sizes?
Cyanite processes audio quickly. A typical analysis time is around 10 seconds per track. Because processing runs in parallel, turnaround time depends more on workflow setup than on catalog size.
In practice, teams can expect the following (a rough estimate is sketched after the list):
- Small batches to be ready almost instantly
- Medium-sized libraries to complete within hours
- Enterprise-scale catalogs to be onboarded within 5–10 business days, regardless of size
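As a rough back-of-envelope sketch, using the ~10-second figure above and an assumed, made-up degree of parallelism, pure analysis time stays modest even at scale; the longer enterprise timelines reflect ingestion and setup rather than analysis.

```python
# Back-of-envelope turnaround estimate. The ~10 s/track figure comes from
# the text above; the worker count is an invented assumption, not a
# documented Cyanite limit.
SECONDS_PER_TRACK = 10
PARALLEL_WORKERS = 100  # hypothetical

for catalog_size in (50, 5_000, 100_000):
    minutes = catalog_size * SECONDS_PER_TRACK / PARALLEL_WORKERS / 60
    print(f"{catalog_size:>7} tracks -> ~{minutes:.1f} min of pure analysis time")
```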
For day-to-day use via the API, results arrive in near real time via webhook as soon as processing finishes. This makes the workflow suitable both for large one-time onboarding projects and continuous ingestion as new music arrives.
Understanding scores, tags, and why both matter
Cyanite’s models produce two complementary layers of information.
Numerical scores describe how strongly an attribute is present, both across the full track and within time-based segments. These values range from 0 to 1, with 0.5 representing a meaningful threshold.
Rather than applying a simple cutoff, Cyanite derives final tags through an additional decision layer that considers how different attributes relate to one another. This approach helps resolve ambiguities, stabilize hybrid sounds, and produce tags that make musical sense in context.
This means you get metadata that remains robust even for tracks that blend genres, moods, or production styles—a common challenge in modern catalogs.
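To illustrate the difference, the toy sketch below contrasts a naive 0.5 cutoff with a simplified decision layer that weighs related attributes together. It is a conceptual illustration only, not Cyanite’s actual model.

```python
# Toy illustration of a plain 0.5 cutoff versus a decision layer that
# considers related attributes together. A simplified sketch of the
# concept, not Cyanite's actual logic.
scores = {"rock": 0.52, "metal": 0.49, "pop": 0.12}  # hypothetical per-track scores

# Naive: every attribute above the threshold becomes a tag, independently.
naive_tags = [name for name, s in scores.items() if s > 0.5]

# Context-aware: for closely related genres, a near-threshold score can be
# kept depending on its stronger sibling, stabilizing hybrid sounds.
def decide(scores, related=("rock", "metal"), margin=0.05):
    tags = [name for name, s in scores.items() if s > 0.5]
    top = max(related, key=lambda n: scores[n])
    for name in related:
        if name != top and abs(scores[name] - scores[top]) < margin:
            tags.append(name)  # keep the close sibling: it is a hybrid sound
    return sorted(set(tags))

print(naive_tags)       # ['rock']
print(decide(scores))   # ['metal', 'rock'], hybrid resolved in context
```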
Exporting metadata into your existing systems
Once tags are available, your team can export them in the format that best fits your workflow.
API users typically work with structured JSON, delivered automatically via webhook and accessible through authenticated requests. Cyanite’s Query Builder allows teams to explore available fields and preview real outputs before integration.
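As a sketch, an authenticated request could look like the snippet below; the endpoint URL, the query body, and the environment variable name are illustrative assumptions, and the Query Builder is the place to construct real queries.

```python
# Sketch of an authenticated API request using the "requests" library.
# The endpoint URL, the query body, and the env variable name are
# illustrative assumptions; use Cyanite's Query Builder for real queries.
import os
import requests

API_URL = "https://api.cyanite.ai/graphql"   # assumed endpoint
TOKEN = os.environ["CYANITE_ACCESS_TOKEN"]   # hypothetical env variable

query = """
{ libraryTracks(first: 10) { edges { node { id title } } } }
"""  # placeholder query; build the real one in the Query Builder

resp = requests.post(
    API_URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```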
For one-time projects or larger deliveries, metadata can also be provided as CSV files. Web app users can request CSV export through Cyanite’s internal tools, which is especially useful during catalog cleanups or migrations.
Because the structure remains consistent across formats, metadata can be reused across systems without rework.
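Because the two formats share one structure, converting between them is mechanical, as the short sketch below shows (reusing the invented field names from the earlier example):

```python
# Flatten structured per-track metadata into CSV rows. Field names reuse
# the invented example above and are not Cyanite's actual schema.
import csv

tracks = [
    {"id": "t1", "genres": ["electronic", "house"], "valence": 0.74, "arousal": 0.81},
    {"id": "t2", "genres": ["jazz"], "valence": 0.41, "arousal": 0.33},
]

with open("export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "genres", "valence", "arousal"])
    writer.writeheader()
    for t in tracks:
        row = dict(t, genres="; ".join(t["genres"]))  # flatten list for CSV
        writer.writerow(row)
```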
Learn how to quickly build your queries for the Cyanite API with our Query Builder.
How teams use tagged metadata in practice
Once audio-based tagging is in place, teams tend to notice changes quickly. Search becomes faster and more predictable. Creative teams can filter by sound instead of guessing keywords. Catalog managers spend less time fixing metadata and more time shaping the catalog strategically.
In practice, tagged metadata supports workflows such as the following (a small filtering example follows the list):
- Catalog management and cleanup
- Creative search and curation
- Ingestion pipelines
- Licensing and rights
- Sync briefs and pitching
- Internal discovery tools
- Audits and reporting
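For example, an internal discovery tool might filter by sound attributes directly, as in this small sketch (again using the invented fields from the examples above):

```python
# Filter a tagged catalog by sound instead of keywords. Uses the same
# invented metadata fields as the earlier examples.
catalog = [
    {"title": "Track A", "moods": ["uplifting"], "energy": "high", "arousal": 0.81},
    {"title": "Track B", "moods": ["melancholic"], "energy": "low", "arousal": 0.22},
]

# Brief: "high-energy, uplifting, suitable for a sports montage"
matches = [
    t for t in catalog
    if "uplifting" in t["moods"] and t["energy"] == "high" and t["arousal"] > 0.6
]
print([t["title"] for t in matches])  # ['Track A']
```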
Over time, consistent metadata reduces friction between departments and makes catalog operations more resilient as libraries continue to grow.
Best practices from real-world usage
Teams see the smoothest results when they work with clean audio sources, batch large uploads, manage API credentials carefully, and switch to S3-based ingestion as catalogs become larger. Thinking about export formats early also helps avoid rework during onboarding projects.
None of this changes the outcome of the analysis itself, but it does make the overall process more predictable and easier to manage at scale.
“With Cyanite, we have a partner whose technology truly matches the scale and diversity of our catalog. Their tagging is fast and reliable, and Similarity Search unlocks a whole new way to discover music, not just through filters, but through feeling. It’s a huge step forward in how we help creators connect with the right tracks.”
Final thoughts
Cyanite’s tagging workflow is designed to scale with your catalog without making your day-to-day work more complex. Whether you upload a handful of tracks through the web app or process tens of thousands via the API, the result will be the same: structured, consistent metadata that reflects how your music actually sounds.
If you’re ready to move away from manual tagging and toward a more stable foundation for search and discovery, explore the different ways to work with Cyanite and choose the setup that fits your workflow.
Want to work with Cyanite? Explore your options and get in touch with our business team, who can provide guidance if you’re unsure how to start.
FAQs
Q: Do I need to send existing metadata to use Cyanite’s tagging workflow?
A: No. Cyanite analyzes the audio itself. It doesn’t rely on existing tags or descriptions.
Q: Can Cyanite handle both legacy catalogs and new releases?
A: Yes, it can. The same analysis logic applies to all tracks, which helps unify older and newer material under a single metadata structure.
Q: How are results delivered when using the API?
A: Results are sent automatically via webhook as structured JSON as soon as processing is complete.
Q: Is the tagging output consistent across export formats?
A: Yes. JSON and CSV exports use the same underlying structure and values.
Q: Who typically uses this workflow?
A: Music publishers, production libraries, sync teams, music-tech platforms, and catalog managers use Cyanite’s tagging workflow to support search, licensing, onboarding, and catalog maintenance.
Q: How long will it take to tag my music?
A: Small batches are tagged almost immediately. For larger catalogs, we usually need 5–10 business days for the complete setup.
