
Cyanite API Update – Version 6 now live!

February 10, 2021

After months of hard work, our new API update is finally live! The new classifier generation includes our 13 new moods, EDM sub-genres, percussion, and the sound-based musical era of a song. You can find the new API generation’s full documentation here: https://api-docs.cyanite.ai/blog/2021/02/05/changelog-2020-02-05

The new API update includes:

• 13 new moods

• 17 different genres

• 8 EDM sub-genres

• Voice – male/female/instrumental

• Voice presence

• Percussion

• Musical era

• Experimental keywords

We also set up versioning because we do not want to force anyone to upgrade to the latest generation immediately. Each new generation is now introduced as a separate set of GraphQL fields.

Further, we added a new webhook format that is more flexible and consistent.

Now let’s take a closer look at it!

Mood

The mood multi-label classifier provides the following labels:

aggressive, calm, chilled, dark, energetic, epic, happy, romantic, sad, scary, sexy, ethereal, uplifting

Each label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to represent a given mood and 1 (100%) indicates a high probability that the track represents a given mood.

Since the mood of a track might not always be properly described by a single tag, the mood classifier can predict multiple moods for a given song instead of only one. A track could be classified as dark (score: 0.9) while also being classified as aggressive (score: 0.8).

The mood can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution. In addition to the scores, the API also exposes a list of the most likely moods, or the term ambiguous in case the audio does not properly reflect any of our mood tags.
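To make the multi-label behavior concrete, here is a minimal sketch of how a most-likely-moods list with an ambiguous fallback could be derived from the per-label scores. The 0.5 cut-off is an assumption for illustration only, not Cyanite’s actual decision rule.

```python
# Hypothetical post-processing of mood scores: each label carries a score
# in [0, 1], and several labels can apply to the same track at once.
# The 0.5 threshold is an illustrative assumption, not Cyanite's rule.

MOOD_TAGS = [
    "aggressive", "calm", "chilled", "dark", "energetic", "epic", "happy",
    "romantic", "sad", "scary", "sexy", "ethereal", "uplifting",
]

def most_likely_moods(scores, threshold=0.5):
    """Return mood tags scoring above the threshold (highest first),
    or ['ambiguous'] when no tag qualifies."""
    likely = sorted(
        (tag for tag in MOOD_TAGS if scores.get(tag, 0.0) >= threshold),
        key=lambda tag: scores[tag],
        reverse=True,
    )
    return likely or ["ambiguous"]

# A track can carry several moods at once, e.g. dark (0.9) and aggressive (0.8):
print(most_likely_moods({"dark": 0.9, "aggressive": 0.8}))  # ['dark', 'aggressive']
print(most_likely_moods({"happy": 0.2}))                    # ['ambiguous']
```

The same thresholding pattern applies to the genre and EDM sub-genre scores described below.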

Genre

The genre multi-label classifier provides the following labels:

ambient, blues, classical, country, electronicDance, folk, indieAlternative, jazz, latin, metal, pop, punk, rapHipHop, reggae, rnb, rock, singerSongwriter

Each label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to represent a given genre and 1 (100%) indicates a high probability that the track represents a given genre.

Since music can cross genre borders, the genre classifier can predict multiple genres for a given song instead of only one. A track could be classified as rapHipHop (score: 0.9) but also reggae (score: 0.8).

The genre can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution. In addition to the scores, the API also exposes a list of the most likely genres.
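The relationship between the segment-wise and the track-averaged values can be sketched as follows. Treating the whole-track score as the plain mean of the 15-second segment scores is an assumption for illustration; the actual aggregation is not specified here.

```python
# Illustrative sketch: each value covers one 15-second segment of the track,
# and the whole-track score is (assumed here to be) their plain mean.

def average_score(segment_scores):
    """Average per-segment scores (15 s resolution) into one track-level score."""
    if not segment_scores:
        raise ValueError("no segments to average")
    return sum(segment_scores) / len(segment_scores)

# e.g. a track whose 'reggae' score varies across four 15 s segments:
reggae_segments = [0.7, 0.9, 0.8, 0.6]
print(round(average_score(reggae_segments), 2))  # 0.75
```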

EDM Sub-Genre

If a track’s genre is classified as electronicDance, the EDM sub-genre classifier offers a deeper analysis layer with the following labels:

breakbeatDrumAndBass, deepHouse, electro, house, minimal, techHouse, techno, trance

Each label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to represent a given sub-genre and 1 (100%) indicates a high probability that the track represents a given sub-genre.

The EDM sub-genre can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution. In addition to the scores, the API also exposes a list of the most likely EDM sub-genres.
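The conditional layering above can be sketched like this: the sub-genre result is only meaningful when the genre classifier reports electronicDance. The dictionary shapes and the 0.5 threshold are illustrative assumptions, not Cyanite’s actual schema.

```python
# Sketch of the conditional EDM layer: only consult the sub-genre scores
# when the genre classifier reports electronicDance above a threshold.
# Data shapes and the 0.5 cut-off are assumptions for illustration.

EDM_SUBGENRES = [
    "breakbeatDrumAndBass", "deepHouse", "electro", "house",
    "minimal", "techHouse", "techno", "trance",
]

def edm_subgenre(genre_scores, subgenre_scores, threshold=0.5):
    """Return the highest-scoring EDM sub-genre, or None when the track
    is not classified as electronicDance."""
    if genre_scores.get("electronicDance", 0.0) < threshold:
        return None  # sub-genre layer does not apply to non-EDM tracks
    return max(EDM_SUBGENRES, key=lambda s: subgenre_scores.get(s, 0.0))

print(edm_subgenre({"electronicDance": 0.9}, {"techno": 0.8, "trance": 0.4}))  # techno
print(edm_subgenre({"rock": 0.9}, {"techno": 0.8}))  # None
```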

Voice

The voice classifier categorizes the audio as female singing voice, male singing voice, or instrumental (non-vocal).

Each label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to have the given voice elements and 1 (100%) indicates a high probability that the track contains the given voice elements.

The voice classifier results can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution.

Voice Presence

This label describes the amount of singing voice throughout the full duration of the track and may be none, low, medium, or high.
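One plausible way such a categorical label could be derived is by bucketing the fraction of the track that contains singing voice. The bucket boundaries below are assumptions for illustration; Cyanite’s actual thresholds are not documented here.

```python
# Hypothetical mapping from the fraction of a track containing singing
# voice to the categorical presence label. Boundary values (0.33, 0.66)
# are illustrative assumptions, not Cyanite's documented thresholds.

def voice_presence(vocal_fraction):
    """Map the vocal fraction of a track (0..1) to none/low/medium/high."""
    if vocal_fraction <= 0.0:
        return "none"
    if vocal_fraction < 0.33:
        return "low"
    if vocal_fraction < 0.66:
        return "medium"
    return "high"

print(voice_presence(0.0))   # none
print(voice_presence(0.8))   # high
```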

Percussion

The instrument classifier currently only predicts the presence of a percussive instrument, such as drums or drum machines. The result is displayed under the label percussion.

The label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to contain the given instrument and 1 (100%) indicates a high probability that the track contains the given instrument.

The instrument classifier result can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution.

Musical Era

The musical era classifier describes the era the audio was likely produced in, or which the sound of production suggests.

Experimental Keywords

An experimental taxonomy of keywords that can be associated with the audio. The data is experimental and expected to change. Access must be requested from the Cyanite sales team.

Example keywords:

uplifting, edm, friendly, motivating, pleasant, happy, energetic, joy, bliss, gladness, auspicious, pleasure, forceful, determined, confident, positive, optimistic, agile, animated, journey, party, driving, kicking, impelling, upbeat

Go ahead and start coding

Contact us with any questions about our API services via mail@cyanite.ai. Give us a shout-out on Twitter, LinkedIn or wherever you feel like. Don’t hold back with feedback on what we can improve.

If you want to get a first grip on how Cyanite works, you can also register for our free web app to analyze music and try out similarity searches without any coding needed.

If you are a coder and want to join the ride, please send your application to careers@cyanite.ai.
