
Empowering Researchers with Free AI Music Analysis – Cyanite for Innovators Spotlight

Last updated on March 6th, 2025 at 02:14 pm

We believe in the power of our AI music analysis tools to fuel creativity across diverse fields, from the arts and research to grassroots movements and creative coding.

That’s why we launched Cyanite for Innovators in 2023, a support programme designed to empower individuals and teams working on non-commercial projects across the arts, research and creative coding communities. Participants gain access to our AI music analysis tools, opening the door for groundbreaking experimentation in areas like interactive art, digital installations, and AI-driven research.

So far, we’ve received more than 20 applications, and the five projects profiled below have been selected for the programme.

 

Are you working on a groundbreaking AI-driven project? Apply now and join a community of innovators shaping the future.

 

How to Apply

1. Click the button below.

2. Submit a detailed proposal outlining your project’s objectives, timeline, and expected outcomes.

3. Allow us 4-8 weeks to review your application and get back to you.

Music and Dance Visualizer by Dilucious

Agustin Di Luciano, a digital artist and developer, is pushing the limits of real-time interactivity with audio processing, motion capture, procedural generation, and Cyanite’s AI music analysis and machine-learning technology.

His project creates immersive, AI-powered sensory landscapes, transforming movement and sound into stunning real-time visuals. These images are from his recent exhibition at Art Basel Miami, where he showed his Music Visualizer to Miami’s top tech entrepreneurs as well as notable Latin American art collectors.

Soundsketcher

Led by Asteris Zacharakis, PhD and funded by the Hellenic Foundation for Research and Innovation, Soundsketcher (2024–2025) is a cutting-edge research project that blends computational, cognitive, and arts-based methodologies to develop an application that automatically converts audio into graphic scores.

This technology assists users in learning, analyzing, and co-creating music, making music notation and composition more accessible and intuitive.

MusiQ

Kristóf Kasza developed MusiQ for his thesis at Budapest University of Technology and Economics: a multi-user service that sorts song requests based on parameters analyzed by Cyanite, built on a FastAPI backend with Docker and a Flutter-based frontend.

Cyanite’s AI music analysis API enabled precise music sorting by key, genre, BPM, and mood, and the project earned him top marks. Although he has since started a full-time software development job, he looks forward to further enhancing MusiQ in the future.

Expanding the Boundaries of Queer Music Analysis: A Comparative Study with AI Insights

A research team around Dr. Jörg Mühlhans at the University of Vienna’s Institute of Musicology is conducting a large-scale quantitative analysis of 125 queer music songs, revealing key trends in emotional tone and queer representation. The team aims to integrate Cyanite’s AI to validate, expand, and refine these insights for future research.

Thesis: What is the greatest factor in making a timeless song?

Huw Lloyd is conducting primary research for his dissertation into the key musical, historical/cultural, and economic factors that contribute to a “timeless song”, one that resonates across generations. He aims to determine the most influential elements and provide insights for musicians seeking to understand and apply these trends.

Feel inspired?

If you have a research project in mind or would like to try out the technology that supports these innovative projects, click the button below to analyze your own songs!

PR: SourceAudio and Cyanite Join Forces, Offering Improved Music Discovery for Sync Licensing

Last updated on March 5th, 2025 at 05:14 pm

PRESS RELEASE

SourceAudio and Cyanite Join Forces, Offering Improved Music Discovery for Sync Licensing via AI-Powered Tagging and Search

Los Angeles, February 26, 2025 – SourceAudio, the music industry’s most widely adopted sync platform, today announced a groundbreaking collaboration with Cyanite, a pioneering AI-powered music analysis platform. With SourceAudio’s platform growing by approximately 50,000 new tracks weekly, this strategic alliance integrates Cyanite’s advanced AI music tagging and search capabilities directly into the industry’s most comprehensive music licensing ecosystem.

The integration enables SourceAudio’s vast network of music libraries and content owners to opt in and instantly access Cyanite’s AI tagging and search system within their existing workflows, dramatically streamlining the metadata enhancement process for millions of tracks. This seamless solution eliminates traditional barriers to adoption, allowing content owners to immediately improve their music’s discoverability and licensing potential across SourceAudio’s rapidly expanding platform.

In addition, Cyanite has chosen SourceAudio as their delivery platform for both new and existing customers. This means all Cyanite users can now benefit from SourceAudio’s comprehensive hosting and licensing solution, with their audio files automatically tagged and optimized for search, discovery, pitching, and monetization via the world’s fastest-growing sync and CMS platform.

“By integrating Cyanite’s comprehensive music classification technology directly into SourceAudio’s platform, we’re delivering immediate ROI to our clients through enhanced discovery and optimization,” said Andrew Harding, Co-Founder & CEO at SourceAudio. “This collaboration combines the world’s most sophisticated AI tagging solution with sync licensing’s most active platform, creating unprecedented opportunities for rights holders to maximize the value of their catalogs.”

“Cyanite and SourceAudio complement each other perfectly, so teaming up with them just makes sense,” said Markus Schwarzer, CEO at Cyanite. “Rights holders use our tools because they know that enhanced metadata means better discoverability and more licenses. Now that they can immediately add their optimized catalogs to SourceAudio’s sync platform, the pathway to monetization is even shorter and more lucrative for our clients.”

The integration is already in use by a variety of music licensing platforms. These include ReelWorld, the most trusted name in radio jingles, radio imaging, and station branding, as well as Gramoscope Music, one of Hollywood’s leading music catalogs for TV and film. Viralnoise, Gramoscope’s new cutting-edge subscription-based platform for high-quality royalty-free music, is also using the integration. Currently utilizing SourceAudio as their main sync platform, these companies will now be able to quickly harmonize the different tagging schemas of all their catalogs into one via Cyanite’s AI-powered metadata tagging.

“ReelWorld is trusted by some of the biggest brands on the planet to create sounds, music, and audio identities that are heard by millions of people every day,” said Craig Wallace, Chief Experience Officer of ReelWorld. “Adding the power of Cyanite to our services helped our clients find the right piece of music for their brand identity easier, faster, and more effectively than ever before! Having it integrated directly into SourceAudio is a game-changer.”

“This integration bridges the gap between powerful catalog management and seamless user navigation,” says Alec Puro, CEO and Founder of Gramoscope Music. “Our music supervision partners know exactly what they need, and now they have the tools to find it faster than ever. Speeding up the process from inspiration to licensing is essential for us and our clients, and this upgrade really makes a huge difference.”

“Our main customers are content creators and influencers who approach searching for music much differently than professional music supervisors. By integrating Cyanite’s intuitive tools into SourceAudio, we empower them to search seamlessly, whether through visual cues or reference sounds,” says Eric Meyers, Creative Director of Viralnoise. “Searching for music is the cornerstone of our user experience, and this addition truly makes this process seamless.”

For more information, visit sourceaudio.com and cyanite.ai.

About SourceAudio

SourceAudio, the music industry’s most widely adopted sync platform serving 570,000 users, over 100 media giants, and 2,500 US radio stations daily, aggregates more than 33 million songs from top-tier labels, libraries, catalogs, and publishers. Processing over 500,000 music searches weekly for sync licensing, the platform provides unparalleled connectivity between rights holders and major content creators, broadcast networks, and streaming services. At its core, SourceAudio excels in four high-value areas for clients: music discovery, distribution, protection, and payments. Its users maximize revenue opportunities through various channels, including YouTube Content ID, distribution, performance royalty collection, and global sync licensing at scale, effectively future-proofing catalogs in the ever-evolving music industry landscape.

About Cyanite

With over 200 business customers and 200,000 individual users, Cyanite leads the industry in AI-driven music analysis, tagging and search solutions helping music companies transform their catalogs into personalized, AI-powered music libraries with advanced discovery and recommendation features. Engineered in Germany, Cyanite’s fully proprietary software enables efficient keywording and music discovery for the entertainment and advertising industries. Trusted by leading companies like BMG, Epidemic Sound, and Warner Chappell, Cyanite offers API and no-code solutions to streamline music organization and discovery. Recognized with the VIA 2023 Award for Best New Music Business, Cyanite aims to become the universal intelligence that understands, connects, and recommends the world’s music.

SourceAudio Press Contacts:

Laurie Jakobsen, Jaybird Communications, 646-484-6764, laurie@jaybirdcom.com

Bill Greenwood, Jaybird Communications, 609-221-2374, bill@jaybirdcom.com

Cyanite Press Contacts:

Matt Cartmell, Carta Communications, +44 7930 485333, matt.cartmell@cartacomms.com

ENDS

PR: DAACI and Cyanite Announce Strategic Collaboration

Last updated on February 12th, 2025 at 05:14 pm

PRESS RELEASE

DAACI and Cyanite Announce Strategic Collaboration to Enhance AI-Driven Music Discovery and Adaptive Editing for the Sync Community

London/Berlin, February 12, 2025 – DAACI, a leader in adaptive AI music technology, and Cyanite, a pioneering provider of AI-powered music tagging and search solutions, are proud to announce a strategic partnership designed to advance the sync industry. Music Ally also covered the announcement. The partnership aims to simplify creative sync by combining the task of finding the right song quickly with the delivery of perfectly edited music, cut directly to any video or gaming asset.

Connecting DAACI’s patented Natural Edits technology with Cyanite’s advanced tagging and search capabilities, this partnership offers an API-first approach to revolutionizing the workflows of music libraries, publishers, and sync platforms. These technologies allow platforms to streamline how sync professionals find, customize, and license tracks—all while ensuring seamless integration into existing infrastructures.

Natural Edits, part of DAACI’s Natural Series, enables platforms to offer their users customizable editing options for original tracks—such as looping background beds, precise cuts for specific narratives, or snippets for advertising. Coupled with Cyanite’s AI-driven tagging and search technology, platforms can empower their clients to efficiently discover and adapt music to meet a variety of creative briefs, from film and TV placements to advertising campaigns and gaming, reducing what was once a day-long process to a matter of minutes.

“Our partnership with DAACI is focused on enhancing what music platforms can offer to the sync community,” says Markus Schwarzer, CEO of Cyanite. “Through our API integration, we’re giving music libraries and publishers the ability to make music discovery and customization intuitive, scalable, and efficient for their clients. This is about enabling the industry to evolve while keeping workflows seamless.”

Dr. Joe Lyske, Co-founder of DAACI, adds, “The sync ecosystem thrives on innovation, and our partnership with Cyanite ensures that music platforms can adapt to the changing needs of their clients. With Natural Edits and Cyanite’s tagging and search capabilities, we’re providing music libraries and sync platforms with tools to unlock the full potential of their catalogs, offering tailored solutions for professionals in the field.”

“We’re excited to see this new collaboration between DAACI and Cyanite bring AI-driven music discovery and adaptive editing to the forefront,” said Jeff Perkins, CEO of Soundstripe, a client of both DAACI and Cyanite. “We’re always looking for ways to help content creators move faster while ensuring they have access to the highest-quality music. This partnership makes it even easier for our users to find and customize the perfect track in record time.”

This collaboration signifies a pivotal moment for the sync industry, where advanced AI capabilities are integrated directly into the platforms relied upon by sync professionals. By bridging music discovery and adaptive editing through API-driven solutions, DAACI and Cyanite are equipping the sync industry with the tools it needs to stay competitive and impactful.

ENDS

About DAACI:

DAACI develops next-gen smart and AI creative music tools.

Our series of patented technologies empower music makers to meet the rapidly growing demand for personalised music. Our technologies encompass tools that supercharge the creative process dynamically composing new music in real-time, and smart editing systems that seamlessly adapt existing tracks. Built by a world-class team of musicians and composers, DAACI’s technology is based on over 30 years of research. Incorporating a growing portfolio of 79 granted patents and supported by partnerships with the UKRI Centre for Doctoral Training in Artificial Intelligence and Music at Queen Mary University of London and the innovative Abbey Road Red incubator, DAACI is the go-to solution for creators who make and use music. With our series of pioneering plugins and tools, creators can benefit from our unique approach and our understanding of deep music theory, giving them multiple lifetimes’ worth of musical experience at their fingertips. For more information on DAACI: https://www.daacigroup.com

About Cyanite:

Cyanite helps music companies transform their catalogs into personalized, AI-powered music libraries with advanced search and recommendation features. Based in Mannheim and Berlin, Germany, Cyanite develops software that enables efficient keywording and music discovery for the entertainment and advertising industries. Trusted by leading companies like BMG, Epidemic Sound, and Warner Chappell, Cyanite offers API and no-code solutions to streamline music organization and discovery. Recognized with the VIA 2023 Award for Best New Music Business, Cyanite aims to become the universal intelligence that understands, connects, and recommends music globally.

Press Contacts:

Gemma Robinson

OLEX Communications

gemma@olexcommunications.co.uk

+44 7854 813 153

AI Music Search Algorithms: Gender Bias or Balance?

This is part 1 of 2. To dive deeper into the data we analyzed, click here to check out part 2.

Gender Bias in AI Music: An Introduction

Gender Bias in AI Music Search is often overlooked. With the upcoming release of Cyanite 2.0, we aim to address this issue by evaluating gender representation in AI music algorithms, specifically comparing male and female vocal representation across both our current and updated models.

Finding music used to be straightforward: you’d search by artist name or song title. But as music catalogs have grown, professionals in the industry need smarter ways to navigate vast libraries. That’s where Cyanite’s Similarity Search comes in, offering an intuitive way to discover music using reference tracks. 

In our evaluation, we want to focus not only on perceived similarity but also on the potential gender bias of our algorithm. In other words, we want to ensure that our models not only meet qualitative standards but are also fair, especially when it comes to gender representation.

In this article, we evaluate both our currently deployed algorithm, Cyanite 1.0, and its upcoming successor, Cyanite 2.0, to see how they perform in representing artists of different genders, using a method called propensity score estimation.

Cyanite 2.0, scheduled for release on November 1st, 2024, will bring an updated version of Cyanite’s Similarity and Free Text Search that scores higher in blind tests measuring the similarity of recommended tracks to the reference track.

    Why Gender Bias and Representation Matters in Music AI

    In machine learning (ML), algorithmic fairness ensures automated systems aren’t biased against specific groups, such as by gender or race. For music, this means that AI music search should equally represent both male and female artists when suggesting similar tracks.

    An audio search algorithm can sometimes exhibit gender bias as an outcome of a Similarity Search. For instance, if an ML model is trained predominantly on audio tracks with male vocals, it may be more likely to suggest audio tracks that align with traditional male-dominated artistic styles and themes. This can result in the underrepresentation of female artists and their perspectives.

    The Social Context Behind Artist Representation

    Music doesn’t exist in a vacuum. Just as societal biases influence various industries, they also shape music genres and instrumentation. Certain instruments—like the flute, violin, and clarinet—are more often associated with female artists, while the guitar, drums, and trumpet tend to be dominated by male performers. These associations can extend to entire genres, like country music, where studies have shown a significant gender bias with a decline in female artist representation on radio stations over the past two decades. 

    What this means for AI Music Search models is that if they aren’t built to account for these gendered trends, they may reinforce existing gender and other biases, skewing the representation of female artists.

    How We Measure Fairness in Similarity Search

    At Cyanite, we’ve worked to make sure our Similarity Search algorithms reflect the diversity of artists and their music. To do this, we regularly audit and update our models to ensure they represent a balanced range of artistic expressions, regardless of gender.

    But how do we measure whether our models are fair? That’s where propensity score estimation comes into play.

    What Are Propensity Scores?

    In simple terms, propensity scores measure the likelihood of a track having certain features—like specific genres or instruments—that could influence whether male or female artists are suggested by the AI. These scores help us analyze whether our models are skewed toward one gender when recommending music.

    By applying propensity scores, we can see how well Cyanite’s algorithms handle gender bias. For example, if rock music and guitar instrumentation are more likely to be associated with male artists, we want to ensure that our AI still fairly recommends tracks with female vocals in those cases.
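    To make the idea concrete, here is a minimal sketch, not Cyanite’s actual pipeline, of estimating such a propensity score with logistic regression over genre and instrument tags; the feature values and toy labels below are invented purely for illustration.

```python
# Sketch: P(female vocals | genre, instrument) via logistic regression.
# The tracks and labels are toy data, not from Cyanite's dataset.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

tracks = [
    ("rock", "guitar"), ("rock", "guitar"), ("pop", "piano"),
    ("pop", "piano"), ("country", "guitar"), ("classical", "flute"),
]
has_female_vocals = [0, 1, 1, 1, 0, 1]  # toy labels

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),  # encode categorical tags
    LogisticRegression(),
)
model.fit(tracks, has_female_vocals)

# Propensity score: likelihood that a track with these features
# has female vocals, given the (historically biased) training data.
score = model.predict_proba([("rock", "guitar")])[0][1]
print(f"P(female vocals | rock, guitar) = {score:.2f}")
```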

    Bar chart comparing the average female vocal presence across two Cyanite AI models. The blue bars represent the old model (Cyanite 1.0), and the green bars represent the improved model (Cyanite 2.0). A horizontal dashed purple line at 50% indicates the target for gender parity. The x-axis displays the likelihood of female vocals in different ranges, while the y-axis shows the percentage of female presence.

    Picture 1: We aim for gender parity in each bin, meaning the percentage of tracks with female vocals should be approximately 50%. The closer we are to that horizontal purple dashed line, the better our algorithm performs in terms of gender fairness.

    Comparing Cyanite 1.0 and Cyanite 2.0

    To evaluate our algorithms, we created a baseline model that predicts the likelihood of a track featuring female vocals, relying solely on genre and instrumentation data. This gave us a reference point to compare with Cyanite 1.0 and Cyanite 2.0.

    Take a blues track featuring a piano. Our baseline model would calculate the probability of female vocals based only on these two features. However, this model struggled with fair gender representation, particularly for female artists in genres and instruments dominated by male performers. The lack of diverse gender representation in our test dataset for certain genres and instruments made it difficult for the baseline model to account for societal biases that correlate with these features.

    The Results

    The baseline model significantly underestimated the likelihood of female vocals in tracks with traditionally male-associated characteristics, like rock music or guitar instrumentation. This shows the limitations of a model that only considers genre and instrumentation, as it lacks the capacity to handle high-dimensional data, where multiple layers of musical features influence the outcome.

    In contrast, Cyanite’s algorithms utilize rich, multidimensional embeddings to make more meaningful connections between tracks, going beyond simple genre and instrumentation pairings. This allows our models to provide more nuanced and accurate predictions.

    Despite its limitations, the baseline model was useful for generating a balanced test dataset. By calculating likelihood scores, we paired male vocal tracks with female vocal tracks that had similar characteristics using a nearest-neighbour approach. This helped eliminate outliers, such as male vocal tracks without clear female counterparts and resulted in a balanced dataset of 2,503 tracks, each with both male and female vocal representations.
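    The pairing step described above can be sketched as a 1-nearest-neighbour match on the propensity score. The scores below are randomly generated stand-ins, and the caliper threshold is an illustrative choice, not the value used in our study.

```python
# Illustrative nearest-neighbour matching on propensity scores.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
male_scores = rng.uniform(0, 1, size=(50, 1))    # scores of male-vocal tracks
female_scores = rng.uniform(0, 1, size=(60, 1))  # scores of female-vocal tracks

# For each male-vocal track, find the female-vocal track with the
# closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(female_scores)
dist, idx = nn.kneighbors(male_scores)

# Drop outliers: male-vocal tracks with no sufficiently close counterpart.
caliper = 0.05  # illustrative threshold
matched = [
    (m, int(f))
    for m, (d, f) in enumerate(zip(dist[:, 0], idx[:, 0]))
    if d <= caliper
]
print(f"{len(matched)} of {len(male_scores)} tracks matched within the caliper")
```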

    When we grouped tracks into bins based on the likelihood of female vocals, our goal was a near-equal presence of female vocals across all bins, with 50% representing the ideal gender balance. We conducted this analysis for both Cyanite 1.0 and Cyanite 2.0.

    The results were clear: Cyanite 2.0 produced the fairest and most accurate representation of both male and female artists. Unlike the baseline model and Cyanite 1.0, which showed fluctuations and sharp declines in female vocal predictions, Cyanite 2.0 consistently maintained balanced gender representation across all probability ranges.

    To see more explanation on how propensity scores can help aid gender bias in AI music and balance the gender gap, check out part 2 of this article.

    Conclusion: A Step Towards Fairer Music Discovery

    Cyanite’s Similarity Search has applications beyond ensuring gender fairness. It helps professionals to:

     

    • Use reference tracks to find similar tracks in their catalogs.
    • Curate and optimize playlists based on similarity results.
    • Increase the overall discoverability of a catalog.

    Our comparative evaluation of artist gender representation highlights the importance of algorithmic fairness in music AI. With Cyanite 2.0, we’ve made significant strides in delivering a balanced representation of male and female vocals, making it a powerful tool for fair music discovery.

    However, it’s crucial to remember that societal biases—like those seen in genres and instrumentation—don’t disappear overnight. These trends influence the data that AI music search models and genAI models are trained on, and we must remain vigilant to prevent them from reinforcing existing inequalities.

    Ultimately, providing fair and unbiased recommendations isn’t just about gender—it’s about ensuring that all artists are represented equally, allowing catalog owners and music professionals to explore the full spectrum of musical talent. At Cyanite, we’re committed to refining our models to promote diversity and inclusion in music discovery. By continuously improving our algorithms and understanding the societal factors at play, we aim to create a more inclusive music industry—one that celebrates all artists equally.

    If you’re interested in using Cyanite’s AI to find similar songs or learn more about our technology, feel free to reach out via mail@cyanite.ai.

    You can also try our free web app to analyze music and experiment with similarity searches without needing any coding skills.

    AI Music Recommendation Fairness: Gender Balance

    Eylül

    Data Scientist at Cyanite

    Part 2 of 2. To get a more general overview of AI Music recommendation fairness – more specifically the topic of gender bias, click here to check out part 1.

    Diving Deeper: The Statistics of Fair Music Discovery

    While the first part of this article introduced the concept of gender fairness in music recommendation systems in an overview, this section delves into the statistical methods and models that we employ at Cyanite to evaluate and ensure AI music recommendation fairness, particularly in gender representation. This section assumes familiarity with concepts like logistic regression, propensity scores, and algorithmic bias, so let’s dive right into the technical details.

    Evaluating Fairness Using Propensity Score Estimation

    To ensure our music discovery algorithms offer fair representation across different genders, we employ propensity score estimation. This technique allows us to estimate the likelihood (or propensity) that a given track will have certain attributes, such as the genre, instrumentation, or presence of male or female vocals. Essentially, we want to understand how different features of a song may bias the recommendation system and adjust for that bias accordingly to enhance AI music recommendation fairness.

    Baseline Model Performance

    Before diving into our improved music discovery algorithms, it’s essential to establish a baseline for comparison. We created a basic logistic regression model that utilizes only genre and instrumentation to predict the probability of a track featuring female vocals. 

    A model is considered well-calibrated when its predicted probabilities (represented by the blue line) closely align with the actual outcomes (depicted by the purple dashed line in the graph below). 

    Calibration plot comparing the predicted probability to the true probability in a logistic regression model. The solid blue line represents the logistic regression performance, while the dashed purple line represents a perfectly calibrated model. The x-axis shows the predicted probability, and the y-axis shows the true probability in each bin

    Picture 1: Our analysis shows that the logistic regression model used for baseline analysis tends to underestimate the likelihood of female vocal presence within a track at higher probability values. This is evident from the model’s performance, which falls below the diagonal line in reliability diagrams. The fluctuations and non-linearity observed suggest the limitations of relying solely on genres and instrumentation to predict artist representation accurately.
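    For readers who want to reproduce such a reliability diagram, scikit-learn’s calibration_curve returns the per-bin values directly. The data below is synthetic, not the dataset from our analysis.

```python
# Sketch of a calibration check for a logistic regression model.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))                         # stand-in features
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)  # stand-in labels

probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# Fraction of positives vs. mean predicted probability per bin;
# a well-calibrated model tracks the diagonal (observed ≈ predicted).
frac_pos, mean_pred = calibration_curve(y, probs, n_bins=10)
for mp, fp in zip(mean_pred, frac_pos):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```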

    Propensity Score Calculation

    In Cyanite’s Similarity Search – one of our music discovery algorithms – we model the likelihood of female vocals in a track as a function of genre and instrumentation using logistic regression. This gives us a probability score for each track, which we refer to as the propensity score. Here’s a basic formula we use for the logistic regression model:

    P(Y = 1 | X) = 1 / (1 + e^−(β0 + β1x1 + β2x2 + … + βnxn))

    Here, Y = 1 denotes the presence of female vocals, X is the vector of input features such as genre and instrumentation, β0 is the intercept, and β1, β2, …, βn are the coefficients for each input feature.

    Picture 2: The output is a probability (between 0 and 1) representing the likelihood that a track will feature female vocals based on its attributes. 
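    Written out in code, the formula above is a plain sigmoid over a weighted sum of the encoded features. The coefficients below are placeholders for illustration, not fitted Cyanite values.

```python
import math

def propensity(features, coefs, intercept):
    """P(Y = 1 | X): probability that a track features female vocals,
    given its one-hot encoded genre/instrument features."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

# Toy one-hot encoding: [is_rock, is_pop, has_guitar, has_flute]
x = [1, 0, 1, 0]
p = propensity(x, coefs=[-0.8, 0.4, -0.5, 1.2], intercept=0.3)
print(f"propensity = {p:.3f}")  # -> 0.269
```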

    Binning Propensity Scores for Fairness Evaluation

    To assess the AI music recommendation fairness of our models, we examine how input features such as genre and instrumentation correlate with the gender of the vocals by looking at the female artist ratio our models produce at each propensity level. To make this trend visible, we bin the continuous propensity scores into discrete ranges and compute the average female vocal presence within each range.

    We then calculate the percentage of tracks within each bin that have female vocals as the outcome of our models. This allows us to visualize the actual gender representation across different probability levels and helps us evaluate how well our music discovery algorithms promote gender balance.
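    The binning and per-bin percentage computation can be sketched as follows. The scores and outcomes are synthetic, so the printed shares will not match the figures in this article.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 2503)       # propensity scores (synthetic)
is_female = rng.random(2503) < scores  # synthetic vocal-gender outcomes

# Five equal-width bins over the propensity score range.
bins = np.linspace(0, 1, 6)
bin_idx = np.digitize(scores, bins[1:-1])

# Percentage of female-vocal tracks per bin; parity would be ~50%.
for b in range(5):
    mask = bin_idx == b
    share = is_female[mask].mean() * 100
    print(f"bin {bins[b]:.1f}-{bins[b + 1]:.1f}: {share:.1f}% female vocals")
```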

     

    A bar chart comparing the average female vocal presence in Cyanite's Similarity Search results across different metadata groups.

    Picture 3: We aim for gender parity in each bin, meaning the percentage of tracks with female vocals should be approximately 50%. The closer we are to that horizontal purple dashed line, the better our algorithm performs in terms of gender fairness.

    Comparative Analysis: Cyanite 1.0 vs Cyanite 2.0

    By comparing the results of Cyanite 1.0 and Cyanite 2.0 against our baseline logistic regression model, we can quantify how much fairer our updated algorithm is.

    • Cyanite 1.0 showed an average female presence of 54%, indicating a slight bias towards female vocals.

    • Cyanite 2.0, however, achieved 51% female presence across all bins, signaling a more balanced and fair representation of male and female artists.

    This difference is crucial in ensuring that no gender is disproportionately represented, especially in genres or with instruments traditionally associated with one gender over the other (e.g., guitar for males, flute for females). Our results underscore the improvements in AI music recommendation fairness.

    How Propensity Scores Help Balance the Gender Gap

    Propensity score estimation is a powerful tool that allows us to address biases in the data samples used to train our music discovery algorithms. Specifically, propensity scores help ensure that features like genre and instrumentation do not disproportionately affect the representation of male or female artists in music recommendations.

    The method works by estimating the likelihood of a track having certain features (such as instrumentation, genre, or other covariates) and then checking whether those features directly influence our Similarity Search results. In doing so, we investigate spurious correlations in our dataset that are directly related to gender bias and partly stem from societal biases.

    We would like to achieve a scenario in which genders are represented equally across all kinds of music. This understanding allows us to fine-tune the model’s behavior to ensure more equitable outcomes and further improve our algorithms.

    Conclusion: Gender Balance 

    In conclusion, our comparative analysis of artist gender representation in music discovery algorithms highlights the importance of music recommendation fairness in machine learning models.

    Cyanite 2.0 demonstrates a more balanced representation, as evidenced by a near-equal presence of female and male vocals across various propensity score ranges.

    If you’re interested in using Cyanite’s AI to find similar songs or learn more about our technology, feel free to reach out via mail@cyanite.ai.

    You can also try our free web app to analyze music and experiment with similarity searches without needing any coding skills.

    Music Genre Finder – Biggest Web App Update Ever!

    Biggest Web App Update Ever!

    We’re thrilled to announce our biggest web app update since the app first launched. We’ve been listening to your feedback and have updated the web app with the world’s most accurate genre model, over 40 new instruments, and new library views.

    Best of all: It’s free for all our users.

    The World’s Best Music Genre Finder: Free Genres

    We know that finding the right genres for your music can be a challenge. Building on all the feedback we have received over the past months, we have developed the most accurate genre auto-tagging algorithm in the world, with over 1,500 genres. Missing something? Let us know.

    You can find our new free genres next to the web app’s main and subgenres.

    40+ New Instruments

    We also added 40 new instruments to our web app – on top of our existing instrument tags. Everything you need – from Steel Drums to Glockenspiel.

    You can find the new advanced instrument tags right next to the regular instruments in your library.

    New Library Views

    With over 35 tagging classifiers, metadata can be overwhelming at times. That’s why we chose to let you decide how deeply you want to explore your song data.

    You can choose between three different views of your Cyanite library just by clicking the “view” drop-down in your library. Select between:

    • Compact View (ideal to get an overview)
    • Full View (dive deep into all the Cyanite metadata for your tracks)
    • API View (the full view with our API classifier names)

    Curious? Try it out!

    All of these new features are free for all our users. Don’t have a Cyanite account yet? Click here or the button below to get started. 5 analyses per month are on us! Start using the world’s best genre finder now.

     

    Your Cyanite Team.