Case Study: How SyncVault Uses Cyanite’s AI Tagging To Unlock the Power of Music Promotion

Introduction

In the vast landscape of music tools for artists, London-based company SyncVault stands out as a reliable platform, empowering artists and brands to promote their music, products, and services. 

With an engaged community of social media influencers and content creators, SyncVault opens doors to new opportunities in the world of music promotion. 

To amplify their impact, SyncVault sought a state-of-the-art solution to unlock the full potential of their curated music catalog. This is where Cyanite entered the picture, offering AI-powered music analysis and tagging technology.

 

Defining the Challenge: Enhancing Music Metadata Insight

SyncVault aimed to extract deeper insights and data from their diverse repertoire of songs. 

Unlike conventional licensing providers with extensive libraries, SyncVault maintains a small, highly curated selection of tracks. For this catalog, it needed a solution capable of accurately generating multi-genre metadata and assigning an appropriate weight to each genre, improving both music search and data insight.
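The weighted multi-genre requirement can be sketched in a few lines: a tagger emits raw confidence scores for several genres, and normalizing those scores yields a weightage per genre. This is a minimal illustration only; the genre names and scores below are invented and do not reflect Cyanite’s actual taxonomy or output format.

```python
# Hypothetical sketch: turning raw per-genre classifier scores into
# normalized weights, so a track carries several genres with weightage.
# Genre names and scores are illustrative, not Cyanite's actual output.

def genre_weights(scores):
    """Normalize raw confidence scores so the weights sum to 1."""
    total = sum(scores.values())
    return {genre: score / total for genre, score in scores.items()}

track_scores = {"electronic": 0.6, "pop": 0.3, "ambient": 0.1}
weights = genre_weights(track_scores)
# The dominant genre keeps the highest weight; minor genres are retained
# with proportionally smaller weightage instead of being discarded.
```

The point of the normalization is that secondary genres stay searchable instead of being collapsed into a single label.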

 

Discovering the Suitable Partner: Cyanite

SyncVault found an ideal partner in Cyanite, which was recommended by their own network and whose product offering aligned seamlessly with SyncVault’s objectives. 

First, Cyanite’s comprehensive and accurate music analysis and tagging technology met their specific requirements. Cyanite’s taxonomy, which offers various tags in over 25 different classes, won over the team after a free tagging trial of 100 songs.

Second, SyncVault was impressed by Cyanite’s transparent, scalable, and competitive pricing model.

 

The Transformation: Streamlined Efficiency and Accuracy

After signing an agreement and booking a 1-year subscription, SyncVault seamlessly integrated Cyanite’s solutions into their workflow in just a few weeks. 

Picture 1: Mood-based keywords and search results on SyncVault platform

Additionally, Cyanite’s AI technology enhanced SyncVault’s music analytics, providing valuable insights into song structure, tempo, genre, key, mood, and more.

Empowering Team and Users: Elevating the SyncVault Experience

Cyanite’s auto-tagging capabilities significantly improved SyncVault’s efficiency and productivity, enabling its small team to categorize their repertoire faster and more consistently.

Furthermore, users experienced an enhanced music search, allowing them to filter and find the perfect soundtrack for their creative needs more quickly. The partnership with Cyanite transformed SyncVault’s platform, fostering a thriving community where music resonates with listeners.

Picture 2: A look at how SyncVault’s curation team uses Cyanite tags in the backend.


A Promising Future: Expanding Horizons

SyncVault is experiencing steady expansion of its service as it adds more tracks to its Content ID management system. Its catalogue is growing month on month, creating more opportunities to license tracks to its brand partners.

SyncVault envisions extending its music promotion services to Content ID clients, creating more opportunities for brands to discover the ideal songs for their creative campaigns.

As SyncVault continues its expansion, Cyanite’s AI search and recommendation tools, such as Similarity Search or Free Text Search, would work seamlessly with its catalogue, further enhancing the customer experience and forging new frontiers in music promotion. Integrating auto-tagging was just the first step towards an even deeper partnership between two music-enthusiastic companies.

If you want to learn more about SyncVault, you can check out their platform here: https://syncvault.com/

If you want to learn more about our API services, check out our docs here: https://api-docs.cyanite.ai/

Guest post for Hypebot: How AI can generate new revenue for existing music catalogs?

Our CEO Markus Schwarzer has published a guest post on UK-based music industry outlet Hypebot.

In this guest post, our CEO Markus elaborates on how AI can be used to resurface, reuse, and monetize long-forgotten music, addressing concerns about its impact on the music industry. By leveraging AI-driven curation and tagging capabilities, music catalog owners can extract greater value from their collections, enabling faster search, diverse curation, and the discovery of hidden music, while still protecting artists and intellectual property rights.

You can read the full guest post below or head over to Hypebot via this link.


by Markus Schwarzer, CEO of Cyanite

AI-induced anxiety is ever-growing.

Whether it’s the fear that machines will evolve capabilities beyond their coders’ control, or the more surreal case of a chatbot urging a journalist to leave his wife, paranoia that artificial intelligence is getting too big for its boots is building. One oft-cited concern, voiced in an open letter calling for a pause in AI development from a group of AI experts and researchers known as the Future of Life Institute, is whether, alongside mundane donkeywork, we risk automating more creative human endeavors.

It’s a question being raised in recording studios and music label boardrooms. Will AI begin replacing flesh and blood artists, generating music at the touch of a button?

While some may discount these anxieties as irrational and accuse AI skeptics of being dinosaurs who are failing to embrace the modern world, the current developments must be taken seriously.

AI poses a potential threat to the livelihood of artists and in the absence of new copyright laws that specifically deal with the new technology, the music industry will need to find ways to protect its artists.

We all remember when AI versions of songs by The Weeknd and Drake hit streaming services and went viral. Their presence on streaming services was short-lived, but it’s a very real example of how AI can potentially destabilise the livelihood of artists. Universal Music Group quickly put out a statement asking the music industry “which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”

“there are vast archives of music of all genres lying dormant and thousands of forgotten tracks”

However, there are ways that AI can deliver real value to the industry – and specifically to the owners of large music catalogues. Catalogue owners often struggle with how to extract the maximum value out of the human-created music they’ve already got.

But we can learn from generative AI approaches. Recently introduced by AI systems like Midjourney, ChatGPT or Riffusion, prompt-based search experiences are quickly becoming part of everyone’s user behavior. But instead of having to fall back on bleak replicas of human-created images, texts, or music, AI engines can give music catalogue owners the power to build comparable search experiences, with the advantage of surfacing well-crafted, great-sounding songs with a real human and a real story behind them.

There are vast archives of music of all genres lying dormant, and thousands of forgotten tracks within existing collections, that could be generating revenue via licensing deals for film, TV, advertising, trailers, social media clips and video games; from licences for sampling; or even as a USP for investors looking to purchase unique collections. It’s not a coincidence that litigation over plagiarism is skyrocketing. With hundreds of millions of songs around, there is a growing likelihood that the perfect song for any use case already exists and just needs to be found rather than mass-generated by AI.

With this in mind, the real value of AI to music custodians lies in its search and curation capabilities, which enable them to find new and diverse ways for the music in their catalogues to work harder for them.

How AI music curation and AI tagging work

To realize the power of artificial intelligence to extract value from music catalogues, you need to understand how AI-driven curation works.

Simply put, AI can do most things a human archivist can do, but much, much faster; processing vast volumes of content, and tagging, retagging, searching, cross-referencing and generating recommendations in near real-time. It can surface the perfect track – the one you’d forgotten, didn’t know you had, or would never have considered for the task in hand – in seconds.

This is because AI is really good at auto-tagging, a job few humans relish. It can categorise entire music libraries by likely search terms, tagging each recording by artist and title, and also by genre, mood, tempo and language. As well as taking on a time-consuming task, AI removes the subjectivity of a human tagger, while still being able to identify the sentiment in the music and make complex links between similar tracks. AI tagging is not only consistent and objective (it has no preference for indie over industrial house), it also offers the flexibility to retag as often as needed.

The result is that, no matter how dusty and impenetrable a back catalogue, all its content becomes accessible for search and discovery. AI has massively improved both identification and recommendation for music catalogues. It can surface a single song using semantic search, which identifies the meaning of the lyrics. Or it can pick out particular elements in the complexities of music in your library which make it sound similar to another composition (one that you don’t own the rights to, for example). This allows AI to use reference songs to search through catalogues for comparable tracks.
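Reference-track search of the kind described above is commonly built on audio embeddings: each track is mapped to a fixed-length vector, and catalogue tracks are ranked by their vector similarity to the reference. The sketch below assumes the embeddings already exist (the embedding model itself is out of scope), and all names and numbers are illustrative rather than any vendor’s actual interface.

```python
# Minimal sketch of similarity search over precomputed audio embeddings.
# The three-dimensional vectors here stand in for real embeddings, which
# would typically have hundreds of dimensions.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(reference, catalogue, top_k=2):
    """Rank catalogue tracks by cosine similarity to a reference track."""
    ranked = sorted(catalogue.items(),
                    key=lambda kv: cosine(reference, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

catalogue = {
    "forgotten_ballad": [0.9, 0.1, 0.0],
    "club_anthem": [0.1, 0.9, 0.2],
    "acoustic_demo": [0.8, 0.2, 0.1],
}
reference = [0.85, 0.15, 0.05]  # embedding of the track to match
top_matches = most_similar(reference, catalogue)
```

In this toy example the mellow tracks rank above the club track because their vectors point in nearly the same direction as the reference, which is exactly the property that lets a reference song surface comparable material from a catalogue.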

The power of AI music catalog search

The value of AI to slice and dice back catalogs in these ways is considerable for companies that produce and licence audio for TV, film, radio and multimedia projects. The ability to intelligently search their archives at high speed means they can deliver exactly the right recording to any given movie scene or gaming sequence.

Highly customisable playlists culled from a much larger catalogue are another benefit of AI-assisted search. While its primary function is to allow streaming services such as Spotify to deliver ‘you’ll like this’ playlists to users, for catalogue owners it means extracting infinitely refinable sub-sets of music which can demonstrate the archive’s range and offer a sonic smorgasbord to potential clients.

“the extraction of ‘hidden’ music”

Another major value-add is the extraction of ‘hidden’ music. The ability of AI to make connections based on sentiment and even lyrical hooks and musical licks, as well as tempo, instruments and era, allows it to match the right music to any project with speed and precision only the most dedicated catalogue curator could fulfil. With its capacity to search vast volumes of content, AI opens the entirety of a given library to every search, and surfaces obscure recordings. Rather than just making money from their most popular tracks, therefore, the owners of music archives can make all of their collection work for them.

The tools to do all of this already exist. Our own solution is a powerful AI engine that tags and searches an entire catalogue in minutes with depth and accuracy. Meanwhile, AudioRanger is an audio recognition AI which identifies the ownership metadata of commercially released songs in music libraries. And PlusMusic is an AI that makes musical pieces adaptive for in-game experiences: as the in-game situation changes, the same song adapts to it.

Generative AI – time for careful reflection

The debate on the role of generative AI in the music industry won’t be solved anytime soon and it shouldn’t. We should reflect carefully on the incorporation of any technology that might potentially reshape our industry. We should ask questions such as: how do we protect artists? How do we use the promise of generative AI to enhance human art? What are the legal and ethical challenges that this technology poses? All of these issues must be addressed in order for the industry to reap the benefits of generative AI.

Adam Taylor, President and CEO of the American production music company APM Music, shared with me that he believes it is vital to safeguard intellectual property rights, including copyright, as generative AI technologies grow across the world. As he puts it: “While we are great believers in the power of technology and use it throughout our enterprise, we believe that all technology should be used in responsible ways that are human-centric. Just as it has been throughout human history, we believe that our collective futures are intrinsically tied to and dependent on retaining the centrality of human-centered art and creativity.”

The debate around the role of generative AI models will continue to play out as we look for ways to embrace new technologies and protect artists, and naturally there are those like Adam who will wish to adopt a cautious approach. But while many are reluctant to wholeheartedly embrace generative AI models, there are many more who are willing to embrace analysis and search AI to protect their catalogues and make them more efficient and searchable.

Ultimately, it’s down to the industry to take control of this issue, find a workable level of comfort with AI capabilities, and build AI-enhanced music environments that will vastly improve the searchability – and therefore usefulness – of existing, human-generated music.

If you want to get more updates from Markus’ view on the music industry, you can connect with him on LinkedIn here.

 

More Cyanite content on AI and music

Debating the upsides of Universal Music Group’s recent AI attack (guest post on Music Ally)

Our CEO Markus Schwarzer has published a guest post on UK-based music industry medium Music Ally. In the post, Markus addresses the concern that major labels and other large music companies have shown recently about the use of Artificial Intelligence in music and business – and the importance of stepping back and thinking carefully about as-yet unknown repercussions, before moving into a future where AI benefits us all.

You can read the full guest post below or head over to Music Ally via this link.

In recent months, Universal Music Group has become the ringleader of a front that has formed against generative music AI companies – and latterly all AI companies.

After news made the rounds of UMG’s recent actions, people everywhere (including myself) spoke out about the positives of AI. AI has the potential to improve art, create a better environment for DIY artists, and foster new musical ecosystems. However, whilst the industry was debating the prosperous future of music fuelled by AI, with leveled playing fields, democratised access, and transparency, we forgot one thing. All of these positive outcomes might be true in the future, but the current reality of generative AI is different.

Currently, it is an uncontrolled wild west where new models have shown that they’re not just some game for the tech-interested individuals among us, but an actual threat to the livelihoods of artists.

Reading through and experimenting with recent generative music AI advancements, I can’t help but feel reminded of Pause Giant AI Experiments: An Open Letter, which was directed at developers of large language models (LLMs) like OpenAI’s GPT-4 or Meta’s LLaMA. It urged them to halt their developments and think about the implications of their projects for at least six months.

The open letter made some requests which are equally applicable to the music industry. Just like LLM developers, some generative music startups see themselves “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Just like with LLMs, we may run the risk that “no one – not even their creators – can understand, predict, or reliably control” them. Just like with LLMs, we need to ask ourselves: “Should we automate away all the jobs, including the fulfilling ones?”

The latter is a question that we at Cyanite and other AI companies also have to ask ourselves frequently. Do we automate meaningful jobs, or just tedious unloved chores to free up time for creative work?

But unlike with LLMs, the music industry has copyright law to enforce a temporary halt on new training models (at least in those areas where it is enforceable). So what if the halt on new generative AI training that UMG is attempting allows us to take a step back and gain an objective perspective on recent developments? This is not possible with LLMs, because their training data is far more accessible and less controllable – which is the reason people have to write open letters in the first place, a strategy with somewhat questionable chances of success.

Many in the industry have criticised UMG’s approach as a general barrage of fire launched at any company working with AI, in the hope of hitting some of their targets; one that will ultimately also harm companies working on products beneficial for the industry, while also eventually forcing advancements in the generative space into the uncontrollable underground.

Despite this being undoubtedly true, we can’t deny that it has sparked a very important debate on whether we need to slow down the acceleration of AI. I would argue that if UMG’s actions let us pause AI for a second, take a deep breath, imagine the future of music AI and then start developing towards exactly that goal, they will have had a hugely positive effect.

If you want to get more updates from Markus’ view on the music industry, you can connect with him on LinkedIn here.

SoundOut launches OnBrand in cooperation with Cyanite

We are proud to share the latest press release by UK-based company SoundOut, the world leader in sonic testing for audio branding.

In the announcement below, read about the new OnBrand platform developed by SoundOut and powered by Cyanite’s AI, and how it empowers marketers to build certainty into every music choice for a campaign.

 

Campaign music search for brands and agencies

October 26th, 2022, London: SoundOut launches OnBrand, an entirely new approach to music search that revolutionises the process of selecting music for marketing campaigns.

Hugely scalable AI-powered music search platform removes uncertainty from every brand campaign music decision

• OnBrand ensures music choices always match brand personality and campaign goals
• Increases certainty of ROI from music choices
• Launch partners include Unilever, Scholz & Friends (WPP) and Global Radio
• Combines leading SoundOut brand personality technologies with the scalability of German music AI company Cyanite to transform commercial music selection

SoundOut, the world leader in sonic testing, has launched a revolutionary AI-powered music search and testing SaaS platform named OnBrand. It enables marketers to build certainty into every music choice for campaigns. OnBrand is powered by AI algorithms that predict the granular emotional impact of music, trained on feedback from half a million people.

OnBrand enables marketers to search across any number of music catalogues to identify campaign music that is both on-brand and campaign appropriate, using a combination of over 200 brand attributes, plus self-defined brand personality and brand archetypes. In this way, OnBrand delivers greater certainty of immediate impact and sustained ROI from their campaigns, by reducing subjectivity and risk from music selection.

Global companies Unilever, Global, the Media & Entertainment Group, and Scholz & Friends – part of the WPP Network – are among the first users of the OnBrand platform.

Stephanie Bau, Global Assistant Brand Manager at Unilever, said: “With the growth of social media platforms like TikTok, sound has become the ultimate tool in a marketer’s arsenal. Choosing the right sound for our future campaigns has never been more important and this technology will enable brands to amplify their personality and have greater certainty of ROI from campaigns during these economically challenging times.”

Julian Krohn, Director Music & Audio, Scholz & Friends (WPP), said: “From an agency perspective, OnBrand is a uniquely powerful tool that will enable us to add significant value to our clients’ campaigns. Ensuring that music is both brand and campaign appropriate has never been easier – and OnBrand can only increase their return on marketing investment. We’re looking forward to working closely with the tool!”

Powered by a unique double-stacked AI layer of algorithms trained entirely on human derived data, OnBrand first automatically tags music with up to 500 separate attributes thanks to a partnership with Cyanite, the world-leading AI music tagging company. Then it uses a further AI layer to map these tags to SoundOut’s emotional DNA map of music, created with the input of over 500,000 consumer surveys and over 12 million datapoints.
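The “double-stacked” structure described above can be pictured as two stages: a tagger first assigns content tags with confidences, and a second layer maps those tags onto emotional attributes through a weight table. The sketch below is a toy illustration of that shape only; the tags, attributes, and weights are invented, and SoundOut’s actual mapping is proprietary.

```python
# Toy sketch of a two-stage ("double-stacked") pipeline: stage one
# assigns content tags to a track, stage two maps those tags onto
# emotional attributes via a weight table. All values are invented.

# Stage 1 output: content tags with confidences (normally produced
# by an AI tagger analysing the audio).
track_tags = {"uplifting": 0.8, "energetic": 0.6, "acoustic": 0.2}

# Stage 2: a mapping from content tags to emotional attributes,
# standing in for a learned model.
tag_to_attribute = {
    "uplifting": {"optimism": 0.9, "excitement": 0.4},
    "energetic": {"excitement": 0.8, "optimism": 0.2},
    "acoustic": {"sincerity": 0.7},
}

def emotional_profile(tags):
    """Aggregate tag confidences through the mapping into attribute scores."""
    profile = {}
    for tag, confidence in tags.items():
        for attribute, weight in tag_to_attribute.get(tag, {}).items():
            profile[attribute] = profile.get(attribute, 0.0) + confidence * weight
    return profile

profile = emotional_profile(track_tags)
```

The design point is that the second layer never touches the audio: it works purely on the tag output of the first, which is what lets an emotional map be retrained or swapped without re-analysing the catalogue.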

Jo McCrostie, Creative Director at Global Radio, Europe’s largest commercial radio group, commented: “OnBrand represents a truly seismic revolution in how companies find brand appropriate music for commercial use. A previous lack of objectivity in music choices has restricted investment in audio marketing such as radio ads. I’ve seen for myself the positive reaction from brands to the new platform and it looks set to be transformational for the audio advertising industry.”

OnBrand can automatically rate any track against over 200 emotional attributes in a fraction of the time taken by people. It enables catalogues of millions of tracks to be emotionally indexed in under 24 hours with over 95% precision compared to human indexation.

David Courtier-Dutton, CEO of SoundOut, said: “Until now, choosing music for marketing has been a largely subjective exercise, with little in the way of objective metrics to confirm brand fit and emotional resonance. At a stroke, OnBrand introduces an objective, hugely scalable solution for brands worldwide. It enables data-informed music choices and provides robust cost/benefit analysis for any commercial music investment. OnBrand is not only totally brand-centric but it speaks brand language; enabling brands to enhance campaign performance whilst simultaneously strengthening their emotional bonds with consumers.”

Markus Schwarzer, CEO of Cyanite, added: “AI music tagging technology has advanced significantly over the past few years and has now been adopted by many of the world’s leading music and entertainment companies. The additional AI brand centric layer that OnBrand delivers truly democratises catalogue search for brands, enabling them to find the perfect track for any campaign using brand language rather than musical attributes.”

 

About SoundOut

SoundOut is the world leader in strategic sonic branding and audio marketing testing. It has achieved this lead position by combining three powerful capabilities.

  • Working with world leading music psychologists and over 500,000 consumers, it has mapped the explicit emotional DNA of sound and used this as the foundation for a suite of tools, such as BrandMatch, that can be used at various stages of sonic branding development to increase the certainty of a return on investment.

  • The development of a wholly owned consumer panel of over 3.5 million people, which enables brands to test their sonic assets at scale.

  • The testing and analysis of almost 200 in-market sonic logos with over 400,000 consumers (The SoundOut Index), which reveals the key criteria that are essential to audio branding and audio marketing success.

SoundOut works with many of the most iconic brands in the world (such as TikTok, Amazon, Toyota, DHL, Ford, Unilever and GSK) as well as all the major record labels and many leading radio groups. SoundOut specialises in helping organisations trigger the right emotional response from their customers by matching brand personality and attributes to music. As a result, SoundOut provides the data and insight needed by clients to increase the certainty of achieving a strong ROI from their audio branding and marketing investments.

Clients use SoundOut’s unrivalled strategic sonic testing capabilities to identify the effectiveness potential of new sonic identities before they are launched and ensure that they resonate with the core brand personality.

OnBrand now scales these capabilities to all use of music in brand marketing, enabling brands to index huge music catalogues and search them based on the personality, attributes or archetype of their brand.

AI Panel: Using AI Music Search in a Co-Creative Approach between Human and Machine

In September 2022, Cyanite co-founder Markus Schwarzer took part in a panel discussion at the Production Music Conference 2022 in Los Angeles.

The panel discussed the role of AI in a co-creative approach between humans and machines. The participants were Bruce Anderson (APM Music), Markus Schwarzer (Cyanite), Nick Venti (PlusMusic), Philippe Guillaud (MatchTune), and Einar M. Helde (AIMS API).

The panel raised pressing discussion points on the future of AI so we decided to publish our takeaways here. To watch the full video of the panel, scroll down to the middle of the article. Enjoy the read! 

Human-Machine Co-creativity

AI performs many tasks that are usually difficult for people, such as analyzing song data, extracting information, searching music, and creating completely new tracks. As AI usage increases, questions have been raised about AI’s potential and its ability to create with humans or on its own. The possibility of AI replacing humans is, perhaps, one of the most contentious topics.

The PMC 2022 panel focused on the topic of co-creativity. Some AI systems can create on their own, but co-creativity means creativity shared between the human and the machine.

It is not the sum of individual creativity; rather, it is the emergence of new forms of interaction between humans and machines. To find out all the different ways AI music search can be co-creative, let’s dive into the main takeaways from the panel:

Music industry challenges

The main music industry challenge that all participants agreed on was the overwhelming amount of music produced these days. Another challenge is reaching a shared understanding of music.

The way someone searches for music depends on their understanding of music, which can differ widely, and on their role in the music industry. Music supervisors, for example, use a different language to search for music than film producers.

We talked about this in detail on the Synchtank blog back in May 2022. AI can solve these issues, especially with the new developments in the field.

Audience Question from Adam Taylor, APM Music: Where do we see AI going in the next 5 years?

So what’s in store for music AI in the next 5 years? We’re entering a post-tagging era, shaped by several converging developments in music search. Keyword search will no longer be the main way to search for or index music. Instead, the following developments will take place:

 

  • Similarity Search has shown that we can use complex inputs to find music. Similarity search pulls a list of songs that match a reference track. It is projected to be the primary way of searching for music in the future. 

 

  • Free Text Search – a full-text search that lets you look for music in your own words, based on natural language processing technologies. With a free text search, you enter what comes to mind into a search bar and the AI suggests a song. The technology is similar to DALL-E or Midjourney, which return an image based on text input.

 

  • Music services that already know what to do – looking further ahead, music services will emerge that recommend music depending on where you are in your role or personal development. These services will cater to all levels of search: from an amateur level that simply returns a requested song, to expert searches following an elaborate sync brief, including images and videos that accompany the brief or even a stream of consciousness.
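A prompt-based search of the kind sketched in the list above can be illustrated in miniature. Real systems embed the query text and the audio into a shared vector space; to keep this example self-contained, the “embedding” below is a toy bag of descriptor words, and every name in it is invented rather than an actual Cyanite interface.

```python
# Toy sketch of free-text music search. Production systems use learned
# text/audio embeddings; here each track carries a set of descriptor
# words so the ranking logic is visible without a model.

tracks = {
    "midnight_drive": {"dark", "electronic", "driving"},
    "summer_porch": {"warm", "acoustic", "relaxed"},
    "arena_opener": {"epic", "driving", "loud"},
}

def free_text_search(query, catalogue):
    """Return the track whose descriptors overlap the query words most."""
    words = set(query.lower().split())
    scored = {title: len(words & descriptors)
              for title, descriptors in catalogue.items()}
    return max(scored, key=scored.get)

best = free_text_search("dark driving electronic soundtrack", tracks)
```

Swapping the word-overlap score for cosine similarity between learned embeddings turns this toy into the actual architecture: the interface (type words, get a song) stays the same while the matching becomes semantic rather than literal.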

Audience Question from Alan Lazar, Luminary Scores: Can I decode which songs have the potential to be a hit?

While some AI companies have attempted to decode the hit potential of music, it is still unclear whether there is any way to determine if a song will become a hit.

The nature of pop culture, and the many factors that make up a hit – from songwriting and production to elusive factors such as what the song is connected to – make it impossible to predict whether or not a song becomes a hit.

The vision for AI from Cyanite – where would we like to see it in the future?

AI curation in music is developing at lightning speed. We hope it will make the music space more exciting and diverse, which includes in particular:

 

  • Democratization and diversity of the field – more opportunities will become available for musicians and creators, including democratized access to sync opportunities and other ways to make a livelihood from music. 

 

  • Creativity and surprising experiences – right now AI is designed to do the same tasks at a rapid speed. We’re hoping AI will be able to perform tasks co-creatively and produce surprising experiences based on music but also other factors. As music has the ability to touch directly into people’s emotions, it has the potential to be a part of a greater narrative.

Video from the PMC 2022 panel: Using AI Music Search In A Co-Creative Approach Between Human and Machine

Bonus takeaway: Co-creativity between users and tech – supplying music data to technology

It seems that we should be able to pull all sorts of music data from environments such as video games and user-generated content. However, the diversity of music projects is quite astonishing.

So when it comes to co-creativity in the sense of enhancing machine tagging with human tagging, personalization can be harmful in B2B settings. In B2B, AI mainly works with audio features, without the involvement of user-generated data.

Conclusion

To sum up, AI can co-create with humans and solve the challenges facing the music industry today. There is a lot in store for AI’s future development and there is a lot of potential.

Still, AI is far from replacing humans and should not replace them completely. Instead, it will improve in ways that make music searches more intuitive and co-creative, responding to human input in the form of a text search, an image, or a video.

As usual with AI, some people overestimate what it can do. Some tasks, such as identifying a song’s hit potential, remain out of reach for AI.

On the other hand, it’s not hard to envision the future where AI can help democratize access to opportunities for musicians and produce surprising projects where music will be a part of a shared emotional experience.

We hope you enjoyed this read and learned more about AI co-creativity and the future of AI music search. If you’re interested in learning more, you can also check out the article “The 4 Applications of AI in the Music Industry”. If you have any feedback, questions, or contributions, please reach out to markus@cyanite.ai.

I want to integrate AI search into my library – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Case Study: How SWR uses Cyanite’s recommendation algorithms for their new radio app

About SWR

SWR is not only Germany’s third-biggest radio station but is also investing heavily in the future of radio via its own innovation entity, SWR audiolab. Their goal is to develop new radio experiences to attract audiences who no longer listen to traditional radio. For instance, the new SWR mobile app enables users to skip songs – a concept completely revolutionary for linear radio.

Weekly listeners of SWR-owned radio stations in Germany: 7.07 million

Visits to SWR online services in 2019: 218.3 million

“Our aspiration is to create the future of radio for tomorrow and beyond by combining media trends and technologies with the demands of our users.”

Challenge

By providing on-demand content, SWR steps outside its radio license and has to pay royalties for every song stream, just like Spotify or Apple Music. Every skip means additional costs.

Shall I skip or shall I go?

Solution: Bespoke recommendation system

SWR implemented Cyanite’s music intelligence and user insight systems to automate song recommendations and personalized playlists based on listener preferences, significantly lowering skip rates and increasing session times. The technology is delivered via the Cyanite API and seamlessly integrated into SWR’s new radio app.

This is a screenshot of Cyanite’s library view, which easily turns a catalogue into an accessible, searchable database.

Results

+ 60% lower skip rate

+ Annual cost saving of >60,000 €

+ 46% increase in session time

+ Valuable data on user behaviour and personal music preference

Christian Hufnagel

Co-Founder of SWR audio labs

Christian Hufnagel is the co-founder of SWR audio labs and drives the innovation activities of SWR forward.

“Cyanite recommendation algorithms help us to develop a personalized radio station of the future that better attracts and engages listeners.”

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.