
Case Study: How SyncVault uses Cyanite’s AI Tagging To Unlock the Power of Music Promotion


Introduction

In the vast landscape of music tools for artists, London-based company SyncVault stands out as a reliable platform, empowering artists and brands to promote their music, products, and services. 

With an engaged community of social media influencers and content creators, SyncVault opens doors to new opportunities in the world of music promotion. 

To amplify their impact, SyncVault sought a state-of-the-art solution to unlock the full potential of their curated music catalog. This is where Cyanite entered the picture, offering AI-powered music analysis and tagging technology.

 

Defining the Challenge: Enhancing Music Metadata Insight

SyncVault aimed to extract deeper insights and data from their diverse repertoire of songs. 

Unlike conventional licensing providers with extensive libraries, SyncVault works with a small, highly curated selection of tracks. It therefore needed a solution capable of accurately generating multi-genre metadata and assigning an appropriate weight to each genre, in order to improve music search and data insight.
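
To make the idea of weighted multi-genre metadata a bit more concrete, here is a minimal, purely illustrative sketch in Python. The track names, genre tags, and weights are invented for this example and do not reflect Cyanite’s actual output format; the point is simply that each track can carry several genre tags with a weight, and search results can then be ranked by the weight of the requested genre.

```python
# Illustrative only: hypothetical weighted multi-genre metadata for a small,
# curated catalog. All names and numbers are invented for this sketch.
catalog = [
    {"title": "Track A", "genres": {"indie pop": 0.7, "electronica": 0.3}},
    {"title": "Track B", "genres": {"hip hop": 0.9, "funk": 0.1}},
    {"title": "Track C", "genres": {"electronica": 0.6, "ambient": 0.4}},
]

def search_by_genre(tracks, genre, min_weight=0.2):
    """Return tracks tagged with `genre`, ranked by that genre's weight."""
    hits = [t for t in tracks if t["genres"].get(genre, 0.0) >= min_weight]
    return sorted(hits, key=lambda t: t["genres"][genre], reverse=True)

for track in search_by_genre(catalog, "electronica"):
    print(track["title"], track["genres"]["electronica"])
```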

 

Discovering the Suitable Partner: Cyanite

SyncVault found an ideal partner in Cyanite, which was recommended by their own network and whose product offering aligned seamlessly with SyncVault’s objectives. 

First, Cyanite’s comprehensive and accurate music analysis and tagging technology met their specific requirements. Cyanite’s taxonomy, which offers various tags in over 25 different classes, won over the team after a free tagging trial of 100 songs.

Second, SyncVault was impressed by Cyanite’s transparent, scalable, and competitive pricing model.

 

The Transformation: Streamlined Efficiency and Accuracy

After signing an agreement and booking a 1-year subscription, SyncVault seamlessly integrated Cyanite’s solutions into their workflow in just a few weeks. 

Picture 1: Mood-based keywords and search results on SyncVault platform

Additionally, Cyanite’s AI technology enhanced SyncVault’s music analytics, providing valuable insights into song structure, tempo, genre, key, mood, and more.

Empowering Team and Users: Elevating the SyncVault Experience

Cyanite’s auto-tagging capabilities significantly improved SyncVault’s efficiency and productivity, enabling its small team to categorize their repertoire faster and more consistently.

Furthermore, users experienced an enhanced music search, allowing them to filter and find the perfect soundtrack for their creative needs more quickly. The partnership with Cyanite transformed SyncVault’s platform, fostering a thriving community where music resonates with listeners.

Picture 2: A look at how SyncVault’s curation team uses Cyanite tags in the backend.


A Promising Future: Expanding Horizons

SyncVault is experiencing a steady expansion of its service as it adds more tracks to the Content ID management system. Its catalogue is growing month on month, creating more opportunities for licensing tracks for its brand partners.

SyncVault envisions extending its music promotion services to Content ID clients, creating more opportunities for brands to discover the ideal songs for their creative campaigns.

As SyncVault continues its expansion, Cyanite’s AI search and recommendation tools, such as Similarity Search or Free Text Search, would work seamlessly with their catalogue, further enhancing the customer experience and forging new frontiers in music promotion. Integrating auto-tagging was just the first step towards an even deeper partnership between two music-enthusiastic companies.

If you want to learn more about SyncVault, you can check out their platform here: https://syncvault.com/

If you want to learn more about our API services, check out our docs here: https://api-docs.cyanite.ai/

Guest post for Hypebot: How AI can generate new revenue for existing music catalogs?


Our CEO Markus Schwarzer has published a guest post on the music industry publication Hypebot.

In this guest post, our CEO Markus elaborates on how AI can be used to resurface, reuse, and monetize long-forgotten music, addressing concerns about its impact on the music industry. By leveraging AI-driven curation and tagging capabilities, music catalog owners can extract greater value from their collections, enabling faster search, diverse curation, and the discovery of hidden music, while still protecting artists and intellectual property rights.

You can read the full guest post below or head over to Hypebot via this link.


by Markus Schwarzer, CEO of Cyanite

AI-induced anxiety is ever-growing.

Whether it’s the fear that machines will evolve capabilities beyond their coders’ control, or the more surreal case of a chatbot urging a journalist to leave his wife, paranoia that artificial intelligence is getting too big for its boots is building. One oft-cited concern, voiced in an open letter calling for a pause in AI development from a group of AI experts and researchers under the banner of the Future of Life Institute, is whether, alongside mundane donkeywork, we risk automating more creative human endeavors.

It’s a question being raised in recording studios and music label boardrooms. Will AI begin replacing flesh and blood artists, generating music at the touch of a button?

While some may discount these anxieties as irrational and accuse AI skeptics of being dinosaurs who are failing to embrace the modern world, the current developments must be taken seriously.

AI poses a potential threat to the livelihood of artists, and in the absence of new copyright laws that specifically deal with the new technology, the music industry will need to find ways to protect its artists.

We all remember when AI versions of songs by The Weeknd and Drake hit streaming services and went viral. Their presence on streaming services was short-lived, but it’s a very real example of how AI can potentially destabilise the livelihood of artists. Universal Music Group quickly put out a statement asking the music industry “which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”

“there are vast archives of music of all genres lying dormant and thousands of forgotten tracks”

However, there are ways that AI can deliver real value to the industry – and specifically to the owners of large music catalogues. Catalogue owners often struggle with how to extract the maximum value out of the human-created music they’ve already got.

But we can learn from genAI approaches. Prompt-based search experiences, recently introduced by AI systems like Midjourney, ChatGPT or Riffusion, are bound to creep into everyone’s user behavior. But instead of having to fall back on bleak replicas of human-created images, texts, or music, AI engines can give music catalogue owners the power to build comparable search experiences, with the advantage of surfacing well-crafted, great-sounding songs with a real human and a real story behind them.

There are vast archives of music of all genres lying dormant, and thousands of forgotten tracks within existing collections, that could be generating revenue via licensing deals for film, TV, advertising, trailers, social media clips and video games; from licences for sampling; or even as a USP for investors looking to purchase unique collections. It’s not a coincidence that litigation over plagiarism is skyrocketing. With hundreds of millions of songs around, there is a growing likelihood that the perfect song for any use case already exists and just needs to be found rather than mass-generated by AI.

With this in mind, the real value of AI to music custodians lies in its search and curation capabilities, which enable them to find new and diverse ways for the music in their catalogues to work harder for them.

How AI music curation and AI tagging work

To realize the power of artificial intelligence to extract value from music catalogues, you need to understand how AI-driven curation works.

Simply put, AI can do most things a human archivist can do, but much, much faster; processing vast volumes of content, and tagging, retagging, searching, cross-referencing and generating recommendations in near real-time. It can surface the perfect track – the one you’d forgotten, didn’t know you had, or would never have considered for the task in hand – in seconds.

This is because AI is really good at auto-tagging, a job few humans relish. It can categorise entire music libraries by likely search terms, tagging each recording by artist and title, and also by genre, mood, tempo and language. As well as taking on a time-consuming task, AI removes the subjectivity of a human tagger, while still being able to identify the sentiment in the music and make complex links between similar tracks. AI tagging is not only consistent and objective (it has no preference for indie over industrial house), it also offers the flexibility to retag as often as needed.

The result is that, no matter how dusty and impenetrable a back catalogue, all its content becomes accessible for search and discovery. AI has massively improved both identification and recommendation for music catalogues. It can surface a single song using semantic search, which identifies the meaning of the lyrics. Or it can pick out particular elements in the complexities of music in your library which make it sound similar to another composition (one that you don’t own the rights to, for example). This allows AI to use reference songs to search through catalogues for comparable tracks.
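
To give a rough feel for how similarity-based search can work in principle – this is a generic sketch, not Cyanite’s actual algorithm, and the feature vectors below are invented – each track can be represented as a numeric “embedding”, and a catalogue can then be ranked by cosine similarity to a reference track:

```python
import math

# Hypothetical feature vectors (e.g. derived from audio analysis); the numbers
# are invented for illustration and do not come from any real model.
catalog_embeddings = {
    "Forgotten Song 1": [0.8, 0.1, 0.3],
    "Forgotten Song 2": [0.2, 0.9, 0.4],
    "Forgotten Song 3": [0.7, 0.2, 0.2],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def similar_tracks(reference, embeddings, top_n=2):
    """Rank catalogue tracks by similarity to a reference embedding."""
    scored = [(title, cosine_similarity(reference, vec))
              for title, vec in embeddings.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

reference_track = [0.75, 0.15, 0.25]  # embedding of the reference song
print(similar_tracks(reference_track, catalog_embeddings))
```

The same ranking idea extends to semantic or tag-based queries once both the query and the tracks are mapped into a shared representation.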

The power of AI music catalog search

The value of AI to slice and dice back catalogs in these ways is considerable for companies that produce and licence audio for TV, film, radio and multimedia projects. The ability to intelligently search their archives at high speed means they can deliver exactly the right recording to any given movie scene or gaming sequence.

Highly customisable playlists culled from a much larger catalogue are another benefit of AI-assisted search. While its primary function is to allow streaming services such as Spotify to deliver ‘you’ll like this’ playlists to users, for catalogue owners it means extracting infinitely refinable sub-sets of music which can demonstrate the archive’s range and offer a sonic smorgasbord to potential clients.

“the extraction of ‘hidden’ music”

Another major value-add is the extraction of ‘hidden’ music. The ability of AI to make connections based on sentiment and even lyrical hooks and musical licks, as well as tempo, instruments and era, allows it to match the right music to any project with a speed and precision only the most dedicated catalogue curator could hope to equal. With its capacity to search vast volumes of content, AI opens the entirety of a given library to every search, and surfaces obscure recordings. Rather than just making money from their most popular tracks, therefore, the owners of music archives can make all of their collection work for them.

The tools to do all of this already exist. Our own solution is a powerful AI engine that tags and searches an entire catalogue in minutes with depth and accuracy. Meanwhile, AudioRanger is an audio recognition AI which identifies the ownership metadata of commercially released songs in music libraries. And PlusMusic is an AI that makes musical pieces adaptive for in-game experiences. As the gaming situation changes, the same song will then adapt to it.

Generative AI – time for careful reflection

The debate on the role of generative AI in the music industry won’t be solved anytime soon and it shouldn’t. We should reflect carefully on the incorporation of any technology that might potentially reshape our industry. We should ask questions such as: how do we protect artists? How do we use the promise of generative AI to enhance human art? What are the legal and ethical challenges that this technology poses? All of these issues must be addressed in order for the industry to reap the benefits of generative AI.

Adam Taylor, President and CEO of the American production music company APM Music, shared with me that he believes it is vital to safeguard intellectual property rights, including copyright, as generative AI technologies grow across the world. As he puts it: “While we are great believers in the power of technology and use it throughout our enterprise, we believe that all technology should be used in responsible ways that are human-centric. Just as it has been throughout human history, we believe that our collective futures are intrinsically tied to and dependent on retaining the centrality of human-centered art and creativity.”

The debate around the role of generative AI models will continue to play out as we look for ways to embrace new technologies and protect artists, and naturally there are those like Adam who will wish to adopt a cautious approach. But while there are many who are reluctant to wholeheartedly embrace generative AI models, there are many more who are willing to embrace analysis and search AI to protect their catalogues and make them more efficient and searchable.

Ultimately, it’s down to the industry to take control of this issue, find a workable level of comfort with AI capabilities, and build AI-enhanced music environments that will vastly improve the searchability – and therefore usefulness – of existing, human-generated music.

If you want to hear more of Markus’ views on the music industry, you can connect with him on LinkedIn here.

 

More Cyanite content on AI and music

Debating the upsides of Universal Music Group’s recent AI attack (guest post on Music Ally)


Our CEO Markus Schwarzer has published a guest post on the UK-based music industry publication Music Ally. In the post, Markus addresses the concerns that major labels and other large music companies have recently raised about the use of Artificial Intelligence in music and business – and the importance of stepping back and thinking carefully about as-yet unknown repercussions before moving into a future where AI benefits us all.

You can read the full guest post below or head over to Music Ally via this link.

In recent months, Universal Music Group has become the ringleader of a front that has formed against generative music AI companies – and latterly all AI companies.

After news made the rounds of UMG’s recent actions, people everywhere (including myself) spoke out about the positives of AI. AI has the potential to improve art, create a better environment for DIY artists, and foster new musical ecosystems. However, whilst the industry was debating the prosperous future of music fuelled by AI, with leveled playing fields, democratised access, and transparency, we forgot one thing. All of these positive outcomes might be true in the future, but the current reality of generative AI is different.

Currently, it is an uncontrolled wild west where new models have shown that they’re not just some game for the tech-interested individuals among us, but an actual threat to the livelihoods of artists.

Reading through and experimenting with recent generative music AI advancements, I can’t help but feel reminded of Pause Giant AI Experiments: An Open Letter, which was directed at developers of large language models (LLMs) like OpenAI’s GPT-4 or Meta’s LLaMA. It urged them to halt their developments and think about the implications of their projects for at least six months.

The open letter made some requests which are equally applicable to the music industry. Just like LLMs, some generative music startups see themselves “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Just like LLMs we may run into the risk that “no one – not even their creators – can understand, predict, or reliably control” them. Just like LLMs, we need to ask ourselves “Should we automate away all the jobs, including the fulfilling ones?”

The latter is a question that we at Cyanite and other AI companies also have to ask ourselves frequently. Do we automate meaningful jobs, or just tedious unloved chores to free up time for creative work?

But unlike LLMs, the music industry has copyright law to enforce the temporary halt of new training models (at least in those areas where it is enforceable). So what if the UMG-attempted halt of new generative AI training allows us to take a step back and try to get an objective perspective on recent developments? This is something that is not possible with LLMs, because training data is so much more accessible and less controllable – which is the reason people have to write open letters in the first place, a strategy with somewhat questionable prospects of success.

Many in the industry have criticised UMG’s approach as a general barrage of fire launched at any company working with AI, in the hope of hitting some of their targets; one that will ultimately also harm companies working on products beneficial for the industry, while also eventually forcing advancements in the generative space into the uncontrollable underground.

Despite this being undoubtedly true, we can’t deny that it has sparked a very important debate on whether we need to slow down the acceleration of AI. I would argue that if UMG’s actions let us pause AI for a second, take a deep breath, imagine the future of music AI and then start developing towards exactly that goal, they will have had a hugely positive effect.

If you want to hear more of Markus’ views on the music industry, you can connect with him on LinkedIn here.

Key Take-Aways From MUSEXPO 2023 In Los Angeles – Part 2


Written by our CMO Jakob Höflich

This is the second part of my take-aways from Musexpo 2023. If you have missed the first part, you can read it here.

Besides a noisy market and the importance of back catalog, these are the further topics that stuck with me when I travelled back to Germany.

AI & Data

Of course, as an AI representative I would have loved to see more AI players on stage, such as Beatoven.ai. On the other hand, it was refreshing that this hype topic was not as much in the forefront as at many other conferences; instead, actual applications and use cases came up while discussing proper real-world challenges. Nevertheless, it became clear that the current AI discussion is dominated by AI-generated music. Industry representatives voiced the fear that it will take away creativity and replace it. But then there was also the beautiful quote that in music, “only hearts will touch hearts” – unfortunately I forgot who said it, but I think it is very true. Still, the entire Castaway in Burbank, where the conference was held, held its breath when Dennis Hausammann, CEO of iGroove, put it out upfront: “Guys, AI is here and it’s here to stay. It will change the industry and you can either embrace it or decide not to. But let’s face it, it is here to stay and it is happening right now.” As you can imagine, I loved that.

What I also experienced in my conversations is that the value and benefit of AI for tagging and searching music, such as we do at Cyanite.ai, is not yet fully leveraged by music publishers. So even though this technology already delivers hands-on benefits, such as saving money on tagging and licensing more music by leveraging the depths of a catalog with AI, everything is still young. I feel we are really at the beginning of a new wave of tech-driven publishers, supervisors and sync teams who are super data- and music-savvy, leverage the huge opportunity of data, and play it back to their artists and teams meaningfully.

Internationalization

I really loved the panel “Market Discovery India”. We deal with quite a lot of requests from India and I can really see this market blowing up. What was fascinating to hear is that 5-10 years ago, around 90% of the popular music in India came from movie soundtracks. There was no separate film and music industry; it was one big industry with no separation of video and audio. Today, that number has dropped to 30-50%, which is still very high compared to other markets, but also shows that a new Indian music industry is on the rise.

But it’s not only about India. One panelist spoke of an exceptionally famous artist from South Africa who is not represented on a single streaming service. There are new, emerging markets that not only have the opportunity to transform the global music industry, but also to redefine streaming payout models as they are currently applied in the Western world.

What also came up here was the importance of subtitles. With good subtitles, regional music is no longer limited to its countries of origin: Chilean kids can enjoy K-Pop and Japanese teenagers can dig underground Macedonian rap.

The bottom line was that we will see a change from a US- and UK-dominated music industry to something more international. I find this truly fascinating, as it also opens the western-dominated music industry model to new influences from new cultures, which bring different business ethics, new ideas, and just more diversity to this fascinating industry.

MARKET FOCUS INDIA AT MUSEXPO 2023

Music For Mental Health

A little bit more niche but by no means less fascinating was the Alchemic Sonic Environment experience created by Satya Hinduja and her team. In a multi-sensory listening experience, they presented an intimate, spatial audio installation that demonstrated the potential of music for mental health. Personally, I am deeply convinced that music makes our inner walls permeable and better connects us to our true desires and needs, which is why it was so great to see and, more importantly, experience this outstanding work. They also easily won the award for the most beautiful setting and booth.

The most interesting question to me is if and how an industry that is primarily focused on entertainment is also able to tap into the healing aspects of music. A good example of that might be Endel, which offers soundscapes for all kinds of scenarios from studying to sleeping, and also collaborates with artists like Grimes or James Blake to offer “functional” musical experiences designed by actual artists. I believe something very big is starting there that also contains lots of potential for new and innovative revenue streams for artists and their work.

BEAUTIFUL SETTING OF ALCHEMIC SONIC ENVIRONMENT

Conclusion

Honestly, I would have liked to have one or two more days at Musexpo to further connect with people and possibly have some hands-on workshops that could be initiated and led by delegates, working together on some of the topics discussed in the panels (as it’s done at Future Music Camp for example). It was an intimate setting that made it easy to share openly and meet people in person that you usually only see on screen. Although the focus is very much on A&Ring, I felt there was almost a 360-degree view of the music industry’s most pressing challenges, and I’m sure everyone enjoyed getting out of the usual bubble and enjoying other perspectives as much as I did.

It became so clear to me at the conference that the biggest challenge in the music industry right now is not that AI will replace artists; it’s about discovering the great music, the hidden gems, the outstanding artists that are out there, and finding ways to connect those artists with audiences that resonate with their music. At the end of each day, every single job of every single person attending the event goes back to human creativity and the artists who write and produce music. We need technology to help us navigate the masses; we need an open dialog between the old and new music industry; and we need events like Musexpo to bring all of this together.

Key Take-Aways From MUSEXPO 2023 In Los Angeles – Part 1


Written by our CMO Jakob Höflich

I just came back to Berlin after visiting this year’s Musexpo on behalf of Cyanite – we had originally planned to attend in 2020, before Covid shut the event down.

It was a four-day event packed with panels featuring some of the industry’s leading figures such as Adam Taylor (President, APM Music), Evan Bogart (Founder & CEO, Seeker Music) and Kristin Graziani (President, Stem Disintermedia Inc.), as well as evening showcase performances at the iconic S.I.R. Studios Hollywood by an international group of artists such as Caity Baser from the UK or Holly Riva from Australia. My first eye-opener came when the German band KAMRAD played their hit song “Believe” on the first night of the showcase – a song I definitely knew from radio and that has been listened to over 70 million times on Spotify, but is still completely unknown in the American market. It made me realize again how isolated Western music markets can still be.


The panels were mainly about “traditional” craft in the areas of sync, publishing, artist promotion and distribution. In addition, many artists had the opportunity to meet supervisors and A&Rs. Technology topics such as AI, NFTs and the Metaverse were not represented in the panel topics. However, on the panels themselves and in the audience Q&A, AI was a recurring topic. Of course, as a representative of an AI company, I would have liked to see a bit more tech talk, but on the other hand, it was interesting to approach these topics from the “inside out”.

One thing that’s always refreshing to see is that everyone puts on their trousers one leg at a time. The challenges of mass content production, an extremely decentralized media and distribution landscape, and the future of creativity in the age of AI were topics to which no one had a perfect answer or a concrete solution. The challenges are obvious, and it became very clear at a conference like this that these challenges can only become solutions that benefit all players equally if they are worked on together and a dialog is cultivated between the music industry, artists and technology providers – as Cherie Hu recommends as well in this article.

Besides meeting really inspiring and genuine people in person, such as a leading NASA researcher turned music composer, here are the main take-aways that I brought back to Germany and that were interesting to see addressed.

Before I start, a huge thanks to Sat Bisla and his team, who put together a fabulous event and provided a setting in which new and old relationships can evolve, be nurtured, and deepen.

Without further ado, here are my personal key take-aways – of course, there was much more and I won’t be able to cover the whole scope of the conference.

It’s noisy and crowded

The conference started off by taking a look at the industry’s most pressing problems and opportunities. It directly became clear that the biggest challenge for all players involved is the mass of content and the numerous outlets for it. It was said that “it is freedom and chaos at the moment”. It’s extremely hard to cut through the noise, and in contrast to the times when there was MTV and your local record shop to distribute music, it is now an extremely individualized, case-by-case decision which target groups to focus on, where to reach them, and what kind of content to produce for them.

Also, everything comes with lots of new challenges for artists, who were often called “brands” at the conference. Artist development is more and more in the hands of the artists themselves (and their teams), as the big players in particular focus on placing bets on single hits that often dominate today’s streaming landscape. However, it is said that fans engage with artists, not with songs, and that is where true fandom is created.

Lots of question marks in this space of freedom and chaos revolve around TikTok and co. and how those platforms will be able to set up fair royalty payouts. And as we shift to poorly-paid licensing models such as TikTok’s, artist teams need to find new revenue streams.

The importance of back catalog & sync

There were a couple of really amazing panels around sync, publishing, and music supervision. The Hello Group’s President Phil Quartararo said in the opening panel: “People have unlearned to work their back catalog” and have forgotten how to maximize the use of it. And he subtly but directly addressed the majors with this statement. Apparently, the majors are so focused on breaking new artists and “going where the money is” that they forget about all the brilliant music that’s in their back catalogs. According to him, the industry should pay more attention to the dusty corners of the catalogs, where the real gems can be very well hidden.

What also became clear is that, despite the fact that access to music has become so easy, access to the influential people who recommend your music to the music directors at Netflix et al. or at the most influential radio stations creates a very tough bottleneck to pass through. Both radio stations and music supervisors have their so-called “trusted sources”, who not only provide them with music that could work amazingly well in sync, but who they also trust to make sure the music is easy to clear.

One thing that I found mind-blowing is that supervisors apparently often prefer to take older music, where the rights don’t have to be cleared with 15 co-writers but maybe just 2 or so. Contemporary music takes more time to clear because of the breadth of songwriters involved. This is another motivation for all songwriters out there to pay meticulous attention to clean and neat metadata!

Last but not least, commercial music produced only to get attention in sync is not really favored by supervisors. Yes, it can be a great fit sound-wise, but the initial motivation might reveal a lack of authenticity. And authenticity is what supervisors are looking for when they connect music with movie productions and especially with brands. Here, again, people engage with the people behind the songs, not only the songs themselves.

More insights tomorrow in Part 2 on AI, data and the internationalization of the music industry.

SoundOut launches OnBrand in cooperation with Cyanite


We are proud to share the latest press release by UK-based company SoundOut, the world leader in sonic testing for audio branding.

In the announcement below, read about the new OnBrand platform developed by SoundOut and powered by Cyanite’s AI, and how it empowers marketers to build certainty into every music choice for a campaign.

 

Campaign music search for brands and agencies

October 26th, 2022, London: SoundOut launches OnBrand, an entirely new approach to music search that revolutionises the process of selecting music for marketing campaigns.

Hugely scalable AI-powered music search platform removes uncertainty from every brand campaign music decision

• OnBrand ensures music choices always match brand personality and campaign goals
• Increases certainty of ROI from music choices
• Launch partners include Unilever, Scholz & Friends (WPP) and Global Radio
• Combines leading SoundOut brand personality technologies with the scalability of German music AI company Cyanite to transform commercial music selection

SoundOut, the world leader in sonic testing, has launched a revolutionary AI-powered music search and testing SaaS platform named OnBrand. It enables marketers to build certainty into every music choice for campaigns. OnBrand is powered by AI algorithms that predict the granular emotional impact of music, trained on feedback from half a million people.

OnBrand enables marketers to search across any number of music catalogues to identify campaign music that is both on-brand and campaign appropriate, using a combination of over 200 brand attributes, plus self-defined brand personality and brand archetypes. In this way, OnBrand delivers greater certainty of immediate impact and sustained ROI from marketers’ campaigns by reducing subjectivity and risk in music selection.

Global companies Unilever, Global, the Media & Entertainment Group, and Scholz & Friends – part of the WPP Network – are among the first users of the OnBrand platform.

Stephanie Bau, Global Assistant Brand Manager at Unilever, said: “With the growth of social media platforms like TikTok, sound has become the ultimate tool in a marketer’s arsenal. Choosing the right sound for our future campaigns has never been more important and this technology will enable brands to amplify their personality and have greater certainty of ROI from campaigns during these economically challenging times.”

Julian Krohn, Director Music & Audio, Scholz & Friends (WPP), said: “From an agency perspective, OnBrand is a uniquely powerful tool that will enable us to add significant value to our clients’ campaigns. Ensuring that music is both brand and campaign appropriate has never been easier – and OnBrand can only increase their return on marketing investment. We’re looking forward to working closely with the tool!”

Powered by a unique double-stacked AI layer of algorithms trained entirely on human derived data, OnBrand first automatically tags music with up to 500 separate attributes thanks to a partnership with Cyanite, the world-leading AI music tagging company. Then it uses a further AI layer to map these tags to SoundOut’s emotional DNA map of music, created with the input of over 500,000 consumer surveys and over 12 million datapoints.
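
In the abstract, a “double-stacked” setup like the one described above can be thought of as two stages chained together. The sketch below is purely schematic – the tags, weights, and brand attributes are invented and do not represent SoundOut’s or Cyanite’s actual models – but it shows how a first stage of music tags could be combined with a tag-to-attribute mapping to produce a brand profile for a track:

```python
# Schematic two-stage pipeline with invented data: stage 1 tags a track,
# stage 2 maps those tags onto hypothetical brand/emotional attributes.

# Stage 1 output: tags with confidences (would come from an audio-tagging model).
track_tags = {"uplifting": 0.8, "acoustic": 0.6, "fast tempo": 0.4}

# Stage 2: a made-up mapping from music tags to brand attributes.
tag_to_brand_attributes = {
    "uplifting":  {"optimistic": 0.9, "energetic": 0.5},
    "acoustic":   {"authentic": 0.8, "warm": 0.7},
    "fast tempo": {"energetic": 0.8, "youthful": 0.6},
}

def brand_profile(tags, mapping):
    """Combine tag confidences with tag-to-attribute weights into a brand profile."""
    profile = {}
    for tag, confidence in tags.items():
        for attribute, weight in mapping.get(tag, {}).items():
            profile[attribute] = profile.get(attribute, 0.0) + confidence * weight
    return dict(sorted(profile.items(), key=lambda kv: kv[1], reverse=True))

print(brand_profile(track_tags, tag_to_brand_attributes))
```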

Jo McCrostie, Creative Director at Global Radio, Europe’s largest commercial radio group, commented: “OnBrand represents a truly seismic revolution in how companies find brand appropriate music for commercial use. A previous lack of objectivity in music choices has restricted investment in audio marketing such as radio ads. I’ve seen for myself the positive reaction from brands to the new platform and it looks set to be transformational for the audio advertising industry.”

OnBrand can automatically rate any track against over 200 emotional attributes in a fraction of the time taken by people. It enables catalogues of millions of tracks to be emotionally indexed in under 24 hours with over 95% precision compared to human indexation.

David Courtier-Dutton, CEO of SoundOut, said: “Until now, choosing music for marketing has been a largely subjective exercise, with little in the way of objective metrics to confirm brand fit and emotional resonance. At a stroke, OnBrand introduces an objective, hugely scalable solution for brands worldwide. It enables data-informed music choices and provides robust cost/benefit analysis for any commercial music investment. OnBrand is not only totally brand-centric but it speaks brand language, enabling brands to enhance campaign performance whilst simultaneously strengthening their emotional bonds with consumers.”

Markus Schwarzer, CEO of Cyanite, added: “AI music tagging technology has advanced significantly over the past few years and has now been adopted by many of the world’s leading music and entertainment companies. The additional AI brand-centric layer that OnBrand delivers truly democratises catalogue search for brands, enabling them to find the perfect track for any campaign using brand language rather than musical attributes.”

 

About SoundOut

SoundOut is the world leader in strategic sonic branding and audio marketing testing. It has achieved this lead position by combining three powerful capabilities.

  • Working with world leading music psychologists and over 500,000 consumers, it has mapped the explicit emotional DNA of sound and used this as the foundation for a suite of tools, such as BrandMatch, that can be used at various stages of sonic branding development to increase the certainty of a return on investment.

  • The development of a wholly owned consumer panel of over 3.5 million people, which enables brands to test their sonic assets at scale.

  • The testing and analysis of almost 200 in-market sonic logos with over 400,000 consumers (The SoundOut Index), which reveals the key criteria that are essential to audio branding and audio marketing success.

SoundOut works with many of the most iconic brands in the world (such as TikTok, Amazon, Toyota, DHL, Ford, Unilever and GSK) as well as all the major record labels and many leading radio groups. SoundOut specialises in helping organisations trigger the right emotional response from their customers by matching brand personality and attributes to music. As a result, SoundOut provides the data and insight needed by clients to increase the certainty of achieving a strong ROI from their audio branding and marketing investments.

Clients use SoundOut’s unrivalled strategic sonic testing capabilities to identify the effectiveness potential of new sonic identities before they are launched and ensure that they resonate with the core brand personality.

OnBrand now scales these capabilities to all use of music in brand marketing, enabling brands to index huge music catalogues and search them based on the personality, attributes or archetype of their brand.