Our CEO Markus Schwarzer has published a guest post on the UK-based music industry publication Music Ally. In the post, Markus addresses the concerns that major labels and other large music companies have recently raised about the use of Artificial Intelligence in music and business – and the importance of stepping back to think carefully about as-yet unknown repercussions before moving into a future where AI benefits us all.
You can read the full guest post below or head over to Music Ally via this link.
In recent months, Universal Music Group has become the ringleader of a front that has formed against generative music AI companies – and latterly all AI companies.
After news of UMG’s recent actions made the rounds, people everywhere (including myself) spoke out about the positives of AI. AI has the potential to improve art, create a better environment for DIY artists, and foster new musical ecosystems. However, whilst the industry was debating the prosperous future of music fuelled by AI, with levelled playing fields, democratised access, and transparency, we forgot one thing. All of these positive outcomes might be true in the future, but the current reality of generative AI is different.
Currently, it is an uncontrolled Wild West, where new models have shown that they are not just a game for the tech-interested among us, but an actual threat to the livelihoods of artists.
Reading through and experimenting with recent generative music AI advancements, I can’t help but feel reminded of Pause Giant AI Experiments: An Open Letter, which was directed at developers of large language models (LLMs) like OpenAI’s GPT-4 or Meta’s LLaMA. It urged them to halt their developments and think about the implications of their projects for at least six months.
The open letter made some requests which are equally applicable to the music industry. Just like LLM developers, some generative music startups see themselves “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Just as with LLMs, we run the risk that “no one – not even their creators – can understand, predict, or reliably control” them. And just as with LLMs, we need to ask ourselves: “Should we automate away all the jobs, including the fulfilling ones?”
The latter is a question that we at Cyanite and other AI companies also have to ask ourselves frequently. Do we automate meaningful jobs, or just tedious unloved chores to free up time for creative work?
But unlike with LLMs, the music industry has copyright law to enforce a temporary halt on training new models (at least in those areas where it is enforceable). So what if the halt on generative AI training that UMG is attempting allows us to take a step back and try to get an objective perspective on recent developments? This is not possible with LLMs, because their training data is far more accessible and less controllable – which is the reason people have to write open letters in the first place, a strategy with somewhat questionable prospects of success.
Many in the industry have criticised UMG’s approach as a general barrage of fire launched at any company working with AI in the hope of hitting some of its targets; one that will ultimately also harm companies working on products beneficial to the industry, while eventually forcing advancements in the generative space into the uncontrollable underground.
While this is undoubtedly true, we can’t deny that it has sparked a very important debate on whether we need to slow down the acceleration of AI. I would argue that if UMG’s actions let us pause AI for a second, take a deep breath, imagine the future of music AI, and then start developing towards exactly that goal, they would have a hugely positive effect.
If you want to get more updates from Markus’ view on the music industry, you can connect with him on LinkedIn here.