An Analysis of Club Sounds with Cyanite


If we asked you to describe the vibe of your favourite nightclub, could you? Today, we show you how we would describe the sounds of some of our favourite clubs, with the help of the Cyanite music AI analysis software. 

We analysed album compilations of 9 well-loved clubs across Germany. From Berlin, these were Berghain, Griessmühle, About Blank, Golden Gate and Kater Blau. In addition to these, we analysed music from Hamburg's Golden Pudel, Leipzig's Institut für Zukunft, and Omen and Robert Johnson in Frankfurt.

The mood multi-label classifier provides the following labels:

tense, uplifting, relaxing, melancholic, dark, energetic and happy

Each label has a score ranging from 0 to 1, where 0 (0%) indicates that the track is unlikely to represent a given mood and 1 (100%) indicates a high probability that the track represents it.

Since the mood of a track might not always be properly described by a single tag, the mood classifier can predict multiple moods for a given song instead of only one. A track could be classified as dark (score: 0.9) while also being classified as aggressive (score: 0.8).

The mood can be retrieved both averaged over the whole track and segment-wise over time with 15s temporal resolution. In addition to the score, the API also exposes a list of the most likely moods, or the term ambiguous in case the audio does not properly reflect any of our mood tags.
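To make the mechanics concrete, here is a minimal sketch of how per-label scores, a cut-off threshold, and the ambiguous fallback could fit together. The function name, the dictionary shape and the 0.5 threshold are our own illustrative assumptions, not Cyanite's actual API schema or internals:

```python
# Illustrative sketch: deriving a multi-label mood list from per-label
# scores in [0, 1]. Field names and the 0.5 threshold are assumptions,
# not Cyanite's actual schema.

MOOD_LABELS = ["tense", "uplifting", "relaxing", "melancholic",
               "dark", "energetic", "happy"]

def most_likely_moods(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return all moods at or above the threshold, or ["ambiguous"] if none qualify."""
    likely = [m for m in MOOD_LABELS if scores.get(m, 0.0) >= threshold]
    return likely if likely else ["ambiguous"]

# A track can carry several moods at once:
print(most_likely_moods({"dark": 0.9, "energetic": 0.8, "happy": 0.05}))
# → ['dark', 'energetic']

# A track matching nothing well falls back to "ambiguous":
print(most_likely_moods({"happy": 0.2, "relaxing": 0.3}))
# → ['ambiguous']
```

The same thresholding would apply per 15-second segment when using the segment-wise output.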

Insights from the Instrumental and Voice analysis: Berlin clubs lead the way for the greatest use of instrumentals in their tracks.

Based on results from the CYANITE Instrument and Voice machine learning analysis, we see that while the tracks on most of these club compilations are heavily dominated by instrumentals, the four clubs with the most instrumental-heavy tracks were all from Berlin.

About Blank contained the highest amount of instrumental, followed by Griessmühle, Berghain and Golden Gate. For all of these clubs, our analysis showed that instrumentals made up more than 80% of the tracks. 

When we look at the results of the voice analysis, we see that the clubs with the most use of female voices in their tracks are clubs outside of Berlin. In first place, we have Institut für Zukunft, followed by Omen, and then Golden Pudel. Funnily enough, we also found that the four clubs with the least use of female vocals in their tracks were from Berlin! These clubs are: Kater Blau, About Blank, Griessmühle, and Golden Gate.

When looking at the presence of male vocals, we see that Golden Pudel is the club using the most male vocals amongst the clubs we are studying today. This is followed by Omen and Golden Gate. 

Based on the results from this analysis, the data from Golden Pudel intrigued us the most. We observe that Golden Pudel's music, unlike that of the rest of the clubs, has a more even balance between instrumentals and vocals, at almost a 50/50 split.

Insights from the CYANITE mood tagging technology: Berghain is the gloomiest of them all. 

Looking at the results, here's what we found:

With its grim industrial aesthetic, it's fitting that our analysis found Berghain to be the most melancholic club. Berghain ties with Institut für Zukunft for having the darkest sound.

Golden Gate, a favourite of ours for a good night of House music, takes the prize for being the most uplifting club, while Golden Pudel in Hamburg was found to be the most happy and relaxing. Our mood analysis also showed that the compilation from Frankfurt's now-defunct Omen club is at once the most tense and most energetic out of all the clubs' compilations. A very apt result indeed: Omen was a prominent symbol of the unrestrained, pure fervour of '90s rave culture, and one whose sound we definitely miss greatly.

Talking about the Berlin Sound

Comparing the clubs, we see that clubs in Berlin have a distinct, extreme skew towards the dark and melancholic, indicating the very characteristic moodiness that we so love and miss in these times!

Looking at the clubs elsewhere, we see that while dark and melancholic moods are still very much present, there isn't as clear a skew towards these two alone. Instead, our data from the 4 clubs outside of Berlin show more diverse moods, with no clear skew in any particular direction.

Genre Tagging with our music AI: Some interesting insights 

We see that for Berlin clubs, the CYANITE AI analysis of their club compilations reveals a strong skew towards Techno and Tech House. The top 3 clubs with the most Techno in their songs are: in first place, About Blank, followed by Berghain, and then Griessmühle. For Tech House, the top 3 clubs are Golden Gate, Kater Blau, and About Blank.

Outside of Berlin, we see a more varied mix of genres in the club compilations. Omen ranks highest in the amount of Trance in the selection, a genre that was almost entirely absent from the Berlin clubs we studied.

You can listen to some of the compilations we analysed here:

About Blank: :// About Blank (2018), :// About Blank 002 (2017), :// About Blank 004 (2018), :// About Blank 006 (2019) and :// About Blank 007 (2019)

Kater Blau: Katermukke 150 Compilation (2017)

Berghain / Panorama Bar: Ostgut Ton – Zehn (2015)

Golden Gate: Compilation (2012)

Golden Pudel: Operation Pudel (2001)

Omen: Moka DJ Compilation (1996)

Institut für Zukunft: Various 5IVE (2019)

Robert Johnson: Livesaver Compilation 2 (2015) & Livesaver Compilation 3 (2017)

 

Overall, our quick research into these clubs with AI showed us some very interesting things. It seems that with a larger data set, it might be possible to quantify the Berlin sound and perhaps also sounds for other key party cities.

Cyanite Talks #4 with Sarah Mibus


The guest for this #CyaniteTalks is Sarah Mibus. 

Sarah is an expert in music selection and planning, and she works in close contact with radio and TV broadcasting companies. She regularly holds seminars on various topics tied to the editorial world, and she is a lecturer in music journalism and planning at various universities.

In this interview you will get acquainted with Sarah’s profession and get her perspective on the current state of the music industry in relation to one of our hottest topics – technology.

 

Cyanite: Hi Sarah, you are an expert when it comes to music choice and planning. What does your job revolve around and what was your latest highlight in your professional work?

Sarah Mibus: As a freelancer, I mainly support radio and TV editors in all kinds of music queries. On one side, I always work closely with the programme: I select the music and plan playlists. On the other side, I also advise radio directors on music strategies and market orientation, and I give seminars about these topics on a regular basis. Last year I experienced, like many others, a full digitalisation boost. In 2019 I still travelled to every seminar and meeting by ICE train; now everything easily goes out into the world from my top-floor apartment via Zoom. I am thankful that through my job I can, in this new normality, still be in touch with many editors, radio hosts, seminar participants and students. Even though Zoom meetings cannot replace human contact, the sudden switch to digital and the realisation that it works so well has been the highlight of my job throughout these past few months.

Cyanite: Due to Covid-19, advertising budgets have dropped significantly and costs are being cut quickly in music. How do you deal with this?

Sarah: My clients are mostly public TV and radio editors. Budget cuts have been a recurring theme for years; everybody is encouraged to save money, and funds have to be used very conscientiously. It was like this even before Covid-19. Event transmissions and presentations have not taken place since last March, so a lot of money has probably been saved.

Cyanite: Imagine if you had an assistant robot, what would she/it/he do for you?

Sarah: I would urgently need someone who could tidy up my hard drive. Otherwise, I enjoy doing everything concerning my self-employment – even taxes!

“Music perceptions are so different: sad music makes some even sadder, others find comfort in it. For this reason I see a lot of potential in personalised formats in the music field.”

© Photo by Jr Korpa – Unsplash

Cyanite: Artificial Intelligence in the music field – where do you see the biggest potential for its application?

Sarah: Artificial Intelligence and algorithms are very exciting, and I am curious to see where society is going with this. Music perceptions are so different: sad music makes some even sadder, others find comfort in it. For this reason I see a lot of potential in personalised formats in the music field; we are only at the beginning. For providers such as Amazon or Spotify, each of us is a gigantic data set that we constantly feed with social media, online shopping and our daily clicks on the internet. This creates a clearer consumer profile, which can be precisely served. Gadgets such as the Apple Watch give providers information even about our bodies' biochemistry. I would not be surprised if Spotify knew about my cycle at some point and delivered the most fitting soundtrack to my PMS mood. I find this impressive and scary at the same time. Mass programmes on TV and on the radio represent the opposite. It is not about individual moods, but about creating or depicting a collective feeling, which has its own charm next to the individualised offers.

Cyanite: Creating a collective feeling sounds very exciting and also very difficult: are there any patterns or tips you’ve identified about how to work out the right music for a larger group of individuals?

Sarah: Music directors need a gut feeling for certain moods or social feelings that many people experience. For instance, this year Carnival was cancelled – a great tragedy for the inhabitants of Cologne. They now had to spend these days mostly alone at home. In such situations, it is up to the editors to ponder whether and how to depict this social feeling of sadness, nostalgia and disappointment in their programmes. This is conveyed through emotive presenters and contributions that show compassion and a sense of community – and of course through the right music. For example, you can deliver the carnival to the listeners’ living rooms or play motivating songs that give hope for a better future. One of the strengths of mass programmes is to convey the feeling of not being alone – to listen to music with others and feel like you belong with them. An important topic, especially at this time!

“I think it’s important to learn the rules of music planning and music use in order to then break them in a meaningful way. In this way, individual and creative ideas can emerge.”

Cyanite: What tips would you give to DIY content producers who are looking for music for their films or productions and often cannot rely on a budget to consult experts?

Sarah: In my workshops I always advise the participants to listen more to their gut feeling. By now, content creators have access to so much data to back up their decisions and make sure they meet the taste of their audience. In this process, the personal touch and the artistic aspect often fall by the wayside. But I think it is also important to learn the rules of music planning and music use in order to be able to break them in a meaningful way. This way, individual and creative ideas can emerge.

“Numbers and surveys alone are not enough to set trends […]

you need expertise, decisiveness and fun while working.”

Cyanite: If you could change one thing about the professional handling of music in the German-speaking context, what would it be?

Sarah: I would like to see more courage! Many music editors in this country make decisions very reluctantly or do not make them at all. At times, they blindly rely on market research results and bluntly translate the numbers into the programme, without being aware of their own music strategy. This happens out of habit and/or fear. All programme managers should be able to state the goal of their daily work in one sentence. I would like to see and hear that much more often! And that's what I'm fighting for! Numbers and surveys alone are not enough to set trends and create extraordinarily good programmes. Now more than ever, you need expertise, decisiveness and fun while working.

Cyanite: Your job in 10 years?

Sarah: With the best will in the world, I don't know what this path will look like in 10 years. But my goal is clear: to touch viewers and listeners through music and music choices in TV and radio formats, and to inspire and motivate my seminar participants.

Thank you, Sarah, for taking the time and helping us shape our outlook on the music industry and tech.

WISE Panel Video: AI – Musician’s Friend or Foe?


WISE hosted a virtual panel moderated by Kalam Ali (Co-Founder, Sound Obsessed) to connect music industry experts and have an open discussion about the adoption of AI technologies by artists. Guests included Rania Kim (Creative Director, Sound Obsessed & Portrait XO), Harry Yeff/Reeps One (Director, Composer, and Artist, R1000 Studios), Heiko Hoffmann (VP Artist, Beatport) and Markus Schwarzer (CEO, Cyanite).

 

All united by their interest in music and its future, they shared their views on the different access points for AI to be embraced for what it is in the bigger picture: a solution to improve performances, enhance the UX and provide inspiration in music production.

Education is the means to ensure a deeper understanding of this technology, which is still widely questioned as damaging to the connection people have with music. A realistic assessment of the opportunities that implementing AI offers artists and, at the same time, of the risks of improper use can break these fear barriers.

Finding a middle ground between humans and the autonomy of AI is key, especially in these days when a digital approach is often the only feasible way to make life feel as normal as it should be.

The extended video of the talk is available on YouTube.


Translating Sonic Languages: Different Perspectives On Music Analysis


For this guest post we are glad to share Benjamin Doubali's analysis of how to visualize sound. Benjamin studied sociology in Mainz and Paris. Through his work and research, he aims to explore shifts in society, knowledge and everyday interactions under the conditions of digitalization. He is passionate about artistic concepts regarding the relationship between culture and technology.

 

The article was written by Benjamin Doubali.

Let’s say music is a code. 

This claim may seem a little confusing. Isn’t music an aesthetic experience; isn’t it dynamic, fleeting? Isn’t it everything that code usually cannot be? Sure, music is unique, it is art. Nevertheless, allow the thought for a moment: music is systematically structured and categorised, and it follows a strict “grammar”. It is not mysterious, but enigmatic. Music is auditory code – a code that needs to be deciphered and translated. And we can process this code by technological means, like any other sign system. Unlike other codes, however, the code of music is not stable and predictable, but surprising and diverse.

 

Songwriters are translators – and so are music lovers

Consider the matter from the songwriter’s point of view: she has an experience to share, a story to tell or a musical idea she can’t let go of. Songwriters seek to express feelings from the depths of the human experience, like the confusion after a break-up, missing the person that is now just somebody you used to know. They have the knowledge and the tools (literally “instruments”) to transform and condense ideas into sound. For this purpose, they use established symbol systems, tonal grammar, musical code. A songwriter expresses her feelings in a sonic language, which is a term used by the musician Claudio in addressing this issue. The songwriter becomes the translator of her own emotional world. 

Later, someone will hear the sonic language, its tones, rhythms, lyrics and translate it once again, perhaps feel something, associate situations, or images with it. How does emotion translate into a great song? And how does it “turn back”?

Admittedly, this is a very broad concept of translation: I refer to the mere interpretive, consistent transmission from one thing to another. One could call it intersemiotic translation; this is a term from philology, the cultural study of languages, indicating the translation between totally different sign systems (or modes of expression). This is what happens, for example, when novels are adapted for cinema. 

Sounds are symbols – and they’re able to touch us

How does a great song turn into emotion? It’s not so easy to determine: sound does not “carry” meaning in some magical way. In other words, the emotions and images we associate with an auditory impression are not surfing on the sound waves; they are not transported to us. Sound itself is a meaningless symbol of a complex code. The solution can be found elsewhere: emotion has not been transmitted into our consciousness – it is already there.

Music can resonate in places of our inner world; it touches and moves us. By listening to music, we feel sadness, joy, and ecstasy – fundamental components of the human experience. These are not plainly inscribed in the sonic language. We should rather think of music as a way to stimulate impressions which are deeply intertwined with our existence. 

David Anderson © Unsplash

When music looks like twisting shapes

Our interpretations of the sonic language encompass elusive associations, slight toe tapping, wild dancing, and unrestrained singing. Another unique mode of musical perception is called synaesthesia. Synaesthesia is a cognitive phenomenon, describing involuntary combinations of perception. In a common form of synaesthesia, people perceive numbers as inherently coloured. For others, sounds have shapes. Illustrating this, a synesthete described an example to me: When two people sing together, she perceives two lines that either run harmoniously or repel each other. The cognitive perception of music can thus become a dance of geometric forms.

Su-san Lee © Unsplash

This is fascinating and underlines that the full meaning of the sonic language isn’t part of the physical sound, but only evolves through individual perceptual processing of its structures, sometimes creating surprising effects. The listener’s perception processes the material entity of sound into an experience. According to this, a melody is like a sequence of data that requires a “processing unit” to be meaningful (for more on this, I recommend the book “Muster” by the sociologist Armin Nassehi).

Using digital technology for translation

It may hardly come as a surprise that the power to process and therefore translate the sonic language is not an exclusively human ability. Digital technology can also access the sonic language. Cyanite’s AI is trained to analyse it by recognising recurrent patterns. 

The process to successfully analyse music with neural networks takes several steps. Following my reasoning, we can picture these steps as translatory tasks. When it comes to data pre-processing, the team at Cyanite generates a visual representation of music (namely spectrograms); an activity we can call a “strategic rearrangement” of music: the characteristics of music are translated into graphical patterns, which can then be subject to pattern recognition. With the help of strategic rearrangements, the musical code reveals itself. After thorough training procedures, the AI learns to “read” the sonic language and to ascribe how it resonates in us.
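As a toy illustration of that music-to-image step, here is a minimal magnitude spectrogram built from a short-time Fourier transform in NumPy. Cyanite's actual pre-processing pipeline is certainly more elaborate; this only shows the principle, with frame size, hop length and the test tone chosen for illustration:

```python
import numpy as np

def magnitude_spectrogram(signal, frame_size=512, hop=256):
    """Slice the signal into overlapping frames, window each frame,
    and take the FFT magnitude: time on one axis, frequency on the other."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, frequency)

# One second of a 440 Hz tone at an 8 kHz sample rate shows up in the
# spectrogram as a bright horizontal band around 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=0).argmax()
print(peak_bin * sr / 512)  # → 437.5, i.e. the bin closest to 440 Hz
```

It is this image-like representation, rather than the raw waveform, that pattern-recognition models are trained on.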

Going a step further: Creative Coding

It is well known that there are bittersweet ambiguities in music; a song can be both uplifting and sad. Cyanite’s music analysis tries to do justice to such contingencies by giving probability values for its attributions and by allowing “overlapping” mood categories.

In the context of inherent ambiguities, the independent art project vi · son tries a different, creative approach to digitally translate music. The project is working on audio-reactive digital art and engages with the question: Can we make music visible? Not just metaphorically, but truly? 

To translate the sonic language visually, the group applies methods of creative coding. In particular, so-called Generative Art enables data-based artworks such as moving sound sculptures that accentuate specific features of music. The curator and digital art expert Jason Bailey writes: “Generative Art is art programmed using a computer that intentionally introduces randomness as part of its creation process.” This implies neither the complete autonomy of the machine nor total command over it: “The truth is that generative artists skillfully control both the magnitude and the locations of randomness introduced into the artwork.” Generative Art is a way to explore portrayals of sound data, creating suitable visual representations of music. The resulting artworks interpret and reflect the spirit and aesthetics of the sonic language.

Guido Schmidt & vi · son: Sound Data Sculpture Sketch

One example is the digital scene aurora from the series Sound Data Sculpture Sketches. The creation process starts with a set of dots that move on a sphere. Over time, their path is traced to form tubes, which produces an organic appearance. A representation of the underlying song’s frequencies is texture-mapped onto the geometry of the tubes and used to generate colour gradients that react to the music. From this interpretative, digitally mediated translation of the original song, a dreamy audio-sculpture is created. By interpreting the musical parameters, this artwork goes further than a mere technical analysis. It thereby contemplates the poetry and beauty of the sonic language, seeking to visually formulate an accurate translation.

The project presents further examples of creative music visualizations in an ongoing digital exhibition.

The whole theme of “translation” points to the fact that music is socially formalised and follows symbolic structures. Music is deeply connected to our human experience because it works like a language, because it translates into emotion and bodily reactions. The notion that music is tangible and rests upon patterns that we can calculate and process with digital technologies is not as weird or scary as it might seem. Music is a code – and that is a beautiful thing.

Visit vi · son ‘s digital exhibition here


How to Use Cyanite to Find Music for Your Videos


Communicating through video is better with music. Especially if you want audiences to feel what you’re saying. Without music, there’s no emotive hook for the viewer to tie onto. It’s just moving images that might say something – but your viewer might not get it.

Some of the best marketing campaigns are built around the right music. The launch of the Apple iPod became synonymous with Jet’s hit single, “Are You Gonna Be My Girl?” (the then-unknown Australian band sold 3.5 million copies of their album thanks to the exposure). The song created a feeling of excitement beyond the campaign.

McDonald’s iconic jingle, “I’m Lovin’ It”, did the same. The phrase became part of everyday language, and still feels as happy as the brand’s imagery looks. And then there was Cadbury’s now-classic ad: a gorilla playing the drums to Phil Collins’ smash hit “In the Air Tonight.” Although it had nothing to do with chocolate, the musically memorable visuals produced a 10% spike in sales – three times the normal level.


So how do you find music for your video? Music that connects and communicates as effectively as those listed above? Good news is, there’s a process – one that’s intuitive and data-driven. And Cyanite’s tools are designed to take you through each step.

Inspiration – Similarity Search

Finding the right music for your video can be hard if you don’t know where to start. Usually, though, you or someone involved with the project will have an idea of what you want. This inspiration might come from the mood of your video or the desired reaction from your audience. But usually, you have a reference track in mind; a way of saying, “it should sound like this”.

For example, you might say, “I want something like ‘Arlo Parks – Cola’.” That’s already a great starting point, because a reference track is specific; it’s different enough from most other songs, and similar enough to some, to find sensible matches. This is where Cyanite’s Similarity Search comes in.

The Similarity Search function creates a musical mood board – a collection of songs that are similar to your reference. You can search within a database of popular Spotify songs or within your own library of uploaded songs. It does this through algorithms that deeply analyse your song to find similar patterns. (To learn more, check out this article.)
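Cyanite doesn't disclose the internals of Similarity Search, but the general principle behind tools like this – represent each song as a feature vector, then rank the library by its closeness to the reference – can be sketched in a few lines. The feature vectors and dimension labels below are made up purely for illustration:

```python
import numpy as np

def rank_by_similarity(reference, library):
    """Rank library tracks by cosine similarity to the reference vector."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(library.items(),
                  key=lambda kv: cos(reference, kv[1]),
                  reverse=True)

# Hypothetical 3-dimensional feature vectors (say: energy, valence, tempo):
reference = np.array([0.2, 0.8, 0.4])          # the reference track
library = {
    "track_a": np.array([0.25, 0.75, 0.45]),   # close to the reference
    "track_b": np.array([0.9, 0.1, 0.8]),      # very different
    "track_c": np.array([0.3, 0.7, 0.3]),
}
for name, _ in rank_by_similarity(reference, library):
    print(name)  # → track_a, track_c, track_b
```

Real systems use far higher-dimensional embeddings learned from audio, but the ranking idea is the same.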

Now you’ve got a selection with a similar sound and feel. You can even refine your search using Cyanite filters – voice, mood, genre or timbre. That means you can tell the AI to find songs with voices like Arlo Parks’, while excluding genre or mood. 

All of this narrows down your search, giving you precise suggestions based on your inspiration. But you’re still looking for the right song – one that matches the emotional journey of your video.

Comparison – Track Mood Analysis

Enter Cyanite’s Track Mood Analysis – a tool that fine-tunes your Similarity Search results. With the generated selection in front of you, you’re able to compare each song’s emotional qualities.

A circular diagram maps each song to the following mood states: happy, relaxing, calm, melancholic, dark, tense, energetic and uplifting. Our AI measures how much of each mood state is present in a song. The happier it is, for example, the more area it takes up under ‘happy’. This means you can immediately see which songs fall into which mood states. View each song separately, or layer them for a visually effective comparison.

That’s exactly what we did for the soundtrack of I May Destroy You, a British comedy-drama television series. Interestingly, of the twenty-five songs analysed, the AI identified none as “uplifting”. This seems to fit the darker tone of the traumatic story (about a woman trying to start over after being raped in a nightclub).  You can find the results of our analysis here.

 

Now you know how much each mood is present in each song. Time to pick what’s right for your video.

Decision – Dynamic Emotion Analysis

Making the final call on which song to use arguably deserves the most care. The song you choose – whether licensed commercial music or production music from a library – is associated with your video forever. That’s where data comes in. The more detailed and relevant it is, the more information you have to make a better decision.

Dynamic Emotion Analysis gives you that depth of data. This tool provides a second-by-second analysis of the emotion in a song. You’ll see the exact value for each mood state at any point. The results are displayed together with the song’s full audio (presented as a waveform). Move the mouse along the waveform while listening to discover the mood at that specific moment. 

You can follow the fluctuation of each mood state throughout the whole song. This makes it quick and easy to find the parts that are happy, tense, or melancholic as you need them to be. Just jump to the point with the highest value for an emotion to see if that part is perfect for your video.
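In code terms, “jump to the point with the highest value for an emotion” is just an argmax over the per-segment scores. A minimal sketch, using hypothetical segment-wise values rather than Cyanite's actual output format:

```python
# Hypothetical mood scores, one value per 15-second segment
# (illustrative numbers, not real Cyanite output).
happy_scores = [0.10, 0.15, 0.40, 0.85, 0.60, 0.30]
SEGMENT_SECONDS = 15

def peak_timestamp(scores, segment_seconds=SEGMENT_SECONDS):
    """Return (start_seconds, score) of the segment where the mood peaks."""
    i = max(range(len(scores)), key=lambda k: scores[k])
    return i * segment_seconds, scores[i]

start, score = peak_timestamp(happy_scores)
print(f"'happy' peaks at {start}s with score {score}")
# → 'happy' peaks at 45s with score 0.85
```

Jumping to that timestamp in the waveform lets you audition exactly the passage where the mood you need is strongest.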

Now you’ve got the right data at the right moment. It’s up to you to choose the right song. With the right song in place, your editing has a better shot at being a hit.

This was our simple, structured approach to picking the best music for your video content. Of course, AI can’t do all the work for you – music selection is still too complex to leave out the humans who make it, for now. But the algorithm can certainly assist you in the song-finding process, or even point you in a few new, data-based directions you might never have considered. You’ll just have to explore it for yourself.

To put these steps into action, check out our platform here. Or reach out to our team via mail@cyanite.ai for support or help with your specific needs. 
