Cyanite Talks #3 with Josephine Geipel – Music Therapist & Researcher at SRH Heidelberg


For the third part of our interview series #CyaniteTalks we sat down with Josephine Geipel, music therapist and researcher at the SRH University Heidelberg. Josephine’s insights show us the power of music far beyond its use for entertainment and leisure purposes. 

Learn more in this interview about the healing effect of music for depressive teenagers and how all of us can actively use music as a tool for emotional and mental stability. Enjoy the read.

 

Cyanite: Hi Josephine, you are a music therapist and teach at the SRH University of Applied Sciences in Heidelberg. To begin, we would like to ask how you found your way into this profession and how music therapy can be defined.

Josephine Geipel: First of all, thank you very much for the invitation to the interview. My journey to music therapy is actually a very classic one, one that most of our students can also tell. I made a lot of music at school, went to a musical high school, and when it came to choosing a profession, I thought that a social job would be nice. I could imagine myself as a pediatric nurse, a midwife, or a special education teacher, and in my research I discovered that you can study music therapy and thought: how crazy is that? I took a small detour, studied theatre studies, and worked in cultural management. But then I realized that I needed direct contact with music again and that I didn’t just want to take care of the administrative part of the music business. So when I studied for the second time, I added a master’s degree in Music Therapy on top.

There are actually only 5 universities in Germany where you can study music therapy, which is certainly one reason why the subject is not so well known. Music therapy is now also listed among the small disciplines at German universities – a list of subjects considered particularly worthy of support and protection.

We define music therapy in Germany as the use of music within a therapeutic relationship to restore, maintain and promote mental, spiritual or physical health. What is very important is that this happens within a therapeutic relationship. This distinguishes music therapy from music medicine, which uses music for the same purpose but not within a therapeutic relationship. Instead, a health professional turns on the jukebox and the music plays: there is no discussion of the music’s effect with the patient and no playing of music together. This offers a good demarcation between the two areas, since the terms are often confused.

 

“We define music therapy in Germany as: the use of music within a therapeutic relationship to restore, maintain and promote mental, spiritual or physical health.” 

 

Cyanite: Is music therapy already an accepted field in medicine, or do you still have to fight to justify it?

Josephine: It is actually a very, very old field. Music has been used in medicine for thousands of years, both by indigenous peoples and by ancient advanced civilizations like the Greeks. It is not so deeply anchored here in our Christian culture: illness was long seen as God’s punishment, and music was used to proclaim the word of God. Only since the 17th/18th century has ‘music as a remedy’ been discussed again, which is why it is not yet as deeply rooted in our culture as it is in other parts of the world. Nevertheless, today’s music therapy is present in many guidelines for the inpatient care of patients and is a relevant part of the treatment of psychiatric and psychosomatic illnesses. Psychiatric and psychosomatic clinics are the places where most music therapists work, but they are also found in acute medical areas and in rehabilitative institutions. Neurological music therapy, for example, is a growing field in Germany where music is used very functionally, e.g. to improve the postural control of stroke patients who have lost certain bodily functions, or with Parkinson’s patients, where rhythm is used to restore motor functions.

Further, I also work practically in the field of neonatology, i.e. with premature and sick newborns and their families. Here, the main aim is to encourage parents to hum and sing for their child to strengthen their relationship and promote relaxation. Other areas of application are oncology, palliative care, curative education contexts, and the field of community music.

“In many hospitals music therapy is a relevant part of the treatment of psychiatric and psychosomatic illnesses.”

 

© Photo by George Coletrain – Unsplash

 

Cyanite: What can music do that other forms of therapy cannot? What makes music so special in therapy?

Josephine: Well, I think the most pronounced thing is that music therapy is one of the therapy methods that also enables the treatment of non-verbal patient groups who cannot access psychotherapy. These can be people who, due to a limitation, can no longer understand or produce speech, e.g. after a stroke or because of a disability. Or people who no longer have the strength, e.g. in palliative care at the end of life. For them, music can be a different approach to the thoughts and feelings they are dealing with. Or groups of patients who have literally lost their power of speech, e.g. after traumatic experiences, or people with depression and anxiety disorders who find it difficult to talk about their feelings and thoughts – to put them into words at all.

I am mainly researching music therapy with depressive teenagers. Young people are already going through such a difficult phase of change: the brain is being remodelled, which can lead to mood swings, and if an illness such as depression is added, they often find it difficult to access, express and regulate their emotions. Active music-making is a great way to express those feelings that cannot be expressed verbally and then find the words for them. Music is a kind of opener.

If we look at the symptoms of depression – people withdraw, have little social contact, a depressed mood, low self-esteem and a low level of activity – and then look closely at what actually happens when I make music with a young person, write a song and then record it: we have a common activity in a social relationship. Making music is something active, it increases the level of activity, and music is fun. We make music because we enjoy it. In the case of a depressive mood, it is doing something that is fun and encourages people to open up. Music picks people up quite well, especially young people. There is no age group that listens to and creates music as much as young people.

“Active music-making is a great way to express those feelings that cannot be expressed and then find the words to express them.”

© Photo by Hans Vivek – Unsplash

Cyanite: What does music do to us that touches us so deeply inside, that it can trigger us or bring certain things to light?

Josephine: Well, I am not a neuroscientist who can explain this in detail. But the regulation of mood or one’s own activity level is one of the most important reasons people listen to music. Music activates many different areas of the brain that are important for emotional reactions. It directly addresses the limbic system, which is responsible for processing emotions: the body’s own reward system is activated. Music can therefore cause the release of dopamine and endogenous opioids – reactions similar to those we see with sex or certain drugs. These substances increase our drive, motivation and mood.

 

Cyanite: Can we then also generalize that certain music triggers a positive mood and a high energy level? Or does it differ from person to person?

Josephine: Well, there are certain musical parameters that cause similarities. Music can trigger certain emotions in us, but there are many, many different mechanisms that can underlie this – some universal and some individual. Universal would be the mechanism of musical contagion: when a song has a very slow tempo, is in a minor key, and perhaps also has sad lyrics, the mood in the music can be transferred to our mood. Scientists debate whether this really happens via the mirror neurons. Now imagine I have a patient who had a car accident – a traumatic experience – and during the accident ‘Dancing Queen’ by ABBA was playing. It is a very positively charged upbeat song, which most of us would perceive as happy and which puts many people in a good mood. For the patient who experienced the accident, however, listening to this exact song could trigger a flashback that brings them directly back into the difficult emotional state experienced during the accident. Playing this song would then be absolutely contraindicated.

There is no music that works the same for everyone; it depends on the situation you are in, your current state, the experiences you have had, and so on. There are multiple variables at play, which makes the process highly complex.

 

Cyanite: Algorithms try to make exactly such generalizations. To what extent do you come into contact with artificial intelligence in your profession, and where do you see the greatest potential for integrating this technology into music therapy and medical applications?

Josephine: In my practical work as a music therapist I have less contact with artificial intelligence, but of course both patients and I are surrounded by it in our environment. Patients use health monitoring apps with sleep and movement trackers and are reminded by the app: ‘Now is the time to get up to benefit your health’ – so we are already in touch with AI. If you look at research projects in the field of psychotherapy, it is also very exciting for music therapy. For example, an embodied AI – a robot, say – can be useful for interactions with elderly people who often suffer from social isolation, or for autistic children to practice social interaction. There are also apps used as virtual therapists that can, for example, chat with people with depression and thus simulate a therapeutic conversation. AI development is not directly affecting my work, but I can see its presence in fields around me: research projects are also taking place in our sister disciplines of music medicine and psychology.

For example, many try to explore the correlation between psychological and physiological parameters and music listening behaviour, which they then analyse and implement in machine learning models. I think we are still at the beginning, and there is certainly potential for us music therapists to be open to what is being developed – or at least to know about it. In the end our patients will use the products that are developed with the help of these research results, so we have to stay informed.

There are very exciting projects. A research group at the University of Jyväskylä in Finland is developing a machine learning model to support the affect regulation of young people through listening to music via an app. This is, of course, a topic I am very much involved with, because I often develop playlists with the young people who are in therapy with me – not based on AI, but completely human. I also believe that in the long term such apps could be included as a support for music therapy treatment. But as with the use of AI in cancer diagnosis, in the end the doctor has the last word. And I also think that in music therapy treatment, the music therapist and the patient should participate in the process and have the last word on what is being listened to. Think of the earlier example with the car accident: the machine did not know the individual case.

 

Cyanite: From a music therapy point of view, what do you wish for from the developers of modern algorithms?

Josephine: Keep the human being in mind.

From my own dealings with technology I know the enthusiasm of: “Wow, what you can do with it!” I think you just have to be careful not to get carried away and put the machine above the human being. Apart from the fact that the machine does not know the individual case, there are also ethical and social aspects, and social consequences that are not yet foreseeable. We do not yet know how we – humanity – will react to them. The speed at which machine learning is developing is staggering. If we look at how slowly evolution proceeds, the question is how quickly we can adapt to these new developments. I think we have to take a good look at this and, despite all the research in the technical field, we must not lose sight of the ethical, social and data protection issues.

© Photo by Fixelgraphy – Unsplash

Cyanite: As a last question: what are your tips for everyday people on how to use music at this moment in time, when isolation, home office, and lockdowns are still realities for a lot of us?

Josephine: Well, I found the balcony music that has taken place in many cities very nice, because it makes a typical music-psychological phenomenon visible: making music together creates a feeling of community, solidarity and cohesion. I find it highly exciting that in such an exceptional situation, we humans intuitively use music functionally as social cement.

For personal listening: pay attention to what you put on your ears! Notice what the music you listen to triggers in you – especially in times when you are not feeling so well – and take care that you do not get into a loop. I often see with depressed patients that when they are not doing so well, they listen to songs that reflect the depressed mood they are in. They have to be careful not to get caught up in this and end up in a rumination loop with musical accompaniment.

And for all of us: start where you are right now and make a playlist that gets you out of a bad mood. The first song can be one that picks you up out of a depressed mood. Then think about what kind of mood you want to be in, search for a song that reflects this mood, put it at the end of the playlist, and then gradually fill it up.

Thank you Josephine for sharing your insight with us and for your valuable contribution in the field of music therapy!

If you are interested in knowing more about music in relation to therapy, psychotherapy and brain functions, here’s a list of recommendations on the topic:

Books:

“This Is Your Brain on Music: The Science of a Human Obsession” by Daniel J. Levitin

“Good Vibrations” by Prof. Stefan Kölsch

“Handbook of Music, Adolescents, and Wellbeing” by Katrina McFerran, Philippa Derrington, and Suvi Saarikallio

Podcasts:

Clinical BOPulations

Instru(mental)

Musical Health

The European Music Therapy Confederation

Deutsche Musiktherapeutische Gesellschaft

Article on Playlists

Music Therapy (M.A.) – SRH Heidelberg

Analyzing Music Using Neural Network: 4 Essential Steps


As written in the earlier blog article, we at Cyanite focus on the analysis of music by using artificial intelligence (AI) in the form of neural networks. Neural networks in music can be utilized for many tasks like automatically detecting the genre or the mood of a song, but sometimes it can also be tricky to understand how they work exactly.

With this article, we want to shed light on how neural networks can be deployed for analyzing music. To do so, we’ll guide you through the four essential steps you need to know when it comes to neural networks and AI audio analysis. To see a music neural network in action, check out one of our data stories, for example, an Analysis of German Club Sounds with Cyanite.

The 4 steps for analyzing music with neural networks include:

1. Collecting data

2. Preprocessing audio data

3. Training the neural network

4. Testing and evaluating the network

Step 1: Collecting data

Let’s say that we want to automatically detect the genre of a song. That is, the computer should correctly predict whether a certain song is, for example, a Pop, Rock, or Metal song. This seems like a simple task for a human being, but it can be a tough one for a computer. This is where deep learning in the form of neural networks comes in handy.

In general, a neural network is an attempt to mimic how the human brain functions. But before the neural network is able to predict the genre of a song, it first needs to learn what a genre is.

Simply put: what makes a Pop song a Pop song? What is the difference between a Pop song and a Metal song? And so on. To accomplish this, the network needs to “see” loads of example Pop, Rock, and Metal songs, which is why we need a lot of correctly labeled data.

Labeled data means that the actual audio file is annotated with additional information like genre, tempo, mood, etc. In our case, we would be interested in the genre label only.
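To make this concrete, a labeled dataset can be pictured as a table of audio files with their annotations. The file names and labels below are purely illustrative, not real data:

```python
# Illustrative labeled dataset: each audio file carries annotations.
dataset = [
    {"file": "track_001.wav", "genre": "Pop",   "bpm": 120, "mood": "happy"},
    {"file": "track_002.wav", "genre": "Metal", "bpm": 180, "mood": "aggressive"},
    {"file": "track_003.wav", "genre": "Rock",  "bpm": 140, "mood": "energetic"},
]

# For genre classification we only need the genre label per file.
labels = {entry["file"]: entry["genre"] for entry in dataset}
print(labels["track_002.wav"])  # Metal
```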

Although there is a lot of openly available metadata from sources like Spotify and Last.fm, collecting the right data can sometimes be challenging, especially for labels like the mood of a song. In these cases, it can be a good but potentially costly approach to conduct surveys where people are asked how they feel when listening to a specific song.

Overall, it is crucial to obtain meaningful data, since the prediction of our neural network can only be as good as the initial data it learned from (which is also why data is so valuable these days). To see all the different types of metadata used in the music industry, see the article An Overview of Data in the Music Industry.

Moreover, it is also important that the collected data is equally distributed, which means that we want approximately the same amount of, for example, Pop, Rock, and Metal songs in our music dataset.
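A quick sanity check for this balance might look as follows (the genre counts here are hypothetical):

```python
from collections import Counter

# Hypothetical genre labels collected for a dataset.
genres = ["Pop", "Rock", "Metal", "Pop", "Rock", "Metal", "Pop"]
counts = Counter(genres)

# If one class dominates, the network can score well by always
# predicting the majority genre without learning anything useful.
imbalance = max(counts.values()) - min(counts.values())
print(counts, "imbalance:", imbalance)
```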

After collecting a well-labeled and equally distributed dataset, we can proceed with step 2: pre-processing the audio data.

A screenshot from a data collection music database

Step 2: Pre-processing audio data

There are many ways to deal with audio data in the scope of music neural networks, but one of the most commonly used approaches is to turn the audio data into “images”, so-called spectrograms. This might sound strange and counterintuitive at first, but it will make sense in a bit.

First of all, a spectrogram is the visual representation of the audio data, more precisely: it shows how the spectrum of frequencies that the audio data contains varies with time. Obtaining the spectrogram of a song is usually the most computationally intensive step, but it will be worth the effort. Spectrograms are essentially data visualizations – you can read about different types of music data visualizations here.
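As a rough sketch of the idea (not Cyanite's actual pipeline), a magnitude spectrogram can be computed with a short-time Fourier transform using only NumPy; the frame sizes and the test tone are illustrative choices:

```python
import numpy as np

def spectrogram(signal, frame_size=1024, hop=512):
    """Magnitude spectrogram: one windowed FFT per overlapping frame."""
    window = np.hanning(frame_size)
    frames = [
        signal[start:start + frame_size] * window
        for start in range(0, len(signal) - frame_size + 1, hop)
    ]
    # Rows are time frames, columns are frequency bins.
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# One second of a 440 Hz sine tone at a 22,050 Hz sample rate.
sr = 22050
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time frames, frequency bins)
```

The energy concentrates in the frequency bin nearest 440 Hz, which is exactly the kind of visual pattern a network can later pick up.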

Since great successes were achieved in the fields of computer vision over the last decade using AI and machine learning (face recognition is just one of the many notable examples), it seems natural to take advantage of the accomplishments in computer vision and apply them to our case of AI audio analysis.

That’s why we want to turn our audio data into images. By utilizing computer vision methods, our neural network can “look” at the spectrograms and try to identify patterns there.

Spectrograms from left to right: Christina Aguilera, Fleetwood Mac, Pantera

Step 3: Training the neural network

Now that we have converted the songs in our database into spectrograms, it is time for our neural network to actually learn how to tell different genres apart.

Speaking of learning: the process of learning is also called training. In our example, the neural network in music will be trained to perform the specific task of predicting the genre of a song.

To do so, we need to split our dataset into two subsets: a training dataset and a test dataset. The network will only be trained on the training dataset. This separation is crucial for evaluating the network’s performance later on, but more on that in step 4.
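A minimal version of such a split (the 80/20 ratio and the fixed seed are arbitrary choices for illustration) could be:

```python
import random

def train_test_split(items, test_ratio=0.2, seed=42):
    """Shuffle once with a fixed seed, then hold out a test fraction."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * (1 - test_ratio))
    return items[:cut], items[cut:]

songs = [f"song_{i:03d}" for i in range(100)]
train, test = train_test_split(songs)
print(len(train), len(test))  # 80 20
```

The fixed seed makes the split reproducible, so repeated experiments evaluate on the same held-out songs.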

So far, we haven’t talked about what our music neural network will actually look like. There are many different neural network architectures available, but for a computer vision task like identifying patterns in spectrograms, so-called convolutional neural networks (CNNs) are most commonly applied.
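The core building block of a CNN is the 2-D convolution, which slides a small filter over the spectrogram and responds where a pattern matches. A bare-bones sketch with illustrative values (a real genre model stacks many such filters and learns their weights):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 'spectrogram' with one sustained frequency band (row 3).
spec = np.zeros((6, 6))
spec[3, :] = 1.0
# A vertical difference filter responds at the band's edges.
kernel = np.array([[-1.0], [1.0]])
response = conv2d(spec, kernel)
print(response.shape)  # (5, 6)
```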

Now, we feed a song in the form of a labeled spectrogram into the network, and the network returns a prediction for the genre of this particular song.

At first, our network will be rather bad at predicting the correct genre of a song. For instance, when we feed a Pop song into the network, the network might predict Metal. But since we know the correct genre from the label, we can tell the network how it needs to improve.

We repeat this process over and over again (this is why we needed so much data in the first place) until the network performs well on the given task. This process is called supervised learning because there is a clear goal the network needs to learn.
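This repeated predict-compare-correct cycle can be sketched with a single-weight logistic model on toy one-dimensional "audio features"; the data and learning rate are invented for illustration, but a real network updates its millions of weights the same way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: class-0 songs cluster around -2, class-1 around +2.
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = 0.0, 0.0
for _ in range(500):                       # repeat over the data
    p = 1 / (1 + np.exp(-(w * x + b)))     # current predictions
    # Gradient step: nudge parameters to reduce the labeled error.
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```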

During the training process, the network will learn which parts of the spectrograms are characteristic of each genre we want to predict.

Example of what a CNN architecture can look like

Step 4: Testing and evaluating the network

In the last step, we need to evaluate how well the network performs on real-world data. This is why we split our dataset into a training dataset and a test dataset before training the network.

To get a reasonable evaluation, the network needs to perform the genre classification task on data it has never seen before – in this case, our test dataset. This is truly an exciting moment, because now we get an idea of how well (or how poorly) our network actually performs.
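Evaluation then boils down to comparing the network's predictions with the held-out labels; the predictions below are hypothetical:

```python
def accuracy(predicted, actual):
    """Fraction of test songs whose genre was predicted correctly."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical predictions on a small held-out test set.
actual    = ["Pop", "Rock", "Metal", "Pop", "Metal"]
predicted = ["Pop", "Rock", "Metal", "Rock", "Metal"]
print(accuracy(predicted, actual))  # 0.8
```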

Regarding our example of genre classification, recent research has shown that the accuracy of a CNN architecture (82%) can surpass human accuracy (70%), which is quite impressive. Depending on the specific task, accuracy can be even higher.

But you need to keep in mind: the more subjective the audio analysis scope is (like genre or mood detection), the lower the accuracy will be.

On the plus side: everything we can differentiate with our human ears in music, a machine might distinguish as well. It’s just a matter of the quality of the initial data.

Conclusion

Artificial intelligence, deep learning, and especially neural network architectures can be a great tool to analyze music in any form. Since there are tens of thousands of new songs released every month and music libraries are growing bigger and bigger, music neural networks can be used for automatically labeling songs in your personal music library and finding similar sounding songs. You can see how the library integration is done in detail in the case study on the BPM Supreme music library and this engaging interview video with MySphera. 

Cyanite is designed for these tasks, and you can try it for free by clicking the link below.

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Case Study: How Mediengruppe RTL / i2i Music decreases searching time for music with Cyanite’s AI


About RTL / i2i Music

 

The Mediengruppe RTL GmbH is one of the largest German media companies. Among its holdings are the TV channels RTL, RTL II, VOX, and n-tv, as well as the music publisher i2i Music. The RTL-owned i2i Music is an interface and service provider between producers, editors, and marketing experts on the one hand and composers on the other. They publish commissioned compositions for film, television, and radio and have music produced for the advertising sector. The production music offering of i2i Music is called FAR MUSIC and is aimed at filmmakers, editors, and producers of trailers, advertising, and online content. The platform offers a wide variety of musical styles and provides tracks of all genres for download. The FAR MUSIC catalog includes international labels from Germany, Great Britain, and the USA.

 

Catalogue size of FAR MUSIC: 8,200+ songs

Alarm for Cobra 11 is just one of many series supported by music from i2i Music.

Challenge

In the content production process, RTL’s editors and journalists have access to the company’s own music catalog FAR MUSIC, where the rights are pre-cleared for all uses, internal and external. Due to usability issues with the music catalog interface and ineffective search tools, RTL employees can find it easier to use external music sources. This costs the company unnecessary licensing fees and exposes it to copyright infringement risks.

FAR MUSIC being RTL’s own music library

Solution

Cyanite’s automatic tagging and Similarity Search drastically increase the usability of RTL’s music library FAR MUSIC. Cyanite delivers the expected music through intuitive search options using a vast range of tags as well as input tracks from Youtube, Spotify, and their proprietary music databases. The solution is delivered via Cyanite’s own API.

Cyanite’s API docs

Benefits

+ Projected 86% decrease in searching time.

+ Projected 40% increase of usage of pre-licensed copyrights and 26% decrease in licensing fees.

Lutz Fassbender


Managing Director of i2i Music

Lutz Fassbender is the managing director of i2i Music and responsible for all copyright affairs. He has been part of Mediengruppe RTL for more than 15 years.

“We have so much unused potential in our catalogue that we can now exploit much better with the searching algorithms by Cyanite.”

I want to integrate AI in my service as well – how can I get started?

Please contact us with any questions about our Cyanite AI via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get the first grip on Cyanite’s technology, you can also register for our free web app to analyze music and try similarity searches without any coding needed.

Case Study: How Filmmusic.io optimizes its search with Cyanite’s AI


About Filmmusic.io

Filmmusic.io is a marketplace from Hannover exclusively for Creative Commons music. It is primarily aimed at amateur musicians and serves media professionals, photographers, producers of independent films, game developers, educational institutions, aid organizations, and other institutions with little or no budget. Amateur filmmakers and YouTubers will also find a wide selection of free music without having to forego the monetization of their videos. Filmmusic.io pays 60%-70% to the artists.

Catalogue size: 3,500+ songs

 

Usage: 100,000 plays / day

 

Registered users: 100,000+


Kevin MacLeod not only has 232,000 YouTube followers, but is also the biggest music contributor on Filmmusic.io.

Challenge

Clean tagging and constant improvement of search filters are key to delivering music and making it easy to find on the platform. The steady growth of the catalog makes tagging more and more difficult, while the active Filmmusic.io community requests new features like BPM or key filters.

A screenshot of Filmmusic.io – a Creative Commons music heaven for content creators.

Solution: Automatic metadata via API integration

Filmmusic.io implemented Cyanite’s music intelligence to automate song tagging, especially for BPM, mood, and key. Next, a Similarity Search will be implemented in a new major update, allowing Filmmusic.io users to search the platform by reference tracks. The technology is seamlessly integrated via the Cyanite API, which means that every new song on Filmmusic.io is automatically tagged and added to the Similarity Search.

 

The new bpm search filter on Filmmusic.io is based on Cyanite’s algorithm.

Results

+ 15% increase in session time

 

+ 35% increase in filter options

 

+ 70% time-saving in the tagging process

 

Sascha Ende


Founder and Developer of Filmmusic.io

Sascha Ende is the creative and technical brain behind Filmmusic.io. He has a long history of producing music before launching his own platform.
 

“The team and technology from Cyanite help me handle the constant growth of Filmmusic.io and improve the user experience with modern algorithms.”

 

I want to apply AI to my app as well – how can I get started?

Contact us with any questions about our frontend and API services via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get a first grip on how Cyanite works, you can also register for our free web app to analyze music and try out similarity searches without any coding needed.

New SWR music app uses recommendation algorithms from Cyanite


About SWR’s new radio app

A music/radio app developed jointly by Südwestrundfunk (SWR) and the Berlin digital agency TBO will enable listeners to fast-forward and rewind through radio programs and to skip songs. For the first time, the decision as to which song is played after a skip is not made by humans but by a machine: the recommendation algorithm from Cyanite. This is SWR’s response to the competition from streaming providers and a step toward a user-centered radio of the future.

SWR’s promotion video for their new radio app

How Cyanite’s algorithms come into play

Using a logic specially developed for SWR, the algorithm selects the song that the user is most likely to like. Step by step, the user’s past skip decisions are then incorporated into the music recommendations, personalizing the music program in the app. The collaboration between SWR and Cyanite grew out of the SWR audio lab, which focuses on future technologies for radio and on involving listeners in the radio experience.

The shuffle button that makes Cyanite’s algorithm come into action

I want to apply AI to my app as well – how can I get started?

Contact us with any questions about our frontend and API services via mail@cyanite.ai. You can also directly book a web session with Cyanite co-founder Markus here.

If you want to get a first grip on how Cyanite works, you can also register for our free web app to analyze music and try out similarity searches without any coding needed.