
How GoogleStones and RollingDeep are getting into the music business
22. November 2023By Horst Buchwald
A few weeks ago, Google’s DeepMind brought its technical experts from across the company together with numerous world-famous artists and songwriters. Their task: to investigate “how generative music technologies can responsibly shape the future of music creation.”
The result received much praise. A statement read: “We are excited to develop new technologies that can improve the work of professional musicians and the artistic community and make a positive contribution to the future of music.”
Okay: “new technologies” and “future of music” — that makes you curious. Are the initiators on the way to a completely new style of music, and a new product?
There is no need to argue about the importance of music to people. The music industry understood this early on, and it became a booming business. Since the 1960s, global sales have doubled to over $30 billion. In 2022, 80% of music products in Germany were digital. No other experience triggers such a cascade of emotions in people: it ranges from freaking out, from enjoying and relaxing to the fullest, to dreaming and drifting away. If Google’s DeepMind were to succeed in making its mark here and establishing a new style of music, the music market of the future would look different. So, once again: is there more material to confirm my suspicions, which I’m keeping to myself for now?
Now the melodious name “Lyria” comes up, and things become more concrete. It starts with a definition: Lyria is the “most advanced AI model for music generation.” Interesting, but is there more? Yes, and they don’t hold back. The Lyria model developed by DeepMind stands out because it “generates high-quality music with instrumentals and vocals, performs transformation and continuation tasks, and gives users more granular control over the style and performance of the output.”
Now the amazed layperson can picture a little more. But what if it got even more concrete? Pay attention:
“We’re testing Lyria in an experiment called Dream Track, designed to test new ways for artists to connect with their fans, developed in collaboration with YouTube. As part of the experiment, a limited group of creators will be able to use Dream Track to produce a unique soundtrack featuring the AI-generated voice and musical style of artists such as Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan and Papoose. Each participating artist has worked with us and will help us test and learn to shape the future of AI in music.
Dream Track users can simply enter a theme and select an artist from the carousel to create a 30-second soundtrack for their Short. Using our Lyria model, Dream Track simultaneously generates the lyrics, backing track and AI-generated voice in the style of the selected participating artist.”
Wow! Awesome! Everyone is excited. As a parting gift, there was this recommendation: here are a few examples created in the style of Charlie Puth or T-Pain.
We will soon know whether my suspicions are confirmed. Stay tuned.