
DeepMind’s new AI generates soundtracks and dialogue for videos

Image Credits: Google DeepMind
By Kyle Wiggers
17 June 2024

DeepMind, Google’s AI research lab, says it’s developing AI tech to generate soundtracks for videos.


In a post on its official blog, DeepMind says that it sees the tech, V2A (short for “video-to-audio”), as an essential piece of the AI-generated media puzzle. While plenty of orgs, including DeepMind, have developed video-generating AI models, these models can’t create sound effects to sync with the videos that they generate.


“Video generation models are advancing at an incredible pace, but many current systems can only generate silent output,” DeepMind writes. “V2A technology [could] become a promising approach for bringing generated movies to life.”

DeepMind’s V2A tech takes a description of a soundtrack (e.g. “jellyfish pulsating under water, marine life, ocean”) paired with a video and creates music, sound effects and even dialogue that match the characters and tone of the video. The output is watermarked with DeepMind’s deepfakes-combating SynthID technology. The AI model powering V2A, a diffusion model, was trained on a combination of sounds and dialogue transcripts as well as video clips, DeepMind says.
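The data flow described above, where a text prompt and a video jointly condition an audio generator whose output carries a provenance watermark, can be sketched as a toy interface. This is a hypothetical illustration, not DeepMind's actual API: the function name, the placeholder waveform, and the watermark tag are all invented for the sketch.

```python
# Hypothetical sketch of a V2A-style interface (NOT DeepMind's real API).
# It only illustrates the shape of the pipeline: prompt + video frames
# go in, a watermarked audio track comes out.

from dataclasses import dataclass
from typing import List


@dataclass
class AudioTrack:
    samples: List[float]  # generated waveform (placeholder values here)
    watermark: str        # a SynthID-style provenance tag (invented name)


def generate_audio(prompt: str, video_frames: List[bytes],
                   sample_rate: int = 16_000) -> AudioTrack:
    """Toy stand-in for a video-to-audio diffusion model.

    The real system conditions a diffusion model on both the prompt and
    the video; here we derive a deterministic placeholder waveform so the
    data flow (prompt + video -> watermarked audio) is visible.
    """
    seed = hash((prompt, len(video_frames))) % 1000
    duration_s = len(video_frames) / 24           # assume 24 fps video
    n_samples = int(duration_s * sample_rate)     # audio length matches video
    samples = [((seed + i) % 100) / 100.0 for i in range(n_samples)]
    return AudioTrack(samples=samples, watermark="synthid-demo-tag")


track = generate_audio("jellyfish pulsating under water, marine life, ocean",
                       video_frames=[b""] * 48)   # 2 seconds at 24 fps
print(len(track.samples), track.watermark)        # 32000 samples, tagged
```

The key design point the sketch mirrors is that the audio length is derived from the video length, so generated sound stays in sync with the clip it accompanies.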
