
Meta, challenging OpenAI, announces new AI model capable of generating video with sound
Sample Movie Gen creatives provided by Meta showed videos of animals swimming and surfing, as well as videos using real photos of people to depict them performing actions like painting on a canvas.

Reuters

October 5, 2024, 1:05 p.m.

Last modified: October 5, 2024, 1:05 p.m.

The Meta AI logo is seen in this illustration taken on May 20, 2024. Photo: REUTERS/Dado Ruvic/Illustration/File Photo


Facebook owner Meta announced Friday that it has built a new AI model called Movie Gen that can create realistic-looking video and audio clips in response to user prompts, saying it can rival tools from leading media generation startups like OpenAI and ElevenLabs.

Samples of Movie Gen’s creations provided by Meta showed videos of animals swimming and surfing, as well as videos using real photos of people to depict them performing actions like painting on a canvas.

Movie Gen can also generate background music and sound effects in sync with the content of videos, Meta said in a blog post, and the tool can be used to edit existing videos.

In one of these videos, Meta had the tool insert pom-poms into the hands of a man running alone in the desert, while in another it transformed a parking lot where a man was skateboarding from dry ground into one covered by a splashing puddle.

Videos created by Movie Gen can be up to 16 seconds long, while audio can be up to 45 seconds long, Meta said. The company shared data from blind tests indicating the model performs favorably compared with offerings from startups such as Runway, OpenAI, ElevenLabs and Kling.

The announcement comes as Hollywood grapples with how to leverage generative AI video technology this year, after Microsoft-backed OpenAI first showed in February how its Sora product could create film-like video in response to text prompts.

Some entertainment industry technologists are eager to use such tools to improve and speed up filmmaking, while others are concerned about adopting systems that appear to have been trained on copyrighted works without permission.

Lawmakers have also highlighted concerns about how AI-generated fakes, or deepfakes, are being used in elections around the world, including in the United States, Pakistan, India and Indonesia.

Meta spokespeople said the company was unlikely to release Movie Gen for open use by developers, as it did with its Llama series of large language models, saying it considers the risks of each model individually. They declined to comment on Meta's risk assessment for Movie Gen specifically.

Instead, they said, Meta was working directly with the entertainment community and other content creators on using Movie Gen and would integrate it into Meta’s own products next year.

According to the blog post and a research paper on the tool published by Meta, the company used a mix of licensed and publicly available datasets to create Movie Gen.

OpenAI met with Hollywood executives and agents this year to discuss possible partnerships involving Sora, although no deals have yet been reached from those discussions. Concerns about the company’s approach grew in May when actress Scarlett Johansson accused the creator of ChatGPT of imitating her voice without permission for its chatbot.

Lions Gate Entertainment, the company behind "The Hunger Games" and "Twilight," announced in September that it was giving AI startup Runway access to its library of films and television shows to train an AI model. In return, the studio said, it and its filmmakers can use the model to enhance their work.