
Hours after news broke that Google had hired one of the top people behind the video-generating AI Sora, Meta announced the launch of its own alternative in this field: Movie Gen, an AI designed to generate videos from text or still images, as well as to edit existing audiovisual content (altering specific elements of a scene).
The videos generated by Movie Gen can be up to 16 seconds long, incorporate sound, and adapt to different aspect ratios (the latter a first in the generative video AI industry).

Movie Gen gives us the option of using a photo of our face as a model for a video
Accessibility. This feature allows users to create unique visual content without the need for cameras or recording equipment, making video editing much more accessible for content creators of all levels. In the words of AI expert Carlos Santana,
“The versatility to manipulate video is brutal and begins to approach what one imagines will be the future of audiovisual editing.”
Copyright. Regarding the already recurring debate around the intellectual property of the original works, Meta has indicated that it trained the model with a combination of licensed data and public content… although it has not specified details about which data sets were used.
How to compare. If you’re wondering where Meta got the numbers comparing the capabilities of Movie Gen and OpenAI’s Sora, this tweet explains it very well, citing the company’s paper:
“Apparently they took public samples of Sora, which were presumably hand-picked, then made multiple videos from the same ‘prompt’ with Movie Gen, and hand-picked here as well. It’s not exactly scientific, but what can you do?”

Figures with which Meta claims to have surpassed Sora
You look at it, but you don’t touch it (yet)
Despite the promise of this and other similar platforms, public access to tools such as Movie Gen remains limited. Meta has clarified that, although it considers its model the most advanced to date, it will not be launched as a product this year due to the technical and ethical challenges involved. One of the biggest obstacles is the enormous amount of processing capacity needed to generate these videos at scale.
And yet, Meta is far ahead of other companies in that regard…
The importance of infrastructure
Last April we explained how Meta had achieved a dominant position in the field of artificial intelligence (AI), specifically in the “Compute Index” metric, which measures growth in the processing capacity used for AI research, after making a large investment in high-performance computing (HPC) systems and in specialized GPUs, such as the Nvidia A100 and H100, essential for training AI models.
In fact, Meta had more than 350,000 H100 GPUs, a figure far higher than that of the other companies combined. Although its advantage is somewhat smaller when it comes to A100 GPUs, Meta remains a leader in this area as well.
Zuckerberg has acknowledged that Meta’s computing infrastructure plays a fundamental role even in the development of services like Reels, which seem far removed from AI. Despite initial criticism over its infrastructure spending, Meta now benefits greatly from these investments.
Experts such as Andriy Burkov have stated that Meta’s mastery of AI infrastructure is a clear advantage in the “AI race,” placing the company in a privileged position to lead the development of advanced technologies in this field.
Image | Meta
In Genbeta | These are the alternatives to Sora from OpenAI with which you can now create videos using AI. This is what they offer