
OpenAI’s Sora: A New Era of Automation—Implications For the Film Industry?

Last Updated February 16, 2024 4:55 PM
James Morales

Key Takeaways

  • Text-to-video engines have proliferated in recent weeks.
  • Following similar announcements from Google and ByteDance, OpenAI is the latest Big Tech player to enter the space.
  • Sample clips from the firm’s new AI video generator, Sora, showcase potentially game-changing multi-shot videos.
  • With the technology advancing rapidly, how long before AI can generate entire films?

In the past year, a string of text-to-video engines has hit the market, letting users generate short animated clips from text prompts. 

Just weeks after Google announced its take on the technology – Lumiere – OpenAI has followed up with Sora. Sample generations shared by the AI developer are certainly impressive, with one video extending to a full minute of animation. As the technology evolves, could the world be on the cusp of an AI-generated film revolution?

OpenAI Enters Text-to-Video Race With Sora

The first AI-powered text-to-video platforms emerged in 2022. Following a similar logic to image generation tools like Midjourney, they use so-called diffusion models, which start from formless noise – something like television static – and, over many iterations, refine the video until it matches the prompt.
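The iterative refinement described above can be illustrated with a toy sketch. This is a deliberate simplification: real diffusion models replace the hand-written "correction" below with a learned neural network conditioned on the text prompt, and the `target` array here is just a stand-in for what the prompt describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "what the prompt describes" (a real model has no such
# explicit target; it learns to predict the noise to remove).
target = np.linspace(0.0, 1.0, 8)

# Start from formless noise, as diffusion models do.
sample = rng.normal(size=target.shape)

for step in range(50):
    # Each iteration removes a little of the remaining "noise",
    # nudging the sample toward the target.
    sample += 0.2 * (target - sample)

# After many iterations the sample closely resembles the target.
assert np.allclose(sample, target, atol=1e-2)
```

The key intuition this captures is that generation happens gradually, over many small denoising steps, rather than in one shot.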

Some of the first video diffusion models were developed by Pika, Stability AI, and Runway, but major Big Tech players have recently entered the space.

Last month, TikTok owner ByteDance debuted MagicVideo V2, boasting that the new video generation engine could create crisper, higher definition outputs than existing alternatives.

Just two weeks later, Google announced Lumiere, showcasing outputs from the text-to-video platform that at first glance resembled actual film footage.

Was OpenAI’s announcement on Thursday a knee-jerk reaction to its rivals? Perhaps, but the firm successfully demonstrated a key feature of Sora that could give it the edge.

Sora Debuts Multi-Scene Video Generation

While Google and ByteDance have both teased a leap forward in the quality and fidelity of AI-generated video, neither has broken from the general mold of existing text-to-video generators, which output short clips from a single, usually unmoving perspective.

In contrast, promotional videos for Sora include moving camera angles, cinematic cuts and multiple scenes – all generated by a single prompt.

As OpenAI boasted, “Sora can create multiple shots within a single generated video that accurately persist characters and visual style.”

The development is crucial. After all, no matter how slick the animation may be, videos generated by existing solutions are often little more than high-definition GIFs. 

Paving the Way for AI Film

By enabling greater continuity across extended clips, Sora anticipates a future in which AI is capable of generating entire films, with each scene the result of a different prompt but with characters and style remaining consistent throughout. 

For now, OpenAI hasn’t announced a release date for the platform, and neither Google nor ByteDance has said when it will make its own available. 

But less than seven weeks into 2024, the Big Tech firms have between them demonstrated significant advances in the field that point to the technology’s transformative potential in film, advertising, and other video media industries.
