OpenAI announces instant video generator – Sora

OpenAI has announced Sora, an AI model capable of creating realistic and imaginative video scenes from simple text instructions.

Sora’s primary aim is to understand and simulate the physical world in motion, with the ultimate goal of helping people solve real-world problems that require interaction with the physical environment.

Sora’s capabilities include generating videos up to a minute long while maintaining high visual quality and adhering closely to the user’s prompt.

The new model can create complex scenes with multiple characters, specific types of motion, and accurate details of subjects and backgrounds. It demonstrates a deep understanding of language, accurately interpreting prompts and generating characters that express vibrant emotions.

Additionally, Sora can produce multiple shots within a single video, maintaining consistency in characters and visual style.


Despite these strengths, Sora has limitations. It may struggle with accurately simulating the physics of complex scenes and understanding specific instances of cause and effect. For example, it may overlook details like bite marks on a cookie after someone takes a bite. Additionally, the model may confuse spatial details or struggle with precise descriptions of events that occur over time.

OpenAI has been keen to emphasise its safety measures in deploying Sora. The company is working with experts in areas like misinformation, hateful content, and bias to adversarially test the model. Tools are being developed to detect misleading content generated by Sora, including a detection classifier, with plans to include metadata in future deployments. OpenAI also plans to leverage existing safety methods developed for its other products, such as text and image classifiers, to enforce usage policies.

Sora utilises a diffusion model, starting with static noise and gradually transforming it into a video over many steps. It can generate entire videos at once or extend existing ones, solving challenges like maintaining consistency when a subject goes out of view temporarily. Similar to GPT models, Sora employs a transformer architecture for superior scaling performance. Videos and images are represented as collections of smaller units called patches, akin to tokens in GPT, enabling training on a wider range of visual data.
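To make that mechanism concrete, below is a minimal, purely illustrative sketch of a diffusion sampling loop over a grid of spacetime patches. None of the names or dimensions come from OpenAI, and the stand-in denoiser is a placeholder; Sora’s actual denoiser is a trained transformer conditioned on the text prompt.

```python
import numpy as np

# Toy dimensions: a "video" held as a grid of spacetime patches, each a
# small vector, analogous to how Sora reportedly represents visual data.
FRAMES, PATCHES_PER_FRAME, PATCH_DIM = 16, 64, 32
NUM_STEPS = 50  # number of reverse-diffusion denoising steps

def denoise_step(x: np.ndarray, t: int) -> np.ndarray:
    """Stand-in denoiser: a real system would use a trained transformer,
    conditioned on the prompt, to predict and remove the noise."""
    predicted_noise = 0.1 * x  # placeholder prediction, NOT a trained model
    return x - predicted_noise * (t / NUM_STEPS)

# Start from pure static noise, as in diffusion sampling...
video_patches = np.random.randn(FRAMES, PATCHES_PER_FRAME, PATCH_DIM)

# ...and gradually transform it toward a clean video over many steps.
for t in range(NUM_STEPS, 0, -1):
    video_patches = denoise_step(video_patches, t)

print(video_patches.shape)  # (16, 64, 32): the denoised patch grid
```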

The model builds on past research in DALL·E and GPT models, incorporating techniques like recaptioning from DALL·E 3 to generate highly descriptive captions for visual training data.
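As a rough illustration of the recaptioning idea (not OpenAI’s actual pipeline), the sketch below replaces terse human captions with detailed machine-generated ones before training; describe_video is a hypothetical placeholder for a trained captioning model.

```python
# Illustrative sketch of recaptioning, as described for DALL·E 3:
# swap short human captions for highly descriptive generated ones so the
# model learns to follow detailed prompts. `describe_video` is a
# hypothetical placeholder, not a real OpenAI API.

def describe_video(video_path: str) -> str:
    """Stand-in for a captioning model that returns a richly detailed
    description of the clip's subjects, motion, and visual style."""
    return f"A detailed description of the scene in {video_path}"

raw_data = [("clip_001.mp4", "a dog"), ("clip_002.mp4", "city street")]

# Build (video, detailed caption) training pairs from the raw data.
training_pairs = [(path, describe_video(path)) for path, _ in raw_data]

for path, caption in training_pairs:
    print(path, "->", caption)
```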

Why is this important?

Sora can generate videos solely from text instructions, animate still images accurately, and extend existing videos or fill in missing frames.

Sora is currently available only for testing and assessment, with limited access granted to a number of visual artists and designers.

Overall, Sora represents a significant milestone in AI research towards understanding and simulating the real world, potentially laying the groundwork for achieving Artificial General Intelligence (AGI).

Author: spike.digital