Video-Generative AI: OpenAI Sora

OpenAI, the company behind ChatGPT and DALL-E, has announced its latest generative artificial intelligence model, Sora. Sora can generate realistic videos up to a minute long from text prompts. The model is not yet available to the public; OpenAI has so far granted access only to individuals tasked with probing it for issues and assessing the potential risks of its release. OpenAI has also given access to some visual artists, designers, and filmmakers to gather feedback on how to make the model most useful to creative professionals.


The videos shown by OpenAI demonstrate Sora's realistic potential, with many clips displaying impressive detail, such as the dogs playing in the snow. The animated clips look as though they were produced by a professional animator, with finely rendered textures. The Tokyo street scene appears lifelike, though some details are missing, such as footprints in the snow left by the pedestrians. The clip of a man reading in the clouds also looks lifelike, with the pages fluttering in the air; however, there are small errors when the pages turn, and at one point they suddenly begin to multiply outward.

The Impact of Video-Generative AI

Video-generative AI could have a huge impact on video creation, from YouTube content to marketing. It can save companies the money and time of going on site to film scenes. Sora builds on the transformer techniques behind OpenAI's large language models, which are trained on vast amounts of data from the internet, to generate new content.

Artificial intelligence can help media companies generate new content without spending huge amounts of money on agencies. However, it poses a risk to creators who earn a living from this work, such as filmmakers, writers, and animators. Another concern with video-generative AI is deepfakes, which could be used to damage a person's image for political or personal reasons. Companies such as OpenAI will need to take responsibility for how AI-generated tools are used and distributed. Every generated video carries a waveform watermark in the bottom-right corner indicating the footage was created by AI, but creators could simply crop the watermark out, so companies will need to find additional ways to ensure transparency between AI-generated and authentic content. There is still a long way to go toward a framework that alerts audiences to AI-generated content that could be fake, as opposed to authentic content from reliable sources.
