A clip of American actor Will Smith eating spaghetti has become an unofficial benchmark for the capability of generative AI models.
Two years ago the spaghetti barely rendered and Smith's face was frighteningly distorted.
But by July this year, a new video of the AI-generated dish was so realistic that Smith himself called it "Absolutely perfect and photo real."
A recent wave of videos from OpenAI's new Sora 2 application inundated social media with highly realistic videos of celebrities, dead people and copyrighted characters.
Experts are concerned the technology could be used for scamming, deepfake imagery and political disinformation, and there is debate over whether Australia is too far behind in regulating new AI platforms.
Are AI videos getting too realistic? And what are the risks for Australians?
'A little scary'
Sora 2 is not currently available in Australia.
But I asked a friend in the United States, Raychel Ruiz, to try a few test videos.
The app scanned Raychel’s face and got her to say a few words.

She gave it a prompt to make an SBS News-style video about Sora 2 and it delivered a lifelike report with this quote: "OpenAI has unveiled Sora 2, its next-generation video model. It turns a written prompt into realistic footage. The system can generate up to 2 minute clips in full HD or higher."
Ruiz said the end result was a little uncomfortable.
"It is interesting to see that it can create all that from just a sentence, but I don't love it. It's a little scary, I think."
I then asked her to make a video using a photo of 'SBS News reporter Shivé Prema', but the app wouldn't allow it.
In response to people making AI-generated videos of celebrities and copyrighted characters, Sora 2 now only lets users make videos of themselves, some historical figures, or public figures who have given their consent.

But, prompting the app to scrape the internet for my digital footprint worked to some extent, creating two different versions of me.
Humorously, it swapped my gender and added an extra letter to the trademarked SBS News logo, suggesting some safeguards are in place, but raising other questions.
Google's Veo 3.1 app doesn't have the same requirements — it created an AI version of me from a picture and a written prompt (the same prompt given to Sora 2).
It animated the image, adding facial expressions and hand gestures, along with dialogue describing its competitor Sora 2.

Concerns about the truth
Toby Walsh, chief scientist at UNSW.ai, told SBS News there's a risk that this kind of content distorts people's relationship with the truth.
He suggests people could watch so much AI content that they stop believing people who are telling the truth.
"We're going to be entertained to death, and there's gonna be lots of fun memes circulating, but I'm not really sure it's that valuable for us," he said.
"It's going to consume a huge amount of energy, and I'm actually very worried that it's going to be used for a lot of mischief, that people are going to make fake videos, and maybe we're going to believe them, and we're gonna perhaps then stop believing many of the videos we believe, even the things that are real."
However, he said Australia is doing better than some other countries at regulating AI.
"Technology is advancing very rapidly, and it's very hard for regulation, but compared to some other countries, we're actually in a reasonably good space," he said.
"We have, for example, the eSafety Commissioner; we are the first country in the world to have one, and I think they're doing a good job of starting to address some of the harms."
Australia does not have AI-specific legislation but the technology is regulated through other laws. The government is currently developing a framework for AI.
There are concerns about the social media feed in the Sora 2 app, which some say has been designed to be addictive.
"It's like TikTok on steroids, on which you can generate AI content ... I think they want to create a whole social media platform, which obviously will be a lot bigger than what we have already," said Seyedali Mirjalili, a professor of AI at Torrens University.
Mirjalili said it's concerning how quickly the new platform works, when previously it would be a painstaking, hours-long process to make a deepfake edit.
"You can now upload a video of someone as short as half a minute, and then, you know, impersonate them and add them to different contexts and create different content around them."
He said regulation is "both lagging and lacking at the moment", pointing out that without rules here around watermarks, it has become almost impossible to tell what's real and what's not, and it is getting even harder.
"The problem is that this is Sora 2. Imagine what Sora 20 would look like."
— With reporting by Madeleine Wedesweiler.
SBS does not use AI to enhance or generate its content. More information about SBS' guiding principles for the use of AI is available here.
For the latest from SBS News, download our app and subscribe to our newsletter.



