A clip of American actor Will Smith eating spaghetti has become an unofficial benchmark for the capability of generative AI models.
Two years ago the spaghetti barely rendered and Smith's face was frighteningly distorted.
But by July this year, a new video of the AI-generated dish was so realistic that Smith himself called it "Absolutely perfect and photo real."
A recent wave of videos from OpenAI’s new Sora 2 application has inundated social media with highly realistic videos of celebrities, dead people and copyrighted characters.
Will Smith eating spaghetti has become an unofficial benchmark for how AI video generation apps perform. Credit: LinkedIn
Are AI videos getting too realistic? And what are the risks for Australians?
'A little scary'
Sora 2 is not currently available in Australia.
But I asked a friend in the United States, Raychel Ruiz, to try a few test videos.
The app scanned Raychel’s face and got her to say a few words.

Raychel Ruiz used Sora 2 to make a lifelike version of herself as an SBS presenter. Source: Supplied
Ruiz said the end result was a little uncomfortable.
"It is interesting to see that it can create all that from just a sentence, but I don't love it. It's a little scary, I think."
I then asked her to make a video using a photo of 'SBS News reporter Shivé Prema', but the app wouldn't allow it.
In response to people making AI-generated videos of celebrities and copyrighted characters, Sora 2 now only lets users make videos of themselves, certain historical figures, or public figures who have given their consent.

Sora 2's conception of SBS News reporter Shivé Prema in a news studio. The app imagined him as a woman. Source: Supplied
Humorously, it swapped my gender and added an extra letter to the trademarked SBS News logo, which suggests some safeguards are in place, but raises other questions.
Google's Veo 3.1 app doesn't have the same requirements — it created an AI version of me from a picture and a written prompt (the same prompt given to Sora 2).
It animated the image, adding facial expressions and hand gestures, along with dialogue describing its competitor Sora 2.

The real Shivé Prema compared to the AI Shivé Prema, as generated by Google's Veo 3.1 application. Source: SBS News
Concerns about the truth
Toby Walsh, chief scientist at UNSW.ai, told SBS News there's a risk that this kind of content distorts people's relationship with the truth.
He suggests people could watch so much AI content that they stop believing people who are telling the truth.
"We're going to be entertained to death, and there's gonna be lots of fun memes circulating, but I'm not really sure it's that valuable for us," he said.
"It's going to consume a huge amount of energy, and I'm actually very worried that it's going to be used for a lot of mischief, that people are going to make fake videos, and maybe we're going to believe them, and we're gonna perhaps then stop believing many of the videos we believe, even the things that are real."
Walsh is not entirely negative about AI-generated content, though, noting it's a positive that Sora puts a watermark on its videos, which is a requirement in the European Union.
Similarly, he said Australia is better than some other countries at regulating AI.
"Technology is advancing very rapidly, and it's very hard for regulation, but we, we're actually, you know, we, we're compared to some other countries, we're actually in a reasonably good space," he said.
"We have, for example, the eSafety Commissioner. We are the first country in the world to have an eSafety Commissioner, and I think they're doing a good job of starting to address some of the harms."
Australia does not have AI-specific legislation but the technology is regulated through other laws. The government is currently developing a framework for AI.
There are concerns about the social media feed in the Sora 2 app, which some say has been designed to be addictive.
"It's like TikTok on steroids, on which you can generate AI content ... I think they want to create a whole social media platform, which obviously will be a lot bigger than what we have already," said Seyedali Mirjalili, a professor of AI at Torrens University.
Mirjalili said it's concerning how quickly the new platform works; previously, making a deepfake edit would be a painstaking, hours-long process.
"You can now upload a video of someone just half a minute, as short as a half a minute, and then, you know, impersonate them and add them to different contexts and create different content around them."
He said regulation is "both lagging and lacking at the moment," pointing out that without local rules around watermarks, it has become almost impossible to tell what's real and what's not, and it is getting even harder.
"The problem is that this is Sora 2. Imagine what Sora 20 would look like."
— With reporting by Madeleine Wedesweiler.
SBS does not use AI to enhance or generate its content. More information about SBS' guiding principles for the use of AI is available here.