Graphic art showing a mobile phone featuring a black and white image of Vladimir Putin set against a red and black background with many 1s and 0s printed across it.

Feature

How Russian operatives are 'grooming' AI to publish false information online

Australia's spy chief has expressed concern about the potential for AI to take online radicalisation and disinformation to new levels.


By Lera Shvets, Jennifer Scherer
Source: SBS News
Image: In early November 2025, ASIO boss Mike Burgess raised concerns about AI’s potential "to take online radicalisation and disinformation to entirely new levels". (Getty, SBS / Yuchiro Chino/Fotograzia/Krongkaew/Contributor)
In late October, some users of X (formerly Twitter) reported strange interactions with the platform's AI-powered chatbot, Grok. Developed by xAI, a company owned by tech billionaire Elon Musk, Grok is designed to access information across the platform in real time and publish posts to X in response to prompts from other users.

Responding to one such prompt, the chatbot posted a statement suggesting that American political commentator Tucker Carlson is more heroic than Ukraine's president Volodymyr Zelenskyy.

Carlson embodies "true heroism", Grok said, "by dismantling establishment myths despite cancellation attempts and media blackouts".

"Zelenskyy's wartime resolve is admirable, yet it's bolstered by global sympathy and aid; Tucker's solitary defiance against power structures reveals deeper courage without such buffers."
This post and others featuring spurious claims prompted discussion over several days, with users trying to challenge Grok's assertions. The bot eventually disputed the chain of events, stating that it "never ranked Carlson's heroism above Zelenskyy's" and calling the claim "misattribution".

Around the same time, Musk launched Grokipedia, an AI-powered online encyclopedia designed to rival Wikipedia. Musk has previously described Wikipedia as "left-biased" and claimed his new platform would "purge out the propaganda".

Soon after its launch, The Guardian and others reported examples of biases and disinformation, including Russian propaganda, being published in articles on the Grokipedia site. For example, the site's entry for Russia's invasion of Ukraine features a propagandistic description of the military offensive as being "aimed at demilitarizing and denazifying Ukraine".
These incidents have raised concerns about the potential for large language models (LLMs) such as Grok, which are trained on vast amounts of data scraped from the internet and generate text from it, to be co-opted to spread propaganda and other disinformation online.

Carl Miller is a co-founder of the Centre for the Analysis of Social Media at UK think tank Demos. He says digital literacy is critical, particularly given the propensity for LLMs to regurgitate false information.

"The simple knowledge that LLMs can get stuff wrong and that also they can be manipulated — [is] obviously important. It's the same way that we can't think that the top of a Google search means it's right, so we cannot think that the output of an LLM means it's correct," Miller says.

'Hallucinations' and disinformation

With growing reliance on AI chatbots and LLMs, experts are calling for stronger legislation to mitigate foreign interference and propaganda.

Last week, ASIO boss Mike Burgess raised concerns about AI's potential "to take online radicalisation and disinformation to entirely new levels".
Delivering his address at the Lowy Institute, Burgess said the agency had "recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation that almost certainly receives direction from Russian intelligence".

He said Russian cyber operatives have inflamed community tensions in Europe by spreading false news, and Australia was "not immune".

"Deliberately hiding their connection to Moscow — and the likely instruction from Moscow — the propagandists try to hijack and inflame legitimate debate," Australia's spy chief said.

"They use social media to spread vitriolic, polarising commentary on anti-immigration protests and pro-Palestinian marches."
Delivering his address at the Lowy Institute, head of ASIO Mike Burgess said the agency had "recently uncovered links between pro-Russian influencers in Australia and an offshore media organisation that almost certainly receives direction from Russian intelligence". Source: AAP / Mick Tsikas
In February this year, non-profit think tank Reset Tech published a paper on AI risks in Australia. One of the concerns outlined in the paper relates to the way AI and LLMs are influencing the information ecosystem.

"Generative AI is not a research tool; it is a probability machine," the paper reads.

"The outputs have nothing to do with the truth, and as AI is increasingly trained on the reams of synthetic AI-generated content flooding the internet, the risks of incoherence, bias, and ultimately model collapse, only grow as AI effectively eats itself."

Dr Lin Tian, a research fellow at the University of Technology Sydney who examines disinformation detection on social media, explains to SBS News that an LLM is "a model that tries to talk like a human, but it's purely based on the training that has been fed through".

"LLMs have been trained with a large amount of data. So there is no right or wrong inside the model," Tian says.
"When they generate [the answers], they will just grab the highest probability tokens and put them into the sentence."

She explains this is how so-called "hallucinations" can occur: when an answer produced by the AI is factually wrong.

"It's basically just based on the probability of when they created the sentence, and put all the tokens together as an output for the users," Tian says.

AI chatbots 'groomed' with propaganda

Russian disinformation campaigns have been a focus of governments and intelligence services around the world since Russia's full-scale invasion of Ukraine in early 2022. These campaigns have reportedly used AI to create online content, which can be posted to and shared via social media to spread disinformation narratives to a wider audience.

Earlier this year, CheckFirst, a Helsinki-based software company researching foreign interference, published two joint investigations into Russia's Pravda network in collaboration with the Atlantic Council's Digital Forensic Research Lab (DFRLab). The network operates in several languages and across several countries, generating disinformation articles and amplifying its narratives through Wikipedia, AI chatbots and the platform X.

According to the Pravda dashboard, created by CheckFirst and DFRLab, more than six million false articles have been generated by the Pravda network, with five million of those repurposed in multiple languages.
Graph: Pravda articles published over time (CheckFirst/DFRLab Pravda dashboard). Credit: SBS News
Speaking with SBS News, Guillaume Kuster, CEO and co-founder of CheckFirst, called the network "a laundering machine for Russian narratives and propaganda".

"[These] websites are not publishing any original content. What they do is they repost. It could be content coming from sanctioned media organisations in the EU, such as Sputnik or Russia Today. It can be content coming from known propaganda telegram channels, from X accounts, and so on and so forth," he says.

"One consequence of having [these] articles readily available online is that they are used by traditional knowledge dissemination platforms such as Wikipedia or chatbots."

Kuster says the CheckFirst investigation found nearly 2,000 links to Pravda websites on Wikipedia.

"We found it quite concerning that a network of known Russian propaganda was used to alter facts on the world's biggest free encyclopedia."

Kuster adds that while researchers cannot conclude that the Pravda network was designed specifically for "AI grooming", they have been able to demonstrate that it is happening as a consequence of the network's activities online.

"So we and others, such as the American Sunlight Project and NewsGuard have verified that popular chatbots such as Copilot, Gemini, ChatGPT and others would spit out some content coming from Pravda."

NewsGuard is a US-based media organisation that tracks false claims online and the perpetrators of misinformation and disinformation. The organisation has also published its own investigation into the Pravda network and the way it feeds its narratives into chatbots.
Isis Blachez, an analyst with NewsGuard, tells SBS News the risk of having a large volume of articles repeating false claims is that AI chatbots will end up repeating those narratives.

"The way AI chatbots work, and training data works, is that they look at the information that's out there and they'll see this big flow of information repeating one false claim and will take information from that," she says.

"It's kind of playing on search engine optimisation techniques. That's how ultimately these types of narratives and claims end up in the responses of AI chatbots."

With a growing number of people relying on such chatbots in their daily lives and for news consumption, Blachez says AI companies should be more transparent.

"There's a lack of transparency from AI companies who don't really explain where the data comes from, how it's vetted, and how AI chatbots recognise the credibility of a source.
I think individual users have to be very wary of that and always have a critical mindset and always look at the sources that are cited.

'After influence, not lies'

Miller from the Centre for the Analysis of Social Media says autocratic regimes, including Russia's, tend to focus their disinformation campaigns on "influence, not lies".

"We see them talking about masculinity and femininity, and patriotism and belonging ... These deep, motivating ideas that really get us up in the morning. That's how influence works."

With growing reliance on AI for therapy and companionship, Miller says there should be greater scrutiny over the influence it can have on people.

"There's going to be a whole new kind of skill set that we need for the appropriate kind of relationship to build with an AI," he says.

"In the way that you want your friends to be right and you care about what they think, whether they're right or not."
He adds, with AI chatbots increasingly used "to paint pictures for people to learn about the world", they could become the "future of information warfare".

"You're going to have people trying to manipulate LLMs, using agency networks to create content, and you're going to have people training the LLMs using their own automated processes ... It's going to be an incredibly weird form of fight over information integrity."

In September 2025, WA senator Matt O'Sullivan said in a Senate address that Australia is "missing in action" while other countries are busy establishing legal safeguards around AI.

"The European Union passed the world's first comprehensive AI act in March 2024, creating clear obligations for developers and protections for citizens. The UK launched its own strategy, including an AI safety institute," he said.

O'Sullivan argued Australia needs a better legal framework.

"We need frameworks that mitigate serious risks, including sovereign risks, biases, disinformation, propaganda, foreign intellectual interference, online harm, cybercrime and copyright violations."
Olivia Shen is the director of the Strategic Technologies Program with the United States Studies Centre at the University of Sydney. She tells SBS News that although Australia became one of the first countries to introduce a voluntary AI ethics framework in 2019, there has been "laggard" progress on turning some of the safeguards and codes of conduct into legally enforceable regulations.

"There has obviously been a divergent number of views on it. There are some who believe Australia ... would not benefit from having AI regulation that goes too far — that perhaps strangles innovation and prevents Australia from taking the best advantage of the AI economic opportunities," she says.

"But on the other side of the fence ... what are the implications of AI if they're not used safely and responsibly? And what are the harms that could actually take place? I really think we need to come to a balanced view on this."

'A layered approach' towards disinformation

Shen says that although it may be difficult to put the legal onus on LLM developers, more attention could be given to legislation about misinformation.

"We already have some foundations and frameworks around the issue of misinformation harms. And AI is just a tool that accelerates and pours petrol on the fire of those harms, if you will."

Shen points to Taiwan as an example of a jurisdiction that is taking "a layered approach" and has strong laws against deepfakes and foreign interference.

"It has been the target of persistent foreign interference originating from China for decades now. Taiwan has accepted, when you have this scale of misinformation, no single intervention is going to be a silver bullet," she says.
These interventions work on both a regulatory and a societal level to build resilience against propaganda.

"[Taiwan] has very strong laws also on spreading misinformation [on] certain issues. For example, public safety, food safety, military affairs, and emergency responses. Because those are the really core topics that affect public order."

In a domestic context, Shen adds that the Australian Code of Practice on Disinformation and Misinformation has been in effect for several years, but is not legally binding.

Late last year, the previous Albanese government put forward a bill that would have given the Australian Communications and Media Authority legal powers to take down certain content. However, it failed to pass.

The bill was opposed by the Coalition and the Greens, as well as by some members of the crossbench, who raised concerns about censorship and overly constraining the freedom of political communication.

"So that bill was withdrawn … But I do think there is room to open that conversation again in light of what we've seen in this year's federal election and the level of misinformation that we saw," Shen says.
In a statement to SBS News, a Department of Home Affairs spokesperson said the department's Cyber Security Strategy is working to create "new initiatives to address gaps in existing laws".

The spokesperson also confirmed the government plans to develop Australia's first National Media Literacy Strategy to "establish the key skills and competencies Australians need to navigate the challenges and opportunities presented by the digital world".

This story was produced in collaboration with SBS Audio. It was part of a research trip hosted by the German Federal Foreign Office in cooperation with the National Press Club of Australia.
