This story was produced in collaboration with SBS Audio and as part of a research trip hosted by the German Federal Foreign Office in cooperation with the National Press Club of Australia.
TRANSCRIPT
The use of large language models like ChatGPT is becoming increasingly popular, but there are growing concerns that some AI chatbots are unwittingly pushing disinformation narratives.
Carl Miller is the founder of the Centre for the Analysis of Social Media at Demos in the UK.
"These models, these chatbots that are increasingly going to be used to kind of interrogate the internet and paint pictures for people to learn about the world, they are vitally important and how they get trained and what they end up therefore saying, – this is going to be basically the kind of crucible of a whole new kind of really powerful and quite unseen influence and power."
Large language models are trained using huge data sets to predict and generate human-like text.
Following Russia’s full-scale invasion of Ukraine in 2022, some experts warn pro-Russian actors are increasingly flooding the internet with fake news to influence chatbots.
This is Isis Blachez, an analyst with US-based research agency NewsGuard.
"Many different sites will repeat the same false claim multiple times on each site and on different sites. And the way AI chatbots work and training data works is that they look at the information that's out there and they'll see this big flow of information repeating one false claim and will take information from that."
Carl Miller again.
"It's going to be the manipulation of large language models as a future information warfare."
The pro-Kremlin Pravda network is an example of this.
It's published more than 6 million pro-Russian articles, with 5 million of those repurposed in multiple languages.
Guillaume Kuster is the CEO and co-founder of CheckFirst.
"You can think of it as a laundering machine for Russian narratives and Russian propaganda. What they do is they repost. It could be content coming from sanctioned media organisations in the EU, such as Sputnik or Russia Today. It can be content coming from known propaganda telegram channels, from X accounts, and so on and so forth.”
Last week, Mike Burgess, Director-General of ASIO, revealed pro-Russian influencers in Australia have drawn the attention of the agency.
"The Australians publish and push extreme online narratives justifying the invasion of Ukraine and condemning Australia’s support for Kyiv. Deliberately hiding their connection to Moscow – and the likely instruction from Moscow – the propagandists try to hijack and inflame legitimate debate."
Some experts suggest the growing reliance on AI chatbots highlights a need for better media scrutiny and legislation, as well as a focus on curbing misinformation.
Olivia Shen is the Director of the Strategic Technologies Program with the United States Studies Centre at the University of Sydney.
"What we have seen is perhaps more laggard progress on turning voluntary safeguards and codes of conduct and guidelines into legally enforceable regulation. "
According to a Department of Home Affairs spokesperson, its Cyber Security Strategy aims to create “new initiatives to address gaps in existing laws”.
It also outlined a plan to develop Australia’s first National Media Literacy Strategy... to "establish the key skills and competencies Australians need to navigate the challenges and opportunities presented by the digital world".