Stephen Hawking's warning that the development of full artificial intelligence could spell the end of the human race has rekindled old fears.
There was the psychotic HAL 9000 computer in 2001: A Space Odyssey.
The humanoids that attacked their flesh-and-blood masters in I, Robot.
And, of course, The Terminator, in which a robot is sent into the past to kill a woman whose son will end the tyranny of the machines in the future.
Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.
"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.
"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.
But experts interviewed by AFP were divided.
Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.
"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.
Gains in AI are creating machines that outstrip human performance, Cerqui argued. This trend will eventually lead humans to delegate responsibility for their lives to machines, she predicted.
Nick Bostrom, director of a program on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate.
Bostrom pointed to current and near-future applications of AI that were still clearly in human hands - things such as military drones, driverless cars, robot factory workers and automated surveillance of the internet.
But, he said, "I think machine intelligence will eventually surpass biological intelligence -- and, yes, there will be significant existential risks associated with that transition."
Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong.
"Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong says in a book, "Smarter than Us: The Rise of Machine Intelligence."
Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top."
"Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion."
"It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France.
"Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."
Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI.
Recent years have seen dramatic gains in data-processing speed and in flexible software that enables a machine to learn from its mistakes, he said. Balance and reflexes, too, have made big advances.
Tucker pointed to the US firm Boston Dynamics as being in the research vanguard.
It has designed four-footed robots called BigDog and WildCat, with funding from the Pentagon's hi-tech research arm.
"These things are incredible tools that are really adaptive to an environment but there is still a human there, directing them," said Tucker. "To me, none of these are close to what true AI is."
Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress."
Despite big strides in recognition programs and language cognition, robots perform poorly in open, messy environments full of noise, movement, objects and faces, said Cohn.
Such situations require machines to have what humans possess naturally and in abundance - "commonsense knowledge" to make sense of things.
Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are... well, machines.
"We've evolved over however many millennia to be what we are and the motivation is survival," he said.
"That motivation is hard-wired into us. It's key to AI, but it's very difficult to implement."