Hundreds of the world's leading scientists, engineers and business people are coming together to warn of the dangers of weapons controlled by artificial intelligence.
Physicist Stephen Hawking and Apple co-founder Steve Wozniak have released an open letter urging the United Nations to ban lethal autonomous weapons systems.
Leading researchers and experts in robotics and artificial intelligence, including Tesla Motors CEO Elon Musk and high-profile activist Noam Chomsky, have signed the letter, which warns that "starting a military AI arms race is a bad idea".
Distinguishing artificial intelligence weapons from drones and cruise missiles, whose targets are selected by humans, the letter points out that while AI could make warzones safer, autonomous weapons could kick-start "a global AI arms race".
The open letter, published by the Future of Life Institute, a research group advocating human control of technology, describes artificial intelligence weapons as "the third revolution in warfare, after gunpowder and nuclear arms".
The letter was unveiled as researchers gathered for the International Joint Conference on Artificial Intelligence in Buenos Aires.
"It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators," the signatories warn.
"There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people."
The institute is also concerned about the research and production of technology that can kill humans remotely without anyone directing the weapon to do so.
It is also worried about easily replicable technology that can search for and kill humans based on "pre-defined criteria".
"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group."
Read the full letter below:
Autonomous Weapons: an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.