More than a thousand prominent thinkers and leading AI and robotics researchers have signed an open letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”
The letter, organized by the Future of Life Institute (FLI), was recently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. Signatories include such notable thinkers as physicist Stephen Hawking, Tesla’s Elon Musk, linguist Noam Chomsky, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, physicist Max Tegmark, Nobel Laureate Frank Wilczek, and philosopher Daniel C. Dennett.
The prospect of militarized AI is a frightening one, and fears are mounting that an AI arms race could spell catastrophe for humanity. We may be only years, or at most decades, away from being able to deploy autonomous weapons systems on the battlefield, a historical turning point some experts have described as the third revolution in warfare, after the advent of gunpowder and nuclear weapons.
As the text from the open letter reads:
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
The FLI is appealing to AI and robotics researchers to refrain from building any system that would contribute to the development of autonomous weapons.
“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” concludes the letter.
Earlier this year, the FLI issued an open letter warning about AI risks, while calling for responsible oversight to ensure that these systems work with humanity’s best interests in mind.
H/t The Guardian!
Contact the author at firstname.lastname@example.org and @dvorsky. Top image: Terminator Genisys.