
AI Researchers Take Pledge Not To Create Autonomous Weapons, Put Pressure On World Governments

As artificial intelligence continues to advance, it is being applied to more and more tasks, systems, and devices. One of these applications is the use of AI to create autonomous weapons. In response to the growing interest in autonomous weapons systems, over 2,400 scientists and engineers have pledged not to develop them.

A Pledge For The Future

A pledge created by the Future of Life Institute is intended not only to create a list of those who refuse to develop lethal autonomous weapons systems (LAWS), thereby putting pressure on others not to do so, but also to discourage nations and military bodies from creating LAWS themselves. The pledge was announced at the recent International Joint Conference on Artificial Intelligence. The anti-LAWS pledge is just the most recent attempt by concerned AI developers and AI ethicists to call attention to the dangers of using AI to operate weapons systems. In November of last year, concerned organizations and researchers called for a preemptive ban on LAWS, warning that a new class of weapons of mass destruction could emerge if the ban were not pushed through.

In line with the proposed ban, the recent pledge calls on governments to create a system of agreed-upon regulations and norms that would outlaw, punish, and stigmatize the development of LAWS. Given the continued lack of regulations and policies to prevent harm from LAWS, the signatories have all agreed to refuse to participate in, or even support, the creation and trade of such weapons. The pledge quickly gathered broad support, with more than 150 AI-related companies and organizations signing on.

The pledge’s goal is to build consensus around banning autonomous weapons systems and to win public support for the idea. The hope is that if the pledge succeeds in shaming developers and companies that produce LAWS, public opinion will shift against those companies. Yoshua Bengio, an AI researcher at the Montreal Institute for Learning Algorithms, explains the importance of swinging public opinion in achieving regulations on weapons:

This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines. American companies have stopped building landmines.

Signatories And Developers

Several militaries are already developing technology that could be used in LAWS. Military systems with autonomous components include drones and even a new fighter developed by the UK’s RAF. The worry is that if regulations are not passed now, in the near future there will be weapons systems capable of identifying targets and firing on them without any input from human operators.

BAE Systems Corax, Unmanned Aerial Vehicle. Photo: Public Domain

This isn’t the first attempt at a pledge to prevent the development of LAWS, but this one has substantially more signatories than previous efforts, and some of the people and organizations who signed it are heavyweights within the AI development sector.

Signers of the anti-LAWS pledge include notable companies and AI development organizations such as Google DeepMind and the Center for Human-Compatible AI, along with universities like University College London. A number of notable individuals have also signed, including DeepMind cofounder Demis Hassabis; Stuart Russell, director of the Center for Intelligent Systems; and Max Tegmark, professor of physics at MIT.

Google has recently created a new set of guiding principles for its use of AI. Google’s principles state that it will not create or deploy AI for use in any weapons system “whose purpose contravenes widely accepted principles of international law and human rights.” Google has also recently elected not to renew a contract with the Department of Defense. Other tech companies have created similar sets of guiding principles with the goal of limiting the use of their technology for LAWS.

Paul Scharre, a military analyst, has said that the document could do a better job of explaining to military leaders and policymakers exactly why they should be concerned about the development of autonomous weapons systems, and that the real debate to come will center on the difference between defensive and offensive weapon systems.

What Steps To Take Next?

An unfortunate truth about the pledge is that even if it gets world governments, world leaders, and militaries to agree not to develop LAWS, that wouldn’t stop individuals or small groups from developing LAWS in secret. Toby Walsh, an AI professor at the University of New South Wales, Sydney, acknowledges that it isn’t possible to stop determined people from creating autonomous weapons systems, just as it’s difficult to stop terrorists from creating chemical weapons. Yet if the world does not want terrorist groups or rogue states to have access to LAWS, it must ensure that these weapons systems are not sold or created in an open market.

Another sobering fact is that the pledge, even if successful, doesn’t guarantee that governments will not develop LAWS in secret. Nick Bostrom, author of Superintelligence and one of the scientific advisors to the Future of Life Institute, has described the problem of an AI arms race: even if countries outwardly agree to develop AI safely and responsibly, a single rogue actor can cut AI safety corners to get the power of AI at its disposal sooner, which would, in turn, push the other parties to start cutting corners as well.

Ultimately, while individual researchers may refuse to associate themselves with the support or creation of LAWS, their research can still be used to create such weapons. Lucy Suchman, a signatory to the pledge and professor of the anthropology of science at Lancaster University, explains that while researchers can’t determine how their research will be used in every instance, they can intervene and voice their concerns about its misuse. Suchman says it’s important for researchers to track how their research is used and to speak out immediately against its use for things like automated target recognition. In addition, researchers should refuse to advise LAWS developers on using their research to create weapons systems.
