
Rules of war need rewriting for the age of AI weapons


Whoever becomes the leader in artificial intelligence “will become the ruler of the world”, Vladimir Putin said in 2017, predicting future wars would be fought using drones. Even then, for all the Russian leader’s own ambitions, China and the US were the frontrunners in developing the technology. Yet four years later, the vision of autonomous fighting units is becoming a reality, with potentially devastating consequences. The computer scientist Stuart Russell — who will devote a forthcoming Reith Lecture on BBC radio to the subject — met UK defence officials recently to warn that incorporating AI into weapons could wipe out humanity.

AI promises enormous benefits. Yet, like nuclear power, it can be used for good and ill. Its introduction into the military sphere represents the biggest technological leap since the advent of nuclear weapons. Atomic bombs were dropped on cities in 1945; more than two decades passed before the first arms control treaties were signed.

Nuclear weapons are also difficult and expensive to develop or obtain. By contrast, AI-aided arms — used at scale — could combine the power of weapons of mass destruction with the scope for cheap production of the AK-47. That opens the possibility of their use, even if not in their most sophisticated forms, not just by advanced economies but by “rogue” states and terrorists. And the world is starting to wrestle with how to control them while the technology is still evolving at lightning speed.

The most immediate concern is “lethal autonomous weapons systems” (Laws), often dubbed “killer robots”. In fact, the term covers any mobile platform — drone, android, self-flying plane — carrying a machine that can perceive its environment, make decisions on tactics and targets, and kill. Rudimentary versions exist today. The UN says Turkish-made Kargu drones incorporating image-processing capabilities were used in Libya’s conflict last year to home in on selected targets.

Academics warn of swarms of cheap miniature drones armed with facial recognition and tiny bombs being used as mass killing machines. Many experts have demanded a ban on developing lethal autonomous weapons. A UN body has drawn up guidelines and worked on a potential embargo. Several military powers oppose a ban, fearing they would forfeit the chance to gain a military edge, or that others would ignore a prohibition that would be near impossible to enforce.

Yet many countries have joined conventions on biological and chemical weapons, though these also offer cheap routes to mass lethality. The scientific community says it can draw on ideas and lessons from other arms control efforts to devise and police a Laws ban.

Beyond killer robots, AI could be used to enhance or replace human skill in everything from operating weapons to intelligence gathering and analysis, early warning systems, and command and control. Dialogue is needed not just between the biggest military powers but more broadly on rules of engagement, what sort of wars countries are prepared to countenance in an AI era, and how to impose some transparency and constraints. Agreements are needed to keep humans “in the loop” in all forms of military decision-making.

Establishing such contacts will not be easy; China is reluctant to engage with the US even on nuclear arms. But past leaders agreed on “rules” of war, with at least some limited success, because they saw doing so as in their mutual interest. It should be more than a naive hope that those rules can be updated for an age when humans are combining awesome destructive force with machines that can calculate faster than they can.



Source: businesscable.co.uk
