AI-Influenced Weapons Need Better Regulation

The weapons are error-prone and could hit the wrong targets



With Russia’s invasion of Ukraine as the backdrop, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly referred to as killer robots. These are essentially weapons that are programmed to find a class of targets, then select and attack a specific person or object within that class, with little human control over the decisions that are made.

Russia took center stage in this discussion, in part because of its potential capabilities in this space, but also because its diplomats thwarted the effort to discuss these weapons, saying sanctions made it impossible to participate properly. For a discussion that had already been far too slow, Russia's obstruction slowed it down even further.

I have been tracking the development of autonomous weapons and attending the UN discussions on the issue for over seven years, and Russia’s aggression is becoming an unfortunate test case for how artificial intelligence (AI)–fueled warfare can and likely will proceed.

The technology behind some of these weapons systems is immature and error-prone, and there is little clarity on how the systems function and make decisions. Some of these weapons will invariably hit the wrong targets, and competitive pressures might result in the deployment of more systems that are not ready for the battlefield.

To avoid the loss of innocent lives and the destruction of critical infrastructure in Ukraine and beyond, we need nothing less than the strongest diplomatic effort to prohibit in some cases, and regulate in others, the use of these weapons and the technologies behind them, including AI and machine learning. This is critical because when military operations are going poorly, countries might be tempted to use new technologies to gain an advantage. One example is Russia's KUB-BLA loitering munition, which can identify targets using AI.

Data fed into AI-based systems can teach remote weapons what a target looks like and what to do upon reaching that target. Although these systems resemble facial recognition tools, AI technologies for military use have different implications, particularly when they are meant to destroy and kill, and experts have raised concerns about their introduction into dynamic war contexts. And while Russia may have been successful in thwarting real-time discussion of these weapons, it isn't alone. The U.S., India and Israel are all fighting regulation of these dangerous systems.

AI might be more mature and better known in its use in cyberwarfare, including to supercharge malware attacks or to better impersonate trusted users in order to gain access to critical infrastructure, such as the electric grid. But major powers are also using it to develop physically destructive weapons. Russia has already made important advances in autonomous tanks, machines that can run without human operators who could theoretically override mistakes, while the United States has demonstrated a number of capabilities, including munitions that can destroy a surface vessel using a swarm of drones.

AI is also employed in the development of swarming technologies and loitering munitions, also called kamikaze drones. Rather than the futuristic robots seen in science-fiction movies, these systems build on previously existing military platforms that leverage AI technologies. Put simply, a few lines of code and new sensors can make the difference between a military system operating autonomously and one operating under human control. Crucially, introducing AI into military decision-making could lead to overreliance on the technology, shaping how militaries make decisions and potentially escalating conflicts.

AI-based warfare might seem like a video game, but last September, according to Secretary of the Air Force Frank Kendall, the U.S. Air Force used AI for the first time to help identify a target or targets in "a live operational kill chain." Presumably, this means AI was used to identify and kill human targets.

More at:

https://www.scientificamerican.com/article/ai-influenced-weapons-need-better-regulation/



