Should robots have the right to kill humans on the streets of San Francisco? Why police should not be allowed to deploy bomb-carrying robots
A week is a long time in politics—particularly when considering whether it’s okay to grant robots the right to kill humans on the streets of San Francisco.
Even if the guns on the robots were replaced with bombs, the use of explosives in a civilian context would be hard to justify. (Some police forces in the United States do currently use bomb-wielding robots to intervene; in 2016, Dallas police used a bomb-carrying bot to kill a suspect in what experts called an “unprecedented” moment.)
The reversal was made possible by the public outcry and lobbying that followed the initial approval, much of it driven by concern about removing humans from decisions over life and death. On December 5, a protest took place outside San Francisco City Hall, and at least one supervisor who initially approved the decision later said they regretted their choice.
Gordon Mar, a supervisor in San Francisco’s Fourth District, voted for the policy despite his own deep concerns. “I regret it,” he later said. “I am now uneasy with our vote because it sets a precedent for other cities that don’t have as strong a commitment to police accountability. It is not a step forward to make state violence more remote, distanced, and less human.”
The war in Ukraine in 2023: how advances in technology will shape the state of war
The responses of Ukraine and Russia offer pointers for how the Pentagon can be improved. The Ukrainian military has been able to resist the larger power in part by moving quickly and adapting technology from the private sector, such as hacking commercial drones into weapons and 3D printing spare parts.
The inevitable logic of using electronic countermeasures against remotely operated weapons is driving both sides towards increasing the autonomy of those weapons: when jamming severs the link to a human operator, the weapon must find and engage targets on its own. That brings us even closer to a world in which weapons of mass destruction are cheap and readily available to anyone who wants them, including dictators and terrorists.
Despite this transformation, the nature of war will never change: it will be about killing people and breaking their stuff faster than they can do it to you. It will still be a contest of wills, an aspect of the human condition that, for all its ferocity, irrationality, and despair, is far from being eradicated. The outcome will remain an unscripted mix of reason, emotion, and chance. Technology only affects the way we fight.
From Google to Istari: bringing artificial intelligence to war-fighting systems, arms, and drones
Schmidt became CEO of Google in 2001, when the search engine had a few hundred employees and was barely making money. He stepped away from Alphabet in 2017 after building a sprawling, highly profitable company with a stacked portfolio of projects, including cutting-edge artificial intelligence, self-driving cars, and quantum computers.
Speaking from his office in New York, Schmidt lays out a grand vision for a more advanced DOD that can nimbly use technology from companies like Istari. In a cheery orange sweater that looks like it’s made of exquisite wool, he casually imagines a wholesale reboot of the US armed forces.
“Let’s imagine we’re going to build a better war-fighting system,” Schmidt says, outlining what would amount to an enormous overhaul of the most powerful military operation on earth. “We would just create a tech company.” He goes on to sketch out a vision of the internet of things with a deadly twist: the military would field large numbers of inexpensive, attritable devices, highly mobile drones carrying weapons and other sensors.
It is important that governments start serious negotiations on a treaty to ban anti-personnel autonomous weapons. Professional societies in AI and robotics should develop and enforce codes of conduct outlawing work on lethal autonomous weapons. And people the world over should understand that allowing algorithms to decide to kill humans is a terrible idea.
A few nations already have weapons that operate without direct human control in limited circumstances, such as missile defenses that need to respond at superhuman speed to be effective. Greater use of AI might mean more scenarios where systems act autonomously, for example when drones are operating out of communications range or in swarms too complex for any human to manage.
The road to full autonomy begins with various types of weapon already in use. For example, Russia is deploying ‘smart’ cruise missiles to harsh effect, hitting predefined targets such as administrative buildings and energy installations. These weapons include Iranian Shahed missiles, nicknamed ‘mopeds’ by the Ukrainians owing to their sound, which can fly low along rivers to avoid detection and circle an area while they await instructions. Key to these attacks is the use of swarms of missiles to overwhelm air-defence systems, along with minimal radio links to avoid detection. The newest Shaheds are reportedly being equipped with detectors that let them home in on nearby heat sources without having to ask the controller for target updates.
Elsewhere, lethal autonomous weapons have been on sale for several years. The Kargu, a Turkish-made drone the size of a dinner plate that carries 1 kilogram of explosive, is produced by a government-owned manufacturer. The company stated on its website that the drone could hit vehicles and people, with targets selected on the basis of images, and that it could track moving targets. A UN report described Kargu drones being used in 2020 by forces of the Libyan Government of National Accord to hunt down retreating fighters.
Another point often advanced is that, compared with other modes of warfare, the ability of lethal autonomous weapons to distinguish civilians from combatants might reduce collateral damage. The United States, along with Russia, has been citing this supposed benefit with the effect of blocking multilateral negotiations at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland — talks that have occurred sporadically since 2014.
The case relies on two claims. First, that AI systems are less likely to make mistakes than humans are — a dubious proposition now, although it could eventually become true. And second, that autonomous weapons will be used in essentially the same scenarios as human-controlled weapons such as rifles, tanks and Predator drones. This seems unequivocally false. An advantage in distinguishing civilians from soldiers is irrelevant when the weapons are used by different parties, for different goals, in far less clear-cut settings. For this reason, I think the emphasis on the weapons’ claimed superiority in distinguishing civilians from combatants, which originates from a 2013 UN report pointing to the risks of misidentification, has been misguided.
There are many more reasons why developing lethal autonomous weapons is a bad idea. I wrote in Nature in the summer of 2015 that one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. The reasoning is illustrated in a 2017 YouTube video advocating arms control, which I released with the Future of Life Institute (see go.nature.com/4ju4zj2). It shows ‘Slaughterbots’ — swarms of cheap microdrones using AI and facial recognition to assassinate political opponents. Because no human supervision is required, one person can launch an almost unlimited number of weapons to wipe out entire populations. Weapons experts concur that anti-personnel swarms should be classified as weapons of mass destruction (see go.nature.com/3yqjx9h). Most of the artificial-intelligence community is against autonomous weapons.
From the Chemical Weapons Convention to the Nuclear-Test-Ban Treaty: what the United States, Russia, and Costa Rica could do next about autonomous weapons and non-state actors
It is unlikely that there will be much more progress in the near future. The US and Russia refuse to negotiate a legally binding agreement. The United States is concerned that a treaty could lead other parties to circumvent a ban and create a risk of strategic surprise. Russia objects to what it regards as discrimination against it in the talks because of its invasion of Ukraine.
Rather than blocking negotiations, it would be better for the United States and others to focus on devising practical measures to build confidence in adherence. These could include inspection agreements, design constraints that deter conversion to full autonomy, rules requiring industrial suppliers to check the bona fides of customers, and so on. Similar technical measures are administered by the Organization for the Prohibition of Chemical Weapons, created to implement the Chemical Weapons Convention, and they have not proved unduly burdensome for the chemical industry. New START allows the United States and Russia eighteen on-site inspections of each other’s nuclear-weapons facilities each year. And the Comprehensive Nuclear-Test-Ban Treaty might never have come into being had scientists from all sides not worked together to improve the International Monitoring System.
On 23–24 February, Costa Rica is due to host a meeting of Latin American and Caribbean nations on the ‘social and humanitarian impact of autonomous weapons’, which includes threats from non-state actors who might use them indiscriminately. These same nations organized the first nuclear-weapon-free zone, raising hopes that they might also initiate a treaty declaring an autonomous-weapon-free zone.
Lauren Kahn, a research fellow at the Council on Foreign Relations, welcomed the new US declaration as a potential building block for more responsible use of military AI around the world.