Ex-Google staffer fears that "killer robots" could cause mass murder

19 September 2019, by Horst Buchwald

New York, 19.9.2019

A new generation of autonomous weapons, or "killer robots", could inadvertently start a war or cause mass murder, a former top Google software engineer has warned, according to The Guardian.

Laura Nolan, who resigned from Google last year in protest at being put to work on a project to drastically enhance U.S. military drone technology, has called for a ban on all AI killing machines not operated by humans. Nolan said killer robots not guided by human remote control should be outlawed by the same kind of international treaty that bans chemical weapons.

Unlike drones, which are controlled by human teams often stationed thousands of miles from where the flying weapon is deployed, killer robots, according to Nolan, have the potential to do "devastating things that they were not originally programmed for".

There is no evidence that Google is involved in the development of autonomous weapons systems. Last month, a UN panel of government experts debated autonomous weapons, with Google cited for eschewing AI for use in weapons systems and for following best practice.

Nolan, who has joined the campaign to stop killer robots and has briefed UN diplomats in New York and Geneva on the dangers of autonomous weapons, said: "The likelihood of a disaster depends on how many of these machines are in a particular area at the same time. What you are looking at here are possible atrocities and unlawful killings, even under the laws of warfare, especially when hundreds or thousands of these machines are deployed."

According to Nolan, "major accidents could occur because these things begin to behave in unexpected ways. Therefore, all modern weapons systems should be subject to meaningful human control; otherwise they must be banned because they are far too unpredictable and dangerous."

Google recruited Nolan, a computer science graduate of Trinity College Dublin, to work on Project Maven in 2017, after she had spent four years at the technology giant and become one of Ireland's leading software engineers.

She said she was "increasingly ethically concerned" about her role in the Maven program because it was designed to help the U.S. Department of Defense drastically accelerate drone video recognition technology.

Instead of using large numbers of military operatives to spend hours poring over drone video footage of potential enemy targets, Nolan and others were asked to build a system in which AI machines could differentiate people and objects at far greater speed.

Google allowed the Project Maven contract to lapse in March of this year, by which time more than 3,000 employees had signed a petition in protest against the company's involvement.

"As a Site Reliability Engineer, my expertise at Google was in keeping our systems and infrastructure up and running, and that is what I was meant to help Maven with. Although I was not directly involved in speeding up the video footage recognition, I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the U.S. military in places like Afghanistan."

Since resigning, Nolan has argued that the development of autonomous weapons poses a far greater risk to humanity than remote-controlled drones. She outlined a scenario in which external factors, such as unexpected weather systems, could disrupt and confuse the machines. Because killer robots cannot process subtle human behaviour, a machine thrown off course in this way might perceive a threat where there is none, deviate from its intended course of action and cause fatal consequences.

"The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that is happening with the Russians currently operating in Syria, who knows? What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons."

"If you are testing a machine that makes its own decisions about the world around it, it has to run in real time. Besides, how do you train a system that runs solely on software to detect subtle human behaviour, or to discern the difference between hunters and insurgents? How would the killing machine out there, on its own, distinguish between an 18-year-old combatant and an 18-year-old out hunting rabbits?"

The ability to convert military drones, for instance, into autonomous non-human-guided weapons "is now just a software problem that is relatively easy to solve," Nolan said.