Are Amazon and Microsoft putting the world in jeopardy by using killer AIs?

6 October 2019, by Horst Buchwald

A new survey concludes that leading technology companies such as Amazon and Microsoft are putting the world at risk through their work on killer AI.

For its survey of the industry’s key players, PAX, a Dutch NGO, evaluated 50 companies against three criteria:

1. Whether the technology they develop could be used for killer AI.

2. Their involvement in military projects.

3. Whether they have committed not to work on military applications in the future.

According to the report, Microsoft and Amazon are among the technology companies putting the world most at risk, while Google leads the large technology companies that take reasonable precautions.

Google’s ranking among the more responsible technology companies may come as a surprise to some, given the company’s reputation for mass data collection. Mountain View also ran into trouble over protests against its controversial Project Maven contract with the Pentagon, under which Google provided AI technology for military drones. Several high-profile employees resigned over the contract, while more than 4,000 Google employees signed a petition asking management to cancel the project and never again “build war technology.”

Following Project Maven, Google CEO Sundar Pichai promised in a blog post that the company would not develop technologies or weapons designed to cause harm, nor anything that could be used for surveillance in violation of “internationally recognized standards” or “generally recognized principles of international law and human rights.”

Pichai’s promise not to take on such contracts in the future appears to satisfy PAX’s ranking criteria. Google has since tried to improve its public image around its AI work, for instance by creating a dedicated ethics panel, but that effort quickly backfired and collapsed after the appointment of a member of a right-wing think tank and a defense-drone executive.

“Why are companies like Microsoft and Amazon not denying that they are currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” asks Frank Slijper, lead author of the report.

Microsoft, one of the riskiest technology companies on the PAX list, warned investors in February that its AI offerings could damage the company’s reputation. In a quarterly report, Microsoft wrote: “Some AI scenarios raise ethical questions. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we could suffer brand or reputation damage.”

Some of Microsoft’s forays into the technology have already proven problematic, such as the chatbot “Tay,” which turned racist, sexist, and generally offensive after Internet users exploited its machine learning capabilities.

Microsoft and Amazon are both currently bidding for a $10 billion Pentagon contract to provide cloud infrastructure for the US military.

“Technology companies must be aware that, unless they take action, their technology could contribute to the development of lethal autonomous weapons,” says Daan Kayser, PAX project leader for autonomous weapons. “Establishing clear, publicly available guidelines is an essential strategy to prevent this.”

PAX’s complete risk assessment of the companies can be found here.