No chance for autonomous weapons!

By Horst Buchwald

29 December 2019

So far this topic has been discussed only in a small circle of experts – but it must not remain there, because once again the future of mankind is at stake. The climate debate already shows how difficult it is to convince people that it is almost too late to reduce CO2 emissions. How much more difficult will it be with autonomous weapons?

Paul Scharre, the award-winning author of "Army of None: Autonomous Weapons and the Future of War", was recently asked in an interview with "Forbes" magazine: "Why should the average citizen be interested in autonomous weapons?" His answer: "Everyone must live in the future we are building, and we should all have a personal interest in what that future looks like. The question of whether AI will be used in warfare is not a question of when, but of how. What are the rules? What is the level of human control? Who makes those rules? There is a real possibility that the military is moving into a world where human control over war is severely limited, and that could be quite dangerous. So I think it is necessary to engage in a broad international discussion and bring together nations, human rights groups and experts (lawyers, ethicists, technologists) to have a productive dialogue to determine the right course."

Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He won the 2019 Colby Award, and his book is among the top five books Bill Gates says he has ever read. But is that enough – experts coming together for a productive dialogue? Isn't an outright ban, which is exactly what many experts and peace activists demand, the order of the day? How does Scharre see this?

"We cannot stop the development of the underlying technology." The reason: the same algorithms can be used for warfare, but also in an autonomous vehicle. And he continues: "The basic tools for building a simple autonomous weapon … can be freely found on the Internet. If a person is a reasonably competent engineer and has a little free time, he could build a crude autonomous drone that could do damage for less than a thousand dollars."

So there’s nothing more we can do? I agree with him: "Historically, attempts to control technology have been a mixed bag, with some successes and many failures."

In the US, the export of ammunition and military technology is controlled by the International Traffic in Arms Regulations (ITAR). So could the export of AI also be covered? He does not consider this realistic either: "By and large, the AI world is very open in terms of access to research. Papers are published online and breakthroughs are freely shared, so it is difficult to imagine that components of AI technology are controlled under strict restrictions. If anything, I could imagine sensitive data sets being controlled by ITAR, along with potentially specialized chips or hardware used exclusively for military applications when they are developed."

But is it still possible to use ethical considerations as input for decision-making?

"We want AI behavior that is consistent with human ethics and human values. Does this mean that machines have to think rationally or understand abstract ethical concepts? Not necessarily. But we must control the external behaviour that machines manifest and reassert human control in cases where ethical dilemmas arise."
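
What reasserting human control could look like in software is easy to sketch. The following minimal example is my own illustration, not a design Scharre describes: routine decisions are executed autonomously, while any decision that is ethically flagged or falls below a confidence threshold is escalated to a human operator. All names and the threshold value are assumptions.

```python
# A minimal human-in-the-loop gate (illustrative sketch, not Scharre's design).
# Routine, confident, unflagged decisions run autonomously; anything flagged
# as an ethical dilemma or below a confidence threshold goes to a human.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float    # model's confidence in the action, 0..1
    ethical_flag: bool   # raised by a separate rules check

CONFIDENCE_THRESHOLD = 0.95  # assumed value, tuned per application

def execute(action: str) -> None:
    print(f"executing: {action}")

def ask_human(decision: Decision) -> bool:
    # Placeholder for a real operator interface (console, UI, data link).
    answer = input(f"approve '{decision.action}'? [y/N] ")
    return answer.strip().lower() == "y"

def act(decision: Decision) -> None:
    # Autonomy is permitted only when no ethical flag is raised AND the
    # model is confident; in every other case the human decides.
    if decision.ethical_flag or decision.confidence < CONFIDENCE_THRESHOLD:
        if not ask_human(decision):
            print("vetoed by human operator")
            return
    execute(decision.action)

if __name__ == "__main__":
    act(Decision(action="reroute patrol", confidence=0.99, ethical_flag=False))
    act(Decision(action="engage target", confidence=0.99, ethical_flag=True))
```

The design choice worth noting is that the veto path is the default: when in doubt, the machine does nothing without a human.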

Is Scharre of the opinion that the Rubicon has, practically speaking, already been crossed? Not quite, but he is not far from it.

The problem is not just control, or the principle that all potential AI weapons must be imbued with human ethics. It is more complicated than that. Arati Prabhakar, the former head of the research agency DARPA, knows the limits of existing AI technology quite well. The following example makes this clear: Stanford researchers had developed software for describing image content. In tests it showed impressive accuracy, but when asked to interpret a photo of a baby with an electric toothbrush, it saw a little boy with a baseball bat.

What happened when they tried to find the cause of this false statement, Prabhakar describes as follows: "If you look inside to see 'Well, what went wrong?', they're really opaque." In other words: the algorithm sees everything correctly a thousand times, but on the 1001st time the shadow of a tree becomes a Vietcong fighter with a mini-nuke. The problem can thus be reduced to a single statement: no software engineer is able to foresee all the errors that occur in an autonomous system, or to recognize them in time.
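
The opacity Prabhakar describes is easy to reproduce with any off-the-shelf model. The following minimal sketch is my own illustration, not the Stanford captioning software from the article; it uses a pretrained torchvision classifier, and the image file name is a hypothetical placeholder. The point: the model returns a label and a confidence score, but nothing that explains why.

```python
# Minimal sketch: a pretrained classifier can be highly accurate on average
# yet confidently wrong on a single image, and it cannot explain itself.
# "baby_with_toothbrush.jpg" is a hypothetical placeholder file.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

image = Image.open("baby_with_toothbrush.jpg").convert("RGB")
batch = weights.transforms()(image).unsqueeze(0)  # preprocessing per model

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

confidence, idx = probs.max(dim=1)
label = weights.meta["categories"][idx.item()]
print(f"prediction: {label} ({confidence.item():.1%})")
# Even a 99% softmax score is just a number; the network exposes no
# human-readable reason for its choice, which is exactly the "opaque" point.
```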

However you look at it, it seems as if the train has left the station.

What do you think, dear reader? Write us your opinion.

info@ki-news.online

Please also note our partner site: http://atomic-cyber-crash.de/