Nuclear war by mistake – is that possible?

8 January 2020, by Horst Buchwald

By Karl Hans Bläsius and Jörg Siekmann

Early warning systems are used to detect possible nuclear missile attacks from sensor data. Early detection of an attack is intended to make countermeasures possible before a devastating strike hits. When an alarm is raised, there are usually only a few minutes to check and assess the situation. The end of the INF Treaty has already triggered a new arms race, in which hypersonic missiles in particular have high priority. With these new weapons, warning times will shrink further.

How an alarm is assessed also depends on the global political situation. In a crisis with mutual threats, for example, a false alarm that happens to coincide with other events (e.g. cyber attacks) can lead to an incorrect assessment and thus to a nuclear war by mistake.

For the classification of sensor data and the evaluation of an alarm situation, computer-based methods, in particular artificial intelligence (AI), are increasingly required to make decisions automatically for certain subtasks. There are already demands to deploy autonomous AI systems for evaluating and processing alarm messages, because there may be no time left for human decisions.*

The classification of sensor data and automatic situation assessments are inherently uncertain. Recognition results hold only with a certain probability and can be wrong in individual cases, and the same applies to the automatic assumptions and conclusions built on top of them.
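To see how such probabilistic recognition results affect an alarm evaluation, consider a simple application of Bayes' theorem. The numbers below are purely illustrative assumptions, not properties of any real early warning system: even a highly accurate detector produces mostly false alarms when real attacks are extremely rare.

```python
# A minimal sketch of the base-rate effect; all numbers are assumptions.

p_attack = 1e-6                  # assumed prior: probability that an alarm period contains a real attack
p_alarm_given_attack = 0.99      # assumed hit rate of the detector
p_alarm_given_no_attack = 0.01   # assumed false-alarm rate

# Bayes' theorem: P(attack | alarm) = P(alarm | attack) * P(attack) / P(alarm)
p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm

print(f"P(real attack | alarm) = {p_attack_given_alarm:.6f}")
# Under these assumptions roughly 0.0001, i.e. about 1 in 10,000
# alarms would correspond to a real attack; the other 9,999 would be
# false alarms that must still be assessed within minutes.
```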

A professional assessment of AI-based decisions by humans is hardly possible in the short time available, because automatic detection is often based on hundreds of features. As a rule, AI systems cannot provide simple, understandable justifications, and even if an AI system does output the features behind a decision, they could not be verified in the time available. Humans can only take what the AI systems deliver on trust.
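A sketch of why such justifications are hard to check: even for a fully transparent linear classifier, the "explanation" of a single decision is a contribution score for every feature. The feature count, model, and data below are invented for illustration; real systems based on deep neural networks are far less interpretable still.

```python
import numpy as np

# Hypothetical linear alarm classifier over hundreds of sensor
# features; weights and inputs are random stand-ins for a trained
# model and one incoming sensor event.
rng = np.random.default_rng(0)
n_features = 300
weights = rng.normal(size=n_features)
event = rng.normal(size=n_features)

score = weights @ event            # the model's decision score
contributions = weights * event    # per-feature "justification"

top = np.argsort(np.abs(contributions))[::-1][:5]
print(f"alarm score: {score:+.2f}")
for i in top:
    print(f"feature {i:3d} contributed {contributions[i]:+.2f}")
# Even this maximally simple model justifies itself with a ranked
# list of opaque feature indices; no human can verify hundreds of
# such contributions within a few minutes.
```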

In many applications, AI systems can make better decisions than humans. This is also expected for autonomous driving, for example. It requires extensive training data gathered in many tests, including under real conditions. Nevertheless, accidents do happen.

Different conditions apply to AI decisions in early warning systems, however, since such systems can hardly be tested under real conditions. Data sets of a size comparable to autonomous driving will never exist. AI-based recognition can also be built from only a few examples, but it is impossible to anticipate all the variants and exceptional situations that may occur, and this can lead to incorrect classification results.
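As a minimal sketch of this problem (classes, data, and numbers are invented for illustration): a classifier trained on only a few examples still returns a confident label for inputs unlike anything in its training data.

```python
import numpy as np

# Few training examples for two invented classes of sensor signatures.
rng = np.random.default_rng(1)
missile = rng.normal(loc=5.0, scale=0.5, size=(5, 3))
aircraft = rng.normal(loc=1.0, scale=0.5, size=(5, 3))
train_x = np.vstack([missile, aircraft])
train_y = np.array(["missile"] * 5 + ["aircraft"] * 5)

def nearest_neighbor(x):
    """1-nearest-neighbor: always returns some label, even for an
    input far outside the training distribution."""
    dists = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(dists)], dists.min()

# An exceptional situation never seen in training, e.g. an unusual
# sensor reading caused by atmospheric reflections:
label, dist = nearest_neighbor(np.array([40.0, 40.0, 40.0]))
print(label, f"(distance to nearest training example: {dist:.1f})")
# The classifier answers "missile" with no indication that this
# input lies far outside everything it was trained on.
```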

An accident caused by a wrong decision of an AI system in autonomous driving can claim individual lives. The same kind of mistake in an early warning system, however, could trigger an accidental nuclear war and wipe out all life on this planet.

Authors of the above text:

Karl Hans Bläsius (www.hochschule-trier.de/informatik/blaesius/)

Jörg Siekmann (http://siekmann.dfki.de/de/home/)

* The following links refer to the demands for autonomous AI systems:

https://www.heise.de/tp/features/Amerika-braucht-eine-Tote-Hand-zur-nuklearen-Abschreckung-4519452.html
https://warontherocks.com/2019/08/america-needs-a-dead-hand/
https://www.icanw.de/neuigkeiten/hintergrund-diskussion-zu-atomwaffen-und-ki/