Does Putin have the better hackers? Part 12. March 2022
Could it be that the Russians are superior to their adversaries in the West in the field of cyber warfare? If so, what would the consequences be?
I don’t think anyone can answer the first question at this time. But what has already become visible demands the full attention of all IT experts in companies and government offices.
It is well known that websites can be manipulated and shut down, and that companies and private individuals can be financially ruined by various extortion schemes. Defending effectively against such attacks is anything but easy.
“But we have AI and ML!” retort those who are less concerned. And they go one better: “Surely we have an unassailable lead in this field?”
However such judgments come about, they are not true. Both AI and ML systems are software, and because they must be trained on data, they are more vulnerable than conventional products. What is more, they can be attacked without access to the network they run on. This is pointed out, among others, by Andrew Lohn, senior fellow at the Center for Security and Emerging Technology (CSET), a nonpartisan think tank affiliated with Georgetown University’s Walsh School of Foreign Service.
He says: “The risks of AI from hackers are not being adequately addressed by policymakers. There are people pushing for the adoption of AI without understanding the risks they face along the way.”
He and his colleagues call attention to the growing body of research that shows precisely how AI/ML algorithms are being attacked – and he cites examples:
– White-hat hackers have already successfully demonstrated real-world attacks on AI-powered autonomous driving systems like those in Tesla cars.
– Researchers at Chinese tech giant Tencent managed to get the car’s Autopilot function to change lanes into oncoming traffic using inconspicuous stickers on the road.
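The principle behind such sticker attacks can be sketched with a toy example. The following is a minimal, hedged illustration of a gradient-sign (FGSM-style) perturbation against an invented linear classifier; the weights, inputs, and the "lane" interpretation are all assumptions for demonstration purposes, not the actual attack on Tesla's perception stack.

```python
import numpy as np

# Toy sketch: an adversarial perturbation flips a classifier's decision.
# The "model" is a linear scorer with random weights; real attacks target
# deep perception networks, but the mechanism is the same: push each input
# feature a small, bounded step in the direction that increases the error.

rng = np.random.default_rng(42)

w = rng.normal(size=100)             # invented model weights (score = w @ x)
x = w + 0.1 * rng.normal(size=100)   # a benign input the model scores positive

def predict(inp):
    """Binary decision: True = 'stay in lane', False = 'change lane'."""
    return float(w @ inp) > 0.0

# For a linear score w @ x, the gradient with respect to x is simply w,
# so x' = x - eps * sign(w) is the worst-case step under an L-infinity
# budget eps (this is the core of the fast gradient sign method).
eps = 2.0
x_adv = x - eps * np.sign(w)

print(predict(x))      # True: the clean input is classified correctly
print(predict(x_adv))  # False: the bounded perturbation flips the decision
```

In image space, the analogue of `x_adv - x` is a pattern of small pixel changes (or a physical sticker) that a human would not register as meaningful, yet it systematically shifts the model's score.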
On the other hand, it is a fact that successful attacks are often not detected at all.
Neal Ziring, technical director of the NSA’s Cybersecurity Directorate, notes: “While attacks are on the rise, research into their detection is lagging behind.” The deployment pipeline has proven to be a particular weak point, he says. By this he means that AI/ML systems must be trained on huge data sets before deployment – facial recognition software, for example, is trained on images of faces. Using millions of tagged images, AI/ML can likewise be trained to distinguish cats from dogs.
But this very training pipeline also makes AI/ML vulnerable to attackers who don’t have access to the network on which it runs.
Another method, he says, is the data poisoning attack. Here, specially crafted images are fed into the AI/ML training sets; the human eye cannot distinguish them from real images. Training then takes place on this falsified data – with the corresponding result. According to Dan Boneh, a Stanford professor of cryptography, a single erroneous image in a training set can be enough to poison an algorithm and cause it to misidentify thousands of images.
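How a single poisoned training sample can flip a model's output can be shown with a deliberately simple stand-in. The sketch below uses a 1-nearest-neighbour classifier and invented 2-D feature vectors purely for illustration; real poisoning attacks on image models are subtler, but the core effect, one mislabeled sample changing the prediction on a chosen target, is the same.

```python
import numpy as np

def nn_predict(train_x, train_y, query):
    """1-nearest-neighbour: return the label of the closest training point."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return int(train_y[int(np.argmin(dists))])

rng = np.random.default_rng(0)

# Clean training data: class 0 clustered near (0, 0), class 1 near (5, 5).
x0 = rng.normal(loc=0.0, size=(20, 2))
x1 = rng.normal(loc=5.0, size=(20, 2))
train_x = np.vstack([x0, x1])
train_y = np.array([0] * 20 + [1] * 20)

target = np.array([0.2, -0.1])  # clearly a class-0 input

print(nn_predict(train_x, train_y, target))  # 0 on the clean data

# Poison: a single point placed right next to the target, mislabeled as class 1.
poisoned_x = np.vstack([train_x, target + 0.01])
poisoned_y = np.append(train_y, 1)

print(nn_predict(poisoned_x, poisoned_y, target))  # 1 after poisoning
```

One injected sample out of forty-one suffices here; against image classifiers, the poisoned sample is additionally crafted so that a human reviewer cannot tell it apart from legitimate training data.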