What crimes do criminals plan to commit in the next 15 years using AI technology?
Berlin, August 10, 2020
Researchers at University College London (UCL) have explored how AI could be exploited by criminals over the next 15 years. To their surprise, they found that AI-synthesized media such as deepfakes have the greatest potential for harm.
AI can figure in crime in several ways: as a tool for crime, as a target of crime, or as a context for crime. Computer scientists at UCL identified 20 AI-based crimes from newspapers, news reports, fiction and popular culture.
These include the use of autonomous cars as weapons, spearphishing*, disrupting AI-controlled systems, AI-synthesized fake news and the collection of data for the purpose of large-scale blackmail.
These crimes were then ranked over a two-day discussion period by a group of 31 experts drawn from academia, defense and law enforcement, based on potential harm, potential for criminal gain, ease of execution and difficulty of prevention.
“As the capabilities of AI-based technologies expand, so does their potential for criminal exploitation,” said lead author Professor Lewis Griffin. “In order to prepare adequately for possible AI threats, we must identify what these threats might be and how they might affect our lives.”
AI-synthesized audio or video content (often referred to as “deepfakes”) was identified as the most worrying, with experts classifying it as extremely harmful, accessible and difficult to defeat. Basic deepfakes are very easy to create with open source tools, reducing the access barrier for criminals without technical expertise.
Although deepfakes are popularly known as tools for satire or disinformation, they are also increasingly used to impersonate a trusted individual via phone or video call in order to gain access to funds or secure systems. According to the UCL researchers, there are examples of criminals in Mexico who have used these techniques to gain access to bank accounts.
Last week a second manipulated video was released that purported to show an intoxicated US politician, Nancy Pelosi (Democrat). Although the video was viewed millions of times and Facebook's fact-checkers labeled it “partly false”, the company refused to remove it from the platform.
AI-assisted crimes also include the misuse of military robots, the sale of fraudulent “snake oil” services, autonomous attack drones, AI-assisted stalking, and the use of small autonomous robots that enter homes through cat flaps to commit burglaries.
“People today live large parts of their lives online. Online activity can make or break reputations. An online environment in which data is property and information confers power is particularly suited to exploitation by AI-based criminal activity,” said author Dr. Matthew Caldwell. “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated and even sold, allowing criminal techniques to be marketed and crime to be offered as a service. This means that criminals may be able to outsource the more difficult aspects of their AI-based crimes.”
It is against this backdrop that one must understand the appeal of Professor Shane Johnson, Director of the Dawes Centre for Future Crime at UCL: “We live in an ever-changing world that creates new opportunities – good and bad. It is therefore essential that we anticipate future crime threats so that policy makers and other stakeholders with the capacity to act can do so before new types of crime emerge.”
This is the first report in a series of studies that will
– identify future new crime threats, and
– describe solutions to prevent possible harm.
* Spearphishing: a targeted scam carried out by e-mail, always aimed at obtaining confidential data that is particularly worth protecting. To this end, the criminal researches the victim's personal environment and then presents himself as an “acquaintance of …”, a “friend of …” or “the one you met in …”. If you are not careful and hand over the password of …, or the date of birth and bank details of …, you should not be surprised when … soon complains that there are problems with their data.