International experts have sounded the alarm about the risks of malicious use of artificial intelligence by “rogue states, criminals, terrorists” in a report released Wednesday.
According to them, over the next ten years the growing power of AI could bolster cybercrime and lead to the use of drones or robots for terrorist purposes. It is also likely to facilitate the manipulation of elections on social networks through automated accounts (bots).
The 100-page report was written by 26 experts in artificial intelligence (AI), cybersecurity and robotics. They belong to universities (Cambridge, Oxford, Yale, Stanford) and non-governmental organizations (OpenAI, Center for a New American Security, Electronic Frontier Foundation).
These experts call on governments and the various stakeholders to put in place protocols to limit potential threats related to artificial intelligence.
“We believe that attacks enabled by the growing use of AI will be particularly effective, finely targeted and difficult to attribute,” the report says.
To illustrate their fears, the specialists describe several “hypothetical scenarios” of malicious use of AI.
They point out that terrorists could modify commercially available AI systems (drones, autonomous vehicles) to cause crashes, collisions or explosions.
The authors imagine the case of a tampered robotic cleaner that slips surreptitiously in among the other robots responsible for cleaning a Berlin ministry. One day the intruder goes on the attack: after visually recognizing the Minister of Finance, it moves closer to her and detonates autonomously, killing its target.
Moreover, “cybercrime, already sharply on the rise, is likely to be reinforced by the tools provided by AI,” Seán Ó hÉigeartaigh, director of the Centre for the Study of Existential Risk at the University of Cambridge and one of the authors of the report, told AFP.
Targeted phishing attacks (spear phishing) could thus become much easier to carry out on a large scale.
But for him, “the most serious risk, even if it is less likely, is political risk”. “We have already seen how people are using technology to try to interfere in elections and democracy.”
“If AI allows these threats to become stronger, more difficult to spot and attribute, this could pose major problems of political stability and perhaps contribute to triggering wars,” said Seán Ó hÉigeartaigh.
AI could also make it possible to produce highly realistic fake videos that could be used to discredit politicians, the report warns.
Authoritarian states will also be able to rely on AI to strengthen surveillance of their citizens, the report adds.
This is not the first time that concerns have been expressed about AI. As early as 2014, the astrophysicist Stephen Hawking warned of the risks. Entrepreneur Elon Musk and others also sounded the alarm.
Specific reports on the use of killer drones or on how AI might affect US security have also been published.
This new report provides “an overview of how AI is creating new threats or changing the nature of existing threats in the areas of digital, physical and political security,” says Seán Ó hÉigeartaigh.
Artificial intelligence, a field that emerged in the 1950s, relies on sophisticated algorithms to solve problems for which humans would use their cognitive abilities.
In recent years, AI has made substantial progress, particularly in areas related to perception, such as speech recognition and image analysis.
“Currently, there is still a significant gap between research advances and their possible applications. It is time to act,” says Miles Brundage, a researcher at the Future of Humanity Institute at Oxford University.
Brundage led a workshop on the risks of malicious use of AI in Oxford in February 2017, which led to this report.
“AI researchers, robot designers, companies, regulators and politicians now need to work together to try to prevent” these risks, concludes Seán Ó hÉigeartaigh.