
The Advanced Threat Potential of Deep Learning

Artificial intelligence aims to enhance machines' ability to process vast amounts of data and to automate a broad range of tasks. We are currently enjoying many of the practical benefits of this enhanced processing power, such as language processing, autonomous vehicles, and image recognition. But like any capability, AI can lend itself to nefarious ends, and in our increasingly digitized world, deep learning has the potential to cause an unprecedented degree of damage.

In this article, we explore the threat potential ushered in by deep learning and how it expands the scope of attacks beyond what earlier computational capabilities made possible.

"With great power comes great responsibility." With that in mind, we consider the moral implications of advancing the frontier of deep learning technology.

Amplified Capability of AI

The original motivation for developing AI was to create systems that perform a given task faster and more accurately than a human. For a growing number of narrow tasks, that objective has been met: human-level performance is no longer the highest standard a system can achieve.

The amplified efficiency of AI means that once a system is trained and deployed, a malicious AI can attack far more devices and networks, more quickly and cheaply, than a malevolent human actor. Given sufficient computing power, an AI system can repeat the same task across vastly more instances. Google's and Facebook's facial recognition algorithms demonstrate this scalability: if a model trained on ImageNet can classify a million images in minutes, a human being, no matter how fast, is no competition.
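This scalability is easy to verify on commodity hardware. The following is a minimal sketch, assuming PyTorch and torchvision are available (neither is mentioned in the article); it times batched inference with an untrained ResNet-18, since throughput rather than accuracy is the point. The model choice, batch size, and iteration counts are illustrative, not taken from the original.

# Minimal throughput sketch: how many images per second a single
# machine can classify. Assumes PyTorch and torchvision; all
# numbers below are illustrative.
import time

import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained weights suffice for timing
batch = torch.randn(64, 3, 224, 224)   # one batch of 64 synthetic "images"

with torch.no_grad():
    start = time.perf_counter()
    for _ in range(10):                # 10 batches = 640 images
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{640 / elapsed:.0f} images/sec on this hardware")

Even on a modest CPU, a run like this processes hundreds of images per second; scaled across GPUs or a data center, the same task reaches millions of images in minutes, which is the asymmetry the article describes.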

The Democratization of AI

Access to software and new scientific findings is relatively easy. The AI industry has a culture of openness: papers are published with source code, and new developments are often reproduced within weeks, if not days. Thus, in the cat-and-mouse game between attackers and defenders, both sides advance the threat landscape at roughly the same pace by sharing information; both, for example, have been known to publish newly discovered malware in the same communal forums. But it is not only knowledge diffusion in AI that is expanding cyber threats. Advances in robotics and the declining cost of hardware are making the technology more accessible to malicious actors.

Enlarged Psychological Distance

Cyberattacks have always been characterized by psychological distance and anonymity: the malicious actor never comes face to face with their targets, nor sees the impact of what they have unleashed. AI enables an even greater degree of removal from the people who are affected.

The Moral Implications

Knowledge of how to design and implement AI systems can be applied to civilian and military purposes alike, and towards both beneficial and harmful ends. Just as human intelligence can be used for positive, benign, or detrimental purposes, so can artificial intelligence. How we, as a global community, choose to expand the AI frontier is becoming critical: even when knowledge is pursued for wholesome purposes, there is no guarantee that its end applications will remain wholesome.

Read the full article, where we provide numerous demonstrations of this expanded threat potential that, to date, have been explored only at an academic level. It is only a matter of time before we see such events occur in the wild.

Nadav Maman, CTO and cofounder, Deep Instinct
