FREEDOM AND SAFETY

 

Unless you’ve had your head in the sand over the past few years, you’ll have heard about the unprecedented - and largely unexpected - advances in Artificial Intelligence (AI). Perhaps the most public example came in 2016, when Google’s DeepMind used its AlphaGo program to beat Lee Sedol, one of the world’s top Go players. But that’s far from the only instance of AI breaking new ground.

 

Today, it plays a role in the voice recognition behind Siri, Alexa, Cortana and Google Assistant. It’s helping retailers predict what we want to buy. It’s even organising our email accounts by sorting the messages we want to see from those we don’t.

 

Meanwhile, in the world of business, machine learning – the branch of AI concerned with algorithms that learn from, and make predictions based on, data – is pushing the boundaries of what computers can do. As a result, we’re seeing solutions such as Robotic Process Automation (RPA) and big data analytics driving efficiencies and boosting profits.
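
For readers who like to see the idea in code, here is a minimal sketch of what “learning from data and making predictions” can look like in practice. The library (scikit-learn), the dataset and the model choice are illustrative assumptions, not anything specified by the article.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Learn a mapping from measurements to labels using example data...
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then make predictions on data the model has never seen before.
print(model.predict(X_test[:5]))
print("accuracy:", model.score(X_test, y_test))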

 

Overall, AI is doing a fantastic job of transforming the world for the better.

 

The dangers inherent in AI

 

But what about the other side of the coin? What negative impact could AI have? It’s clear that AI – like any technology – could be put to malicious use. Adversarial AI (where inputs are carefully crafted to trick AI systems into misclassifying data) has already been demonstrated. It could, for example, make an AI vision system perceive a red traffic light as a green one – which could have disastrous ramifications for an autonomous vehicle.
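
To make the “carefully crafted inputs” idea concrete, below is a minimal sketch of one well-known crafting technique, the fast gradient sign method (FGSM): each pixel is nudged slightly in the direction that most increases the model’s classification error. The framework (PyTorch), the epsilon value and the function and variable names are illustrative assumptions; the article does not name a specific technique, and real-world attacks vary widely.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft an adversarial image by nudging each pixel in the direction
    that increases the model's classification loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient,
    # then keep the result within the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

A perturbation of this size is typically imperceptible to a person, yet it can be enough to flip the model’s prediction – which is exactly why the traffic-light scenario above is taken seriously.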

 

The adversarial AI scenario is an example of AI getting hacked. But let’s take it further: what if AI itself is doing the hacking? That’s not a worst-case scenario – it’s a likelihood.

 

Cyber criminals are all but certain to get their hands on AI tools, because many are already widely available as open-source software – OpenAI and Onyx are two that immediately come to mind.

 

This highlights the need to ensure that AI systems – particularly those used in mission-critical settings – are resilient to such attacks.

 

A digital arms race

 

We’re left with a situation where the security industry and cyber criminals (be they organised, state-sponsored or simply lone hackers) are engaged in an escalating arms race. So-called black hats are developing AI to break into systems and cause havoc, while white hats are researching ways in which AI can defend networks against its own kind.

 

Here’s where we get to the moral question: should we be using AI in this way? As a technology, we’re only beginning to understand its potential. Theoretically, AI could grow so intelligent that it ends up completely beyond our control.

 

That thought makes the idea of an AI arms race sound particularly dangerous. Thankfully, intelligent people - Elon Musk and Stephen Hawking included - are thinking carefully about this topic, and I’m confident that they’ll come up with the necessary safeguards. Plus, companies such as Google and Microsoft have already stated that they feel the opportunities outweigh the risks.

 

Those opportunities are worth noting. There’s already an abundance of positive developments at the intersection of AI and cybersecurity. AI, for example, can be used to augment (rather than replace) humans in the security space - improving predictive threat monitoring, dynamic response to cyber attacks, secure software development and security training - all tools and processes that will help the white hats stay a step ahead.

 

The question we’re left with, however, is this: how will the AI arms race end? Well, one side will win, and there’s a chance that it might not be the good guys. Let that sink in for a minute.

Mark Hughes, President of Security, BT

https://www.weforum.org/agenda/2017/11/cybersecurity-artificial-intellig...