Artificial Intelligence: The Obstacles Standing in its Way


Artificial intelligence (AI) has often been portrayed as something right around the corner, almost ready. Take, for example, the US Navy’s experience with an IBM 704. In 1958, engineers extrapolated from a machine that could handle “complex math” to hardware that would soon be “able to walk, talk, see, write, reproduce itself and be conscious of its existence.”

More than 50 years later, we are still waiting impatiently for that leap from mathematical complexity to self-replicating, conscious hardware worthy of the title “intelligent.” AI is coming, and I would love to tell you it is almost here, yet the reality is that sizeable obstacles still stand in the way.

Without clear, established standards of care and guidance, almost anything can be put forward as intelligent. Setting a clear bar for AI success, and recognizing the specific failures that explain why we have not yet cleared it, will improve our predictions of when AI will be ready.

It is easy to argue that every industry relying on human intelligence today would benefit in some way from AI. Machines are typically meant to reduce burdens on humans, and that is a comfortably low bar, because even the smallest reduction in load offers a benefit; the gesture alone can be enough. That measure of reduced burden, however, should not be confused with understanding the full context of the world we live in, and the gap between the two matters enormously. It is like saying a horse can pull the plow while knowing you cannot expect horses to run the farm. Common sense derived from understanding the world is a necessary step for AI to reach “ready” status, and today that remains a very tall order.

Image recognition systems, widely used in security, are a good example of the common sense gap for AI. Recently I fed pictures taken in England to a state-of-the-art learning system developed in that same country. It correctly recognized trees, roads and buildings, and the segmentation was accurate enough that the developers rated their own system at 90% accuracy. That is impressive, and it sounds trustworthy enough to identify threats and avoid them. Yet when I fed it pictures from Botswana, a former colony of England, the system failed spectacularly and dangerously, labeling empty spaces as buildings.

Common sense would have helped the system recognize an open field across different countries and learn the properties of grass more effectively. If I had to guess why this system failed, why it was so easily broken and so untrustworthy, I would point to a very simple break in understanding: too little foundational knowledge. A shortage of basic training means the system had neither observed nor practiced enough to truly learn.
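
This failure mode is easy to reproduce in miniature. The sketch below uses invented, synthetic features (not the actual system or imagery described above) to show how a classifier trained on scenes from one environment can report high accuracy there and still mislabel open ground once the environment shifts.

```python
# Minimal sketch of the generalization gap. The (brightness, texture)
# features, the scene generators and the labels are synthetic stand-ins
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def scenes(n, ground_profile):
    """Toy features for 'building' (1) vs 'open ground' (0) scenes."""
    half = n // 2
    buildings = rng.normal([0.80, 0.70], 0.05, size=(half, 2))
    ground = rng.normal(ground_profile, 0.05, size=(n - half, 2))
    X = np.vstack([buildings, ground])
    y = np.array([1] * half + [0] * (n - half))
    return X, y

# Training environment: green, low-texture fields are easy to tell from buildings.
X_train, y_train = scenes(1000, ground_profile=[0.30, 0.20])
# Shifted environment: dry, bright, textured ground resembles the learned building cues.
X_shift, y_shift = scenes(1000, ground_profile=[0.75, 0.65])

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on familiar scenes:", model.score(X_train, y_train))  # near perfect
print("accuracy on shifted scenes:", model.score(X_shift, y_shift))   # far lower: open ground labeled 'building'
```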

On the one hand, a machine can take in far larger data sets than a human security analyst, process them more quickly and run more consistently. It is incredibly tempting to forget the human world and dive into the machine’s view of things, with classifiers, training data and key points as the criteria for decision-making.

On the other hand, a machine lacks the context and broader understanding that humans possess, which means decisions can be made without weighing real consequences. That is quite literally the opposite of what security needs or wants: think of shutting down a network to block a known threat, only for the shutdown to cascade into an unforeseen major outage and a business catastrophe.

There are serious ethical, if not legal, problems with getting epistemological assumptions wrong while simultaneously accelerating a decision-making process. Machines can do more harm, faster and in more ways, which should be one of the main reasons we hold back on deploying AI. In one case, banks were blocked from trading and faced massive financial losses after their network links were taken offline by a well-intentioned attempt to address social media threats quickly. “Don’t turn off our revenue streams to save us from a small expense” might sound like a simple calculation, yet in reality it requires a very complicated understanding of the values of inputs and outputs.
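
A minimal guard against that failure mode, sketched below with invented names, figures and thresholds, is to weigh the estimated cost of an automated response against the loss it is supposed to prevent before acting; the genuinely hard part is that those values are rarely simple numbers.

```python
# Hypothetical sketch: gate an automated containment action on a rough
# cost comparison, rather than acting on threat detection alone.
# Names, dollar figures and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResponseOption:
    name: str
    threat_loss_prevented: float    # estimated loss avoided if we act ($)
    business_cost_of_action: float  # estimated revenue/operations lost by acting ($)

def choose_response(options: list[ResponseOption]) -> ResponseOption | None:
    """Pick the action with the best net benefit; defer to a human if none pays off."""
    best = max(options, key=lambda o: o.threat_loss_prevented - o.business_cost_of_action)
    if best.threat_loss_prevented <= best.business_cost_of_action:
        return None  # escalate instead of acting automatically
    return best

options = [
    ResponseOption("isolate all trading network links", 50_000, 4_000_000),
    ResponseOption("block the single reported URL at the proxy", 50_000, 500),
]
print(choose_response(options))
```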

Thus a proper post-mortem following a bad AI decision has to account for whether the machine can see and learn within the far broader context of human wants and expectations. Machines need an accurate sense of “state” fed back into their foundations and regularly updated for optimal outcomes, and since state is a shifting concept, “slices” of it need to be taken and ranked or measured for integrity. That might sound fancy, but really it is like saying humans should study key moments in history, the good and the bad memories, to better understand today which actions will not lead to disaster.
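
One way to picture those ranked “slices” of state, using invented structures rather than any particular product or standard, is a log of timestamped snapshots that each carry an integrity score, so that only trustworthy slices feed back into a system’s foundations.

```python
# Hypothetical sketch of 'state slices': timestamped snapshots of what the
# system believed about its environment, each scored for integrity so that
# only trustworthy slices feed back into retraining or decision baselines.
# The structure and scoring are illustrative assumptions, not a real API.
import hashlib
import json
import time

class StateSlice:
    def __init__(self, observations: dict, integrity: float):
        self.taken_at = time.time()
        self.observations = observations
        self.integrity = integrity  # 0.0 (untrusted) .. 1.0 (fully verified)
        # A content hash lets later audits detect tampering with the slice.
        self.digest = hashlib.sha256(
            json.dumps(observations, sort_keys=True).encode()
        ).hexdigest()

class StateLog:
    def __init__(self, min_integrity: float = 0.8):
        self.slices: list[StateSlice] = []
        self.min_integrity = min_integrity

    def record(self, observations: dict, integrity: float) -> None:
        self.slices.append(StateSlice(observations, integrity))

    def trusted_history(self) -> list[StateSlice]:
        """Return only the slices sound enough to update the model's foundations."""
        return sorted(
            (s for s in self.slices if s.integrity >= self.min_integrity),
            key=lambda s: s.integrity,
            reverse=True,
        )

log = StateLog()
log.record({"open_ports": 12, "region": "eu-west"}, integrity=0.95)
log.record({"open_ports": 250, "region": "unknown"}, integrity=0.40)  # suspect telemetry
print([s.observations for s in log.trusted_history()])
```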

That brings us to the final obstacle for AI when it comes to evaluating systems, and the tallest order of all: the ability to build a longer-term, strategic view of desired outcomes.

A security system that develops a plan without understanding the choices it is being forced into, or without the ability to break out of its assumptions and challenge its environment, can be very insecure. AI, like a human, cannot chart a path to a truly desired state while remaining unaware of how fragile its current state is.

In conclusion, AI is still coming, and every year we understand a little better why it has taken so long to get here. We can be optimistic, yet remain realistic, to help accelerate the development of the capabilities most needed to make AI safe for production use.

This is part of a point-counterpoint debate; the other article can be found here.
