
AI is not a threat, Artificial Stupidity is

Artificial Intelligence features a lot in news and popular culture. With its advances in practical applications, coverage has progressed from Skynet fantasies to more grounded tech-horror pieces about jobs being replaced and eerily accurate, intrusive profiles being built from big data. But how much of it is true and how much of it is clickbait?

First, let’s dispel some notions about AI. There will never be a point where, given enough processing power, an AI somehow crosses an arbitrary threshold and turns “conscious”. There will never be an AI trapped on a system that somehow rewrites an OS it has no awareness of to take control of it, uploads itself to a network it doesn’t know exists, and then takes over national infrastructure that still runs on floppy discs. An AI designed to classify banking transactions could be the most sophisticated in the world and hosted on a supercomputer, but it will never do more than give you really accurate fraud flagging.

The real threat of any system is its human component. “GIGO” (garbage in, garbage out) still holds true. The speed at which the system operates, failures to account for the effects of complex behaviours, a lack of understanding of the algorithms used, flaws in the original data it learned from, and good old-fashioned bugs can all send an AI out of control.

[Image: the Dow Jones at the time of the 2010 Flash Crash]

Fundamentally, AIs are set up to keep running by themselves until they are actively told to stop. This means that if something goes wrong without throwing an error, the system will happily carry on until a human intervenes. The problem is that, like any program, an AI can execute thousands of instructions per second. In 2010, malfunctioning automated traders conducted 27,000 trades between themselves in a few minutes – 50% of the day’s trades. This triggered the infamous Flash Crash, in which more than a trillion dollars was wiped off the stock market in 15 minutes, before it almost completely rebounded in another 15.
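
To make that concrete, here is a minimal sketch of what “carries on until a human intervenes” looks like in code. It is entirely hypothetical (no real trading system is a few-line loop like this), but it shows the two ingredients: the loop has no concept of being wrong, only an external stop flag, and nothing limits it to a speed a human could follow.

```python
# Hypothetical sketch: an automated "trader" that keeps acting on a buggy
# decision until a human flips a flag. Not based on any real trading system.
import threading
import time

stop_requested = False  # the only thing standing between a bug and endless bad actions

def buggy_strategy(price: float) -> str:
    # Imagine a subtle bug: the strategy always concludes the market is underpriced.
    return "BUY"

def trading_loop() -> None:
    price = 100.0
    orders = 0
    while not stop_requested:        # no self-awareness, just an external flag
        if buggy_strategy(price) == "BUY":
            orders += 1              # one more order, microseconds after the last
            price *= 1.0001          # and the "market" drifts a little further
    print(f"stopped after {orders:,} orders, notional price now {price:,.2f}")

worker = threading.Thread(target=trading_loop)
worker.start()
time.sleep(0.5)          # half a second in which nobody has noticed anything is wrong
stop_requested = True    # the human intervention
worker.join()
```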

Software is insanely complex, and it is impossible to account for every possible outcome. It becomes even harder when AI software injects new variables into a situation through its own actions. This leads to simple, unconnected behaviours unexpectedly interacting to produce new emergent behaviour. In the 2010 Flash Crash, the sheer volume of automated trades made other automated traders conclude that the market had become too volatile, so they shut down, leaving the remaining automated traders with nobody to sell to and deepening the crash further.
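
The emergent part is easier to see in a toy simulation. The sketch below is not a model of the real 2010 crash (the agents, thresholds and price rule are all invented), but it shows how two individually reasonable rules, “sell into a falling market” and “step back when volatility looks too high”, interact to turn a single bad order into a spiral.

```python
# Toy sketch of emergent behaviour among simple automated traders.
# All numbers are invented; this is not a model of the 2010 Flash Crash.
import random

random.seed(1)
price = 100.0
history = [price]

# 50 traders: each sells while the price is falling, and withdraws entirely
# once short-term volatility exceeds its own comfort limit.
agents = [{"active": True, "vol_limit": random.uniform(0.01, 0.04)} for _ in range(50)]

price *= 0.97                     # a single large erroneous sell order: the initial shock
history.append(price)

for _ in range(100):
    window = history[-10:]
    volatility = (max(window) - min(window)) / window[-1]
    falling = history[-1] < history[-2]
    buyers = sellers = 0
    for agent in agents:
        if agent["active"] and volatility > agent["vol_limit"]:
            agent["active"] = False   # individually sensible: step back from a turbulent market
        if not agent["active"]:
            continue
        if falling:
            sellers += 1              # individually sensible: don't hold a falling asset
        else:
            buyers += 1
    # Selling pressure with few buyers left pushes the price down further, which
    # raises volatility, which knocks out more buyers: nobody designed the spiral.
    price *= 1 + 0.002 * (buyers - sellers) / len(agents)
    history.append(price)

active = sum(agent["active"] for agent in agents)
print(f"price fell from 100.00 to {price:.2f}; {active} of {len(agents)} traders still active")
```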


Secondly, there is an increasing trend of relying on what are called Black Box AIs. Black Box AIs use big data in conjunction with machine learning to come up with a classifying algorithm that sorts the data into desired groups, e.g. facial recognition. This solution algorithm, however, is effectively unknowable, locked inside a “black box”. It typically involves projecting the data into high-dimensional mathematical spaces to extract distinguishing features, but the result is highly abstract; if you’re lucky, the developers of the AI will understand the operating principles behind it, but that understanding is unlikely to extend to the rest of the team. Black Box AIs are also deceptively simple to use: you only need to set up the inputs, then the AI does its magic trick and outputs an impressively performing solution.
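
To show how low the barrier to use is, here is a minimal sketch of that workflow, using synthetic data and scikit-learn purely as my own illustration (neither is mentioned above): a handful of lines in, a high score out, and no human-readable rule anywhere in between.

```python
# Sketch of the "deceptively simple" black-box workflow: set up inputs, fit,
# get an impressive score, and never see an explanation of the decision rule.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Set up the inputs: 5,000 synthetic examples described by 40 numeric features.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. The "magic trick": a neural network builds its own internal representation.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# 3. An impressive-looking result...
print(f"accuracy on held-out data: {model.score(X_test, y_test):.1%}")

# ...backed by nothing a person can read: the learned "algorithm" is just
# thousands of weights, with no statement of why any one example went one way.
n_weights = sum(layer.size for layer in model.coefs_)
print(f"the decision rule is encoded in {n_weights} numbers across {len(model.coefs_)} weight matrices")
```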

It is hard to pick out the flaws in something when you don’t understand how it works. The data is so broad, and the number of variables so vast, that it can be nigh-impossible to point to a cause even if you know something is going wrong. When a computer program can sort through a million points of data and output results with an 80% success rate instead of total gibberish, people are more likely to defer to the idea that the AI knows what it is doing.

Undiscovered flaws in the original data can also see biases propagated into AI decision-making. For example, Amazon had to scrap its AI recruiting tool after it started penalising CVs for containing the word “women’s”, e.g. “women’s chess club captain”. The data did not contain the applicant’s gender. Instead, the record of successful hires and rejected applicants it was learning from was itself biased: in the male-dominated IT industry, men had been recruited at a higher rate than women. Words unique to women’s CVs appeared far less often in successful hires than words like “leadership”, so the AI concluded that these words must be of low value and started penalising them.
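
The mechanism is easy to reproduce in miniature. In the sketch below the CV snippets and hiring outcomes are invented, and gender is never an input column, yet a token that appears only on the rejected CVs still picks up a strongly negative weight – essentially the trap the recruiting tool fell into.

```python
# Toy sketch of proxy bias: the labels are skewed, so a word correlated with
# the disadvantaged group is penalised even though gender is never a feature.
# All CV snippets and outcomes below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer leadership executed projects",         # historically hired
    "leadership captain executed delivered software",         # historically hired
    "women's chess club captain software projects",           # historically rejected
    "women's coding society leadership delivered projects",   # historically rejected
    "executed delivered software engineer projects",          # historically hired
    "women's hackathon winner software engineer",             # historically rejected
]
hired = [1, 1, 0, 0, 1, 0]   # the biased historical outcomes the model is asked to imitate

vectoriser = CountVectorizer()                # note: its tokeniser turns "women's" into "women"
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The proxy token ends up with one of the most negative weights in the model,
# even though no column ever said "applicant is a woman".
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
for word in sorted(weights, key=weights.get)[:3]:
    print(f"{word:10s} {weights[word]:+.2f}")
```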

Similarly, an AI algorithm for predicting patients’ healthcare needs under-estimated those of African-Americans by more than 50%. Like Amazon’s recruitment AI, the data it was learning from did not include race. Instead, systemic biases in the US healthcare system meant that African-Americans had always faced far higher barriers to obtaining medications, treatment, and appointments. The AI simply matched its predictions to the data that was informing them.
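
A toy numeric sketch of the same trap, with entirely invented numbers: both groups below have identical underlying need, but the model is trained on recorded care, which is suppressed for one group by barriers to access (published analyses of this system reported it had in fact been trained on healthcare costs rather than health itself; the sketch assumes that pattern). The predictions then faithfully reproduce the gap.

```python
# Toy sketch: a "need" score trained on recorded care reproduces an access gap.
# All numbers are invented; this illustrates the mechanism, not the real system.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

true_need = rng.normal(50, 10, size=2 * n)        # identical distribution of need in both groups
group = np.array([0] * n + [1] * n)               # group 1 faces barriers to accessing care
access = np.where(group == 0, 1.0, 0.5)           # and so receives roughly half the care

past_care = true_need * access + rng.normal(0, 2, size=2 * n)    # what the records show
future_cost = true_need * access + rng.normal(0, 2, size=2 * n)  # the training target: cost, not health

# The model never sees 'group'; it predicts recorded cost from recorded care, and it
# does that job accurately. Its output is then used downstream as a "need" score.
slope, intercept = np.polyfit(past_care, future_cost, 1)
predicted_need = slope * past_care + intercept

print(f"true need        (group 0 vs 1): {true_need[group == 0].mean():.1f} vs {true_need[group == 1].mean():.1f}")
print(f"predicted 'need' (group 0 vs 1): {predicted_need[group == 0].mean():.1f} vs {predicted_need[group == 1].mean():.1f}")
```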


And finally, when AIs are made in our own image, plain human stupidity gets passed on and becomes artificial stupidity. Last year, an Uber self-driving car hit and killed a pedestrian. The initial cause was found to be that Uber had programmed the driving AI to assume that pedestrians could only ever exist on crosswalks. Despite the car detecting the pedestrian 5.6 seconds before impact, its collision avoidance never kicked in, because the system was programmed to restart its object trajectory prediction every time it reclassified an object. Not being allowed to identify the pedestrian as a human, the car’s AI oscillated between classifying them as “bicycle” and “other”, restarting its prediction of where the pedestrian would be with every change. To cap it off, a one-second delay had been manually inserted into the decision-making process because the AI had been generating too many false positives and was being too sensitive to obstacles.
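
The classification-reset flaw in particular is simple enough to sketch. The code below is a heavily simplified illustration with invented timings, labels and thresholds (and it ignores the added one-second delay entirely), not Uber’s actual software, but it shows the core problem: every re-classification wipes the tracked history, so the system never accumulates enough of a trajectory for collision prediction to fire.

```python
# Simplified sketch of the reset flaw: a change of classification throws away
# the tracked object's history, so a collision is never predicted in time.
# Timings, labels and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    label: str
    positions: list = field(default_factory=list)   # history used to extrapolate a path

    def observe(self, new_label: str, position: float) -> None:
        if new_label != self.label:
            self.label = new_label
            self.positions = []          # the flawed design: reclassify, forget everything
        self.positions.append(position)

    def collision_predicted(self) -> bool:
        # Needs a few points of history before it can extrapolate a path into the car's lane.
        return len(self.positions) >= 3 and self.positions[-1] > 0.9

# The pedestrian steadily crosses into the car's path while the perception system,
# never allowed to call them "pedestrian" away from a crosswalk, flips between guesses.
obj = TrackedObject(label="other")
guesses = ["other", "bicycle", "other", "bicycle", "other", "bicycle", "other", "bicycle"]
for step, guess in enumerate(guesses):
    obj.observe(guess, position=(step + 1) / len(guesses))   # 1.0 = directly in the car's path
    braking = "BRAKE" if obj.collision_predicted() else "no action"
    print(f"t={step * 0.5:.1f}s  classified as {obj.label:8s}  history={len(obj.positions)}  {braking}")
```

Because the label flips on every frame, the history never grows past a single point and the final line still reads “no action” even with the pedestrian directly in the car’s path.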

At the end of the day it is important to recognise what present-day AI is: automation, with the ability to self-correct and self-optimise in order to better achieve its goal. The spectre of the Singularity giving rise to human-like AIs is still far off. What this means is that mistakes made by people in the planning and development stages can now be applied on a much larger scale and at a much faster rate.

You can see some of these points distilled in this scene from the TV show Silicon Valley, where a programmer writes time-saving AI chatbots for himself and his slightly slower colleague to field the countless emails and help requests they get at work.