
Ethics in AI – 5 Responsibilities a Business has to the Consumer

“Killer Robots” scream the headlines. This unfortunate phrase is trotted out whenever controls or legislation on autonomous unmanned weapons are discussed, and its florid tone masks a real issue. UAVs have become ubiquitous in warfare, and it seems only a matter of time before the ability to fire without human input is added. A South Korean sentry turret built for the North Korean border was initially designed to be completely AI-controlled, until customer demand forced the manufacturer to change course.

At the moment the chain of responsibility for these weapons’ decisions is unclear; some would say intentionally so. If something went wrong, who would be to blame? The human operator? Their organisation? The software developers? The hardware manufacturers? This admittedly extreme example won’t apply to many businesses, but the starkness of what a mistake would mean helps highlight the ethical issues surrounding AI and automation. It is important to recognise what AI actually is: automation, with the ability for the program to self-correct and self-optimise so as to achieve its goals better. By recognising a system’s limitations, identifying its greatest risks, and not being afraid to challenge the elements involved, you can mitigate the risk of any product you use or sell that contains AI.

1. Install manual control breaks at the highest-risk points of the decision chain

Not to belittle our industry, but computer programs that can be proven 100% bug-free are rare enough to be considered worthy of scientific study. Add in a program’s ability to change its own behaviour, or worse, the need to parse our notoriously unordered and irregular world via image classification or navigation, and the potential for unforeseen results is high. The biggest issue is the speed with which software can propagate a mistake: automated stock traders can execute thousands of deals per minute, and on the occasions they have gone wrong, the effects travelled around the world before anyone could stop them. Developers should identify the points in the process where the most damage would be done if something went wrong, and install control breaks there, ideally in the form of a person giving manual sign-off. Users should be made keenly aware of what might happen should they click the wrong thing in the configuration.
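To make this concrete, here is a minimal sketch of what such a control break might look like, assuming a hypothetical order-execution pipeline in which anything above a set value must wait for a named person’s sign-off. The threshold, class and field names are all illustrative, not a prescribed design:

# Hypothetical sketch of a manual control break: low-value orders proceed
# automatically, anything above the threshold is halted until a named
# person approves it.
from dataclasses import dataclass
from typing import Optional

RISK_THRESHOLD = 10_000  # assumed order value above which sign-off is required


@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

    @property
    def value(self) -> float:
        return self.quantity * self.price


def execute(order: Order, approved_by: Optional[str] = None) -> None:
    if order.value > RISK_THRESHOLD and approved_by is None:
        # Break the automated chain and queue the decision for a human.
        raise PermissionError(
            f"Order for {order.quantity} x {order.symbol} (value {order.value:,.2f}) "
            "needs manual sign-off before execution."
        )
    print(f"Executed {order.quantity} x {order.symbol}, approved by {approved_by or 'auto'}")


execute(Order("ACME", 10, 50.0))                       # small order: runs automatically
execute(Order("ACME", 5_000, 50.0), approved_by="jo")  # large order: carries a named approver

The point is not the specific mechanism but that the break sits at the highest-risk point in the chain and forces a person back into the loop before a mistake can compound.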

2. Recognise user limitations

If you’ve ever provided software as a service, or even helped a friend fix their computer, you will know how clueless some users can be. This isn’t to put all the blame on the end user: a control scheme that seems strikingly obvious to the engineer may be anything but to consumers who haven’t spent the last year knee-deep in the product’s development.

Make it idiot-proof. Mark out the inputs and options clearly. Identify high-risk configuration options and put them behind confirmation messages. Have a full suite of logging services so issues can be traced back to their root cause. Provide documentation that your average user will actually be able to read and understand.
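As a rough illustration, the confirmation-plus-logging pattern might look something like the sketch below; the option names and log format are assumptions rather than any particular product’s API:

# Hypothetical sketch: a high-risk configuration change gated behind an
# explicit confirmation prompt, with every decision logged so problems can
# be traced back to their root cause.
import logging

logging.basicConfig(
    filename="config_changes.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

HIGH_RISK_OPTIONS = {"auto_retrain", "live_mode", "delete_history"}  # assumed option names


def set_option(config: dict, name: str, value, confirm=input) -> bool:
    """Apply a configuration change, demanding confirmation for risky options."""
    if name in HIGH_RISK_OPTIONS:
        answer = confirm(f"'{name}' is a high-risk option. Type 'yes' to set it to {value!r}: ")
        if answer.strip().lower() != "yes":
            logging.warning("Change to %s rejected at confirmation prompt", name)
            return False
    config[name] = value
    logging.info("Option %s set to %r", name, value)
    return True

Passing the prompt function in as a parameter also makes the gate easy to exercise in automated tests.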

3. Recognise data limitations

The software may be foolproof, but the same cannot be said of the data. Biases in the data the program learns from will propagate to its outputs. For example, Amazon had to scrap its AI recruiting tool after it started penalising CVs for containing the word “women’s”, as in “women’s chess club captain”. The training data did not contain the applicants’ gender; rather, the historical record of successful hires and rejected applicants it was learning from was itself biased. In the male-dominated IT industry, men had been recruited at a higher rate than women, so words unique to women’s CVs appeared far less often in successful hires than general words like “leadership”. The AI concluded that these words must be of low value and started penalising them.

Identify gaps in the data and apply weightings so that demographics are equally represented.
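One common way to do this is inverse-frequency weighting, where rows from under-represented groups receive proportionally larger sample weights. The sketch below assumes a pandas DataFrame with a hypothetical demographic column; real data will of course need more care than this:

# Sketch of inverse-frequency reweighting: each group's rows are weighted so
# that every group contributes equally to the overall training signal.
import pandas as pd


def balancing_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return one weight per row so that each group carries equal total weight."""
    counts = df[group_col].value_counts()
    weights = len(df) / (len(counts) * counts)  # the usual "balanced" weighting formula
    return df[group_col].map(weights)


# Example: an 80/20 split between two groups
df = pd.DataFrame({"gender": ["m"] * 80 + ["f"] * 20})
df["sample_weight"] = balancing_weights(df, "gender")
# Most training APIs accept such weights, e.g. model.fit(X, y, sample_weight=df["sample_weight"])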

4. Don’t assume it’s working right, even when it does

The sheer scale and apparent accuracy with which an AI can classify data can itself discourage looking for errors. Who’s going to argue with a program that can classify thousands of people’s faces with 98% accuracy via impenetrable mathematics? This is compounded by the fact that so-called black-box AIs cannot show their workings: typically the model projects the data into high-dimensional mathematical spaces to extract distinguishing features, but the result is far too abstract to interpret directly.

Resist the temptation to outsource your thinking to the program or to assume it knows what it’s doing. All it is really doing is sorting data into statistically distinct groups. Poke holes in the data. Question why the program uses algorithm X instead of algorithm Y: there is no one-size-fits-all algorithm, and the one best suited depends on the type of data.
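One practical way to challenge the choice of algorithm is simply to benchmark the candidates on your own data rather than accepting the default. A sketch using scikit-learn cross-validation, where the candidate models and placeholder data are assumptions for illustration:

# Sketch: compare candidate algorithms with cross-validation instead of
# trusting whichever one was picked first.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)  # placeholder data

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")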

5. Identify the risks to your consumers before you start processing their data

Your goals for your AI program may be the most harmless thing in the world, but the results can still have unintended consequences. For example, did your advert recommendation software just out someone’s medical condition based on their past internet searches?

The stakes have been raised by the GDPR’s restrictions on data profiling. The GDPR requires a Data Protection Impact Assessment before any automated decision-making or profiling is carried out, and profiling extends to any form of grouping user records by economic situation, health, personal preferences, location or movements. The assessment should cover the necessity and the proportionality of the AI solution, and you should always bear in mind how seemingly anonymous data can suddenly become not-so-anonymous under AI scrutiny.
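One lightweight precaution is a pre-processing gate that checks incoming columns against the profiling categories above and refuses to proceed until an assessment has been done. The sketch below is entirely hypothetical: the category-to-column mapping is illustrative and would need to match your own schema rather than any official list:

# Hypothetical sketch: flag dataset columns that touch GDPR profiling
# categories before any processing starts, so a DPIA can be carried out first.
PROFILING_CATEGORIES = {
    "economic situation": {"salary", "income", "credit_score"},
    "health": {"diagnosis", "medical_history", "prescriptions"},
    "personal preferences": {"search_history", "purchase_history"},
    "location or movements": {"gps_trace", "home_address", "movement_log"},
}  # assumed column names, not an official list


def profiling_risks(columns):
    """Return the profiling categories that the dataset appears to touch."""
    return {
        category: fields & set(columns)
        for category, fields in PROFILING_CATEGORIES.items()
        if fields & set(columns)
    }


risks = profiling_risks({"user_id", "salary", "gps_trace"})
if risks:
    raise RuntimeError(f"DPIA required before processing; profiling categories found: {risks}")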

In summary, AIs are a great tool, but they are not miracle cures that can be deployed unsupervised with minimal oversight. By recognising their limitations, understanding how they work, and identifying their risks, you can greatly reduce the chances of them misbehaving.