Most, if not all, of the people reading this article will have a certain level of understanding of artificial intelligence (AI) and its many different applications. Its rise and proliferation are unprecedented, even though the technology is still in the very early days of its evolution.
Blindly using data to make decisions will only proliferate poor ethics in AI
Recent research by a leading software solutions provider into attitudes and strategies towards AI found that 90% of the 600 business leaders polled have plans to implement AI in various parts of their business. 70% anticipate AI will lead to general productivity increases and 60% expect it to make their current workers more productive. So, whether motivated by a fear of missing out or by clear-eyed optimism, business leaders seem sold on AI’s promise.
Yet, there’s still some uncertainty and fear amongst business leaders – and a high proportion of businesses are yet to grasp this promise.
Applications of AI
It’s no secret that decisions made by AI systems will increasingly impact our lives both professionally and personally. These automated decisions could be related to mortgage applications, our career paths, the way we interact with social media and even how we drive cars. AI will undoubtedly bring a multitude of benefits to the way we live our lives, but big decisions like these can often come with an ethical price tag.
For example, let’s look at AI in the Human Resources (HR) field. Many businesses are starting to use AI and machine learning tools to screen the hundreds, if not thousands, of CVs they receive when hiring new employees. These tools promise to manage applications efficiently, saving time and human effort whilst still finding qualified and desirable candidates to fill the role.
However, even the best trained AI system will have its flaws. Not because it wants to, but because it’s been trained to – usually by us.
For example, suppose a company advertises a vacancy for a shop floor assistant, and its historical data shows that the large majority of people who have held the role are male. Trained on that data, the AI learns to engage with or respond mainly to male applicants, so female applicants have a higher chance of missing out on the position. This is essentially what happened at Amazon, where an AI-based tool used in part of its hiring process was found to discriminate against women.
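To make the mechanism concrete, here is a minimal sketch, with entirely made-up data and hypothetical feature names, of how a screening model trained on historically male-dominated hiring decisions picks up gender as a predictor:

```python
# Minimal sketch: hypothetical data and feature names throughout.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical hiring data in which most successful hires were male, so
# 'hired' correlates with 'is_male' even though gender says nothing
# about ability.
history = pd.DataFrame({
    "years_experience": [5, 3, 6, 2, 4, 5, 3, 6],
    "is_male":          [1, 1, 1, 1, 1, 1, 0, 0],
    "hired":            [1, 1, 1, 0, 1, 1, 0, 0],
})

X = history[["years_experience", "is_male"]]
y = history["hired"]
model = LogisticRegression().fit(X, y)

# Two otherwise identical applicants: the model scores the male
# applicant higher purely because of the skew in the historical data.
applicants = pd.DataFrame({"years_experience": [4, 4], "is_male": [1, 0]})
print(model.predict_proba(applicants)[:, 1])
```

Nothing in this code is malicious; the skew in the historical data alone is enough to tilt the scores against women.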
As a general-purpose technology, AI can be used in many ways, with businesses deciding how and where it is applied. However, with so few examples of how it can go wrong (at least in the public domain), businesses are blindly feeding AI systems data with little to no regard for the ethical implications.
Why ethical AI is so important
Ethics are critical to automated decision-making processes where AI is used. Without some consideration for how decisions are naturally made by humans, there is no way we can expect our AI systems to behave ethically.
Take the Volkswagen emissions scandal as another example.
Back in 2015, it emerged that thousands of diesel VWs had been sold across the globe with software that could sense test scenarios and change the engine’s performance to show reduced emissions. Once back on the road, they would switch back to ‘normal’, emitting nitrogen oxides at up to 40 times the legal limit.
In this case the test engineers were following orders, so it might have seemed unclear who was responsible. However, the judicial response was that the engineers could have raised the issue or left the organisation, so liability lay with them. The same could apply to data scientists in another scenario: if they realise that elements of a decision-making process could cause bias and harm, they have both the option and the obligation to flag it or depart.
How biases are introduced and who’s responsible
Although humans are the main source of these biases, bias can also live in data, and if we aren’t careful, AI will accentuate it. A lack of representation in the industry is also increasingly being cited as a root cause of biased data. And while the question of liability is still being widely debated, I believe it is important for business leaders to take more responsibility for bias that is unintentionally infused into an AI system.
As humans, we will always be prone to making mistakes, but unlike machines we have ‘human qualities’ such as consciousness and judgement that come into play to correct the mistakes made over time. However, unless these machines are explicitly taught that what they are doing is ‘wrong’ or ‘unfair’, the error will continue.
In my view, and I’m sure the view of many others, blindly allowing these AI systems to continue making mistakes is irresponsible. And, when things do go wrong, which they inevitably will, we need to ask ourselves who is liable. Is it the machine, the data scientist or the owner of the data? This question is still being debated within the industry, but as errors become more public and are investigated, we will start to learn and understand where responsibility lies.
How can we remove these biases?
To ensure decision-making is fair and equal for all, businesses need to get better at thoroughly investigating the decision-making process for bias on the part of the human, who will often introduce it unintentionally and unconsciously. This reduces or eliminates the chances of those biases being picked up by the AI and potential errors being proliferated. One simple audit of this kind is sketched below.
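As an illustration of what such an investigation could look like in practice, here is a minimal sketch, again with made-up data, of one common check: comparing selection rates across groups. Under the US ‘four-fifths’ guideline, a ratio below 0.8 is a red flag.

```python
# Minimal sketch with made-up data: audit selection rates by group.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F", "F", "F"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group, and the ratio of the lowest to the highest.
rates = decisions.groupby("gender")["selected"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")  # below 0.8 warrants scrutiny
```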
To support this, I’d like to see a benchmark set for businesses, either through a series of questions or a thorough checklist, to ensure any bias on the part of the human is eradicated at the outset. The checklist would ensure all decision-making is fair, equal, accountable, safe, reliable and secure, and that privacy is addressed.
The checklist could be used by in-house data science teams, especially as an induction tool for new recruits, or by the external companies to which businesses outsource the building and management of their AI systems. If manufacturing companies do decide to outsource aspects of their machine learning capabilities, I think this checklist is especially pertinent, as it acts as a kind of contract whereby any potential disputes over liability can more easily be resolved.
As we’re still in the early stages of AI, it is unclear whether these measures would be legally binding, but they may go some way towards proving – to an insurance company or a lawyer – where liability lies. If a manufacturing company can demonstrate whether the checklist was followed, whether the work was kept in-house or outsourced, it is better protected than it might have been before.
Another part of this benchmarking exercise could be to ensure all data scientists within a business, whether they’re new to the role or experienced practitioners, take part in a course on ethics in AI. This would also help people understand, or remember, the need to remove certain parameters from the decision-making process – gender, for example. That way, when building a new AI system that takes male and female activity into account, they’ll know to deactivate the gender feature so the system is gender neutral.
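As a small illustration of what ‘deactivating’ such a feature could mean in code, here is a minimal sketch with hypothetical column names:

```python
# Minimal sketch, hypothetical column names: exclude the protected
# attribute before training so the model cannot use it directly.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applications = pd.DataFrame({
    "years_experience": [5, 3, 6, 2, 4, 1],
    "gender":           ["M", "M", "F", "F", "M", "F"],
    "hired":            [1, 0, 1, 0, 1, 0],
})

PROTECTED = ["gender"]  # parameters the ethics checklist says to remove

X = applications.drop(columns=PROTECTED + ["hired"])
y = applications["hired"]
model = LogisticRegression().fit(X, y)

# Caveat: dropping the column alone does not guarantee neutrality, as
# other features can act as proxies for it, so audits are still needed.
```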
Conclusion
This article isn’t designed to scare people; rather, it is here to urge business leaders to stop overlooking potential biases in the decision-making process, so that decisions are fair and equal for everyone.
There’s always a chance that human bias will creep in, but it is down to us to take the necessary steps to ensure processes are fair and transparent. And the quicker and more efficiently we set up a benchmark from which to work, the less likely we are as humans to be liable for the errors.