But AI adoption is being driven by the desire to maintain competitive advantage, and different industries are using the technology in different ways. Two of the biggest drivers are improving quality and reducing cost. CFMS demonstrated that AI can significantly reduce inspection times and improve fault-identification accuracy when checking aircraft for defects, and this is just one use case.
Other examples are in retail, where companies are using AI to improve inventory forecasts and automate customer operations, while financial organisations are using AI to improve the accuracy and speed of fraud-detection systems.
Different market sectors are embracing AI at different speeds. Industries such as aerospace and manufacturing are ramping up their use of AI, but others will take longer. Organisations that have already embraced digital systems are better placed to become early adopters, as they have the data on which to build AI systems.
The tasks most suited to automation are repetitive ones involving a lot of data that require less than a second of human thought to perform. The amount of data needed depends on the task itself. Generally speaking, though, AI models are data-hungry, so the business processes most likely to benefit in the short term are those that are already automated, both in how the task is performed and in how data is collected and stored.
There is a lot of misunderstanding, but it is improving. AI or a robot won't replace a human, but it can make humans more efficient. Many people fear AI will make them redundant, but real-world examples show this won't happen soon. Autonomous vehicles can move items from one place to another, but human drivers still manage the overall delivery process more effectively, from collection to delivery, given how many unexpected situations can arise.
Another example of the difference between human and AI capabilities is that we can read IKEA furniture assembly instructions, which are deliberately simple and contain no text, and ‘convert’ the basic sketch images into ‘actions’. No AI in the world can currently do this!
Current AI-powered robots can only perform tasks they have been explicitly trained to do in a highly controlled environment, such as an Amazon warehouse. Yet the public perception is that human drivers will shortly be replaced. This is quite a few years off, as the scope of AI is currently limited to “narrow AI” and we are some time away from “Artificial General Intelligence” where robots can learn on their own and make decisions when faced with novel situations.
Most of the organisations that CFMS works with to adopt AI face the practical challenge of insufficient AI skills. Without the right skills it is difficult to implement a proof of concept that validates a business case, and UK organisations will find it harder to adopt AI and reap the benefits. However, over time, as the pervasive benefits of AI are seen in seemingly unlikely sectors such as agriculture, collective know-how and the maturity of digital infrastructure will improve, paving the way for widespread adoption of AI across many companies.
AI will undoubtedly bring a multitude of benefits to the way we live our lives, but big decisions like these can often come with an ethical price tag. Even the best-trained AI system will have its flaws – not because it wants to, but because it has been trained to – usually by us. For example, suppose a company advertises a vacancy for a retail shop-floor assistant, and its historical data shows that the large majority of people who have held this role are female. An AI tool trained on that data is likely to engage with or respond mainly to female applicants, so male applicants have a higher chance of missing out on the position. A version of this scenario occurred at Amazon, whose experimental AI recruitment tool, trained on male-dominated historical data, was found to be discriminating against women.
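The mechanism behind this kind of bias can be sketched in a few lines. The data below is entirely hypothetical, and the "model" is a deliberately naive scorer rather than any real recruitment system, but it shows how a tool trained on skewed historical outcomes reproduces that skew when scoring otherwise identical applicants:

```python
# Hypothetical historical hiring records: (gender, hired).
# The skew reflects who happened to apply in the past,
# not anything about job performance.
history = (
    [("F", True)] * 90 + [("F", False)] * 10 +
    [("M", True)] * 10 + [("M", False)] * 40
)

def hire_score(gender):
    """Naive 'model': score an applicant purely by the
    historical hire rate of applicants of the same gender."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# Two applicants with identical qualifications get very
# different scores, because the model has learned the skew.
print(hire_score("F"))  # 0.9
print(hire_score("M"))  # 0.2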
To avoid issues of bias or error, businesses need a benchmark for AI, whether a series of questions or a thorough checklist, to ensure that any bias on the part of the humans involved is identified and removed at the outset. Such a checklist would help ensure all decision-making is fair and equal, accountable, safe, reliable, secure and respectful of privacy.
As humans, we will always be prone to making mistakes, but unlike machines we have ‘human qualities’ such as consciousness and judgement that come into play to correct the mistakes made over time. However, unless these machines are explicitly taught that what they are doing is ‘wrong’ or ‘unfair’, the error will continue.
Blindly allowing these AI systems to continue making mistakes is irresponsible. And when things do go wrong, which they inevitably will, we need to ask ourselves who is liable: the machine, the data scientist or the owner of the data? This question is still being debated within the industry, but as errors become more public and are investigated, we will start to learn and understand where responsibility lies.