Explainable AI (XAI) for Successful Industrial Application of AI

The manufacturing industry is constantly exploring ways of improving process efficiency and reducing costs. The emergence of Industry 4.0 and data-rich cyber-physical systems has opened up new opportunities to harness this data with Artificial Intelligence (AI) techniques in support of these business objectives.

However, in many instances, manufacturing also has a requirement for the products, and the processes generating the products, to be certified. This means that an AI system making automated decisions in the business must be able to “explain its decision-making basis” and to demonstrate that its results are traceable and reproducible. In such applications, the field of Explainable AI (XAI) is very important for the successful implementation of the AI system.

‘Explainable AI’ or ‘XAI’ is a relatively new field of AI which is growing rapidly as general AI technology matures from the “demonstrate the value” phase to the “operational” phase in industrial application environments. The aim of XAI is to achieve a shared human and machine understanding of the business process, i.e. for us to know what the AI system has really learnt. This then enables us to understand certain aspects of the AI system, e.g. why a particular prediction was made, whether the prediction is reliable, what the “stable operating conditions” for the AI system are, and when it is likely to fail. Successfully answering these questions enables us to develop trust and confidence in the AI system’s decision-making capabilities over a period of time.

Although the need for XAI is very clear, achieving this goal can be challenging. The main reasons are the complexity of AI systems and the difficulty of summarising their behaviour in a way that aids human intuition. I can recall that, for one of the first AI systems I developed for an aerospace manufacturing process, I was asked to document its decision-making process. I generated a 50-page flow chart with all the steps the AI system was following to make its decisions. I soon realised that although this document was “human readable”, the decision-making process it captured was far too complex for “easy human comprehension”!

However, significant research is currently ongoing into how to effectively summarise the decision-making process of an AI system. The summarisation can be presented at two different levels. The first is the global level, where we want an explanation of the AI system’s behaviour with respect to the entire dataset. A global explanation is typically achieved with decision tree-based models, by extracting the AI system’s decision drivers and plotting them as a descending bar chart. This kind of representation is easily understood by a variety of stakeholders in the business.

The second type of summarisation is at the local level, where we try to understand the prediction of an AI system for one particular data point. This kind of explanation is especially important in a regulated industry, or in the context of GDPR, where one has the right to know the basis of a decision made by computer software and to eliminate the “computer says no” scenario. With a local explanation, we get to know the key drivers behind a particular prediction or decision, and this gives us the option to appeal against the decision if we think the decision drivers don’t apply in a particular case. I have had personal experience of this scenario when a bank using an automated system refused my remortgage application. When I asked about the drivers of the decision, initially I was told “computer says no”; on further appeal I was told that the decision driver was incorrect (“if the customer has lived at their current address for less than 3 years then reject the application!”) and my application was subsequently approved!
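To make the two levels concrete, here is a minimal Python sketch using scikit-learn and the shap library. The feature names, synthetic data and model are purely illustrative placeholders (not a CFMS model or dataset): a tree-based model is summarised globally as a descending bar chart of its feature importances, and locally as the per-feature contributions to one particular prediction.

```python
# A minimal sketch of global and local explanations for a tree-based model.
# Feature names and data are made-up placeholders, not a real manufacturing dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["temperature", "pressure", "feed_rate", "tool_wear", "humidity"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic target, deliberately dominated by the first two features.
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Global explanation: decision drivers for the whole dataset, plotted in descending order.
order = np.argsort(model.feature_importances_)[::-1]
plt.bar([feature_names[i] for i in order], model.feature_importances_[order])
plt.ylabel("Relative importance")
plt.title("Global decision drivers")
plt.tight_layout()
plt.show()

# Local explanation: drivers behind one particular prediction (pip install shap).
import shap
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # one row -> one contribution per feature
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.3f}")
```

For this synthetic data the chart and the printed contributions would simply show temperature and pressure dominating, mirroring how the target was generated; in a real application the same two views become the global and local “decision drivers” described above.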

The challenges around XAI become even more compounded with neural network-based deep learning AI systems. These are typically used in audio and video processing applications, e.g. automated visual inspection of a manufacturing process. Because of the complex nature of the neural network structure, getting an understanding of the function of each element of the network is not straightforward. However, some techniques are available to address these challenges. One such technique involves converting the AI system’s decision-making elements into a heat map and overlaying that heat map on the input image used by the AI system, for example to find anomalies in a manufacturing process. The red areas of the heat map are the key drivers behind the decision making of the AI system. This kind of heat map is useful to the Data Scientist for diagnosing the AI system’s performance and taking corrective action as necessary (including cases where the predictions are correct but for the wrong reasons, and hence the system needs correcting). The heat maps are also useful to the person monitoring the AI system during its commissioning phase, to approve or reject the decisions made by the AI system.
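As an illustration of the heat map idea, below is a minimal Grad-CAM-style sketch in Python using TensorFlow/Keras. The small CNN and the “inspection image” are random placeholders standing in for a real visual inspection model, so the heat map itself is meaningless; the point is the mechanics of weighting the last convolutional feature maps by their gradients and overlaying the result on the input image.

```python
# A minimal Grad-CAM-style sketch: overlay a decision heat map on the input image.
# The CNN and the inspection image below are random placeholders, not a real
# defect-detection model; only the technique is being demonstrated.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Toy stand-in for an automated visual inspection model (defect / no defect).
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same", name="last_conv")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

image = np.random.rand(1, 64, 64, 3).astype("float32")  # placeholder inspection image

# Gradient of the prediction with respect to the last convolutional feature maps.
grad_model = tf.keras.Model(model.inputs, [model.get_layer("last_conv").output, model.output])
with tf.GradientTape() as tape:
    conv_maps, prediction = grad_model(image)
    score = prediction[:, 0]
grads = tape.gradient(score, conv_maps)

# Weight each feature map by its average gradient, sum, and keep positive evidence.
weights = tf.reduce_mean(grads, axis=(1, 2))
cam = tf.nn.relu(tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1))[0]
cam = cam / (tf.reduce_max(cam) + 1e-8)
cam = tf.image.resize(cam[..., None], (64, 64))[..., 0].numpy()

# Overlay the heat map on the original image; red regions mark the decision drivers.
plt.imshow(image[0])
plt.imshow(cam, cmap="jet", alpha=0.4)
plt.axis("off")
plt.show()
```

In a real deployment the same overlay would be generated from the trained inspection model and the actual part image, giving the Data Scientist or the commissioning engineer a direct view of which regions drove the anomaly call.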

CFMS has developed a number of AI systems for industrial application and has tried to ensure that aspects of XAI are incorporated in all of them. Examples of such applications include:

  • An AI system that not only predicts the performance of an aircraft wing structure but also provides the design drivers behind the predicted performance.
  • An AI system that not only predicts an anomaly inside an aircraft wing structure but also points to its location with an overlaid heat map.

We believe that the more such examples we demonstrate going forward, the greater the trust and confidence in AI systems will become, leading to the successful adoption of robust AI systems in industry.