Applicability of Artificial Intelligence for Industrial Inspection

A large number of visual inspection tasks are routinely carried out in the aerospace industry, yet these activities are largely manual, involve recurring costs and can have health and safety implications. Kiran Krishnamurthy, artificial intelligence domain specialist at CFMS, looks at how artificial intelligence can be used to automate these processes through three custom-made demonstrators.

Artificial intelligence (AI) has the potential to transform industrial inspection. Within typical inspection workflows, an assessment is carried out to verify whether a product, component, structure or process meets specified requirements. This decision is usually made by a skilled engineer, but because of its manual nature this type of inspection is often associated with high recurring costs, increased susceptibility to human error due to fatigue, and high operational risk: the industrial asset may, for example, be difficult to access because of a confined space or a hazardous environment.

The Centre for Modelling & Simulation (CFMS) has produced three demonstrators that aim to disseminate emerging AI technologies across the aerospace industry by showing how industrial inspection tasks can be automated. It is hoped this kind of automation will promote a culture of data curation within the industry, in turn leading to the identification of further opportunities to adopt AI technologies to automate business processes.

The approach underlying all three demonstrators is deep learning, which uses artificial neural networks loosely inspired by biological ones: the connectivity pattern between neurons resembles the organisation of the animal visual cortex. A neural network consists of a number of connected layers, each containing a number of neurons, with connections passing signals from one neuron to another. The crux of training a neural network is finding out (i.e. self-learning) the strengths of these connections (known as 'weights') from training examples. This process is analogous to finding the coefficients of an equation that maps input variables to corresponding outcomes in a typical empirical dataset.
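To make the weight-learning idea concrete, the following is a minimal sketch (illustrative only, not CFMS code) using the PyTorch library: a single-layer network is trained on a synthetic dataset whose outcomes are generated from known coefficients, and training recovers weights close to those coefficients.

```python
import torch
import torch.nn as nn

# Synthetic 'empirical dataset': outcomes are a hidden linear map of the
# inputs, so a trained network should recover weights close to the true
# coefficients [2.0, -3.0].
torch.manual_seed(0)
true_coeffs = torch.tensor([[2.0], [-3.0]])
inputs = torch.randn(100, 2)
outcomes = inputs @ true_coeffs + 0.01 * torch.randn(100, 1)

# A single-layer network: its two weights play the role of the
# equation's coefficients.
model = nn.Linear(2, 1, bias=False)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Training loop: repeatedly nudge the weights to reduce prediction error.
for epoch in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), outcomes)
    loss.backward()   # compute how each weight should change
    optimiser.step()  # update ('self-learn') the weights

print(model.weight.data)  # approaches the true coefficients [2.0, -3.0]
```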

Since all the demonstrators are based on image processing, a specific type of neural network called a Convolutional Neural Network (CNN) was used to recognise features and objects in the images. One of the key advantages of a CNN is that it uses fewer parameters, which makes training computationally efficient. The video footage was split into individual frames and labelled with the areas of interest for the inspection task. The labelled frames were then used to train the CNN, i.e. to 'self-learn' the rules for feature identification and to recognise objects and/or defects irrespective of their position or orientation in a real-world inspection scenario.
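As an illustration of this pipeline, here is a hedged sketch, again in PyTorch, with OpenCV used for frame extraction. The video path and labels are placeholders rather than the demonstrators' actual data, and the network is deliberately small.

```python
import cv2
import torch
import torch.nn as nn

# Split inspection footage into individual frames (hypothetical file path).
def extract_frames(video_path):
    frames, capture = [], cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame = cv2.resize(frame, (64, 64))
        frames.append(torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0)
    capture.release()
    return torch.stack(frames)

# A minimal CNN: convolutional layers share weights across the image,
# which is why CNNs need far fewer parameters than fully connected networks.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes, e.g. 'no defect' vs 'defect'
)

frames = extract_frames("inspection_footage.mp4")    # hypothetical file
labels = torch.zeros(len(frames), dtype=torch.long)  # placeholder labels

# Standard supervised training loop over the labelled frames.
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimiser.step()
```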

The first demonstrator relates to a typical manual inspection of an aircraft wing, carried out either as part of a pre-flight inspection at the airport or a maintenance check in an aircraft hangar. The purpose of inspection in these cases is to detect imperfections such as foreign object damage, corrosion, cracks, dents and scratches; the typical challenge is accessibility, owing to the size of the aircraft. The demonstrator was developed in collaboration with the National Composites Centre (NCC), using its drone to fly over a full-scale Airbus A400M wing at Airbus UK. The camera mounted on the drone recorded video footage of the wing's condition, which was then used to train the CNN to detect defects (e.g. corrosion, dents) and features (bolts, brackets, etc.).
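Once trained, such a model could be run over new drone footage to flag frames that may contain a defect. The sketch below reuses the hypothetical model and extract_frames from the previous example; the file name and the 0.5 threshold are assumptions for illustration.

```python
import torch

# Run the trained frame classifier over a drone survey and report
# frames that appear to contain a defect (class 1 in this sketch).
model.eval()
with torch.no_grad():
    frames = extract_frames("drone_wing_survey.mp4")  # hypothetical file
    scores = torch.softmax(model(frames), dim=1)
    for index, score in enumerate(scores):
        if score[1].item() > 0.5:  # assumed decision threshold
            print(f"Frame {index}: possible defect "
                  f"(confidence {score[1].item():.2f})")
```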

In the second demonstrator, a smartphone was used to capture video footage of the inside of the A400M wing box. This area needs to be routinely inspected during aircraft maintenance cycles, but it is a confined space, which makes inspection physically challenging. The footage from the smartphone camera was again used to train the CNN to detect the defects outlined above. In future, organisations could deploy a crawling robot, equipped with a camera and an AI module, to automate inspection in such cases.

Finally, the third demonstrator relates to the inspection aspect of the composite manufacturing process involving an Automated Fibre Placement (AFP) machine. This process needs to be monitored to ensure there are no excessive gaps between composite tapes (tows), no undesired overlaps, and no defects such as wrinkles. These monitoring and documenting steps are currently manual, which not only involves high recurring costs but is also prone to variation in documentation, usually arising when different inspectors use different wording or language, which in turn can lead to information being interpreted differently. The good news is that most of these machines are equipped with a video camera capable of recording footage of the ongoing manufacturing process. However, the cameras are used infrequently or switched off altogether, because the benefits of automating the inspection process using AI are not widely known. Hence, in this demonstrator, video footage from a fixed camcorder was used to train the CNN to detect defects such as tow gaps or tape cutting failures, as well as features of the untrimmed composite component such as 'bat ears'.
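For the AFP case, the same kind of classifier could in principle monitor the fixed camera feed live during manufacture. The sketch below assumes the CNN from the earlier example has been retrained with three output classes; the class names and camera device index are illustrative, not the demonstrator's actual configuration.

```python
import cv2
import torch

class_names = ["nominal", "tow gap", "tape cutting failure"]  # assumed classes
capture = cv2.VideoCapture(0)  # fixed camcorder feed, device index assumed
model.eval()
with torch.no_grad():
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frame = cv2.resize(frame, (64, 64))
        tensor = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
        prediction = model(tensor.unsqueeze(0)).argmax(dim=1).item()
        if prediction != 0:  # anything other than 'nominal' raises an alert
            print(f"Alert: {class_names[prediction]} detected")
capture.release()
```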

All three demonstrators developed at CFMS aim to raise awareness and stimulate the identification of further opportunities for AI technologies, which could revolutionise inspection activities across the aerospace industry. As an independent, not-for-profit specialist centre, CFMS will continue to assess the latest AI developments and how they can help organisations streamline processes, boost efficiency and save time and cost.

Contact CFMS today to discover how AI technologies can work for you.
