“Technology now just allows you to make a bad mistake faster” – the importance of vigilance using AI in MRO

Certain cautions need to be applied to the use of Artificial Intelligence in aircraft maintenance: an embedded human error can be quickly scaled, and ultimately the responsibility lies with us when things go wrong.
As an IT journalist quipped in the early 1990s: "A computer lets you make more mistakes faster than any invention in human history, with the possible exceptions of handguns and tequila."
The key words in this quote, in regard to aviation, are not "computer" and "make more mistakes than any invention in human history". Rather they are "faster" and "lets you".
AI-powered tools enable Maintenance, Repair, and Overhaul (MRO) technicians to do their job at a much quicker rate, and they will yield vast savings – in cost, time and labour resources.
But for all the safety protocols and procedures followed by MRO operators, human mistakes will occur – rarely, but inevitably – setting off sequences that can potentially end badly, although more often they are detected and resolved.
Why using AI in MRO carries additional risks
With AI, a human error can dramatically accelerate the speed at which a mistake turns into a real-world consequence.
For example, when a human error is embedded into an AI system, it can be quickly scaled and then executed consistently. In this way, an isolated mistake can become a systemic issue.
And make no mistake: AI-powered predictive maintenance and data analytics are already commonly used by MRO technicians servicing commercial aircraft, who rely on these tools to diagnose problems and help inform the decision-making process.
It’s crucial that technicians understand the data on which the AI-powered tools base their decisions – their years of experience could cast the findings in a different light. Advanced AI is, after all, still learning.
The AI-powered tools must be treated with extra vigilance.
So how can MRO technicians ensure they don’t make a serious mistake whilst using AI-powered tools?
Much will depend on AI's role in the eye of the technician
The key to successfully using AI is in the nature of the relationship.
Instead of regarding the AI as an omniscient, omnipotent savant – indefatigable, indispensable and infallible – MRO technicians should see it more as an assistant that needs supervision.
As Bill Gates predicted in 2023: "In the near future, anyone who’s online will be able to have a personal assistant powered by Artificial Intelligence that’s far beyond today’s technology."
All communication with AI should be carefully considered: both the prompt and the results. The input of the technician’s expertise will be just as valid, and often the best results will be collaborations.
AI literacy across the workforce is therefore paramount. Preferably, the whole company should undergo training, as continuous human oversight is crucial.
As McKinsey & Company's report 'The state of AI in 2023' concluded, for example, the businesses enjoying the greatest impact from AI were those investing in AI fluency and collaboration across their workforce, not just in technology.
AI findings need to be scrutinised, questioned if need be and validated before they are acted upon.
Ultimately, the findings of the AI can raise your efficiency levels, detecting patterns in data that human scrutiny simply cannot spot – but it cannot replace one of your colleagues.
Responsibility for its performance lies with the human who uses it.
SATAIR TAKEAWAY
Maybe it was via a conference, a webinar or an episode of 'The Big Bang Theory', but we've all heard how AI lets you make bad mistakes faster – but normally the quote is served up with a number of provisos. The most important of these is to remember that whatever happens, you are in charge. You are responsible for ensuring the AI is suitably utilised, its results adequately scrutinised, and any potential risks completely neutralised. After all, there has always been an element of risk in aviation, and AI is simply rewriting the rulebook a little.