All We Have to Fear is … AI?

Artificial intelligence (AI) is being talked about everywhere now, whether in relation to business, investing or the future, or, more specifically, how AI will impact our corner of the world. How will AI impact the world of aviation maintenance? Some say it will do so by enabling proactive, data-driven approaches that improve aircraft safety, reliability and cost-effectiveness. As the technology continues to advance, we can expect more sophisticated AI applications to be developed rapidly for use in aviation maintenance.

What is AI? AI refers to computer systems that can perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, learning from experience and making decisions. AI encompasses a broad range of techniques and technologies, including machine learning, natural language processing, computer vision and robotics.

Some have asked, isn’t it the same as data analytics? Not exactly. Data analytics focuses on extracting insights and knowledge from data through techniques such as statistical analysis, data mining and predictive modeling. It involves processing large volumes of data to identify trends, patterns and correlations that can inform decision-making and drive business outcomes.

AI does rely on data analytics techniques for tasks such as learning from data or making predictions, but it extends beyond data analytics to include capabilities such as reasoning, planning and perception. In other words, AI encompasses data analytics as one component of its broader scope.
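To make the distinction concrete, here is a minimal, purely illustrative Python sketch: the first half is plain data analytics (a trend line over past vibration readings), while the second half is a simple machine-learning step, one small piece of what falls under AI, that estimates the probability of a near-term failure. All the numbers, features and failure labels are invented for illustration and are not real maintenance data.

```python
# Illustrative sketch only: the same engine-vibration data seen two ways.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical vibration readings (mm/s) from recent flights for one component.
vibration = np.array([2.1, 2.3, 2.2, 2.6, 2.9, 3.4, 3.8, 4.5])

# --- Data analytics: describe what has already happened ---
# A simple linear trend line answers "is vibration rising, and how fast?"
flights = np.arange(len(vibration))
slope, intercept = np.polyfit(flights, vibration, 1)
print(f"Vibration trend: +{slope:.2f} mm/s per flight")

# --- Machine learning (one component of AI): predict what happens next ---
# Train on past components: mean vibration and trend slope versus whether the
# part actually failed soon afterward (these labels are invented).
X_history = np.array([[2.0, 0.01], [2.4, 0.05], [3.1, 0.20],
                      [3.6, 0.30], [2.2, 0.02], [4.0, 0.35]])
y_failed = np.array([0, 0, 1, 1, 0, 1])
model = LogisticRegression().fit(X_history, y_failed)

# Score the current component using the features computed above.
risk = model.predict_proba([[vibration.mean(), slope]])[0, 1]
print(f"Estimated probability of near-term failure: {risk:.0%}")
```

In practice, such a model would be trained on far larger, validated fleet data sets, which is exactly why the data-quality concerns discussed below matter so much.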

AI may bring significant advantages to aviation maintenance, but there are concerns about potential risks and challenges. One of those concerns is data quality and reliability. AI systems depend on accurate, high-quality data to operate effectively; errors or biases in the data can lead to incorrect predictions or decisions, posing safety risks in aviation maintenance. AI has already caught some people off guard in other business areas, but here is one aviation-adjacent example.

Recently, a passenger sued Avianca Airlines, claiming to have been injured by a drink cart during a flight. When the airline asked the court to dismiss the case, the passenger’s lawyers filed a brief in opposition that cited precedent cases which, on closer examination, proved to be non-existent. How did this happen? One of the passenger’s attorneys admitted to using ChatGPT to conduct his legal research. He had even asked ChatGPT whether the cases it cited were real, and ChatGPT replied that they were. In fact, ChatGPT had made them up.

Another concern is cybersecurity. AI systems used in aviation maintenance could be vulnerable to cyber threats, such as hacking or malware attacks, which could compromise the integrity of data and operations. This is a real threat that needs to be addressed, but it is one that already exists across all technology.

The use of AI has also raised ethical concerns related to privacy, accountability and transparency. Ensuring ethical AI practices in aviation maintenance is essential to maintaining trust and safety, and public trust is crucial to our industry. Although EASA has published a document called “Artificial Intelligence Roadmap 2.0” that “sets the pace for conceptual guidance deliverables and anticipated rulemaking activities … [and] serves as a basis for discussions with all of the Agency’s stakeholders,” a search of the FAA website yielded no policy guidance at all on the topic.

Then there is the possibility of job displacement. The automation of some maintenance tasks through AI may lead to job displacement, but AI is more likely to augment human capabilities than to replace them entirely, and it may create new roles and opportunities in the field.

There is some fear of the unknown surrounding AI. AI represents a new, rapidly advancing technology. Portrayals in pop culture have shown AI in dystopian, apocalyptic scenarios where intelligent machines rise up against humans. Fear of the unknown can lead to anxiety about the potential implications of AI, but as President Franklin D. Roosevelt once said, “The only thing we have to fear is fear itself.” However, legitimate concerns about the implications of AI in aviation maintenance do exist.

AI systems can operate autonomously and make decisions based on complex algorithms and data analysis. That loss of human control over decision-making can feel unsettling, particularly in a critical domain like aviation. People may also fear the misuse of AI systems for surveillance, manipulation, discrimination or other unethical purposes, especially when decisions with significant consequences are delegated to algorithms.

Loss of human control over decision-making in critical domains like healthcare, transportation or national security should be examined carefully, and systems should be designed with redundancies and checks and balances.

Then there is the pace of AI development, which could lead to exponential growth. The potential for AI to surpass human intelligence in the future has raised concerns about our ability to control or contain AI systems once they reach a certain level of sophistication.

While these fears are understandable, it is essential to approach AI development and deployment with a balanced perspective, acknowledging both the benefits and the risks. Responsible AI development involves addressing concerns related to ethics, transparency, accountability and impact on society to ensure that AI technologies serve the best interests of people.

Even without policy guidance from aviation administrative agencies, proactive measures can be taken to mitigate the risks and maximize the benefits of using AI technology in our industry. What is your company doing right now to ensure it is ready and protected as the use of AI becomes de rigueur?