AI in aerospace: who’s liable when the algorithm crashes the plane?

Richard van Hooijdonk
Autonomous flight systems can react faster than any human pilot. But when they make split-second decisions that breach airspace rules or injure passengers, who pays the price?

Here’s a scenario: a commercial flight from Frankfurt to London hits unexpected turbulence at 10,700 metres. Sixteen passengers are injured, three of them seriously. The subsequent investigation reveals that an AI-powered route optimisation system had processed incoming weather data and recommended a flight path that, while shorter and more fuel-efficient on paper, cut directly through an atmospheric disturbance the system had either misinterpreted or deprioritised. The airline points to the AI vendor’s assurances about the system’s validation. The vendor points to gaps in the training data they received. The regulator questions why the airline deployed the technology without more robust oversight protocols. Lawsuits follow. Insurance premiums spike. And the question lingers: when an algorithm makes a decision that harms people, where does accountability actually live?

The aviation industry has always been a proving ground for advanced technology. Fly-by-wire flight controls, satellite navigation, predictive maintenance algorithms – the industry has a long history of embracing technologies that seemed radical at the time and making them routine. But AI is different enough that many executives are proceeding with caution. Unlike previous innovations, where failure modes could be mapped and tested exhaustively, AI systems can behave unpredictably when encountering ‘edge’ cases that exist outside their training parameters. An aircraft that relies on AI for critical functions becomes, in a sense, less knowable than its purely mechanical or even conventionally programmed predecessors. That uncertainty sits uncomfortably in an industry where every last system must perform reliably across millions of flights, in countless conditions, with no room for the kind of iterative learning that works in less consequential domains.

The current state of the aviation industry

Could AI offer a solution for growing pilot shortages and help airlines meet the increased demand for travel?

When the pandemic grounded fleets in 2020, analysts predicted a long and grinding recovery for air travel. Some forecasts suggested that it would take years, perhaps even a decade, before passenger numbers returned to 2019 levels. As it turns out, they were wrong. By 2024, passenger traffic at European airports had climbed 7.4% year on year, surpassing even pre-pandemic levels. Global travel demand is now projected to grow at 4.3% annually over the next two decades. Airlines that had prepared for a gradual rebuilding suddenly found themselves scrambling to add flight frequencies and expand routes to keep pace with demand they hadn’t quite anticipated. Meeting that demand would be challenging enough under normal circumstances. The growing pilot shortage, however, makes it considerably harder.

The pilot crisis

Training a commercial pilot requires significant time and money. In the US, the process takes several years and can cost upwards of US$100,000 – a serious investment that discourages many potential candidates before they even begin. Additionally, the Federal Aviation Administration (FAA) requires first officers at scheduled passenger airlines to log 1,500 flight hours before they can hold an Airline Transport Pilot certificate, which typically adds another year or two on top of the initial training period. To make matters worse, the pipeline of new pilots can’t keep pace with departures. The FAA projects that approximately 4,300 pilots will retire annually through 2042, and similar trends are playing out in Europe. Boeing estimates that the global aviation industry will need 660,000 new commercial pilots by 2044.

Some airlines have responded by accelerating recruitment and relaxing certain hiring criteria. Legacy carriers that once insisted on fluency in a national language, for example, are now softening that requirement to widen the applicant pool. Even with these adjustments, the numbers don’t add up, and the shortfall continues to widen. This has unsurprisingly led some in the industry to ask whether AI and increased cockpit automation might help bridge the gap. But while other sectors have eagerly adopted AI to address labour constraints, airlines continue to move with notable caution, and adoption remains limited. Dan Bubb, who teaches commercial aviation at the University of Nevada, Las Vegas, doesn’t believe that AI can adequately replace human pilots: “I have no doubt that AI will make air travel more efficient, in terms of time and fuel burn, but not replace humans.”

AI in the cockpit

Despite the scepticism, some industry players are moving forward with their AI experiments anyway. GE Aerospace recently partnered with Merlin Labs to integrate AI directly into its avionics systems, a move that signals genuine confidence in autonomous flight technology. Merlin has been quietly testing its ‘aircraft-agnostic’ AI since 2019, and the technology has reached an impressive level of sophistication. The Merlin Pilot can listen to air traffic control instructions through natural language processing and translate those verbal commands straight into flight actions. It operates entirely onboard without relying on GPS or ground links, using its own sensors to make real-time decisions. For now, human pilots remain firmly in control, supervising the system and ready to override when necessary. But GE and Merlin are already looking ahead to single-pilot operations and – maybe eventually – to fully autonomous flights where humans monitor from the ground.
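To make the concept concrete, here is a minimal, hypothetical sketch of the general pattern: turning a transcribed controller instruction into a structured command that a flight-management layer could act on. It is not Merlin’s implementation; the phrase patterns, data fields, and fallback behaviour are invented purely for illustration.

```python
# Toy illustration only: turning a transcribed ATC instruction into a
# structured command that a flight-management layer could act on. The phrase
# patterns, dataclass fields, and fallback behaviour are invented for this
# example and do not reflect Merlin's actual system.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlightCommand:
    callsign: str
    action: str   # e.g. "climb", "descend", "turn"
    target: int   # altitude in feet or heading in degrees

def parse_atc_instruction(transcript: str) -> Optional[FlightCommand]:
    """Extract a single command from a transcribed ATC phrase, or return None."""
    text = transcript.lower().strip()

    # Altitude clearance, e.g. "lufthansa 123, climb and maintain flight level 350"
    m = re.match(r"(?P<callsign>[\w ]+?), (?P<verb>climb|descend) and maintain flight level (?P<fl>\d+)", text)
    if m:
        return FlightCommand(m["callsign"], m["verb"], int(m["fl"]) * 100)

    # Heading instruction, e.g. "lufthansa 123, turn left heading 270"
    m = re.match(r"(?P<callsign>[\w ]+?), turn (?:left|right) heading (?P<hdg>\d+)", text)
    if m:
        return FlightCommand(m["callsign"], "turn", int(m["hdg"]))

    # Unrecognised phraseology: defer to the human pilot rather than guess.
    return None

print(parse_atc_instruction("Lufthansa 123, climb and maintain flight level 350"))
# FlightCommand(callsign='lufthansa 123', action='climb', target=35000)
```

A production system would of course need to handle full ICAO phraseology, read-back confirmation, and the rejection of ambiguous or conflicting clearances, which is precisely where the accountability questions discussed below begin.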

Meanwhile, in Europe, the DARWIN project recently completed the first manned flights of an AI-based digital co-pilot designed specifically to reduce pilot workload and improve safety in reduced-crew or single-pilot operations. During the trials, the DARWIN system handled a range of simulated emergencies that would normally require immediate human intervention. When the system detected pilot drowsiness or incapacitation, it issued alerts and began redistributing tasks. A passenger’s medical emergency triggered different protocols, with the AI helping to coordinate the response while overseeing flight safety. Most impressively, the system successfully executed autoland procedures when the situation called for it, taking an aircraft from cruise altitude all the way to the runway without human input.

When something goes wrong, who do we hold accountable?

Traditional aviation doctrine assumes a clear chain of accountability. A licensed human pilot makes decisions. Responsibility flows through the airline as operator and up to the manufacturer when equipment fails. The system works because there’s always someone accountable: someone who holds credentials that can be suspended or revoked, or an entity that can be fined or barred from doing business. But what happens when an algorithm makes critical flight decisions instead of a person? We’re not there yet, but fully autonomous aircraft do seem inevitable – indeed, there’s a whole cottage industry already dedicated to making them happen. When they arrive, there may be no pilot onboard at all, which raises a fundamental question: who or what becomes the ‘pilot-in-command’? The term itself assumes a human. Aviation law and insurance frameworks were built around that assumption. An algorithm can’t hold a licence. It can’t face criminal charges or lose its certification. Someone has to answer for what happens at 10,000 metres.

Existing aviation liability regimes channel most risk to the operator under strict liability principles, regardless of pilot involvement. Regulators and law commissions have indicated that even with full autonomy, the operator – a legal entity – will remain fully accountable for what happens in flight. The FAA’s guidance reinforces this by urging clear assignment of responsibility for AI systems. Someone, usually a human supervisor or the operating airline, must always be identified as ultimately in command and accountable, even if an AI handles the actual flying. The law needs a person or company to hold responsible.

The blame game

But in AI-driven incidents, the root cause can be bewilderingly obscure. It might lie in the software, the hardware, the training data, or some combination of all three. The 737 Max disasters illustrate this ambiguity well. In those crashes, automated flight control software known as MCAS (the Maneuvering Characteristics Augmentation System) repeatedly pushed the nose down based on faulty sensor input while human pilots struggled to counteract it. Was the culprit a badly designed algorithm, a malfunctioning sensor, or human oversight lapses? Investigators identified all three factors: Boeing’s MCAS logic was arguably too aggressive, an angle-of-attack sensor fed inaccurate data that falsely indicated a stall, and the airline and pilots failed to address the known sensor issue or effectively override the automation. Sorting out liability proved far from straightforward.

Unlike a crash involving humans, where blame might settle on pilot negligence or a single mechanical defect, AI-mediated accidents tend to involve a web of contributors. The AI ecosystem includes hardware makers, software developers, data scientists, and airlines – all of whom influence how the system behaves. Responsibility thus diffuses across these parties, making it hard to pin fault on any single agent. A further complication is that AI decisions often occur inside black-box models that even their creators struggle to fully explain. Machine learning systems, especially neural networks, arrive at outcomes through complex statistical patterns rather than transparent decision logic. So, how do you hold a system accountable when you can’t even explain what it did wrong? How do you prove negligence when the decision-making process itself is opaque?

The challenges holding back adoption

While AI promises to solve some of aviation’s long-standing problems, legal and regulatory dilemmas are standing in the way.

The promise of autonomous AI in aviation comes tangled with unresolved questions the industry hasn’t found good answers to yet. Aviation certification assumes designs are frozen in place for decades. Once a system passes exhaustive testing in a fixed configuration, it’s locked down. Regulators approve that specific version, and any meaningful change triggers recertification. AI models, on the other hand, evolve through retraining, data refreshes, and software updates. Their behaviour can shift as they ingest new information or as developers refine their algorithms. So, how do you certify an AI pilot whose decision-making can change over time? If even a small model adjustment counts as a new system, each retrain could trigger fresh certification – an endless, impractical loop that would stall development. Yet allowing unchecked updates risks deploying systems that behave differently from what regulators approved.

Then there’s the insurance problem. Aviation insurance rests on decades of data about human pilots, mechanical failure modes, and clear liability lines. Actuaries know how to price risk when a pilot with 10,000 flight hours sits in the cockpit. They understand how often turbofan engines fail. They have no comparable foundation for assessing AI decisions. The diffusion of responsibility between the airline, the AI manufacturer, the software supplier, and the data vendor makes underwriting especially difficult. Insurers have been vocal about needing legal clarity before they can price risk with confidence. Industry groups argue that aviation law, currently built largely around human pilot responsibility, must be revised to reflect multiple stakeholders. Until that happens, the question of who will insure the decisions of a non-human agent remains open.

Keeping up with the times

The issue of criminal liability presents another conundrum. Serious aviation accidents can prompt criminal investigations, particularly when negligence or recklessness seems to be involved. However, an autonomous AI has no ‘intent’. It is incapable of being negligent in the human sense. If an AI pilot causes fatalities, current legal doctrines may find no one to punish unless negligence can be pinned on a human or a corporation. The absence of someone to hold criminally accountable feels unsatisfying, especially to victims’ families. Recognising this discomfort, some legal scholars have proposed updating doctrines so corporations deploying autonomous systems can face criminal liability when grossly negligent processes lead to deaths. But proving gross negligence gets complicated when multiple parties contributed to the AI’s development and deployment.

Add to all of this the tangled web of international regulations governing aviation. US regulations differ from those of the European Union Aviation Safety Agency (EASA), which differ from those of China’s CAAC. Now consider the following scenario: an AI trained on FAA rules crosses into European airspace governed by EASA. It makes a decision optimised for US regulations that just so happens to violate EU law. Who’s liable? The airline? The AI vendor? Aircraft cross jurisdictions constantly, and each jurisdiction has different expectations about safe operation. The obvious solution would be harmonised global standards and certification processes – an agreed baseline that allows AI meeting certain criteria to be accepted broadly, much like aircraft type certificates work today. But aviation has struggled for decades to align on far simpler regulations. Getting the world’s aviation authorities to agree on AI standards seems an even more daunting prospect.

Paving the way for AI in aviation

The adoption of AI in the aviation industry remains tentative, but we are starting to see the first steps towards greater acceptance.

To get a sense of how aviation might navigate these challenges, it helps to look at how other industries are handling similar problems. The FDA has been wrestling with the same certification puzzle for medical AI devices. Their proposed solution is something called a Predetermined Change Control Plan – essentially a framework where developers pre-specify how an algorithm can evolve and how updates will be validated. If regulators approve the plan upfront, the device can update within those agreed boundaries without requiring fresh approvals each time. Aviation could arguably adopt something similar, allowing AI systems to learn and improve within pre-certified parameters rather than treating every update as an entirely new system.
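As a rough illustration of how such a plan might work in practice, the sketch below shows an update gate that only accepts a retrained model if its validation metrics stay inside limits agreed with the regulator up front. The metric names and thresholds are hypothetical, not drawn from any actual FDA or aviation scheme.

```python
# A minimal sketch of the 'pre-specified change envelope' idea. The metric
# names and thresholds below are hypothetical, not taken from any actual FDA
# or aviation certification scheme.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeControlPlan:
    # Bounds agreed with the regulator at certification time (illustrative values).
    min_hazard_recall: float = 0.98      # share of hazardous conditions the model must flag
    max_false_alert_rate: float = 0.01   # nuisance alerts per flight hour
    max_behaviour_drift: float = 0.05    # disagreement with the certified baseline on a fixed test suite

def update_within_envelope(plan: ChangeControlPlan, metrics: dict) -> bool:
    """Return True if a retrained model stays inside the pre-approved envelope;
    anything outside would go back through full recertification."""
    return (
        metrics["hazard_recall"] >= plan.min_hazard_recall
        and metrics["false_alert_rate"] <= plan.max_false_alert_rate
        and metrics["behaviour_drift"] <= plan.max_behaviour_drift
    )

candidate = {"hazard_recall": 0.991, "false_alert_rate": 0.004, "behaviour_drift": 0.02}
print(update_within_envelope(ChangeControlPlan(), candidate))  # True: deploy without fresh approval
```

Anything that falls outside the pre-approved envelope would revert to the full recertification route, preserving the regulator’s sign-off while sparing developers an endless approval loop for routine retraining.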

EASA has already started mapping out a path forward with its multi-stage AI Roadmap, which – similar to the SAE autonomy levels for road vehicles – defines three levels of AI in aviation, starting with AI that merely assists human decisions (Level 1), progressing through AI that shares decision-making (Level 2), and eventually reaching highly autonomous systems (Level 3). The FAA released its own AI Safety Assurance Roadmap along parallel lines, emphasising incremental implementation, clear accountability, and building on existing safety rules wherever possible. By contrast, the SAE scheme grades passenger vehicles on a scale of zero to five, with Level 5 designating full autonomy with no requirement for human oversight whatsoever; the most advanced systems on the road today are SAE Level 4. Aviation is expected to follow a similar path, moving gradually from systems where AI makes suggestions and humans have the final say to ones where AI makes decisions while humans supervise.

The risk factor

That progression naturally raises questions about who’s doing the monitoring and what qualifications they need. Some regulators are proposing a new category of algorithmic operators with separate licensing requirements. We already have drone pilots operating remotely under special certificates; future AI supervisors might oversee entire fleets from control centres, monitoring multiple aircraft simultaneously. These operators would need both aeronautical knowledge and AI literacy – understanding system states, override logic, and the ethical dimensions of algorithmic accountability. While no AI pilot licence exists at the time of writing, FAA and EASA advisory groups are exploring what certification for autonomy oversight roles might look like. Drones will likely pioneer these standards before they scale to larger commercial aircraft, simply because the regulatory path is clearer and the consequences of failure are more contained.

Insurance is moving more cautiously, but showing tentative signs of adaptation. Lloyd’s and other underwriters are already crafting policies for autonomous aviation applications like air taxis and delivery drones. They’re acutely aware that early mishaps could sour public trust and chase capital from the market, which is why they’re pushing hard for robust regulation to anchor their underwriting. We’re also seeing AI-specific insurance products emerge. Lloyd’s of London, for example, recently introduced a policy covering losses from malfunctioning AI tools, initially targeting things like chatbots, but effectively creating a new category of algorithmic risk insurance. In aviation, underwriters may start demanding evidence of robust machine learning assurance as a condition of coverage. For now, the industry is experimenting cautiously, writing policies with conservative terms and likely charging a significant premium for all that uncertainty.

The path ahead

The aviation industry finds itself caught between competing pressures with no easy answers in sight. Airlines are scrambling to fill cockpits as experienced pilots retire faster than new ones can be trained, demand keeps climbing beyond what anyone predicted even two years ago, and airspace grows more congested by the quarter. Efficiency demands pile up from regulators, passengers, and shareholders alike. In this context, automation starts looking less like an option and more like a necessity. But rushing these systems into commercial service without clear legal frameworks could prove catastrophic for the industry. Airlines deploying AI without regulatory blessing might find themselves uninsurable – no underwriter will touch a risk they can’t quantify. 

So aviation faces a genuine bind. The industry may actually need AI to solve labour shortages, manage increased traffic, and achieve safety improvements beyond what human pilots can deliver alone. Yet the legal frameworks, ethical guidelines, and accountability structures required to deploy that AI responsibly remain incomplete at best, absent at worst. We’re being asked to hand over controls before we’ve agreed on who answers when those controls fail. The technology advances faster than our ability to govern it, and neither speeding up nor slowing down offers a clean path forward. That tension between operational necessity and unresolved accountability will shape how aviation evolves over the next decade, and right now, no one has figured out how to resolve it.
