{"id":81500,"date":"2025-12-22T14:14:00","date_gmt":"2025-12-22T12:14:00","guid":{"rendered":"https:\/\/blog.richardvanhooijdonk.com\/?p=81500"},"modified":"2026-02-11T10:26:55","modified_gmt":"2026-02-11T08:26:55","slug":"ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane","status":"publish","type":"post","link":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/","title":{"rendered":"AI in aerospace: who\u2019s liable when the algorithm crashes the plane?"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:100%\"><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-cover is-light\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-cyan-bluish-gray-background-color has-background-dim-20 has-background-dim\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex\">\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading has-black-color has-text-color has-link-color wp-elements-4262f416392c5c28b0a4c1a4c4499b21\">Executive summary<\/h2>\n\n\n\n<p class=\"has-black-color has-text-color has-link-color wp-elements-a648ca01b80b32be5869d5e0759f4eaf\">Aviation stands at a crossroads where technological capability has outpaced regulatory and legal readiness. While AI promises solutions to acute pilot shortages and operational challenges, the industry faces unresolved questions about liability, certification, and accountability when algorithms make critical flight decisions. 
Despite decades of successfully adopting advanced technologies, aviation executives are approaching AI with unusual caution, aware that the consequences of getting this wrong are measured in human lives.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"has-black-color has-text-color has-link-color wp-elements-245efe6b6667ffa68b25a6dca2ccb9d9\">Air travel recovered far faster than predicted after the pandemic, but severe pilot shortages now threaten to constrain growth.<\/li>\n\n\n\n<li class=\"has-black-color has-text-color has-link-color wp-elements-c873095b637d85b06b9a4699ec2b8036\">Certification rules assume aircraft systems stay fixed, but AI learns and updates itself constantly.<\/li>\n\n\n\n<li class=\"has-black-color has-text-color has-link-color wp-elements-9318510954d319bb07b5c9eda7975e63\">Insurance companies won\u2019t touch full autonomy until someone sorts out who\u2019s liable when things go wrong.<\/li>\n\n\n\n<li class=\"has-black-color has-text-color has-link-color wp-elements-6a8110f153834973d7471660c8d2fc86\">Criminal law has no idea how to prosecute algorithmic decisions that injure or kill people.<\/li>\n\n\n\n<li class=\"has-black-color has-text-color has-link-color wp-elements-bce307bfe895818cfb3231ceffa16de4\">International regulatory fragmentation means AI trained on one jurisdiction\u2019s rules might violate another\u2019s, with no harmonised global standards in sight.<\/li>\n\n\n\n<li class=\"has-black-color has-text-color has-link-color wp-elements-d9665dfa4eb024f4d4a8ed94a9df6233\">Real-world testing continues anyway, with companies like GE Aerospace, Merlin Labs, and the DARWIN project developing AI systems that can interpret controller instructions and manage emergencies.<\/li>\n<\/ul>\n\n\n\n<p class=\"has-black-color has-text-color has-link-color wp-elements-6c02611f07739421b23a8cb4ac1a0aa6\">The industry faces a bind with no clear resolution. 
Moving too fast risks creating uninsurable airlines and unlimited manufacturer liability. Moving too slowly risks falling behind competitors and missing the technological moment. How aviation navigates this tension over the next decade will determine whether AI becomes a transformative safety tool or a cautionary tale about deploying powerful technologies before society is ready to govern them.<\/p>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>Here\u2019s a scenario: a commercial flight from Frankfurt to London hits unexpected turbulence at 10,700 metres. Sixteen passengers are injured, three of them seriously. The subsequent investigation reveals that an AI-powered route optimisation system had processed incoming weather data and recommended a flight path that, while shorter and more fuel-efficient on paper, cut directly through an atmospheric disturbance the system had either misinterpreted or deprioritised. The airline points to the AI vendor\u2019s assurances about the system\u2019s validation. The vendor points to gaps in the training data they received. The regulator questions why the airline deployed the technology without more robust oversight protocols. Lawsuits follow. Insurance premiums spike. And the question lingers: when an algorithm makes a decision that harms people, where does accountability actually live?<\/p>\n\n\n\n<p>The aviation industry has always been a proving ground for advanced technology. Fly-by-wire flight controls, satellite navigation, predictive maintenance algorithms \u2013 the industry has a long history of embracing technologies that seemed radical at the time and making them routine. But AI is different enough that many executives are proceeding with caution. 
Unlike previous innovations, where failure modes could be mapped and tested exhaustively, AI systems can behave unpredictably when encountering \u2018edge\u2019 cases that exist outside their training parameters. An aircraft that relies on AI for critical functions becomes, in a sense, less knowable than purely mechanical or even software-based predecessors. That uncertainty sits uncomfortably in an industry where every last system must perform reliably across millions of flights, in countless conditions, with no room for the kind of iterative learning that works in less consequential domains.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The current state of the aviation industry<\/h2>\n\n\n\n<p><em><strong>Could AI offer a solution for growing pilot shortages and help airlines meet the increased demand for travel?<\/strong><\/em><\/p>\n\n\n\n<p>When the pandemic grounded fleets in 2020, analysts predicted a long and grinding recovery for air travel. Some forecasts suggested that it would take years, perhaps even a decade, before passenger numbers returned to 2019 levels. As it turns out, they were wrong. By 2024, airlines worldwide <a href=\"https:\/\/www.airlineratings.com\/articles\/global-pilot-shortage-intensifies-as-airline-expansion-and-retirements-surge\" target=\"_blank\" rel=\"noreferrer noopener\">welcomed<\/a> over five billion passengers \u2013 a 7.4% increase from the previous year and a figure that surpassed even pre-pandemic traffic. Global travel demand is now projected to grow at 4.3% annually over the next two decades. Airlines that had prepared for a gradual rebuilding suddenly found themselves scrambling to add flight frequencies and expand routes to keep pace with demand they hadn\u2019t quite anticipated. Meeting that demand would be challenging enough under normal circumstances. 
The growing pilot shortage, however, makes it considerably harder.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The pilot crisis<\/h3>\n\n\n\n<p>Training a commercial pilot requires significant time and money. In the US, the process takes several years and can <a href=\"https:\/\/www.dw.com\/en\/pilot-crisis-looms-as-airlines-scramble-to-fill-cockpits\/a-74195149\" target=\"_blank\" rel=\"noreferrer noopener\">cost<\/a> upwards of US$100,000 \u2013 a serious investment that discourages many potential candidates before they even begin. Additionally, the Federal Aviation Administration (FAA) requires first officers at scheduled passenger airlines to log 1,500 flight hours before they can hold an Air Transport Pilot certificate, which typically adds another year or two on top of the initial training period. To make matters even worse, the pipeline of new pilots can\u2019t keep pace with the departures. The FAA projects that approximately 4,300 pilots will retire annually through 2042, while similar trends have been observed in Europe as well. Boeing <a href=\"http:\/\/www.boeing.com\/commercial\/market\/pilot-technician-outlook#resources\" target=\"_blank\" rel=\"noreferrer noopener\">estimates<\/a> that the global aviation industry will need 660,000 new commercial pilots by 2044.<\/p>\n\n\n\n<p>Some airlines have responded by accelerating recruitment and relaxing certain hiring criteria. Legacy carriers that once insisted on fluency in a national language, for example, are now softening that requirement to widen the applicant pool. Even with these adjustments, the numbers don\u2019t add up, and the gap continues to widen. This has unsurprisingly led some in the industry to consider whether AI and increased cockpit automation might help bridge the gap. But while other sectors have eagerly adopted AI to address labour constraints, airlines continue to move with notable caution, and the adoption of AI remains limited. 
Dan Bubb, who teaches commercial aviation at the University of Nevada, Las Vegas, doesn\u2019t believe that AI can adequately replace human pilots: \u201cI have no doubt that AI will make air travel more efficient, in terms of time and fuel burn, but not replace humans.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AI in the cockpit<\/h3>\n\n\n\n<p>Despite the scepticism, some industry players are moving forward with their AI experiments anyway. GE Aerospace recently <a href=\"https:\/\/breakingdefense.com\/2025\/09\/ge-aerospace-picks-merlin-for-ai-co-pilot-with-eyes-on-kc-135-ccr-upgrade-exclusive\/\" target=\"_blank\" rel=\"noreferrer noopener\">partnered<\/a> with Merlin Labs to integrate AI directly into its avionics systems, a move that signals sincere confidence in autonomous flight technology. Merlin has been quietly testing its \u2018aircraft-agnostic\u2019 AI since 2019, and the technology has reached an impressive level of sophistication. The Merlin Pilot can listen to air traffic control instructions through natural language processing and translate those verbal commands straight into flight actions. It operates entirely onboard without relying on GPS or ground links, using its own sensors to make real-time decisions. For now, human pilots remain firmly in control, supervising the system and ready to override when necessary. But GE and Merlin are already looking ahead to single-pilot operations and \u2013 maybe eventually \u2013 to fully autonomous flights where humans monitor from the ground.<\/p>\n\n\n\n<p>Meanwhile, in Europe, the DARWIN project recently <a href=\"https:\/\/www.dlr.de\/en\/fl\/latest\/news\/flight-trials-with-ai-based-digital-co-pilot-successfully-conducted\" target=\"_blank\" rel=\"noreferrer noopener\">completed<\/a> the first manned flights of an AI-based digital co-pilot designed specifically to reduce pilot workload and improve safety in reduced-crew or single-pilot operations. 
During the trials, the DARWIN system handled a range of simulated emergencies that would normally require immediate human intervention. When the system detected pilot drowsiness or incapacitation, it issued alerts and began redistributing tasks. A passenger\u2019s medical emergency triggered different protocols, with the AI helping to coordinate the response while overseeing flight safety. Most impressively, the system successfully executed autoland procedures when the situation called for it, taking an aircraft from cruise altitude all the way to the runway without human input.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The legal void<\/h2>\n\n\n\n<p><em><strong>When something goes wrong, who do we hold accountable?<\/strong><\/em><\/p>\n\n\n\n<p>Traditional aviation doctrine assumes a clear chain of accountability. A licensed human pilot makes decisions. Responsibility flows through the airline as operator and up to the manufacturer when equipment fails. The system works because there\u2019s always someone accountable, someone who holds credentials that can be suspended or revoked. Some entity that can be fined or its right to do business restricted. But what happens when an algorithm makes critical flight decisions instead of a person? We\u2019re not there yet, but fully autonomous aircraft do seem inevitable \u2013 indeed, there\u2019s a whole cottage industry already <a href=\"https:\/\/www.mckinsey.com\/featured-insights\/the-next-normal\/air-taxis\" target=\"_blank\" rel=\"noreferrer noopener\">dedicated<\/a> to making them happen. 
When they arrive, there may be no pilot onboard at all, which <a href=\"https:\/\/www.globallegalinsights.com\/practice-areas\/ai-machine-learning-and-big-data-laws-and-regulations\/autonomous-ai-who-is-responsible-when-ai-acts-autonomously-and-things-go-wrong\/#:~:text=Autonomous%20AI%20systems%20%E2%80%93%20with,potentially%20harmful%20or%20disruptive%20outcomes\" target=\"_blank\" rel=\"noreferrer noopener\">raises<\/a> a fundamental question: who or what becomes the \u2018pilot-in-command\u2019? The term itself assumes a human. Aviation law and insurance frameworks were built around that assumption. An algorithm can\u2019t hold a licence. It can\u2019t face criminal charges or lose its certification. Someone has to answer for what happens at 10,000 metres.<\/p>\n\n\n\n<p>Existing aviation liability regimes channel most risk to the operator under strict liability principles, regardless of pilot involvement. Regulators and law commissions have indicated that even with full autonomy, the operator \u2013 a legal entity \u2013 will remain fully accountable for what happens in flight. The FAA\u2019s guidance reinforces this by <a href=\"https:\/\/www.faa.gov\/aircraft\/air_cert\/step\/roadmap_for_AI_safety_assurance#:~:text=%E2%80%A2%20Avoid%20Personification%3A%20Treat%20AI,a%20clear%20understanding%20of%20AI%27s\" target=\"_blank\" rel=\"noreferrer noopener\">urging<\/a> clear responsibility assignment for AI systems. Someone, usually a human supervisor or the operating airline, must always be identified as ultimately in command and accountable, even if an AI handles the actual flying. The law needs a person or company to hold responsible.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The blame game<\/h3>\n\n\n\n<p>But in AI-driven incidents, the root cause can be bewilderingly obscure. It might lie in the software, the hardware, the training data, or some combination of all three. 
The 737 Max disasters <a href=\"https:\/\/www.aglaw.us\/janzenaglaw\/2019\/4\/14\/737-max-preview-legal-issues-with-autonomous-equipment\" target=\"_blank\" rel=\"noreferrer noopener\">illustrate<\/a> this ambiguity well. In those crashes, automated software called MCAS repeatedly pushed the nose down based on faulty sensor input while human pilots struggled to counteract it. Was the culprit a badly designed algorithm, a malfunctioning sensor, or human oversight lapses? Investigators identified all three factors: Boeing\u2019s MCAS logic was arguably too aggressive, a sensor fed inaccurate data that falsely indicated a stall, and the airline and pilots failed to address the known sensor issue or effectively override the automation. Sorting out liability proved far from straightforward.<\/p>\n\n\n\n<p>Unlike a crash involving humans, where blame might settle on pilot negligence or a single mechanical defect, AI-mediated accidents tend to involve a web of contributors. The AI ecosystem includes hardware makers, software developers, data scientists, and airlines \u2013 all of whom influence how the system behaves. Responsibility thus <a href=\"https:\/\/www.cigionline.org\/articles\/who-is-liable-for-ai-driven-accidents-the-law-is-still-emerging\/#:~:text=involved%20in%20the%20development%20and,deployment%20of%20AI%20systems\" target=\"_blank\" rel=\"noreferrer noopener\">diffuses<\/a> across these parties, making it hard to pin fault on any single agent. A further complication is that AI decisions often occur inside black-box models that even their creators struggle to fully explain. Machine learning systems, especially neural networks, arrive at outcomes through complex statistical patterns rather than transparent decision logic. So, how do you hold a system accountable when you can\u2019t even explain what it did wrong? 
How do you prove negligence when the decision-making process itself is opaque?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The challenges holding back adoption<\/h2>\n\n\n\n<p><em>While AI promises to solve some of aviation\u2019s long-standing problems, legal and regulatory dilemmas are standing in the way.<\/em><\/p>\n\n\n\n<p>The promise of autonomous AI in aviation <a href=\"https:\/\/www.sesarju.eu\/sites\/default\/files\/documents\/sid\/2024\/papers\/SIDs_2024_paper_052%20final.pdf#:~:text=Harmonisation%20%E2%80%A2%20The%20power%20of,research%20potential%20through%20knowledge%20exchange\" target=\"_blank\" rel=\"noreferrer noopener\">comes<\/a> tangled with numerous unresolved legal and regulatory dilemmas, and the industry hasn\u2019t figured out any good answers yet. Aviation certification assumes designs are frozen in place for decades. Once a system passes exhaustive testing in a fixed configuration, it\u2019s locked down. Regulators approve that specific version, and any meaningful changes trigger recertification. AI models, on the other hand, evolve through retraining, data refreshes, and software updates. Their behaviour can shift as they ingest new information or as developers refine their algorithms. So, how do you certify an AI pilot whose decision-making can change over time? If even a small model adjustment counts as a new system, each retrain could trigger fresh certification \u2013 an endless, impractical loop that would stall development. Yet allowing unchecked updates risks deploying systems that behave differently from what regulators approved.<\/p>\n\n\n\n<p>Then there\u2019s the insurance problem. Aviation insurance rests on decades of data about human pilots, mechanical failure modes, and clear liability lines. Actuaries know how to price risk when a pilot with 10,000 flight hours sits in the cockpit. They understand how often turbofan engines fail. 
They have no comparable foundation for <a href=\"https:\/\/www.aon.com\/en\/insights\/articles\/aviations-future-flightpath-5-risks-on-the-horizon#:~:text=Litigation%20is%20also%20on%20the,the%20technology%27s%20interactions%20with%20customers\" target=\"_blank\" rel=\"noreferrer noopener\">assessing<\/a> AI decisions. The diffusion of responsibility between the airline, the AI manufacturer, the software supplier, and the data vendor makes underwriting especially difficult. Insurers have been vocal about needing legal clarity before they can price risk with confidence. Industry groups argue that aviation law, currently built largely around human pilot responsibility, must be revised to reflect multiple stakeholders. Until that happens, who will insure the decisions of a non-human agent remains an open question.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Keeping up with the times<\/h3>\n\n\n\n<p>The issue of criminal liability presents another conundrum. Serious aviation accidents can prompt criminal investigations, particularly when negligence or recklessness seems to be involved. However, an autonomous AI has no \u2018intent\u2019. It is incapable of being negligent in the human sense. If an AI pilot causes fatalities, current legal doctrines may find no one to punish unless negligence can be pinned on a human or a corporation. The absence of someone to hold criminally accountable feels unsatisfying, especially to victims\u2019 families. Recognising this discomfort, some legal scholars have <a href=\"https:\/\/www.reinsurancene.ws\/iua-urges-legal-reform-to-enable-growth-of-aviation-technologies\/#:~:text=Aviation%20Authority%20and%20the%20Department,including%20software%20developers%20and%20manufacturers\" target=\"_blank\" rel=\"noreferrer noopener\">proposed<\/a> updating doctrines so corporations deploying autonomous systems can face criminal liability when grossly negligent processes lead to deaths. 
But proving gross negligence gets complicated when multiple parties contributed to the AI\u2019s development and deployment.<\/p>\n\n\n\n<p>Add to all of this the tangled web of international regulations governing aviation. US regulations differ from those of the European Union Aviation Safety Agency (EASA), which differ from China\u2019s CAAC. Now consider the following scenario: An AI trained on FAA rules crosses into European airspace governed by EASA. It makes a decision optimised for US regulations that just so happen to violate EU law. Who\u2019s liable? The airline? The AI vendor? The aircraft crosses jurisdictions constantly, and each jurisdiction has different expectations about safe operation. The obvious solution would be harmonised global standards and certification processes \u2013 an agreed baseline that allows AI meeting certain criteria to be accepted broadly, much like aircraft type certificates work today. But aviation has struggled for decades to align on far simpler regulations. 
Getting the world\u2019s aviation authorities to agree on AI standards seems an even more daunting prospect.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Robot pilots ready to fly planes\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/yzn-eF0RQlI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Paving the way for AI in aviation<\/h2>\n\n\n\n<p><em><strong>The adoption of AI in the aviation industry remains tentative, but we are starting to see the first steps towards greater acceptance.<\/strong><\/em><\/p>\n\n\n\n<p>To get a sense of how aviation might navigate these challenges, it helps to look at how other industries are handling similar problems. The FDA has been <a href=\"https:\/\/namsa.com\/resources\/blog\/fdas-regulation-of-ai-ml-samd\/#:~:text=,they%20arise\" target=\"_blank\" rel=\"noreferrer noopener\">wrestling<\/a> with the same certification puzzle for medical AI devices. Their proposed solution is something called a Predetermined Change Control Plan \u2013 essentially a framework where developers pre-specify how an algorithm can evolve and how updates will be validated. If regulators approve the plan upfront, the device can update within those agreed boundaries without requiring fresh approvals each time. 
Aviation could arguably adopt something similar, allowing AI systems to learn and improve within pre-certified parameters rather than treating every update as an entirely new system.<\/p>\n\n\n\n<p>EASA has already started <a href=\"https:\/\/www.easa.europa.eu\/en\/domains\/research-innovation\/ai#:~:text=News\" target=\"_blank\" rel=\"noreferrer noopener\">mapping out<\/a> a path forward with its multi-stage AI Roadmap, which \u2013 <a href=\"https:\/\/www.sae.org\/news\/blog\/sae-levels-driving-automation-clarity-refinements\" target=\"_blank\" rel=\"noreferrer noopener\">similar<\/a> to the SAE autonomy system for road vehicles \u2013 defines three levels of AI in aviation, starting with AI that merely assists human decisions (Level 1), progressing through AI that shares decision-making (Level 2), and eventually reaching highly autonomous systems (Level 3). The FAA <a href=\"https:\/\/www.faa.gov\/about\/office_org\/headquarters_offices\/ang\/redac\/REDAC-Roadmap-to-AI-ML-at-the-FAA-SAS-Briefing-202302#:~:text=the%20air%20vehicle,various%20types%20and%20sources%20of\" target=\"_blank\" rel=\"noreferrer noopener\">released<\/a> its own AI Safety Assurance Roadmap along parallel lines, emphasising incremental implementation, clear accountability, and building on existing safety rules wherever possible. By contrast, SAE autonomy levels see passenger vehicles graded on a scale of zero to five, with five designated for full autonomy with no requirement for human oversight whatsoever. The most advanced systems on the road today are SAE Level 4. Aviation would follow a similar path, moving gradually from systems where AI makes suggestions and humans have the final say to ones where AI makes decisions while humans supervise.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">The risk factor<\/h3>\n\n\n\n<p>That progression naturally raises questions about who\u2019s doing the monitoring and what qualifications they need. 
Some regulators are <a href=\"https:\/\/nbaa.org\/news\/business-aviation-insider\/2023-07\/ai-and-autonomous-flight\/\" target=\"_blank\" rel=\"noreferrer noopener\">proposing<\/a> a new category of algorithmic operators with separate licensing requirements. We already have drone pilots operating remotely under special certificates; future AI supervisors might oversee entire fleets from control centres, monitoring multiple aircraft simultaneously. These operators would need both aeronautical knowledge and AI literacy \u2013 understanding system states, override logic, and the ethical dimensions of algorithmic accountability. While no AI pilot licence exists at the time of writing, FAA and EASA advisory groups are exploring what certification for autonomy oversight roles might look like. Drones will likely pioneer these standards before they scale to larger commercial aircraft, simply because the regulatory path is clearer and the consequences of failure are more contained.<\/p>\n\n\n\n<p>Insurance is moving more cautiously, but showing tentative signs of adaptation. Lloyd\u2019s and other underwriters are already in the process of crafting policies for autonomous aviation applications like air taxis and delivery drones. They\u2019re acutely aware that early mishaps could sour public trust and chase capital from the market, which is why they\u2019re pushing hard for robust regulation to anchor their underwriting. We\u2019re also seeing AI-specific insurance products emerge. 
Lloyd\u2019s of London, for example, recently <a href=\"https:\/\/www.techmonitor.ai\/digital-economy\/ai-and-automation\/lloyds-insurers-introduce-protection-financial-losses-ai#:~:text=Insurers%20at%20Lloyd%E2%80%99s%20of%20London,the%20technology%20into%20their%20operations\" target=\"_blank\" rel=\"noreferrer noopener\">introduced<\/a> a policy covering losses from malfunctioning AI tools, initially targeting things like chatbots, but effectively creating a new category of algorithmic risk insurance. In aviation, underwriters may start demanding evidence of robust machine learning assurance as conditions of coverage. For now, the industry is experimenting cautiously, writing policies with conservative terms and likely charging a significant premium for all that uncertainty.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The path ahead<\/h2>\n\n\n\n<p>The aviation industry finds itself caught between competing pressures with no easy answers in sight. Airlines are scrambling to fill cockpits as experienced pilots retire faster than new ones can be trained, demand keeps climbing beyond what anyone predicted even two years ago, and airspace grows more congested by the quarter. Efficiency demands pile up from regulators, passengers, and shareholders alike. In this context, automation starts looking less like an option and more like a necessity. But rushing these systems into commercial service without clear legal frameworks could prove catastrophic for the industry. Airlines deploying AI without regulatory blessing might find themselves uninsurable \u2013 no underwriter will touch a risk they can\u2019t quantify.&nbsp;<\/p>\n\n\n\n<p>So aviation faces a genuine bind. The industry may actually need AI to solve labour shortages, manage increased traffic, and achieve safety improvements beyond what human pilots can deliver alone. 
Yet the legal frameworks, ethical guidelines, and accountability structures required to deploy that AI responsibly remain incomplete at best, absent at worst. We\u2019re being asked to hand over controls before we\u2019ve agreed on who answers when those controls fail. The technology advances faster than our ability to govern it, and neither speeding up nor slowing down offers a clean path forward. That tension between operational necessity and unresolved accountability will shape how aviation evolves over the next decade, and right now, no one has figured out how to resolve it.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Autonomous flight systems can react faster than any human pilot. But when they make split-second decisions that violate airspace or injure passengers, who pays the price?<\/p>\n","protected":false},"author":10,"featured_media":81519,"parent":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","footnotes":""},"categories":[2886],"tags":[],"article-type":[],"trends":[5485],"class_list":["post-81500","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-transportation","trends-artificial-intelligence-en"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI in aerospace: who\u2019s liable when the algorithm crashes the plane? - Richard van Hooijdonk Blog<\/title>\n<meta name=\"description\" content=\"Autonomous flight systems can react faster than any human pilot. 
But when they make split-second decisions that violate airspace or injure passengers, who pays the price?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI in aerospace: who\u2019s liable when the algorithm crashes the plane? - Richard van Hooijdonk Blog\" \/>\n<meta property=\"og:description\" content=\"Autonomous flight systems can react faster than any human pilot. But when they make split-second decisions that violate airspace or injure passengers, who pays the price?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/\" \/>\n<meta property=\"og:site_name\" content=\"Richard van Hooijdonk Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-12-22T12:14:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-11T08:26:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1709\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"sheheryar khan\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"sheheryar khan\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"14 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/\"},\"author\":{\"name\":\"sheheryar khan\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/person\\\/5b8ddcabed59c2c30bcffbd7cefda6b7\"},\"headline\":\"AI in aerospace: who\u2019s liable when the algorithm crashes the plane?\",\"datePublished\":\"2025-12-22T12:14:00+00:00\",\"dateModified\":\"2026-02-11T08:26:55+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/\"},\"wordCount\":3117,\"publisher\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/shutterstock_2227502071-min-2-scaled.jpg\",\"articleSection\":[\"Transportation\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/\",\"name\":\"AI in aerospace: who\u2019s liable when the algorithm crashes the plane? 
- Richard van Hooijdonk Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/shutterstock_2227502071-min-2-scaled.jpg\",\"datePublished\":\"2025-12-22T12:14:00+00:00\",\"dateModified\":\"2026-02-11T08:26:55+00:00\",\"description\":\"Autonomous flight systems can react faster than any human pilot. But when they make split-second decisions that violate airspace or injure passengers, who pays the price?\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#primaryimage\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/shutterstock_2227502071-min-2-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/shutterstock_2227502071-min-2-scaled.jpg\",\"width\":2560,\"height\":1709},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"na
me\":\"Home\",\"item\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/keynotespreker-trendwatcher-en-futurist\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI in aerospace: who\u2019s liable when the algorithm crashes the plane?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#website\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/\",\"name\":\"Richard van Hooijdonk Blog\",\"description\":\"Keynote speaker, trendwatcher and futurist\",\"publisher\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\",\"name\":\"Richard van Hooijdonk BV\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2019\\\/04\\\/logo-footer-1.png\",\"contentUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2019\\\/04\\\/logo-footer-1.png\",\"width\":100,\"height\":72,\"caption\":\"Richard van Hooijdonk BV\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/person\\\/5b8ddcabed59c2c30bcffbd7cefda6b7\",\"name\":\"sheheryar 
khan\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"caption\":\"sheheryar khan\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI in aerospace: who\u2019s liable when the algorithm crashes the plane? - Richard van Hooijdonk Blog","description":"Autonomous flight systems can react faster than any human pilot. But when they make split-second decisions that violate airspace or injure passengers, who pays the price?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/","og_locale":"en_US","og_type":"article","og_title":"AI in aerospace: who\u2019s liable when the algorithm crashes the plane? - Richard van Hooijdonk Blog","og_description":"Autonomous flight systems can react faster than any human pilot. 
But when they make split-second decisions that violate airspace or injure passengers, who pays the price?","og_url":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/","og_site_name":"Richard van Hooijdonk Blog","article_published_time":"2025-12-22T12:14:00+00:00","article_modified_time":"2026-02-11T08:26:55+00:00","og_image":[{"width":2560,"height":1709,"url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg","type":"image\/jpeg"}],"author":"sheheryar khan","twitter_card":"summary_large_image","twitter_misc":{"Written by":"sheheryar khan","Est. reading time":"14 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#article","isPartOf":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/"},"author":{"name":"sheheryar khan","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/person\/5b8ddcabed59c2c30bcffbd7cefda6b7"},"headline":"AI in aerospace: who\u2019s liable when the algorithm crashes the 
plane?","datePublished":"2025-12-22T12:14:00+00:00","dateModified":"2026-02-11T08:26:55+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/"},"wordCount":3117,"publisher":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#primaryimage"},"thumbnailUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg","articleSection":["Transportation"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/","url":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/","name":"AI in aerospace: who\u2019s liable when the algorithm crashes the plane? - Richard van Hooijdonk Blog","isPartOf":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#primaryimage"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#primaryimage"},"thumbnailUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg","datePublished":"2025-12-22T12:14:00+00:00","dateModified":"2026-02-11T08:26:55+00:00","description":"Autonomous flight systems can react faster than any human pilot. 
But when they make split-second decisions that violate airspace or injure passengers, who pays the price?","breadcrumb":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#primaryimage","url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg","contentUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/12\/shutterstock_2227502071-min-2-scaled.jpg","width":2560,"height":1709},{"@type":"BreadcrumbList","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-in-aerospace-whos-liable-when-the-algorithm-crashes-the-plane\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.richardvanhooijdonk.com\/en\/keynotespreker-trendwatcher-en-futurist\/"},{"@type":"ListItem","position":2,"name":"AI in aerospace: who\u2019s liable when the algorithm crashes the plane?"}]},{"@type":"WebSite","@id":"https:\/\/blog.richardvanhooijdonk.com\/#website","url":"https:\/\/blog.richardvanhooijdonk.com\/","name":"Richard van Hooijdonk Blog","description":"Keynote speaker, trendwatcher and 
futurist","publisher":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.richardvanhooijdonk.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization","name":"Richard van Hooijdonk BV","url":"https:\/\/blog.richardvanhooijdonk.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/logo\/image\/","url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2019\/04\/logo-footer-1.png","contentUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2019\/04\/logo-footer-1.png","width":100,"height":72,"caption":"Richard van Hooijdonk BV"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/person\/5b8ddcabed59c2c30bcffbd7cefda6b7","name":"sheheryar khan","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","caption":"sheheryar 
khan"}}]}},"_links":{"self":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts\/81500","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/comments?post=81500"}],"version-history":[{"count":0,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts\/81500\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/media\/81519"}],"wp:attachment":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/media?parent=81500"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/categories?post=81500"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/tags?post=81500"},{"taxonomy":"article-type","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/article-type?post=81500"},{"taxonomy":"trends","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/trends?post=81500"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}