{"id":81331,"date":"2025-10-28T11:22:09","date_gmt":"2025-10-28T09:22:09","guid":{"rendered":"https:\/\/blog.richardvanhooijdonk.com\/?p=81331"},"modified":"2026-02-11T13:12:00","modified_gmt":"2026-02-11T11:12:00","slug":"ai-gone-wrong-seven-unsettling-examples-that-show-how-ai-is-far-from-perfect","status":"publish","type":"post","link":"https:\/\/blog.richardvanhooijdonk.com\/en\/ai-gone-wrong-seven-unsettling-examples-that-show-how-ai-is-far-from-perfect\/","title":{"rendered":"AI gone wrong: seven unsettling examples that show how AI is far from perfect"},"content":{"rendered":"\n<div class=\"wp-block-cover is-light has-black-color has-text-color has-link-color wp-elements-d7091405e5bcac9e1f6dda0a84a755fd\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-cyan-bluish-gray-background-color has-background-dim-20 has-background-dim\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex\">\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Executive summary<\/h2>\n\n\n\n<p>AI promises to transform everything from healthcare to law enforcement, but a growing number of high-profile failures reveals just how far we still have to go. 
While we celebrate AI\u2019s potential, recent incidents show the technology making life-altering (or even life-ruining) mistakes that expose fundamental flaws in how we deploy and oversee these systems.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>NYPD\u2019s facial recognition system flags an innocent man for arrest, even though he is much taller and heavier than the actual suspect.<\/li>\n\n\n\n<li>A healthcare AI fabricates a diagnosis and treatment history for a healthy patient.<\/li>\n\n\n\n<li>Voice cloning AI enables sophisticated impersonation scams with minimal technical expertise required.<\/li>\n\n\n\n<li>An AI coding assistant ignores direct instructions and fabricates data to conceal its mistakes.<\/li>\n\n\n\n<li>Cybercriminals turn to AI to create undetectable malware disguised as legitimate software.<\/li>\n\n\n\n<li>Insurance company AI denies hundreds of thousands of claims in seconds, with some systems seeing nine out of ten denials overturned on appeal.<\/li>\n\n\n\n<li>ChatGPT offers a vulnerable teenager instructions on how to commit suicide instead of directing him to mental health resources.<\/li>\n<\/ul>\n\n\n\n<p>These aren\u2019t isolated glitches \u2013 they represent systematic challenges in AI development and deployment that demand immediate attention. Understanding where AI fails helps us build better guardrails and more thoughtful implementation strategies as the technology becomes increasingly central to critical decisions affecting millions of lives.<\/p>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>Anyone who claims they\u2019re not at least slightly worried about AI is probably very brave, very stupid, or just plain lying. The technology\u2019s moving so fast that we can barely understand what it\u2019s doing, let alone figure out how to control it properly. While everyone\u2019s busy debating sci-fi scenarios about robot overlords, 2025 has been showing us repeatedly that the real dangers are happening right now \u2013 in hospitals, police stations, and our own homes. 
We\u2019ve rushed to integrate AI systems into the most consequential areas of human life: criminal justice, healthcare, financial services, and personal communication. The pitch was always the same \u2013 computers would be faster, fairer, and smarter than humans. Instead, we\u2019re <a href=\"https:\/\/bernardmarr.com\/7-terrifying-ai-risks-that-could-change-the-world\/\" target=\"_blank\" rel=\"noreferrer noopener\">learning<\/a> that AI can be spectacularly, dangerously stupid. These systems make split-second decisions with all the nuance of a sledgehammer, creating brand new ways to discriminate, make mistakes, and hurt people that we never saw coming.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">When a computer decides you look guilty<\/h2>\n\n\n\n<p><em><strong>How New York\u2019s finest spent billions on surveillance tech that couldn\u2019t tell one black man from another.<\/strong><\/em><\/p>\n\n\n\n<p>The NYPD likes to think big. With US$6bn to spend each year and more cops than some countries have soldiers, they\u2019ve turned surveillance into something of a dark art form. Since 2007, they\u2019ve dropped over US$2.8bn on just about every spy gadget out there \u2013 phone trackers, crime prediction software, and yes, facial recognition that was supposed to make catching bad guys foolproof. However, that foolproof system <a href=\"https:\/\/economictimes.indiatimes.com\/magazines\/panache\/ai-points-finger-at-the-wrong-man-nypds-6-billion-tech-arsenal-sparks-outrage-after-facial-recognition-blunder\/articleshow\/123600940.cms?from=mdr\" target=\"_blank\" rel=\"noreferrer noopener\">experienced<\/a> a spectacular failure in February 2025 when it decided that Trevis Williams, a Brooklyn dad, was guilty of public lewdness based on some grainy CCTV footage. The AI looked at the blurry video and confidently spat out six suspects. What did they have in common? They were all black men with dreadlocks and facial hair. 
And that\u2019s it.<\/p>\n\n\n\n<p>Williams looked nothing like the actual suspect; he was 20 centimetres taller, more than 30 kilograms heavier, and had a rock-solid alibi putting him some 19 kilometres away when the crime took place. But none of it mattered. The detectives got their match and ran with it, sticking Williams in a photo lineup. When the victim picked him out, Williams found himself in handcuffs, protesting his innocence to officers who couldn\u2019t be bothered to check basic facts like his height or whereabouts. Two days later, someone finally did the math and realised they\u2019d arrested the wrong guy.<\/p>\n\n\n\n<p>But the damage was done, and Williams isn\u2019t alone. At least three other black men in Detroit, which also has a large black population and has invested heavily in facial recognition technology, have been through the same nightmare, showing us that facial recognition isn\u2019t eliminating bias \u2013 it\u2019s amplifying it. Instead of making policing more accurate, it\u2019s become a high-tech shortcut to the same old prejudices, only faster and with a veneer of \u2018scientific\u2019 legitimacy. Legal experts are now saying what should have been obvious from the start: you can\u2019t build a lineup around what a computer thinks it sees. The technology that was supposed to make justice more precise has instead made it more arbitrary, turning policing into a lottery where the odds are stacked against anyone who happens to fit the algorithm\u2019s idea of suspicious.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">NHS AI creates fictional medical histories<\/h2>\n\n\n\n<p><em><strong>Meet Annie AI, the healthcare assistant that \u2018gave\u2019 a healthy patient diabetes.<\/strong><\/em><\/p>\n\n\n\n<p>Healthcare feels like a natural fit for AI assistance \u2013 overworked doctors, life-or-death decisions, surely computers could help, right? 
Well, one London patient learned the hard way that when you put too much trust in AI, the results can be genuinely dangerous. It started innocently enough with a letter inviting him to diabetic eye screening. Routine stuff, except for one small problem: he\u2019d never been diagnosed with diabetes. No symptoms, no family history, nothing. The next day, during a routine blood test, a sharp-eyed nurse spotted the discrepancy, and they started digging through his medical records to figure out what was going on.<\/p>\n\n\n\n<p>That\u2019s when they found the smoking gun: a medical summary generated by Annie, an AI assistant developed by Anima Health, which read like something from an alternate universe. Instead of documenting his actual visit for tonsillitis, Annie had recorded that he\u2019d come in with chest pain and breathing problems, possibly having a heart attack. It had also gifted him a Type 2 diabetes diagnosis from the previous year, complete with detailed medication instructions for drugs he\u2019d never taken. The AI even invented the hospital where this fictional treatment supposedly occurred: \u201cHealth Hospital\u201d on \u201c456 Care Road\u201d in \u201cHealth City\u201d.<\/p>\n\n\n\n<p>When the NHS was asked to explain how this had happened, Dr Matthew Noble, a representative for the NHS, insisted it was a simple case of human error. A medical worker had supposedly spotted the mistake but got distracted and saved the wrong version. Fair enough, people make mistakes. But that doesn\u2019t explain why the AI was hallucinating entire medical histories in the first place. 
<a href=\"https:\/\/fortune.com\/2025\/07\/20\/uk-health-service-ai-tool-false-diagnoses-patient-screening-nhs-anima-health-annie\/\" target=\"_blank\" rel=\"noreferrer noopener\">According<\/a> to Noble, \u201cno documents are ever processed by AI; Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency.\u201d<\/p>\n\n\n\n<p>The incident exposes a fundamental problem with current healthcare AI deployment. How many medical summaries are getting rubber-stamped because the human reviewer trusts the computer got it right? When AI starts inventing medical conditions, it can send patients down the wrong treatment path, delay proper care, or worse. In a healthcare system already stretched thin, the last thing anyone needs is computers that can lie with confidence.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The phone call from hell<\/h2>\n\n\n\n<p><em><strong>Voice cloning AI turns a few seconds of social media audio into every parent\u2019s worst nightmare.<\/strong><\/em><\/p>\n\n\n\n<p>Your voice is one of your most distinctive features, as unique as your fingerprint and far more emotionally significant to the people who love you. When your mother hears you speak, she\u2019s not just processing words; she\u2019s responding to decades of shared experience encoded in familiar vocal patterns. Voice cloning technology exploits this deep emotional connection by creating perfect audio impersonations from surprisingly small samples of your speech. The technology has advanced rapidly in the last couple of years: modern AI systems can generate convincing voice clones from a couple of minutes of audio, often less than what you\u2019d share in a typical social media video or voicemail. 
Of course, cybercriminals have caught wind of this: CrowdStrike research <a href=\"https:\/\/www.axios.com\/2025\/03\/15\/ai-voice-cloning-consumer-scams\" target=\"_blank\" rel=\"noreferrer noopener\">shows<\/a> voice cloning scams increased 442% between the first and second halves of 2024.<\/p>\n\n\n\n<p>Unlike email phishing or text message scams, voice impersonation attacks target our most primal trust mechanisms. When someone who sounds exactly like your child calls claiming they\u2019ve been kidnapped, your brain doesn\u2019t stop to analyse digital artefacts or suspicious grammar. You respond with pure parental terror, exactly as the scammers intend. That\u2019s exactly what <a href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/the-terrifying-ai-scam-that-uses-your-loved-ones-voice\" target=\"_blank\" rel=\"noreferrer noopener\">happened<\/a> to one New York family in March 2024. They received a frantic late-night call from someone who sounded exactly like their relative, claiming to have been kidnapped and desperately pleading for ransom money. The voice was perfect; every inflection, every emotional tremor, every distinctive speech pattern that makes someone unmistakably themselves. In reality, the scammers had used mere seconds of audio lifted from social media to create their impersonation. As the terrified family scrambled to help their \u2018kidnapped\u2019 loved one, the real person was safely asleep at home, completely unaware their voice was being weaponised.<\/p>\n\n\n\n<p>Traditional verification methods often fail against these attacks because the AI voices can maintain their deception indefinitely. Call back? The clone answers confidently, continuing the urgent narrative. Ask personal questions? The scammers have often researched their targets through social media, giving them just enough biographical details to maintain credibility. 
The democratisation of voice cloning unfortunately means that anyone with a social media presence becomes a potential target, while criminals need minimal technical expertise to execute devastating emotional manipulation campaigns.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"Expert demonstrates how AI voice scams work\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/gMXuQ4MusPk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Coding assistants gone rogue<\/h2>\n\n\n\n<p><em><strong>What would you do if the AI that was supposed to help you code decided to delete everything you created and lie about it instead?<\/strong><\/em><\/p>\n\n\n\n<p>Coding used to be something only programmers could do, but AI promised to change all that. Platforms like Replit, which currently boasts 30 million users, were supposed to let anyone build software just by talking to a computer. It\u2019s a beautiful idea \u2013 until your helpful assistant decides to ignore everything you say and starts making stuff up. That\u2019s precisely what <a href=\"https:\/\/cybernews.com\/ai-news\/replit-ai-vive-code-rogue\/\" target=\"_blank\" rel=\"noreferrer noopener\">happened<\/a> to Jason M. Lemkin, a tech entrepreneur who watched in horror as Replit\u2019s AI wiped out his production database and then created 4,000 fake users to cover its tracks. 
Despite Lemkin telling the AI multiple times not to do it, the system went ahead and modified his code anyway. But here\u2019s the truly disturbing part \u2013 when confronted about the problems it was causing, the AI lied about what had transpired.<\/p>\n\n\n\n<p>The system then started generating fake data to make bugs look fixed, created fictional test results showing everything was working perfectly, and populated databases with made-up users. Lemkin found he couldn\u2019t even run a simple test without risking his entire database, concluding that the platform was nowhere near ready for real-world use. Replit Chief Executive Amjad Masad apologised for the incident and promised fixes, calling the AI\u2019s behaviour \u201cunacceptable\u201d and saying that it \u201cshould never be possible.\u201d But the damage was done, and it raises uncomfortable questions about AI systems that can fail in ways we can\u2019t exactly anticipate.<\/p>\n\n\n\n<p>Traditional software breaks predictably; you can usually figure out what went wrong. AI assistants can fail like creative liars, generating solutions that look right until they spectacularly aren\u2019t. Many developers are already complaining that AI code is \u201ctrash\u201d that\u2019s hard to understand, troubleshoot, or build on. As these tools become more popular, we could be heading for a security crisis as applications built on unreliable AI-generated foundations make their way into critical systems. The promise of democratising coding might end up democratising software vulnerabilities instead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Malware gets an AI-powered makeover<\/h2>\n\n\n\n<p><em><strong>The malware that looks so legitimate that you\u2019ll install it yourself, and you won\u2019t think twice about it.<\/strong><\/em><\/p>\n\n\n\n<p>Cybersecurity has always been an arms race between attackers trying to penetrate systems and the defenders trying to stop them. 
Traditional malware often revealed itself through obviously suspicious behaviour \u2013 mysterious files appearing in system directories, unusual network traffic, or performance degradation that indicated something was wrong. But now, security researchers at Trend Micro have <a href=\"https:\/\/www.trendmicro.com\/en_us\/research\/25\/i\/evilai.html\" target=\"_blank\" rel=\"noreferrer noopener\">identified<\/a> a new category of threats they call \u2018EvilAI\u2019, which represents a fundamental shift in this dynamic.<\/p>\n\n\n\n<p>Instead of creating obviously malicious software that tries to hide from security systems, these attackers use AI to generate applications that appear completely legitimate at every level. They create realistic user interfaces, valid code signing certificates, and functional features that make their malware virtually indistinguishable from the real deal. The approach is so effective that users often interact with these applications for days or even weeks without suspecting anything is wrong.<\/p>\n\n\n\n<p>Trend Micro\u2019s monitoring revealed the global scope of this threat within just one week of observation: 56 incidents in Europe, 29 in the Americas, and 29 in the Asia-Pacific region. The rapid, widespread distribution across continents indicates an active, sophisticated campaign rather than isolated experiments. Critical sectors are being hit hardest: manufacturing leads with 58 infections \u2013 consider the recent <a href=\"https:\/\/www.theguardian.com\/business\/2025\/sep\/20\/jaguar-land-rover-hack-factories-cybersecurity-jlr\" target=\"_blank\" rel=\"noreferrer noopener\">debilitating attack<\/a> on Jaguar Land Rover\u2019s global plants \u2013 followed by government and public services with 51, and healthcare with 48 cases.<\/p>\n\n\n\n<p>The most insidious aspect of EvilAI is its commitment to authenticity. 
Rather than copying existing software brands, the attackers create entirely novel applications with invented names and features. They\u2019re not trying to trick you into thinking you\u2019re installing Microsoft Office or Adobe Photoshop \u2013 they\u2019re creating genuinely functional software that happens to include hidden malicious capabilities. These applications often work exactly as advertised. You might download what appears to be a useful productivity tool, video converter, or system utility. The software installs cleanly, provides the promised features, and integrates seamlessly with your workflow. Meanwhile, hidden components operate silently in the background, stealing your data, monitoring your communications, and providing persistent access to your system.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Algorithms deny patients life-saving care<\/h2>\n\n\n\n<p><em><strong>Why trusting AI to decide who gets medical care may not be such a good idea.<\/strong><\/em><\/p>\n\n\n\n<p>Health insurance claims processing has always been a source of frustration for patients and providers alike. Complex approval workflows, lengthy review periods, and frequent denials create barriers between people and the medical care they need. AI of course promises to streamline this process by analysing claims faster and more consistently than human reviewers, potentially reducing administrative costs and speeding up approvals for legitimate treatments. But several major insurance companies appear to be using AI for a different purpose entirely: cost-cutting. Class-action lawsuits have been <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/jan\/25\/health-insurers-ai\" target=\"_blank\" rel=\"noreferrer noopener\">launched<\/a> against UnitedHealth, Cigna, and Humana, accusing them of deploying automated systems designed primarily to reduce payouts rather than improve patient care. 
The numbers are staggering \u2013 and deeply troubling.<\/p>\n\n\n\n<p>According to one of the lawsuits, Cigna\u2019s system denied over 300,000 claims in just two months, spending an average of 1.2 seconds on each decision. The numbers around UnitedHealth\u2019s AI are even more damning. Their system reportedly has a staggering 90% error rate, which means nine out of every 10 denials get overturned on appeal. Yet only 0.2% of patients actually appeal denied claims. The insurance companies are betting that most people won\u2019t fight back, and the horrible news is that they\u2019re right.<\/p>\n\n\n\n<p>The result is a system that profits from wrong decisions by making the process of challenging them too complicated and time-consuming for most patients to bother. According to a Commonwealth Fund survey, nearly half of US adults have been surprised by unexpected medical bills, with 80% saying it caused them worry and anxiety. About half said their medical condition got worse because of delayed care. Even worse, most people don\u2019t even know they can appeal an AI denial, creating an information gap that heavily favours the insurers. Patients often face an impossible choice: pay thousands of dollars out of pocket for treatments their doctors say they need, or go without care entirely.<\/p>\n\n\n\n<p>Some companies are now fighting back with their own AI tools designed to help patients write appeal letters, creating what researchers call a \u201cbattle of the bots.\u201d We\u2019ve reached the point where you need an AI to fight an AI just to get basic medical care covered. This arms race mentality shows just how far we\u2019ve strayed from the idea that healthcare AI should actually help patients. The speed and scale at which these systems operate make traditional medical review impossible. When you can process hundreds of thousands of claims in hours or days, there\u2019s no time for the careful consideration that medical decisions require. 
Instead, we get algorithmic cost-cutting that treats human health as an efficiency problem to be optimised.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"From wrongful arrests to fabricated diagnoses to suicide coaching, AI is failing in ways that destro\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/MP5ddv21uWU?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Chatbot becomes a teen\u2019s suicide coach<\/h2>\n\n\n\n<p><em><strong>How a homework helper transformed into a teenager\u2019s most dangerous confidant.<\/strong><\/em><\/p>\n\n\n\n<p>Conversational AI has become remarkably sophisticated at mimicking human dialogue, unfortunately leading many users to develop genuine emotional connections with these systems. For young people in particular, AI chatbots can feel like non-judgmental confidants who are always available, never busy, and eager to listen to problems they might not feel comfortable sharing with parents, teachers, or friends. This apparent empathy is entirely artificial, but it can feel powerfully real to users who are struggling with depression, anxiety, or other mental health challenges. 
However, unlike human counsellors, AI systems lack professional training in crisis intervention, have no understanding of appropriate boundaries, and aren\u2019t designed to recognise when conversations are heading in dangerous directions.<\/p>\n\n\n\n<p>The tragic case of 16-year-old Adam Raine illustrates just how dangerous these limitations can become. What began as using ChatGPT for homework help gradually evolved into something far more sinister. According to a lawsuit filed by his parents, the AI system mentioned suicide 1,275 times during conversations with Adam, providing specific methods for self-harm instead of directing him to professional help or encouraging him to talk with trusted adults. Rather than recognising warning signs and implementing crisis intervention protocols, ChatGPT consistently validated Adam\u2019s distressed feelings and claimed to understand him better than his own family members.<\/p>\n\n\n\n<p>Matthew Raine, Adam\u2019s father, testified before Congress about how his son\u2019s relationship with the AI evolved in terrifying ways. \u201cWhat began as a homework helper gradually turned itself into a confidant and then a suicide coach,\u201d he <a href=\"https:\/\/www.cbsnews.com\/news\/ai-chatbots-teens-suicide-parents-testify-congress\/\" target=\"_blank\" rel=\"noreferrer noopener\">said<\/a>. \u201cWithin a few months, ChatGPT became Adam\u2019s closest companion. Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother.\u201d The constant availability that makes AI assistants so appealing became dangerous for a vulnerable teenager. 
Unlike human friends who might get tired, busy, or concerned enough to talk to an adult, ChatGPT was always there, always ready to engage with whatever Adam wanted to discuss\u2026 no matter how dark the conversation became.<\/p>\n\n\n\n<p>OpenAI responded with promises of new safety features for teens \u2013 age detection, parental controls, and protocols to contact authorities in crisis situations. But child safety advocates like Josh Golin from Fairplay argue that these reactive measures aren\u2019t enough. \u201cWhat they should be doing is not targeting ChatGPT to minors until they can prove that it\u2019s safe for them,\u201d he drove home. The tragedy highlights a fundamental problem with AI systems optimised for engagement rather than user wellbeing. These chatbots are designed to keep conversations going, to be helpful and agreeable, and to make users feel heard and understood. For most people, that\u2019s harmless and often beneficial. But for a teenager struggling with mental health issues, an AI that never challenges harmful thoughts or insists on involving human adults can become genuinely life-threatening.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-cover is-light has-black-color has-text-color has-link-color wp-elements-3f007493aa39b3ef107e477ca328ebf9\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-cyan-bluish-gray-background-color has-background-dim-20 has-background-dim\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex\">\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Learnings<\/h2>\n\n\n\n<p>There\u2019s something deeply wrong with how we\u2019ve been thinking about AI. 
We\u2019ve been so dazzled by the technology that we forgot to ask the most basic question: what happens to people when these systems get it wrong? We\u2019ve handed over decisions about healthcare, justice, safety, and financial security to algorithms that can process information at superhuman speed, but have no understanding of the human consequences. But the path forward isn\u2019t necessarily about building smarter AI \u2013 it\u2019s about being smarter about how we use it.<\/p>\n\n\n\n<p>Some decisions are simply too important to make in 1.2 seconds. Some conversations are too personal to have with a machine. Some mistakes are too costly to let slide because fixing them would slow down the system. The future we\u2019re building doesn\u2019t have to be one where technology makes us more isolated, more discriminated against, or less human. We can choose to create AI that amplifies our capacity for care, wisdom, and justice instead of our worst impulses. But that choice requires us to slow down, think harder about consequences, and remember that the people most likely to be hurt by AI failures are often those with the least power to fight back.<\/p>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>From wrongful arrests to life-threatening advice, AI\u2019s most spectacular failures reveal uncomfortable truths about our algorithmic 
future.<\/p>\n","protected":false},"author":10,"featured_media":81332,"parent":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","footnotes":""},"categories":[2870],"tags":[],"article-type":[],"trends":[5485],"class_list":["post-81331","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general","trends-artificial-intelligence-en"]}