{"id":81459,"date":"2025-11-26T21:01:47","date_gmt":"2025-11-26T19:01:47","guid":{"rendered":"https:\/\/blog.richardvanhooijdonk.com\/?p=81459"},"modified":"2026-02-11T10:29:52","modified_gmt":"2026-02-11T08:29:52","slug":"how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong","status":"publish","type":"post","link":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/","title":{"rendered":"How governments can get AI right\u2026 and how it might go spectacularly wrong"},"content":{"rendered":"\n<div class=\"wp-block-cover is-light has-black-color has-text-color has-link-color wp-elements-7ba374b2d71461b1a77360ed96c70b5c\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-cyan-bluish-gray-background-color has-background-dim-20 has-background-dim\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex\">\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Executive summary<\/h2>\n\n\n\n<p>Governments worldwide are rapidly deploying AI systems to handle everything from welfare claims to traffic management, fundamentally reshaping how citizens interact with public services. 
While these technologies promise to reduce backlogs, cut costs, and free workers from repetitive tasks, their implementation has produced wildly divergent outcomes \u2013 from dramatic efficiency gains to discriminatory algorithms that devastate vulnerable communities.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>According to the Chief Information Officers Council, US federal agencies have reported more than 1,700 ways they are using AI.<\/li>\n\n\n\n<li>While 66% of people regularly use AI in some form, only 46% actually trust it, reveals a global study of 48,000 people across 47 countries.<\/li>\n\n\n\n<li>The UK government\u2019s new AI tool could potentially save 75,000 staff days annually, or \u00a320m (US$26.7m).<\/li>\n\n\n\n<li>Dubai\u2019s AI traffic system has cut delays by up to 37% across major intersections.<\/li>\n\n\n\n<li>UK welfare algorithms wrongly flagged 200,000 people for fraud investigations.<\/li>\n\n\n\n<li>In France, human rights groups have sued the government for the use of algorithms that allegedly discriminate against disabled people and single mothers.<\/li>\n<\/ul>\n\n\n\n<p>The trajectory of government AI will likely depend less on technological advances than on institutional willingness to address longstanding problems. The technology acts as a catalyst for change, making hidden biases visible and demanding better data, clearer processes, and genuine accountability. Governments that treat AI as a quick fix will amplify existing failures; those that use it as an opportunity for fundamental reform might actually deliver on the promise of better public services.<\/p>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<p>Registering a birth, accessing social security benefits, renewing a passport \u2013 everyone deals with government services at some point, and often at moments that fundamentally shape their lives. 
These interactions determine whether people can access healthcare, send their children to school, receive housing support, or navigate countless other essential services. But delivering these sorts of services at scale is genuinely difficult. Governments process millions of transactions each year, make countless administrative decisions, and coordinate across agencies that often operate on incompatible systems. The complexity creates inefficiencies that slow everything down, and that\u2019s before we even factor in the bureaucracy of it all. Eventually, dysfunction metastasises into injustice \u2013 when errors, delays, or denials fall hardest on those who are most vulnerable.<\/p>\n\n\n\n<p>This is precisely why automated decision-making and AI have <a href=\"https:\/\/www.turing.ac.uk\/research\/research-projects\/estimating-potential-ai-government-services\" target=\"_blank\" rel=\"noreferrer noopener\">gained<\/a> traction as promising solutions. These technologies promise to help governments process requests faster, reduce backlogs, and apply rules more consistently across cases. Instead of overwhelmed caseworkers making rushed decisions, AI could handle routine tasks and flag only the exceptions that truly require human attention. Whether that happens depends entirely on how the technology gets implemented. Without proper governance, transparency, or privacy protections, AI can entrench the very problems it claims to solve. Done responsibly, though, it creates different possibilities: public sector workers get tools that free them from repetitive work, letting them focus on cases that genuinely need their judgment; operational costs go down; citizens experience services that actually function.
And perhaps most significantly, governments have a chance to rebuild trust that years of bureaucratic frustration have worn away.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<blockquote class=\"wp-block-quote has-text-align-center quote-stat is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-center\">\u201cAI offers government opportunities to transform public services and deliver better outcomes for the taxpayer.\u201d<\/p>\n<cite><em>Gareth Davies, head of the NAO<\/em><\/cite><\/blockquote>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The current state of AI adoption<\/h2>\n\n\n\n<p><em><strong>While the public remains unconvinced about AI, governments around the world are accelerating their implementation of the technology.<\/strong><\/em><\/p>\n\n\n\n<p>Government agencies have been steadily expanding their use of AI in recent years. A 2024 report by the Chief Information Officers Council <a href=\"https:\/\/www.cio.gov\/ai-in-action\/\" target=\"_blank\" rel=\"noreferrer noopener\">counted<\/a> more than 1,700 ways US federal agencies are already using AI to advance their missions and improve public services \u2013 double the number from just a year earlier. The scale of this shift often goes unnoticed because much of it happens behind the scenes, in the administrative machinery that keeps government running. In the UK, on the other hand, government bodies have been more hesitant, or perhaps more grounded, about AI\u2019s capabilities.
The National Audit Office (NAO) found that only about a third of government departments have actually put AI systems into production, and those that have typically stick to one or two carefully controlled use cases.&nbsp;<\/p>\n\n\n\n<p>However, nearly three-quarters are now piloting or planning AI projects, with each agency exploring on average around four potential applications, including analysing digital images, automating routine checks in application processes, and drafting or summarising text. \u201cAI offers government opportunities to transform public services and deliver better outcomes for the taxpayer,\u201d <a href=\"https:\/\/www.nao.org.uk\/press-releases\/government-encouraged-to-tackle-barriers-to-realising-the-benefits-of-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">says<\/a> Gareth Davies, head of the NAO. \u201cTo deliver these improved outcomes, the government needs to make sure its overall programme for AI adoption tackles longstanding issues, including data quality and ageing IT, as well as builds in effective governance of the risks.\u201d He also cautions that \u201cwithout prompt action to address barriers to making effective use of AI within public services, government will not secure the benefits it has identified.\u201d<\/p>\n\n\n\n<p>While governments forge ahead with implementation, citizens themselves remain deeply ambivalent about the technology. A global study led by Professor Nicole Gillespie at the University of Melbourne, which surveyed over 48,000 people across 47 countries, <a href=\"https:\/\/mbs.edu\/news\/global-study-reveals-trust-of-ai-remains-a-critical-challenge\" target=\"_blank\" rel=\"noreferrer noopener\">found<\/a> that although 66% of people regularly use AI in one form or another, fewer than half \u2013 just 46% \u2013 actually trust it.
Four out of five respondents have experienced or observed AI\u2019s benefits firsthand, from slashing time spent on mundane tasks to improved personalisation and accessibility. Yet, at the same time, four in five are also worried about risks, and two in five have personally experienced negative impacts, ranging from the loss of human interaction and cybersecurity vulnerabilities to the spread of misinformation and disinformation, and the gradual erosion of skills as people rely more heavily on automated systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Algorithmic success stories<\/h2>\n\n\n\n<p><em><strong>From streamlining consultation analysis to automating patient triage, AI is delivering measurable wins for stretched public services.<\/strong><\/em><\/p>\n\n\n\n<p>So, we\u2019ve examined the current state of play for AI adoption in government \u2013 but what does it look like in practice? First, let\u2019s look at what happens when governments actually get AI right. The UK government recently needed to analyse more than 50,000 responses to the Independent Water Commission\u2019s review of the water sector \u2013 the kind of task that typically means civil servants drowning in paperwork for months on end. Instead, this time the task was delegated to Consult, a new AI tool developed within the government\u2019s Humphrey suite of AI technologies, which managed to sort through all those free-text responses and group them into key themes in about two hours, all at a cost of just \u00a3240 (US$320.80). The AI\u2019s output was then reviewed and validated by human experts, which took another 22 hours \u2013 still a fraction of the time it would have taken to do the whole job manually.<\/p>\n\n\n\n<p>The government estimates that scaling this approach across all public consultations could free up 75,000 staff days annually that are currently spent on manual analysis.
That\u2019s roughly \u00a320m worth of human intelligence redirected from paperwork to actually solving problems. \u201cThis shows the huge potential for technology and AI to deliver better and more efficient public services, and provide better value for the taxpayer,\u201d <a href=\"https:\/\/www.thinkdigitalpartners.com\/news\/2025\/10\/16\/government-built-ai-consult-saves-thousands-of-staff-days\/\" target=\"_blank\" rel=\"noreferrer noopener\">explains<\/a> Digital Government Minister Ian Murray. \u201cBy taking on the basic admin, Consult is giving staff time to focus on what matters \u2013 taking action to fix public services. In the process, it could save the taxpayer hundreds of thousands of pounds.\u201d<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe title=\"NYC\u2019s AI chatbot spent 5 months confidently telling residents to break the law, giving illegal advic\" width=\"800\" height=\"450\" src=\"https:\/\/www.youtube.com\/embed\/hJ1AyM_9Slc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Triaged by AI<\/h3>\n\n\n\n<p>Healthcare offers an even more compelling example of how AI can help solve everyday frustrations. In 2024, GP practices in the UK fielded 240 million calls, with patients waiting an average of 9.1 minutes just to speak to a receptionist. Perhaps most disturbingly, 4% of calls were never answered, according to the Social Market Foundation. 
Recognising the need to improve this aspect of their operations, Groves Medical Centre in Surrey and South West London <a href=\"https:\/\/www.smf.co.uk\/wp-content\/uploads\/2024\/11\/In-the-blink-of-an-AI-Nov-2024.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">introduced<\/a> an AI-based triaging system to help staff manage the caseload and alleviate the dreaded 8am rush.<\/p>\n\n\n\n<p>The results were overwhelmingly positive: waiting times for appointments plummeted from 11 days down to just 3. The morning phone stampede eased, with 47% fewer calls placed during peak hours. Perhaps most importantly, the increased efficiency didn\u2019t come at the expense of the quality of care. The practice actually increased face-to-face appointments by 60%, with 85% of bookings through the new system resulting in in-person consultations, while patients needed 70% fewer follow-ups because they got proper care the first time around. Doctors could even extend their standard appointments from 10 to 15 minutes, allowing them to have more meaningful conversations with patients rather than rushing through a backlog.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Reducing congestion one intersection at a time<\/h3>\n\n\n\n<p>Over in Dubai, the Roads and Transport Authority (RTA) has been quietly transforming how traffic flows through the city. Its upgraded central traffic signal control system now uses AI to detect congestion patterns as they develop and adjust in real time. 
Rather than relying on fixed timing patterns that ignore actual traffic, the system runs simulations of different scenarios, then implements whatever actually works best in the real world.&nbsp;<\/p>\n\n\n\n<p><a href=\"https:\/\/gulfbusiness.com\/dubais-traffic-signal-ai-system-cuts-delays-37\/\" target=\"_blank\" rel=\"noreferrer noopener\">According to<\/a> Mohammed Al Ali, Director of Intelligent Traffic Systems at RTA, the new system has resulted in a significant reduction in waiting times, improved coordination between intersections, and smoother traffic flows, with some major intersections seeing efficiency gains of up to 37%. Dubai\u2019s municipal government sees this as just the beginning of a deeper transformation. By 2026, the city will have 300 AI-managed intersections coordinating not just cars but also buses, pedestrians, and cyclists, all while communicating with smart vehicles in real time through Vehicle-to-Everything (V2X) technology, providing a much more granular view of how people and goods actually move through the city.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">A bridge between government and citizens<\/h3>\n\n\n\n<p>Buenos Aires, meanwhile, focused on the most basic government service of all: answering citizens\u2019 questions. The city\u2019s chatbot Boti now fields over 2 million queries monthly in both Spanish and English, ranging from the mundane \u2013 \u201cWhere can I renew my licence?\u201d \u2013 to the time-sensitive: \u201cWhat\u2019s on at the cultural centre this weekend?\u201d The team behind Boti spent months training it on the city\u2019s services, tourist attractions, and administrative processes, creating something that feels decidedly less like talking to a machine and more like texting a knowledgeable friend. The operational impact has been substantial \u2013 workload dropped by 50%, freeing staff to handle complex cases while Boti manages routine inquiries.
Citizens get instant, accurate answers about everything from museum hours to permit requirements, while the city government learns from every interaction what information people actually need.&nbsp;<\/p>\n\n\n\n<p>\u201cGenerative technology allowed us to demonstrate the need to centralise all government information in a single repository,\u201d <a href=\"https:\/\/www.microsoft.com\/en\/customers\/story\/21596-government-of-the-city-of-buenos-aires-azure-open-ai-service\" target=\"_blank\" rel=\"noreferrer noopener\">says<\/a> Julieta Rappan, General Director of Digital Channels with the Government of the City of Buenos Aires. \u201cThis not only improves the efficiency in its distribution to different channels but also enables personalised and more effective experiences for citizens, such as Boti\u2019s with ChatGPT.\u201d In other words, the chatbot became a catalyst for agency transformation, pushing the government to organise its knowledge in ways that actually serve citizens.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<blockquote class=\"wp-block-quote has-text-align-center quote-stat is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-center\">\u201cUsing algorithms in the context of social policy comes with way more risks than it comes with benefits.\u201d<\/p>\n<cite><em>Soizic P\u00e9nicaud, a lecturer in AI policy at Sciences Po Paris<\/em><\/cite><\/blockquote>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">When AI goes wrong<\/h2>\n\n\n\n<p><em><strong>Whether it\u2019s falsely accusing people of fraud or advising them to break the law, the consequences of AI\u2019s shortcomings can be devastating.<\/strong><\/em><\/p>\n\n\n\n<p>Of course, for every AI success story there\u2019s a corresponding cautionary tale reminding us why many remain wary of automated decision-making. 
The UK\u2019s Department for Work and Pensions learned this the hard way when its fraud detection algorithm <a href=\"https:\/\/www.theguardian.com\/society\/article\/2024\/jun\/23\/dwp-algorithm-wrongly-flags-200000-people-possible-fraud-error\" target=\"_blank\" rel=\"noreferrer noopener\">flagged<\/a> more than 200,000 people as potential benefit cheats. Officials spent \u00a34.4m investigating these supposed high-risk cases, only to discover the system was wrong about most of them. An internal assessment revealed what many had already suspected \u2013 the AI showed clear bias based on age, disability, marital status, and nationality, systematically targeting certain groups for investigation regardless of actual risk.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Automated injustice<\/h3>\n\n\n\n<p>The pattern of algorithmic discrimination extends across Europe, each country discovering its own version of the same fundamental problem. In France, the welfare agency CNAF built a system that analyses personal data from over 30 million people to identify potential cases of benefits fraud. Everyone gets a score between 0 and 1, supposedly indicating their likelihood of receiving payments they shouldn\u2019t. Score too high, and you might face what recipients describe as invasive investigations \u2013 ones that can extend beyond benefit claimants to their families and housemates \u2013 with your benefits suspended while bureaucrats rifle through your life looking for fraud that probably doesn\u2019t even exist. In response, a coalition of human rights groups launched legal action against the French government, arguing the algorithm systematically discriminates against disabled people and single mothers.
While the outcome of the case is pending, the French government has since <a href=\"https:\/\/www.campusfrance.org\/en\/actu\/creation-d-un-institut-national-pour-l-evaluation-et-la-securite-de-l-ia\" target=\"_blank\" rel=\"noreferrer noopener\">launched<\/a> a new institute, INESIA, to assess the safe and secure use of AI.&nbsp;<\/p>\n\n\n\n<p>The Netherlands experienced perhaps the most devastating example of algorithmic injustice, with tens of thousands falsely accused of defrauding the child benefits system. Members of the Ghanaian community found themselves disproportionately targeted, and the consequences cascaded far beyond simple repayment demands, with many families experiencing spiralling debt, destroyed credit ratings, and lives derailed by false accusations. Soizic P\u00e9nicaud, who teaches AI policy at Sciences Po Paris, argues that the problem lies not in the technology itself but in how it\u2019s used. \u201cUsing algorithms in the context of social policy comes with way more risks than it comes with benefits,\u201d she <a href=\"https:\/\/www.wired.com\/story\/algorithms-policed-welfare-systems-for-years-now-theyre-under-fire-for-bias\/\" target=\"_blank\" rel=\"noreferrer noopener\">says<\/a>. \u201cI haven\u2019t seen any example in Europe or in the world in which these systems have been used with positive results.\u201d<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Breaking the law<\/h3>\n\n\n\n<p>Even simpler AI applications can go spectacularly wrong when governments rush to deploy without proper testing. New York City discovered this for itself after launching an AI chatbot in October 2023, which was supposed to help residents navigate the complexities of starting and running a business in the city.
The bot looked professional, responded confidently, and <a href=\"https:\/\/www.reuters.com\/technology\/new-york-city-defends-ai-chatbot-that-advised-entrepreneurs-break-laws-2024-04-04\/\" target=\"_blank\" rel=\"noreferrer noopener\">dispensed<\/a> advice that was often completely wrong. Ask about tenant rights, and the bot would cheerfully inform you that landlords could lock out tenants and charge whatever rent they pleased, when in reality, both actions would have been highly illegal. The bot seemed equally confused about worker protections, incorrectly advising that employers could take a cut of tips and change schedules without notice. For five months, this authoritative-sounding system spread dangerous misinformation about fundamental legal rights, potentially causing real harm to anyone who trusted its wildly inaccurate guidance.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-cover is-light has-black-color has-text-color has-link-color wp-elements-99cd1c0f0da916d9477c5e6c0f273c04\"><span aria-hidden=\"true\" class=\"wp-block-cover__background has-cyan-bluish-gray-background-color has-background-dim-20 has-background-dim\"><\/span><div class=\"wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow\">\n<div class=\"wp-block-group is-vertical is-layout-flex wp-container-core-group-is-layout-fe9cc265 wp-block-group-is-layout-flex\">\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Learnings<\/h2>\n\n\n\n<p>The promise of government AI was always going to collide with the messy reality of how public services actually work. There\u2019s something almost endearingly naive about the optimism \u2013 this belief that we could automate away decades of bureaucratic dysfunction with our clever algorithms. As if the problem was ever really about processing speed. What nobody talks about enough is how government services shape the texture of ordinary life. 
These aren\u2019t just transactions; they\u2019re the moments when people are often at their most vulnerable, navigating systems that feel designed to exhaust rather than help. The technology itself is neutral, but it arrives at a moment when trust between citizens and institutions feels particularly fragile.<\/p>\n\n\n\n<p>The early experiments reveal an uncomfortable truth: AI makes visible what was always there. The biases, the broken processes, the assumptions about who deserves scrutiny and who doesn\u2019t. Automation doesn\u2019t fix these problems \u2013 it just processes them faster, at scale, with a veneer of objectivity that makes them harder to challenge. Yet there\u2019s something promising about forcing governments to confront their own dysfunction. When an algorithm starts making obviously terrible decisions, you can\u2019t blame individual caseworkers or claim it\u2019s just bad luck. The failure becomes systemic, undeniable, and demands an answer. Maybe that\u2019s where the real opportunity lies: not in the efficiency gains or cost savings, but in this moment of forced transparency, where governments are having to ask fundamental questions about fairness, accountability, and what they owe their citizens.<\/p>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI promises to transform how governments deliver essential services to millions. 
But who takes responsibility when things take a wrong turn?<\/p>\n","protected":false},"author":10,"featured_media":81464,"parent":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","footnotes":""},"categories":[2871],"tags":[],"article-type":[],"trends":[5485],"class_list":["post-81459","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-government","trends-artificial-intelligence-en"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>How governments can get AI right\u2026 and how it might go spectacularly wrong<\/title>\n<meta name=\"description\" content=\"AI promises to transform how governments deliver essential services to millions. But who takes responsibility when things take a wrong turn?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How governments can get AI right\u2026 and how it might go spectacularly wrong\" \/>\n<meta property=\"og:description\" content=\"AI promises to transform how governments deliver essential services to millions. 
But who takes responsibility when things take a wrong turn?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/\" \/>\n<meta property=\"og:site_name\" content=\"Richard van Hooijdonk Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-11-26T19:01:47+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-11T08:29:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"800\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"sheheryar khan\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"sheheryar khan\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/\"},\"author\":{\"name\":\"sheheryar khan\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/person\\\/5b8ddcabed59c2c30bcffbd7cefda6b7\"},\"headline\":\"How governments can get AI right\u2026 and how it might go spectacularly wrong\",\"datePublished\":\"2025-11-26T19:01:47+00:00\",\"dateModified\":\"2026-02-11T08:29:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/\"},\"wordCount\":2666,\"publisher\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/shutterstock_2432914805-min-scaled.jpg\",\"articleSection\":[\"Government\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/\",\"name\":\"How governments can get AI right\u2026 and how it might go spectacularly 
wrong\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/shutterstock_2432914805-min-scaled.jpg\",\"datePublished\":\"2025-11-26T19:01:47+00:00\",\"dateModified\":\"2026-02-11T08:29:52+00:00\",\"description\":\"AI promises to transform how governments deliver essential services to millions. But who takes responsibility when things take a wrong turn?\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#primaryimage\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/shutterstock_2432914805-min-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2025\\\/11\\\/shutterstock_2432914805-min-scaled.jpg\",\"width\":2560,\"height\":800},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\
",\"item\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/en\\\/keynotespreker-trendwatcher-en-futurist\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How governments can get AI right\u2026 and how it might go spectacularly wrong\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#website\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/\",\"name\":\"Richard van Hooijdonk Blog\",\"description\":\"Keynote speaker, trendwatcher and futurist\",\"publisher\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#organization\",\"name\":\"Richard van Hooijdonk BV\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2019\\\/04\\\/logo-footer-1.png\",\"contentUrl\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/wp-content\\\/uploads\\\/2019\\\/04\\\/logo-footer-1.png\",\"width\":100,\"height\":72,\"caption\":\"Richard van Hooijdonk BV\"},\"image\":{\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/blog.richardvanhooijdonk.com\\\/#\\\/schema\\\/person\\\/5b8ddcabed59c2c30bcffbd7cefda6b7\",\"name\":\"sheheryar 
khan\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g\",\"caption\":\"sheheryar khan\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How governments can get AI right\u2026 and how it might go spectacularly wrong","description":"AI promises to transform how governments deliver essential services to millions. But who takes responsibility when things take a wrong turn?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/","og_locale":"en_US","og_type":"article","og_title":"How governments can get AI right\u2026 and how it might go spectacularly wrong","og_description":"AI promises to transform how governments deliver essential services to millions. But who takes responsibility when things take a wrong turn?","og_url":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/","og_site_name":"Richard van Hooijdonk Blog","article_published_time":"2025-11-26T19:01:47+00:00","article_modified_time":"2026-02-11T08:29:52+00:00","og_image":[{"width":2560,"height":800,"url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg","type":"image\/jpeg"}],"author":"sheheryar khan","twitter_card":"summary_large_image","twitter_misc":{"Written by":"sheheryar khan","Est. 
reading time":"12 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#article","isPartOf":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/"},"author":{"name":"sheheryar khan","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/person\/5b8ddcabed59c2c30bcffbd7cefda6b7"},"headline":"How governments can get AI right\u2026 and how it might go spectacularly wrong","datePublished":"2025-11-26T19:01:47+00:00","dateModified":"2026-02-11T08:29:52+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/"},"wordCount":2666,"publisher":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#primaryimage"},"thumbnailUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg","articleSection":["Government"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/","url":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/","name":"How governments can get AI right\u2026 and how it might go spectacularly 
wrong","isPartOf":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#primaryimage"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#primaryimage"},"thumbnailUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg","datePublished":"2025-11-26T19:01:47+00:00","dateModified":"2026-02-11T08:29:52+00:00","description":"AI promises to transform how governments deliver essential services to millions. But who takes responsibility when things take a wrong turn?","breadcrumb":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#primaryimage","url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg","contentUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2025\/11\/shutterstock_2432914805-min-scaled.jpg","width":2560,"height":800},{"@type":"BreadcrumbList","@id":"https:\/\/blog.richardvanhooijdonk.com\/en\/how-governments-can-get-ai-right-and-how-it-might-go-spectacularly-wrong\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/blog.richardvanhooijdonk.com\/en\/keynotespreker-trendwatcher-en-futurist\/"},{"@type":"ListItem","position":2,"name":"How governments can get AI right\u2026 and how it might go 
spectacularly wrong"}]},{"@type":"WebSite","@id":"https:\/\/blog.richardvanhooijdonk.com\/#website","url":"https:\/\/blog.richardvanhooijdonk.com\/","name":"Richard van Hooijdonk Blog","description":"Keynote speaker, trendwatcher and futurist","publisher":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.richardvanhooijdonk.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/blog.richardvanhooijdonk.com\/#organization","name":"Richard van Hooijdonk BV","url":"https:\/\/blog.richardvanhooijdonk.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/logo\/image\/","url":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2019\/04\/logo-footer-1.png","contentUrl":"https:\/\/blog.richardvanhooijdonk.com\/wp-content\/uploads\/2019\/04\/logo-footer-1.png","width":100,"height":72,"caption":"Richard van Hooijdonk BV"},"image":{"@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/blog.richardvanhooijdonk.com\/#\/schema\/person\/5b8ddcabed59c2c30bcffbd7cefda6b7","name":"sheheryar khan","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/07ae74b03e00f9ff42e325d79df595de8f0d2212f49d9fe9ff4d54b5df9a1180?s=96&d=mm&r=g","caption":"sheheryar 
khan"}}]}},"_links":{"self":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts\/81459","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/comments?post=81459"}],"version-history":[{"count":0,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/posts\/81459\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/media\/81464"}],"wp:attachment":[{"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/media?parent=81459"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/categories?post=81459"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/tags?post=81459"},{"taxonomy":"article-type","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/article-type?post=81459"},{"taxonomy":"trends","embeddable":true,"href":"https:\/\/blog.richardvanhooijdonk.com\/en\/wp-json\/wp\/v2\/trends?post=81459"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}