Executive summary
Governments worldwide are rapidly deploying AI systems to handle everything from welfare claims to traffic management, fundamentally reshaping how citizens interact with public services. While these technologies promise to reduce backlogs, cut costs, and free workers from repetitive tasks, their implementation has produced wildly divergent outcomes – from dramatic efficiency gains to discriminatory algorithms that devastate vulnerable communities.
- According to the Chief Information Officers Council, US federal agencies have reported more than 1,700 ways they are using AI.
- While 66% of people regularly use AI in some form, only 46% actually trust it, reveals a global study of 48,000 people across 47 countries.
- The UK government’s new AI tool could save 75,000 staff days annually, worth around £20m (US$26.7m).
- Dubai’s AI traffic system has cut delays by up to 37% across major intersections.
- UK welfare algorithms wrongly flagged 200,000 people for fraud investigations.
- In France, human rights groups have sued the government for the use of algorithms that allegedly discriminate against disabled people and single mothers.
The trajectory of government AI will likely depend less on technological advances than on institutional willingness to address longstanding problems. The technology acts as a catalyst for change, making hidden biases visible and demanding better data, clearer processes, and genuine accountability. Governments that treat AI as a quick fix will amplify existing failures; those that use it as an opportunity for fundamental reform might actually deliver on the promise of better public services.
Registering a birth, accessing social security benefits, renewing a passport – everyone deals with government services at some point, and often at moments that fundamentally shape their lives. These interactions determine whether people can access healthcare, send their children to school, receive housing support, or navigate countless other essential services. But delivering these sorts of services at scale is genuinely difficult. Governments process millions of transactions each year, make countless administrative decisions, and coordinate across agencies that often operate on incompatible systems. The complexity creates inefficiencies that slow everything down, and that’s before we even factor in the bureaucracy of it all. Eventually, dysfunction metastasises into injustice – when errors, delays, or denials fall hardest on those who are most vulnerable.
This is precisely why automated decision-making and AI have gained traction as promising solutions. These technologies promise to help governments process requests faster, reduce backlogs, and apply rules more consistently across cases. Instead of overwhelmed caseworkers making rushed decisions, AI could handle routine tasks and flag only the exceptions that truly require human attention. Whether that happens depends entirely on how the technology gets implemented. Without proper governance, transparency, or privacy protections, AI can entrench the very problems it claims to solve. Done responsibly, though, it creates different possibilities: public sector workers get tools that free them from repetitive work, letting them focus on cases that genuinely need their judgment; operational costs go down; and citizens experience services that actually function. And perhaps most significantly, governments have a chance to rebuild trust that years of bureaucratic frustration have worn away.
“AI offers government opportunities to transform public services and deliver better outcomes for the taxpayer.”
Gareth Davies, head of the NAO
The current state of AI adoption
While the public remains unconvinced about AI, governments around the world are accelerating their implementation of the technology.
Government agencies have been steadily expanding their use of AI in recent years. A 2024 report by the Chief Information Officers Council counted more than 1,700 ways US federal agencies are already using AI to advance their missions and improve public services – double the number from just a year earlier. The scale of this shift often goes unnoticed because much of it happens behind the scenes, in the administrative machinery that keeps government running. In the UK, by contrast, government bodies have been more hesitant – or arguably more grounded – about AI’s capabilities. The National Audit Office (NAO) found that only about a third of government departments have actually put AI systems into production, and those that have typically stick to one or two carefully controlled use cases.
However, nearly three-quarters are now piloting or planning AI projects, with each exploring an average of around four potential applications, including analysing digital images, automating routine checks in application processes, and drafting or summarising text. “AI offers government opportunities to transform public services and deliver better outcomes for the taxpayer,” says Gareth Davies, head of the NAO. “To deliver these improved outcomes, the government needs to make sure its overall programme for AI adoption tackles longstanding issues, including data quality and ageing IT, as well as builds in effective governance of the risks.” Yet he also warns that “without prompt action to address barriers to making effective use of AI within public services, government will not secure the benefits it has identified.”
While governments forge ahead with implementation, citizens themselves remain deeply ambivalent about the technology. A global study led by Professor Nicole Gillespie at the University of Melbourne, which surveyed over 48,000 people across 47 countries, found that although 66% of people regularly use AI in some form, fewer than half – just 46% – actually trust it. Four out of five respondents have experienced or observed AI’s benefits firsthand, from slashing time spent on mundane tasks to improved personalisation and accessibility. Yet four in five are also worried about risks, and two in five have personally experienced negative impacts, ranging from the loss of human interaction and cybersecurity vulnerabilities to the spread of misinformation and disinformation, and the gradual erosion of skills as people rely more heavily on automated systems.
Algorithmic success stories
From streamlining consultation analysis to automating patient triage, AI is delivering measurable wins for stretched public services.
So, we’ve examined the current state of play for AI adoption in government – but what does it look like in practice? First, let’s look at what happens when governments actually get AI right. The UK government recently needed to analyse more than 50,000 responses to the Independent Water Commission’s review of the water sector – the kind of task that typically means civil servants drowning in paperwork for months on end. Instead, this time the task was delegated to Consult, a new AI tool developed within the government’s Humphrey suite of AI technologies, which managed to sort through all those free-text responses and group them into key themes in about two hours, all at a cost of just £240 (US$320.8). The AI’s output was then reviewed and validated by human experts, which took another 22 hours – still a fraction of the time it would have taken to do the whole job manually.
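The government hasn’t published Consult’s internals, but the underlying technique – grouping free-text responses into recurring themes – is well established. Here is a minimal, illustrative sketch using off-the-shelf text clustering, not the actual Humphrey pipeline:

```python
# An illustrative sketch of theme-grouping, not the Consult/Humphrey
# pipeline: vectorise the free-text responses, cluster them, then surface
# each cluster's most characteristic terms as a candidate theme label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "Water bills are too high for the service we receive",
    "Sewage discharges into rivers must stop",
    "Bills keep rising while leaks go unfixed",
    "River pollution from storm overflows is unacceptable",
]

vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(responses)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectoriser.get_feature_names_out()
for i, centre in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in centre.argsort()[::-1][:3]]
    print(f"Theme {i}: {', '.join(top_terms)}")
```

Whatever the real pipeline looks like, the clusters still need human labelling and validation – exactly the 22-hour expert review step that followed.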
The government estimates that scaling this approach across all public consultations could free up 75,000 staff days annually that are currently spent on manual analysis. That’s roughly £20m worth of human intelligence redirected from paperwork to actually solving problems. “This shows the huge potential for technology and AI to deliver better and more efficient public services, and provide better value for the taxpayer,” explains Digital Government Minister Ian Murray. “By taking on the basic admin, Consult is giving staff time to focus on what matters – taking action to fix public services. In the process, it could save the taxpayer hundreds of thousands of pounds.”
Triaged by AI
Healthcare offers an even more compelling example of how AI can help solve everyday frustrations. In 2024, GP practices in the UK fielded 240 million calls, with patients waiting an average of 9.1 minutes just to speak to a receptionist. Perhaps most disturbingly, 4% of calls were never answered, according to the Social Market Foundation. Recognising the need to improve this aspect of their operations, Groves Medical Centre in Surrey and South West London introduced an AI-based triaging system to help staff manage the caseload and alleviate the dreaded 8am rush.
The results were overwhelmingly positive: waiting times for appointments plummeted from 11 days to just three, and the morning phone stampede eased, with 47% fewer calls placed during peak hours. Crucially, the increased efficiency didn’t come at the expense of the quality of care. The practice actually increased face-to-face appointments by 60%, with 85% of bookings through the new system resulting in in-person consultations, while patients needed 70% fewer follow-ups because they got proper care the first time around. Doctors could even extend their standard appointments from 10 to 15 minutes, allowing them to have more meaningful conversations with patients rather than rushing through a backlog.
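The practice hasn’t detailed how its system makes decisions, but the core idea of triage is easy to show in code: assess each incoming request and route it to the right kind of care. A deliberately simplified, rule-based sketch (a real system would use trained models under clinical oversight):

```python
# The article doesn't describe the Groves system's internals; this
# deliberately simplified, rule-based sketch just shows what triage means
# in code: assess each request and route it to the right kind of care.
from dataclasses import dataclass

@dataclass
class Request:
    patient: str
    symptoms: str

URGENT_TERMS = ("chest pain", "breathless", "heavy bleeding")  # illustrative

def triage(req: Request) -> str:
    """Return a routing decision for one patient request."""
    text = req.symptoms.lower()
    if any(term in text for term in URGENT_TERMS):
        return "same-day GP appointment"
    if "repeat prescription" in text:
        return "pharmacy team"       # no GP time needed
    return "routine appointment"     # booked in date order

print(triage(Request("A. Patel", "Sudden chest pain since this morning")))
```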
Reducing congestion one intersection at a time
Over in Dubai, the Roads and Transport Authority (RTA) has been quietly transforming how traffic flows through the city. Its upgraded central traffic signal control system now uses AI to detect congestion patterns as they develop and adjust signal timings in real time. Rather than relying on fixed timing plans that ignore actual traffic, the system runs simulations of different scenarios, then implements whichever works best in the real world.
According to Mohammed Al Ali, Director of Intelligent Traffic Systems at RTA, the new system has resulted in a significant reduction in waiting times, improved coordination between intersections, and smoother traffic flows, with some major intersections seeing efficiency gains of up to 37%. Dubai’s municipal government sees this as just the beginning of a deeper transformation. By 2026, the city will have 300 AI-managed intersections coordinating not just cars but also buses, pedestrians, and cyclists, all while communicating with smart vehicles in real time through Vehicle-to-Everything (V2X) technology, providing a much more granular view of how people and goods actually move through the city.
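The RTA hasn’t published its algorithms, but the simulate-then-select pattern it describes can be sketched in a few lines. Everything below, from the toy delay model to the numbers, is purely illustrative:

```python
# A toy sketch of simulate-then-select signal control. The RTA hasn't
# published its algorithms; this only illustrates the pattern described
# above: score candidate timings in simulation, deploy the best one.
def simulated_delay(green_split: float, arrivals_ns: int, arrivals_ew: int) -> float:
    """Crude proxy for average delay: grows as demand outstrips green time.

    green_split is the fraction of the cycle given to north-south traffic.
    """
    ns = arrivals_ns / max(green_split, 0.05)
    ew = arrivals_ew / max(1 - green_split, 0.05)
    return (ns + ew) / (arrivals_ns + arrivals_ew)

def best_timing(arrivals_ns: int, arrivals_ew: int) -> float:
    """Try candidate green splits and return the one with the least delay."""
    candidates = [i / 10 for i in range(1, 10)]  # 10%..90% green to N-S
    return min(candidates, key=lambda g: simulated_delay(g, arrivals_ns, arrivals_ew))

# Detected demand (e.g. from cameras): heavy north-south, light east-west.
print(best_timing(arrivals_ns=120, arrivals_ew=40))  # -> 0.6, favouring N-S
```

A real deployment would feed the simulation with live detector data and re-optimise continuously as conditions change across the network.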
A bridge between government and citizens
Buenos Aires, meanwhile, focused on the most basic government service of all: answering citizens’ questions. The city’s chatbot Boti now fields over 2 million queries monthly in both Spanish and English, ranging from the mundane – “Where can I renew my licence?” – to the time-sensitive: “What’s on at the cultural centre this weekend?” The team behind Boti spent months training it on the city’s services, tourist attractions, and administrative processes, creating something that feels decidedly less like talking to a machine and more like texting a knowledgeable friend. The operational impact has been substantial – workload dropped by 50%, freeing staff to handle complex cases while Boti manages routine inquiries. Citizens get instant, accurate answers about everything from museum hours to permit requirements, while the city government learns from every interaction what information people actually need.
“Generative technology allowed us to demonstrate the need to centralise all government information in a single repository,” says Julieta Rappan, General Director of Digital Channels with the Government of the City of Buenos Aires. “This not only improves the efficiency in its distribution to different channels but also enables personalised and more effective experiences for citizens, such as Boti’s with ChatGPT.” In other words, the chatbot became a catalyst for agency transformation, pushing the government to organise its knowledge in ways that actually serve citizens.
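Rappan’s point about a single repository is the heart of how chatbots like Boti stay accurate: answers are grounded in one curated knowledge base rather than generated freely. A naive sketch of the retrieval step, with keyword overlap standing in for the embedding search and ChatGPT-based generation a production system would use:

```python
# A naive sketch of grounding a chatbot in a single knowledge repository.
# Keyword overlap stands in for the embedding search and LLM generation a
# production system like Boti would use; the entries are invented examples.
repository = {
    "licence renewal": "Renew driving licences at any comuna office, or book online.",
    "cultural agenda": "This weekend's events are listed on the city culture portal.",
    "museum hours": "City museums open 11:00-19:00, Tuesday to Sunday.",
}

def answer(question: str) -> str:
    """Return the repository entry that shares the most words with the question."""
    q_words = set(question.lower().replace("?", "").split())

    def overlap(topic: str) -> int:
        entry_words = set((topic + " " + repository[topic]).lower().split())
        return len(q_words & entry_words)

    return repository[max(repository, key=overlap)]

print(answer("Where can I renew my licence?"))  # -> the licence renewal entry
```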
“Using algorithms in the context of social policy comes with way more risks than it comes with benefits.”
Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris
When AI goes wrong
Whether it’s falsely accusing people of fraud or advising them to break the law, the consequences of AI’s shortcomings can be devastating.
Of course, for every AI success story there’s a corresponding cautionary tale reminding us why many remain wary of automated decision-making. The UK’s Department for Work and Pensions learned this the hard way when its fraud detection algorithm flagged more than 200,000 people as potential benefit cheats. Officials spent £4.4m investigating these supposed high-risk cases, only to discover the system was wrong about most of them. An internal assessment revealed what many had already suspected – the AI showed clear bias based on age, disability, marital status, and nationality, systematically targeting certain groups for investigation regardless of actual risk.
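The DWP’s internal assessment hasn’t been published in full, but the basic check that exposes this kind of bias is straightforward: compare how often the model flags each demographic group. A minimal sketch with invented data:

```python
# A minimal sketch of a disparity check on a fraud-flagging model. The data
# and group labels are invented; this is not the DWP's actual methodology.
from collections import defaultdict

# (group, was_flagged) pairs – a group might be an age band or nationality.
cases = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in cases:
    tallies[group][0] += int(flagged)
    tallies[group][1] += 1

rates = {group: flagged / total for group, (flagged, total) in tallies.items()}
baseline = min(rates.values())
for group, rate in rates.items():
    # A flag rate far above the lowest group's warrants scrutiny of the
    # model, unless genuine differences in confirmed fraud justify it.
    print(f"{group}: flagged {rate:.0%} of cases ({rate / baseline:.1f}x baseline)")
```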
Automated injustice
The pattern of algorithmic discrimination extends across Europe, with each country discovering its own version of the same fundamental problem. In France, the welfare agency CNAF built a system that analyses personal data from over 30 million people – not just benefit claimants but their families and housemates too – to identify potential cases of benefits fraud. Everyone gets a score between 0 and 1, supposedly indicating their likelihood of receiving payments they shouldn’t. Score too high, and you might face what recipients describe as invasive investigations, your benefits suspended while bureaucrats rifle through your life looking for fraud that probably doesn’t exist. In response, a coalition of human rights groups launched legal action against the French government, arguing the algorithm systematically discriminates against disabled people and single mothers. While the outcome of the case is pending, the French government has since launched a new institute, INESIA, to assess the safe and secure use of AI.
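To see how such a system works – and how bias creeps in – here is a schematic of threshold-based risk scoring. The features, weights, and threshold below are hypothetical, not CNAF’s actual model:

```python
# A schematic of threshold-based risk scoring as described above: a model
# emits a score between 0 and 1, and crossing a threshold triggers an
# investigation. Features, weights, and threshold are hypothetical, not CNAF's.
import math

def risk_score(features: dict, weights: dict) -> float:
    """Logistic score in [0, 1] from a weighted sum of features."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

# The trap: any feature correlated with disability or single parenthood
# (e.g. frequent benefit recalculations) quietly imports that bias.
weights = {"income_volatility": 1.2, "household_changes": 0.8}
claimant = {"income_volatility": 0.9, "household_changes": 1.0}

score = risk_score(claimant, weights)
if score > 0.7:  # an arbitrary investigation threshold
    print(f"score {score:.2f}: flagged for investigation")
```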
The Netherlands experienced perhaps the most devastating example of algorithmic injustice, with tens of thousands of parents falsely accused of defrauding the child benefits system. Members of the Ghanaian community found themselves disproportionately targeted, and the consequences cascaded far beyond simple repayment demands, with many families experiencing spiralling debt, destroyed credit ratings, and lives derailed by false accusations. Soizic Pénicaud, who teaches AI policy at Sciences Po Paris, argues that the problem lies not in the technology itself but in how it’s used. “Using algorithms in the context of social policy comes with way more risks than it comes with benefits,” she says. “I haven’t seen any example in Europe or in the world in which these systems have been used with positive results.”
Breaking the law
Even simpler AI applications can go spectacularly wrong when governments rush to deploy without proper testing. New York City discovered this after launching an AI chatbot in October 2023 that was supposed to help residents navigate the complexities of starting and running a business in the city. The bot looked professional, responded confidently, and dispensed advice that was often completely wrong. Ask about tenant rights, and the bot would cheerfully inform you that landlords could lock out tenants and charge whatever rent they pleased – when in reality, both actions would be highly illegal. The bot seemed equally confused about worker protections, incorrectly advising that employers could take a cut of tips and change schedules without notice. For five months, this authoritative-sounding system spread dangerous misinformation about fundamental legal rights, potentially causing real harm to anyone who trusted its wildly inaccurate guidance.
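What “proper testing” could have looked like is not complicated. Here is a minimal sketch of a pre-deployment evaluation harness, with `ask_bot` as a hypothetical stand-in for the chatbot under test:

```python
# A minimal sketch of the pre-deployment check the bot evidently lacked:
# run it against questions with known, legally verified answers and block
# launch on any failure. `ask_bot` is a hypothetical stand-in for the
# chatbot under test, hard-coded here to reproduce the failure mode.
def ask_bot(question: str) -> str:
    return "Yes, landlords may lock out tenants."  # confidently wrong

# Each golden case pairs a question with a word the answer must contain.
GOLDEN_CASES = {
    "Can a landlord lock out a tenant?": "no",       # lockouts are illegal
    "Can employers take a share of workers' tips?": "no",
}

failures = [
    question
    for question, required in GOLDEN_CASES.items()
    if required not in ask_bot(question).lower().split()
]

# Any failure on settled legal questions should block deployment outright.
print(f"{len(failures)}/{len(GOLDEN_CASES)} golden cases failed")
```

Even a small suite of legally verified question-answer pairs, run before launch and after every update, would have caught answers this badly wrong.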
Learnings
The promise of government AI was always going to collide with the messy reality of how public services actually work. There’s something almost endearingly naive about the optimism – this belief that we could automate away decades of bureaucratic dysfunction with our clever algorithms. As if the problem was ever really about processing speed. What nobody talks about enough is how government services shape the texture of ordinary life. These aren’t just transactions; they’re the moments when people are often at their most vulnerable, navigating systems that feel designed to exhaust rather than help. The technology itself is neutral, but it arrives at a moment when trust between citizens and institutions feels particularly fragile.
The early experiments reveal an uncomfortable truth: AI makes visible what was always there. The biases, the broken processes, the assumptions about who deserves scrutiny and who doesn’t. Automation doesn’t fix these problems – it just processes them faster, at scale, with a veneer of objectivity that makes them harder to challenge. Yet there’s something promising about forcing governments to confront their own dysfunction. When an algorithm starts making obviously terrible decisions, you can’t blame individual caseworkers or claim it’s just bad luck. The failure becomes systemic, undeniable, and demands an answer. Maybe that’s where the real opportunity lies: not in the efficiency gains or cost savings, but in this moment of forced transparency, where governments are having to ask fundamental questions about fairness, accountability, and what they owe their citizens.