Government’s AI makeover: how AI is transforming public services

Richard van Hooijdonk

Executive summary

Government agencies around the world are turning to artificial intelligence to tackle one of their biggest challenges: making bureaucracy actually work for the people it’s supposed to serve. As citizens grow increasingly frustrated with byzantine government processes, AI is offering a path out of the maze of red tape and endless waiting rooms that have defined the public sector for too long.

  • According to a 2024 SAS survey, 84% of government decision-makers expect the adoption of AI within the public sector to accelerate rapidly in 2025.
  • 51% of leaders and managers in the public sector today use AI multiple times per week, the 2024 EY Pulse survey found.
  • Government agencies worldwide are adopting AI to increase their efficiency, make better informed decisions, and improve citizen experiences.
  • The integration of AI into government operations also raises serious concerns about privacy, bias, and equal access to essential public services.
  • MIT’s Media Lab found that facial recognition systems have error rates of up to 35% for people with darker skin.

What happens next depends entirely on whether governments can resist the temptation to let savings trump fairness. The technology is powerful enough to revolutionise public service, but only if we implement it in ways that lift people up rather than leave them behind. The best outcome isn’t just faster, more efficient government – it’s government that finally works the way it should have all along.

Anyone who’s ever had to deal with government bureaucracy knows the special kind of frustration that comes part and parcel with it. You need to renew your driver’s license, apply for a permit, or file for benefits, and suddenly you’re trapped in a maze of convoluted procedures that seem designed to test your patience. There’s the endless phone menu that never quite has the right option, the website that crashes just as you’re about to submit your application, and don’t even get us started on those forms that ask for information you swear you already provided three times.

Then comes the waiting – whether it’s sitting in a government office for hours or refreshing a webpage hoping your application status has finally updated. Fortunately, AI might just offer us a way out of this mess. Thanks to its ability to automate a wide range of tasks, analyse complex data, and provide valuable, data-driven insights, AI is revolutionising how government agencies operate. It’s enabling them to significantly increase their efficiency, make better informed decisions, and improve citizen experiences, turning those dreaded government interactions into something that might actually be pleasant.

The role of AI in government

AI has the potential to level up government agencies, creating new efficiencies, enabling smarter decisions, and improving citizen experiences.

While the public sector isn’t exactly known for being at the forefront of technological innovation, government agencies are finally starting to wake up to AI’s transformative potential. According to a 2024 SAS survey, 84% of government decision-makers believe that the adoption of AI within the public sector will accelerate rapidly in 2025. Perhaps surprisingly, government is also widely expected to outpace other industries in AI spending over the same period.

So why the sudden enthusiasm? Well, AI has the potential to completely transform how government agencies operate and, more importantly, how they serve their citizens. Instead of having to wait on hold for ages or navigate confusing government websites, citizens could use AI-powered chatbots to get instant answers to their questions about available services, submit various service requests, share feedback on government performance, or suggest areas for improvement.

Streamlining the public sector

Of course, it’s not just citizens who stand to benefit from the implementation of AI. Government employees themselves could see their working lives dramatically improved. Think about all those mind-numbing tasks that eat up so much time: data entry, document management, scheduling. AI could automate these routine functions, freeing up staff to focus on the kind of strategic, high-impact work that actually makes a difference in people’s lives.

Beyond just saving time, AI could also help significantly reduce the number of errors, enabling governments to achieve greater consistency across their operations and improve the delivery of public services. Perhaps most tantalising of all, AI could dramatically alter how governments make decisions, enabling them to process and make sense of massive amounts of data and identify hidden patterns and trends that may not be apparent to human analysts. Insights obtained this way would allow governments to make better informed, data-driven choices about everything from how to spend public funds to how to improve public health initiatives and urban planning.

When the 2023 Bloomberg Philanthropies survey asked which government services would benefit most from AI integration, 34% of government leaders pointed to traffic and transportation. Infrastructure came next with 24%, followed by public safety and environment and climate with 21% each, and public schools with 18%. While it might still be lagging behind parts of the private sector, AI adoption within government is rising fast. According to the 2024 EY Pulse survey, 51% of leaders and managers in the public sector use AI multiple times per week, while just 26% of respondents haven’t used AI at all.

“No one should be wasting time on something AI can do quicker and better.”

Peter Kyle, UK’s Technology Secretary

The AI-powered citizen experience

From chatbots that can answer your questions instantly to systems that can process thousands of applications in minutes, AI is reshaping how governments serve their people.

So, let’s take a closer look at how AI-powered citizen services work in practice. One of the most intriguing examples comes from the UK, where the government recently introduced an AI system named Humphrey. Developed as part of a wider initiative to increase productivity and reduce administrative overhead, Humphrey was first deployed in Scotland, where it was used to review public responses to a government consultation on the regulation of non-surgical cosmetic procedures like lip fillers and laser hair removal. The government received more than 2,000 responses from the public, which the AI reviewed and categorised by key themes. To check the system’s accuracy, each response was also reviewed by a human analyst, and the two sets of results were compared. They proved to be almost identical; the only real difference was that the AI finished the job far faster.
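
To make that comparison concrete, here’s a minimal sketch of how AI-assigned themes might be checked against a human analyst’s labels. The labels and helper functions below are hypothetical and say nothing about how Humphrey itself is built; the point is simply the agreement check described above.

```python
# Hypothetical sketch: comparing AI theme labels against a human analyst's.
# Not a description of Humphrey's actual implementation.

from collections import Counter

# One theme per consultation response (invented data).
human_labels = ["safety", "regulation", "cost", "safety", "training"]
ai_labels = ["safety", "regulation", "cost", "training", "training"]

def agreement_rate(human, ai):
    """Fraction of responses where the AI's theme matches the analyst's."""
    matches = sum(1 for h, a in zip(human, ai) if h == a)
    return matches / len(human)

def theme_counts(labels):
    """Tally how many responses fall under each theme."""
    return Counter(labels)

print(f"Agreement: {agreement_rate(human_labels, ai_labels):.0%}")  # Agreement: 80%
print("AI theme counts:", theme_counts(ai_labels))
```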

According to the government, Humphrey is about 1,000 times faster than a human and is expected to save British taxpayers a cool £20 million and 75,000 hours of work every year. “No one should be wasting time on something AI can do quicker and better,” says Technology Secretary Peter Kyle. “After demonstrating such promising results, Humphrey will help us cut the costs of governing and make it easier to collect and comprehensively review what experts and the public are telling us on a range of crucial issues.” However, some experts are pointing out that using AI for this purpose comes with a few risks. “While in principle the idea is that a human will always be in the loop, in practice the reality is that a person will not always have that much time to check every time, and that is when the biases will creep in,” explains Michael Rovatsos, professor at the University of Edinburgh’s School of Informatics.

Chat with AI

Across the Atlantic, the Chicago Transit Authority (CTA) recently launched a chatbot named ‘Chat with CTA’. Developed alongside Google Public Sector and accessible through the transitchicago.com website, the chatbot enables riders to report issues they experience while using CTA’s services, provide feedback, and receive answers in real time. Riders can engage the chatbot on a wide variety of topics, including service disruptions, disruptive behaviour and employee feedback. However, it’s worth noting that it cannot respond to emergencies – riders still need to call 911 or reach out to station personnel. The chatbot converses in five different languages and even supports screen readers to enable those who are blind or visually impaired to access the service.
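
For a sense of what sits behind a service like this, here’s a toy sketch of keyword-based topic routing with an emergency hand-off. The topics, keywords, and responses are assumptions made for illustration, not a description of how Chat with CTA is actually implemented.

```python
# Toy sketch of topic routing for a transit chatbot, assuming a simple
# keyword-based classifier. Chat with CTA is built differently; this only
# illustrates the routing-plus-escalation pattern described above.

EMERGENCY_TERMS = {"emergency", "fire", "assault", "medical"}

TOPIC_KEYWORDS = {
    "service_disruption": {"delay", "delayed", "stopped", "disruption"},
    "behaviour_report": {"harassment", "smoking", "disruptive"},
    "employee_feedback": {"driver", "operator", "staff"},
}

def route_message(text: str) -> str:
    """Return a canned routing decision for a rider's message."""
    words = set(text.lower().split())
    if words & EMERGENCY_TERMS:
        # Like the real chatbot, this one never handles emergencies itself:
        # riders are told to call 911 or contact station personnel.
        return "escalate: call 911 or alert station personnel"
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return f"handled: logged under '{topic}'"
    return "handoff: forwarded to a customer service agent"

print(route_message("My train has been delayed for 40 minutes"))
print(route_message("There is a medical emergency on the platform"))
```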

According to CTA, the public has responded positively to the chatbot. Since its launch in April 2024, riders have initiated over 8,000 conversations with the chatbot, 28% of which it was able to resolve entirely by itself. The agency further claims that the chatbot has enabled its customer service to expand its reach by 63% and increased the number of conversations completed by customers by 16%. “We’re pleased to see that the chatbot has received such a positive reaction thus far, and we’re seeing an uptick in feedback as riders are taking advantage of utilising the tool, which makes it easier than ever to report matters impacting the customer experience,” says CTA President Dorval R. Carter Jr. “For our riders, the good news is that this tool is allowing us to move forward in a proactive way and addressing their needs on the system in a faster capacity.”

Making cities safer

Of course, one of the government’s most critical functions is ensuring the safety of citizens. To this end, the city of San Francisco recently joined forces with software company LiveView Technologies (LVT) to introduce a new fleet of mobile security units intended to patrol neighbourhoods with high crime rates, acting as a potential deterrent to would-be criminals. Equipped with AI-powered cameras and sensors, the mobile units can detect all sorts of suspicious behaviour, including loitering and other unusual movement patterns indicative of criminal intent. This would enable the police to act more proactively and dispatch human officers to those areas before a crime takes place.
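
As a rough illustration of how loitering detection can work in principle, the sketch below flags tracked objects that stay within a small area for too long. The thresholds, data structures, and tracker output are all assumptions made for the example; LVT’s actual analytics are proprietary and undoubtedly more sophisticated.

```python
# Simplified dwell-time heuristic for loitering detection, operating on the
# output of an assumed upstream object tracker. Illustrative only.

from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int     # identity assigned by the (assumed) tracker
    timestamp: float  # seconds since the camera started recording
    x: float          # position in the camera frame
    y: float

def flag_loitering(detections, max_dwell_s=300, max_spread=5.0):
    """Flag track IDs that linger within a small area for longer than max_dwell_s."""
    by_track = {}
    for d in detections:
        by_track.setdefault(d.track_id, []).append(d)

    flagged = []
    for track_id, points in by_track.items():
        dwell = max(p.timestamp for p in points) - min(p.timestamp for p in points)
        origin = points[0]
        spread = max(abs(p.x - origin.x) + abs(p.y - origin.y) for p in points)
        if dwell >= max_dwell_s and spread <= max_spread:
            flagged.append(track_id)
    return flagged

# Example: one person standing near the same spot for six minutes.
sample = [Detection(track_id=7, timestamp=t, x=10.0, y=20.0) for t in range(0, 360, 30)]
print(flag_loitering(sample))  # [7]
```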

Critics have warned that this could result in over-policing and endanger the rights of people in certain communities. However, city officials argue that the mobile units use thermal imaging rather than facial recognition, which allows them to detect suspicious activity without invading anyone’s privacy. “We know that this technology serves as a crime deterrent,” said Sheriff Paul Miyamoto. “By integrating it with our existing surveillance tools, we are reinforcing our deputies and officers in the streets and enhancing our presence. These mobile trailers will not only boost our operational efficiency but provide 24-hour coverage to help solve crimes.”

“These tools are being anthropomorphised and framed as humanlike and superhuman. We risk inappropriately extending trust to the information produced by AI.”

Molly Crockett, a cognitive psychologist and neuroscientist at Princeton University

The risks of overreliance on AI

The benefits of AI are impossible to ignore, but experts are also warning about the risks associated with its use in government operations.

By now, you’re no doubt aware that AI can offer some serious benefits for governments, but we can’t ignore that it also introduces numerous risks that could undermine our democratic principles, exacerbate existing social and economic inequalities, and erode public trust. Beyond the widely recognised risks around privacy and generative AI’s tendency to produce factually inaccurate information, probably the biggest concern is that AI tools could perpetuate or even amplify human biases, making decisions that disproportionately disadvantage certain groups or individuals. Of course, the core issue here isn’t that AI is inherently malicious or discriminatory – it simply reflects the biases that already exist in its training data. When an AI system is trained on data that lacks diversity, that bias is carried forward into its outputs.

Take facial recognition, for instance. Research from MIT’s Media Lab found that these systems often have difficulties recognising people with darker skin, with error rates for dark-skinned women reaching a staggering 35%. On the other hand, the error rate for light-skinned men was just 0.8%, which can be attributed to the fact that images of lighter-skinned men accounted for the bulk of the training data. Similarly, a 2024 study conducted by Stanford’s Human-Centered AI Institute showed that large language models are equally guilty when it comes to perpetuating harmful stereotypes, often attributing negative traits like criminality or violence to names that sound African American, while European-sounding names were typically linked with more positive qualities. You can probably see where this is going – how many people of colour will be wrongfully convicted of crimes simply because of bad facial recognition?
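
Disparities like the ones MIT documented typically surface through a disaggregated evaluation, in which error rates are computed separately for each demographic group. Here’s a minimal sketch of that calculation on invented records; the study itself used real benchmark images and its own methodology.

```python
# Minimal sketch of a disaggregated error-rate audit. The records below are
# invented purely to demonstrate the per-group calculation.

from collections import defaultdict

# Each record: (demographic group, ground-truth identity, model prediction).
results = [
    ("darker_skinned_female", "person_a", "person_b"),
    ("darker_skinned_female", "person_c", "person_c"),
    ("lighter_skinned_male", "person_d", "person_d"),
    ("lighter_skinned_male", "person_e", "person_e"),
]

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(error_rates_by_group(results))
# {'darker_skinned_female': 0.5, 'lighter_skinned_male': 0.0}
```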

There’s another concern that often gets overlooked in these discussions: the digital divide. As governments rush to automate services and move interactions online, they risk creating barriers for citizens who lack reliable internet access or digital literacy skills. The elderly person who prefers speaking to a human representative, the rural family with spotty broadband, the immigrant still learning to navigate digital systems – all of these populations could find themselves effectively locked out of services that are built around AI. Perhaps most concerning is our tendency to treat AI outputs as inherently trustworthy. “These tools are being anthropomorphised and framed as humanlike and superhuman. We risk inappropriately extending trust to the information produced by AI,” explains Molly Crockett, a cognitive psychologist and neuroscientist at Princeton University.

Learnings

Ultimately, the promise of AI in government isn’t just about shorter wait times or fewer forms to fill out – though those improvements would be warmly welcomed. It’s about fundamentally reimagining what it means to be served by the institutions we collectively support and depend on. But for every success story like Humphrey saving taxpayers millions or Chicago’s chatbot resolving rider concerns, there’s that uncomfortable reality check about bias and exclusion. It’s almost ironic that in our rush to make government more efficient and accessible, we might inadvertently create new forms of inequality that are even harder to detect and challenge than the old bureaucratic frustrations. The real measure of success won’t be how many hours of bureaucratic work we eliminate, but whether the people who need government services most can still access them when they need to.
