Executive summary
We are quickly moving past the phase of simply playing around with AI. As we head toward 2026, the technology has become a mainstay of countless organisations, speeding up decisions but also ramping up the pressure on leadership. The data shows that stress is high and burnout is real, which means sticking with small, isolated pilots likely won’t cut it anymore.
- By 2028, 58% of business functions will rely on AI agents to manage at least one process every day.
- 80% of chief human resources officers expect that employees and AI agents will work alongside each other by 2030.
- Only 39% of companies report any profit from AI, according to McKinsey’s latest global AI survey.
- 95% of AI pilots fail to deliver ROI due to flawed enterprise integration and misaligned resource allocation, not technical limitations.
- While almost every company is now investing in AI, only 1% of leaders consider their companies to be “mature” when it comes to AI deployment.
- While 75% of leaders use AI weekly, only 51% of frontline employees do, hindering transformation efforts.
The winners won’t be determined by which AI models they deploy, but by their willingness to invest in people alongside technology. Organisations must reskill workers, train leaders to challenge AI effectively, and build the data foundations that enable transformation. The window to catch up is closing as early adopters pull further ahead through aggressive investment and bold execution.
The era of casual AI experimentation is quickly coming to an end. Pilot projects and proofs of concept have given way to something more consequential: AI systems embedded in the daily rhythm of work, quietly reshaping how teams communicate, strategies form, and decisions are made. Once regarded as a remote consideration at best, the technology now sits at the centre of operational reality, bringing with it a knotty web of ethical questions and human complexities that no single algorithm can untangle.
Leaders are feeling the weight of this moment. DDI’s Global Leadership Forecast 2025 found that 71% of leaders are experiencing heightened stress; indeed, 40% are weighing whether to walk away from their roles entirely. Figures this stark point to something deeper than garden-variety burnout; they reflect the cumulative toll of steering organisations through relentless change while absorbing pressure from every direction. AI, applied thoughtfully, might actually ease some of that burden by handling routine analysis and surfacing patterns that would otherwise take weeks to uncover.
Yet the real shift lies elsewhere. As AI takes on more analytical heavy lifting, leaders face a different kind of challenge: knowing when to trust the machine’s output… and when to override it. Algorithms can process information far faster than any human, but nuance, ethics, and empathy remain stubbornly human territory. The leaders who steer their organisations to success will be those who treat AI as a collaborator worth questioning, pausing to probe assumptions, watching for bias, and anchoring every insight in the messier realities that data alone cannot capture.
From assistance to autonomy: the rise of the agentic enterprise
Companies are increasingly moving from basic tools designed to streamline work processes towards sophisticated AI agents that can handle complex tasks with minimal human input.
Few shifts have promised to reshape how organisations operate as fundamentally as AI. Maybe the industrial revolution, if you want to gaze that far back. Now, a multitude of organisations are starting to deploy virtual AI agents across a wide spectrum: basic tools that help people work faster, systems that automate entire workflows, and increasingly, AI-first environments where agents handle complex tasks with minimal human input. Yet despite the momentum, there’s a major disconnect. While 78% of executives acknowledge that getting the most from agentic AI means adopting a new, always-on operating model, more than three-quarters have invested in AI primarily to make existing processes more efficient rather than to build wholly new capabilities.
Capgemini forecasts that by 2028, 58% of business functions will rely on AI agents to manage at least one process every day. That sounds transformative until you ask a harder question: if these agents can’t work together effectively, are they actually intelligent, or just fancier versions of the automation silos companies have been building for decades? The real competitive edge has already moved past creating the single smartest agent. What matters now is orchestrating networks of specialised agents that can collaborate securely, efficiently, and at the scale businesses actually need. Nearly 50% of the vendors surveyed by Gartner cited AI orchestration as their primary differentiator, which tells you where the market thinks the value lies.
Multi-agent orchestration
As AI graduates from solo assistants to autonomous agents, multi-agent orchestration frameworks are becoming the connective tissue of enterprise systems. Vyoma Gajjar, AI Technical Solutions Architect at IBM, sees this happening faster than most organisations realise. “We’re at the very beginning of this shift, but it’s moving fast. AI orchestrators could easily become the backbone of enterprise AI systems this year – connecting multiple agents, optimising AI workflows and handling multilingual and multimedia data,” she says. But she’s quick to add a warning: “Scaling these systems will need strong compliance frameworks to keep things running smoothly without sacrificing accountability.”
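To make the orchestration idea concrete, here is a minimal sketch in Python. The agent names, task types, and routing logic are invented for this illustration rather than drawn from any vendor’s framework; the point is simply that an orchestrator acts as a registry, routing each task only to an agent explicitly permitted to handle it and refusing work no agent is authorised for.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A hypothetical specialised agent; in production this would wrap
    a model endpoint, its tools, and its own audit logging."""
    name: str
    handles: set[str]            # task types this agent may take on
    run: Callable[[str], str]    # the agent's task handler

class Orchestrator:
    """Routes each task to the first agent permitted to handle it."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def dispatch(self, task_type: str, payload: str) -> str:
        for agent in self.agents:
            if task_type in agent.handles:
                return agent.run(payload)
        # No agent is authorised for this task type: fail loudly
        # rather than letting an unqualified agent improvise.
        raise PermissionError(f"No agent registered for '{task_type}'")

# Usage: two narrow agents, each confined to its own task types.
translator = Agent("translator", {"translate"}, lambda p: f"[translated] {p}")
summariser = Agent("summariser", {"summarise"}, lambda p: f"[summary] {p}")
orchestrator = Orchestrator([translator, summariser])
print(orchestrator.dispatch("summarise", "Q3 incident report"))
```

Real frameworks layer scheduling, retries, and audit trails on top, but the core discipline is the same: agents only receive work that falls within their declared remit.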
The push toward autonomy shows up in what agents can already do. The latest generation can buy, sell, and negotiate on their users’ behalf, offering us an early glimpse of AI operating as an active economic participant rather than a passive tool. Marshall Van Alstyne, a digital fellow at MIT IDE and Boston University professor, expects these agents will soon make routine decisions without waiting for human approval. However, that raises governance questions companies haven’t had to answer before. “This leads you to a learning-authority dilemma,” he explains. “What happens when an agent’s decision ability exceeds its formal authority?” Preparing for that means adapting platforms so agents can actually use them, establishing rules that govern agent behaviour, and making clear-eyed choices about which decisions should be automated and which need to stay under human oversight.
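Van Alstyne’s learning-authority dilemma can be pictured in a few lines of code. The roles, limits, and escalation rule below are hypothetical, but they show the basic pattern: an agent executes autonomously within its formal authority and hands off to a human owner the moment a decision exceeds it.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    estimated_value: float   # e.g. the spend the agent wants to commit to

# Illustrative formal-authority limits per agent role; in practice these
# would come from governance policy, not hard-coded constants.
AUTHORITY_LIMITS = {
    "procurement_agent": 5_000.0,   # may commit up to $5,000 autonomously
    "pricing_agent": 1_000.0,
}

def execute_or_escalate(role: str, decision: Decision) -> str:
    limit = AUTHORITY_LIMITS.get(role, 0.0)   # unknown roles get no authority
    if decision.estimated_value <= limit:
        return f"AUTO: {role} executed '{decision.action}'"
    # The agent may be capable of deciding, but the decision exceeds its
    # formal authority: route it to a human and log the escalation.
    return f"ESCALATE: '{decision.action}' needs human approval (limit {limit})"

print(execute_or_escalate("procurement_agent", Decision("renew SaaS licence", 3_200)))
print(execute_or_escalate("procurement_agent", Decision("sign new vendor", 48_000)))
```

The harder governance question is what happens as agents demonstrably outperform their limits; raising the threshold is itself a policy decision that should stay under human oversight.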
The hybrid workforce: your next hire may be an algorithm
As more companies welcome AI agents into the workplace, human-AI hybrid teams could soon become commonplace.
Some organisations have started treating AI agents much like new hires, assigning them defined roles, granting system access, and slotting them into reporting structures. According to a 2025 Korn Ferry report, more than half of leaders plan to bring autonomous agents onto their teams next year, and many have already begun creating employee records for AI agents within their HR systems. The expectation among HR executives is that blended workforces will become standard practice. Salesforce’s 2025 global survey found that 80% of chief human resources officers expect to see employees and AI agents working alongside each other by 2030, while 86% view integrating this “digital labour” as a critical part of their evolving responsibilities.
Expectations are high. Once agentic AI is fully implemented, CHROs predict an average 30% uplift in employee productivity and a 19% reduction in labour costs. Nathalie Scardino, President and Chief People Officer at Salesforce, calls it “a once-in-a-lifetime transformation of work with digital labour that is unlocking new levels of productivity, autonomy, and agency at a speed never before thought possible.” She further expects that this will lead to major changes across the board. “Every industry must redesign jobs, reskill, and redeploy talent, and every employee will need to learn new human, agent, and business skills to thrive in the digital labour revolution,” adds Scardino.
The human-AI pairing
BNY Mellon offers an early glimpse of how human-AI teams might operate in practice. The bank now has dozens of AI-powered “digital employees” who have their own system logins, report to human line managers, and are assigned to specific teams. The bank’s AI Hub has designed two core personas so far: one dedicated to identifying and fixing code vulnerabilities, the other to validating payment instructions. The two personas run in multiple instances, each embedded in a team with carefully limited access, so no agent can roam freely across the organisation.
Within those boundaries, the agents already act with a degree of independence. For example, one can spot a coding flaw, draft a patch, and submit it to a manager for approval entirely within the company’s environment. New use cases are under review, including extending communication capabilities so AI agents can contact managers directly via tools such as email or Microsoft Teams when escalation is required.
The liability trap
The addition of AI workers requires businesses to rethink security from the ground up. Microsoft made notable strides in that direction by launching its Entra Agent ID system, which issues unique digital identities to AI agents and enforces least-privilege access controls: essentially applying human-grade identity security to non-human actors. This approach allows organisations to treat agents as though they were trusted team members, while maintaining zero-trust principles across the board.
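The underlying principle is straightforward to sketch. The code below is a hypothetical illustration of deny-by-default scoping for agent identities, not Entra Agent ID’s actual API: each agent carries a unique, auditable identity plus an explicit set of granted scopes, and anything not granted is refused.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str                    # unique, auditable identity
    granted_scopes: frozenset = field(default_factory=frozenset)

def require_scope(identity: AgentIdentity, scope: str) -> None:
    """Deny by default: the agent must hold an explicit grant."""
    if scope not in identity.granted_scopes:
        raise PermissionError(f"{identity.agent_id} lacks scope '{scope}'")

# A code-review agent gets read access to repositories and nothing else.
reviewer = AgentIdentity("agent-codereview-01", frozenset({"repo:read"}))
require_scope(reviewer, "repo:read")           # allowed: explicitly granted

try:
    require_scope(reviewer, "payments:write")  # denied: never granted
except PermissionError as err:
    print(err)
```

Treating every capability as something that must be granted, never assumed, is what lets an organisation extend trust to agents without abandoning zero-trust principles.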
Still, the harder problem may be accountability: when an AI agent makes a mistake, who is supposed to answer for it? Businesses are discovering that they remain liable for AI-driven errors even when an autonomous agent acts of its own accord. The unpredictability inherent in agentic systems makes it extremely difficult to trace responsibility when things go wrong. To address this, companies are starting to define new oversight roles, such as AI managers and ethics reviewers, and building incident response protocols specifically designed for AI failures. The same governance frameworks that work for human employees just don’t translate cleanly, and figuring out what does is becoming an urgent operational question.
The great value divide: how the rich get richer
While almost every company is investing in AI, only a minority is seeing any profit from it. What’s their secret?
How much value is your company actually generating from its AI investments? This question is echoing through boardrooms and investor calls with increasing frequency, and the answers are often deeply discouraging. BCG’s 2025 global survey found that only 5% of firms can be described as being truly “future-built” for AI at scale. That small cohort of forward-thinking organisations, though? They’re pulling ahead, reporting five times the revenue increase and three times the cost reduction from AI initiatives compared to companies still finding their footing.
One driver of this disparity is aggressive investment. Future-built companies are allocating on average 64% more of their IT budgets to AI efforts, with overall IT spending running about 26% higher than their peers. The returns from early AI wins get reinvested into further AI development, creating a compounding advantage that grows harder to match with each passing quarter. Meanwhile, organisations at the other end of the spectrum are still struggling to see meaningful impact. McKinsey’s latest global AI survey found that only 39% of companies report any profit from AI at the enterprise level, a reminder that for the majority, AI remains a cost rather than a value driver.
It’s not a tech issue, it’s a courage issue
What makes this particularly jarring is that the strategies high performers use aren’t secret. Indeed, the same technologies, frameworks, and best practices are available to everyone willing to seek them out. The widening divide actually has less to do with access to information and more to do with the willingness to act on it. Many leadership teams understand what needs to happen but hesitate to commit resources, restructure operations, or push through the organisational friction that real transformation requires. The gap, in other words, is a leadership problem dressed up as a technology one.
Boards and CEOs now face a defining choice about which side of this divide they want to land on. Companies stuck cycling through pilots without scaling what works risk becoming structurally disadvantaged – not because they lacked the insight, but because they waited too long to make their move. The window to close the gap is narrowing, and the cost of indecision is compounding alongside the advantages of those who’ve already committed to bold, organisation-wide changes.
The 95% failure problem: why most AI projects don’t succeed
Despite massive investment, the vast majority of AI initiatives collapse due to poor integration, misaligned resources, and inadequate data foundations.
MIT’s NANDA initiative recently published its State of AI in Business 2025 report, and the findings should give pause to any executive expecting generative AI to deliver quick wins. Despite considerable investment and enthusiasm, 95% of AI pilot programmes are failing to produce measurable ROI. When executives try to explain what went wrong, they often point to regulation or model performance. However, MIT’s research suggests the real culprit lies elsewhere: flawed enterprise integration. Tools like ChatGPT work well for individuals precisely because they’re flexible and general-purpose. In enterprise settings, that same flexibility becomes a liability; generic models don’t learn from company-specific workflows or adapt to the rhythms of how work actually gets done.
The data also exposes a significant misalignment in how companies allocate resources. More than half of generative AI spending flows into sales and marketing tools, but MIT found the strongest returns in back-office automation: eliminating outsourced processes, reducing reliance on external agencies, and streamlining internal operations. In other words, companies are chasing the flashiest use cases while overlooking the ones most likely to pay off.
Escaping pilot purgatory
Scalability is another obstacle. McKinsey’s latest State of AI survey shows that nearly two-thirds of companies remain stuck in isolated pilot phases, having not yet begun to roll AI out across the enterprise. For many, that’s where the journey will end. Gartner projects that 60% of AI projects will be abandoned by 2026 if organisations don’t establish “AI-ready” data practices, and a 2024 survey found that 63% of companies admit they lack the data management foundation required to make AI work. Pilots built on siloed or poor-quality data simply cannot transition into production value.
Through interviews with senior executives, MIT identified four areas that determine whether organisations can push past these barriers. Strategy comes first: AI investments need to align with business goals and deliver value that can actually be measured and scaled. Systems follow close behind; enterprises need modular, interoperable platforms and data ecosystems capable of supporting intelligence across the organisation rather than within isolated pockets. Synchronisation addresses the human side, requiring companies to build AI-ready roles, teams, and redesigned workflows that integrate AI capabilities into how people work. Finally, stewardship ensures that compliance, transparency, and human-centred principles are embedded into AI practices from the start rather than bolted on as an afterthought. Organisations that treat these as interconnected challenges rather than separate checklists stand a better chance of escaping the pilot trap.
The real competitive advantage: knowing when to say ‘no’
Embracing AI doesn’t mean trusting it blindly. Leaders need to learn to ask the right questions, evaluate AI’s output, and defer to human judgment when the situation calls for it.
Nearly every company is investing in AI at this point, yet only 1% of leaders describe their organisations as “mature” on the deployment spectrum, meaning AI is fully integrated into workflows and actually driving substantial business outcomes. McKinsey’s research points to a surprising culprit: the biggest barrier to scaling isn’t employees, who seem ready for the change. It’s the leaders who aren’t steering fast enough. Employees anticipate that AI will dramatically reshape their work, but more than a fifth report receiving minimal to no support in preparing for that shift.
Leaders who want their companies to join the small minority that succeeds need to start by looking inward. They need to collectively define where real value lies, how AI will drive that value, and how risks will be mitigated along the way. That means establishing clear metrics for evaluating performance and knowing when to recalibrate investments. Some organisations are appointing dedicated gen AI value and risk leaders or creating enterprise-wide orchestration functions to keep business, technology, and risk teams aligned. None of this is easy. Aligning leadership around AI requires difficult conversations and genuine accountability. Without it, AI projects remain scattered, liability exposure grows, and the transformative outcomes everyone hoped for never materialise.
The value of human judgment
Digital skills have become foundational to leadership success. Leaders today should understand how AI, machine learning, and data analytics contribute to decision-making, and use that understanding to empower their teams. Technical knowledge gets you only partway there, though. Ethical judgment remains a distinctly human skill, and one that grows more important as AI systems take on increasingly complex decisions. Leaders carry responsibility for ensuring those decisions align with company values and address legitimate concerns about fairness, privacy, and transparency. The most effective AI-first leaders manage to hold both capabilities together: embracing technological possibility while staying anchored in human-centred leadership. They’re willing to abandon methods that no longer work and create environments where their teams feel safe to experiment and speak openly.
What matters most is keeping human judgment at the centre. Leaders can’t afford to defer blindly to AI recommendations, but they also can’t reject AI out of fear or stubbornness. They need to learn how to ask the right questions, interpret what AI is telling them, and feel confident vetoing or adjusting recommendations when something feels off. “Human judgment has never been more important,” argues Anna Catalano, an expert on board governance and leadership. “This judgment is needed to test whether the direction AI is taking us reflects the values of an organisation and reflects the values of our society.” That responsibility can’t be delegated to algorithms.
The silicon ceiling: closing the frontline adoption gap
With only about half of frontline employees using AI on a regular basis, many companies are struggling to get ahead in their transformation efforts. What’s the solution?
The lack of training described above creates a cascading effect that hinders wider AI adoption. BCG’s third annual AI at Work survey found that more than three-quarters of leaders and managers use generative AI several times a week, but regular use among frontline employees has plateaued at 51%. That gap matters more than it might seem. Companies are waking up to the reality that dropping AI tools into existing workflows won’t unlock much value. The real gains come from reshaping how work gets done end-to-end, and that kind of transformation depends heavily on frontline engagement. If half the workforce isn’t using AI regularly, the scope of what organisations can reimagine shrinks considerably.
Breaking through this “silicon ceiling” requires deliberate effort on several fronts. Leadership support makes perhaps the most measurable difference: when employees see their leaders actively championing AI, they’re more likely to use it regularly, enjoy their jobs, and feel optimistic about their careers. BCG’s data shows that the share of employees who feel positive about generative AI jumps from 15% to 55% when strong leadership support is present. The problem is that only about a quarter of frontline employees say they actually receive that support. Access to the right tools matters too. When employees lack the AI resources they need, more than half report finding workarounds on their own, a pattern that breeds frustration, introduces security vulnerabilities, and fragments organisational efforts. Training rounds out the picture. Employees who receive at least five hours of AI training, particularly with in-person coaching, are far more likely to become confident, regular users.
Ride the wave of change
Walmart offers a glimpse of what a more intentional approach looks like at scale. Rather than using AI as a pretext for headcount reduction, the company is reskilling its 2.1 million employees for AI-driven roles. In 2024, Walmart partnered with OpenAI to provide free AI training and certifications to workers across stores and corporate offices, an effort designed to prepare staff for evolving job requirements and, as leadership put it, “help everyone make it to the other side” of the AI transition.
Internally, the company established an AI council to track which roles are most likely to change and has rolled out tools like the Ask Sam assistant and supply chain optimisers to augment employee productivity. “Walmart has a history of getting stronger in moments of change, and with AI, we’re not waiting around – we’re leaning in to make it work for our customers, associates, and partners,” a company spokesperson explained. Whether other organisations can match that level of commitment will likely determine how quickly the frontline adoption gap closes.
In closing
A thread runs through each of these trends: the technology is rarely the limiting factor. The companies pulling ahead aren’t necessarily the ones with the most sophisticated models or the biggest AI budgets. They’re the ones whose leaders have done the harder work of reshaping workflows, establishing accountability structures, and investing in their people. They’ve recognised that AI amplifies whatever organisational strengths and weaknesses already exist. Get the foundations right, and AI accelerates progress. Skip them, and it accelerates dysfunction.
Human judgment plays a key role in all of this. AI can surface patterns, automate processes, and operate at speeds no team could match. It cannot decide what an organisation should value, how to weigh competing priorities, or when a recommendation should be overridden because the context demands it. Those calls remain irreducibly human, and the leaders who develop the skills to make them well will define which organisations thrive in the years ahead. The window to act is narrowing. If your company hasn’t yet moved beyond pilots and experiments, the question worth asking isn’t whether AI will reshape your industry. It’s whether you’ll be the one doing the reshaping.