Executive summary
A striking contradiction sits at the heart of modern workplaces. Despite clear evidence that AI tools boost efficiency and job satisfaction, nearly half of workers hide their use of AI from colleagues for fear of being perceived as lazy, incompetent, or unethical. This phenomenon, dubbed ‘AI shame’ by researchers, has emerged as one of the most significant barriers to successful digital transformation.
- Workers using AI report 64% higher productivity and 81% greater job satisfaction than non-users.
- 48.8% of employees conceal AI use to avoid judgment.
- Duke University found that AI users are rated as ‘lazier, less competent, and less diligent’ by colleagues.
- “While AI can significantly enhance work performance, using it may damage your professional reputation,” says Jessica Reif, lead author of the Duke University study.
- “Leadership should share examples of practical ways to use AI and shine a light on areas where AI should not be used. This transparency takes away the fear and shame that some employees associate with using AI on the job,” says Sam DeMase, Career Expert at ZipRecruiter.
The solution lies in deliberate culture change. Leaders who openly embrace AI use, implement comprehensive training programmes, and establish clear ethical guidelines create environments where employees can harness these powerful tools without fear. The organisations that master this transition will capture the full benefits of AI adoption, while those that ignore the problem will watch productivity gains slip through their fingers as competitors gain ground.
Introduction
AI has become an inextricable part of our everyday routines. ChatGPT helps you draft emails. Claude assists with research and analysis. Grammarly polishes your writing. GitHub Copilot speeds up your coding. Millions of professionals now weave AI tools into their workflows, while others turn to them for creative projects, to learn new skills, or simply to satisfy their curiosity about what AI can accomplish. But have you ever felt a twinge of guilt when using AI to help with work tasks? Maybe you’ve crafted the perfect presentation with AI assistance, then hesitated before sharing it with colleagues. Or perhaps you’ve found yourself downplaying how much an AI tool contributed to your latest project, or even omitting it entirely from conversations about your process. You’re far from alone if any of this resonates.
Across industries and job functions, professionals report similar experiences – a quiet discomfort that accompanies their AI usage, an unspoken sense that they should be accomplishing these tasks purely through human effort. What might appear as a minor workplace quirk actually signals something more substantial brewing beneath the surface. As AI tools become increasingly sophisticated and ubiquitous across business environments, this psychological response, which researchers have begun referring to as ‘AI shame’, emerges as one of the most significant obstacles standing between organisations and successful digital transformation initiatives. So, how do you resolve this problem? And how do you ensure that employees who want to use AI tools don’t feel compelled to use them in secret or worry about judgment from their peers?
“While AI can significantly enhance work performance, using it may damage your professional reputation.”
Jessica Reif, Duke University
The anatomy of AI shame
Why do people feel the need to hide that they are using AI for work?
Before we get into practical advice on how to overcome this issue, let’s first examine the reasons behind it. While many among us now integrate AI tools into our daily workflows, a substantial portion of the workforce remains on the outside looking in. Many colleagues, supervisors, and even senior leaders have never actually used applications like ChatGPT, remain largely unaware of their capabilities, and maintain deep suspicions about their impact on work quality and professional ethics. Yet despite their limited direct experience, these individuals often hold remarkably strong opinions about how AI should and should not be used.
They often view the use of AI tools as ‘cheating’, ‘taking shortcuts’, being ‘professionally underhand’, or ‘acting unethically’. When these viewpoints surface in workplace conversations – whether through casual comments, policy discussions, or performance reviews – they can quickly escalate into intentional or inadvertent shaming of colleagues who are perceived as acting inappropriately. As a result, those who do use AI tools begin downplaying or concealing their usage, even when that technology genuinely improves their productivity and work quality.
A deeply embedded stigma
Recent data reveals just how widespread this dynamic has become. Slack’s 2024 Workforce Index found that daily AI users report 64% higher productivity and 81% higher job satisfaction compared to non-users. However, nearly 50% of US employees admitted they felt ashamed to tell colleagues they were using AI because they were concerned they’d be perceived as lazy or engaging in dishonest practices. “Our research shows that even if AI helped you complete a task more quickly and efficiently, plenty of people wouldn’t want their bosses to know they used it,” says Christina Janzer, Head of Slack’s Workforce Lab. “Leaders need to understand that this technology doesn’t just exist in a business context of ‘Can I get the job done as quickly and effectively as possible,’ but in a social context of ‘What will people think if they know I used this tool for help?’”
Additional research from WalkMe’s 2025 AI in the Workplace survey reinforces these findings. Nearly half of the over 1,000 US workers surveyed (48.8%) admit to hiding their AI use at work to avoid judgment. What makes this data particularly striking is that the discomfort doesn’t decrease as you move up the organisational hierarchy – quite the opposite. Among C-suite leaders, as many as 53.4% conceal their AI habits, despite being the most frequent users of these technologies. “Let’s be honest: employees aren’t hiding their AI use because they’re trying to get away with something – they’re hiding it because they’re trying to get ahead without getting in trouble,” explains David Torosyan, HR & Payroll Manager at J&Y Law Firm.
Unfortunately, these fears may have a strong basis in reality. A 2024 Duke University study found that workers consistently rated colleagues who used AI as ‘lazier, less competent, and less diligent’ than those completing identical tasks manually. The research also showed that managers were demonstrably less likely to hire or promote AI-using candidates. Most concerning of all, this bias appeared consistently across age, gender, and organisational status, suggesting a deeply embedded stigma. “Our findings reveal a tension between productivity and perception. While AI can significantly enhance work performance, using it may damage your professional reputation,” argues Jessica Reif, lead author of the study. “People anticipate being judged harshly for using AI. This creates a situation where employees might hide their AI use, even when that AI use is beneficial for the organisation.”
“Leadership should share examples of practical ways to use AI and shine a light on areas where AI should not be used. This transparency takes away the fear and shame that some employees associate with using AI on the job.”
Sam DeMase, Career Expert at ZipRecruiter
Overcoming shame and resistance
Practical steps that can help companies overcome AI shame and ensure that those who use AI can do so without fear of being judged.
So, what practical steps can companies take to tackle the problem of AI shame in the workplace? It all begins with fostering a culture that treats innovation and continuous learning as core business values. Organisations looking to successfully integrate AI need to actively encourage employees to develop their understanding of these tools through structured learning opportunities. Consider hosting internal events, arranging expert-led webinars, or sponsoring attendance at industry workshops. When leadership demonstrates genuine investment in employee AI literacy, they create psychological safety around experimentation and skill development. “The problem isn’t that employees are using AI – it’s that they are afraid to talk about it,” says Justin Hale, Master Trainer at Crucial Learning. “Leaders need to go in thinking, I’m just going into the conversation expecting there’s going to be people that are fearful or unsure or uneasy, and so I need to double down on my invitation, my curiosity, my openness.”
Companies should also invest in bringing AI training directly into their workplace environment. Internal programmes create spaces for collaboration and hands-on skill-building that feel more relevant and immediately applicable than generic courses. Establishing partnerships with educational institutions or specialised online learning platforms can help organisations develop industry-specific AI programmes tailored to their teams’ actual responsibilities and challenges. When employees can experiment and practise with AI tools in low-stakes environments, they develop both technical competence and confidence. More importantly, they begin to see how their newfound knowledge applies to real client work and business outcomes, transforming abstract concepts into practical capabilities.
As comfort and capabilities grow across your organisation, addressing ethical considerations becomes equally crucial for sustainable AI integration. Companies need to define clear guidelines that align AI usage with their established brand values and principles, outlining how AI tools will be used, what types of data will be collected and analysed, and how decisions informed by AI insights will be made and communicated. Sam DeMase, Career Expert at ZipRecruiter, emphasises the importance of transparent communication in this process: “Leadership should share examples of practical ways to use AI and shine a light on areas where AI should not be used. This transparency takes away the fear and shame that some employees associate with using AI on the job. The reality is, AI is becoming increasingly prevalent and, when used effectively, can boost productivity and free up employees’ time for innovative thinking.”
Empowering people to do better
The global pharmaceutical company Novo Nordisk provides a compelling example of how organisations can tackle AI shame directly during major technology rollouts. When the company deployed Microsoft Copilot across its workforce in 2024, many employees were initially highly suspicious of the new tool, worrying that using AI assistance would be perceived as cheating or taking unethical shortcuts. Rather than dismissing these concerns or mandating adoption, Novo Nordisk leadership chose to address the cultural and communication dimensions head-on. They established clear messaging that framed Copilot as an empowerment tool designed to enhance human capabilities rather than replace human judgment or circumvent important processes.
Corporate VP Mark Navas communicated this philosophy directly to staff: “Copilot is about empowering our people to do better work, not cutting corners.” Leadership reinforced this message through hands-on training sessions where employees could see exactly how the tool functioned and ask questions about appropriate usage scenarios. The company also created open forums where teams could discuss their experiences, share concerns, and learn from colleagues who had found effective ways to integrate AI into their workflows. By embedding AI champions throughout the organisation and establishing safe feedback channels, Novo Nordisk gradually transformed employee attitudes from suspicion to curiosity.
Learnings
The phenomenon of AI shame reveals a profound truth about technological transformation: the greatest barriers to progress are often social rather than technical. Whilst we’ve built AI systems capable of amplifying human intelligence and creativity, we haven’t yet built the cultural frameworks that allow people to embrace these tools without fear. Organisations that successfully break through the shame barrier all share common characteristics: visible leadership commitment, comprehensive education programmes, clear ethical guidelines, and cultures that celebrate augmentation rather than hide it.
The ultimate lesson from this moment in history may be that authentic leadership requires acknowledging and addressing the human dimensions of digital transformation. Workers hiding in corners to use productivity tools aren’t displaying weakness – they’re revealing the inadequacy of our current organisational cultures to support the augmented workforce. As we move towards a future where AI proficiency becomes as fundamental as literacy itself, the organisations that thrive will be those that transform shame into empowerment, fear into curiosity, and secrecy into collaborative exploration.