The six biggest mistakes schools make when embracing AI

Richard van Hooijdonk
AI promises to transform education, yet most schools are making the same errors as they rush to adopt these tools. What separates successful AI implementation from costly failures?

AI is fast making its way into classrooms around the world, promising a new era of personalised learning, administrative efficiency, and unimaginable educational possibilities. According to new research by the Center for Democracy & Technology, 70% of high school students reported using AI in 2024, up from 58% the previous year – a signal that we’re witnessing a fundamental shift in how education happens. Yet amid the enthusiasm, schools can easily stumble into pitfalls that undermine the very advantages they seek to gain. Education experts and researchers are now sounding the alarm that, without careful planning, adopting AI in schools can lead to serious missteps – from eroding student privacy to widening achievement gaps, from creating dependency on unreliable systems to fundamentally misunderstanding what the technology can and cannot do.

The stakes are particularly high because educational decisions affect young minds during formative years. Besides being a giant waste of money, a poorly implemented AI system can also negatively shape how students learn, think, and develop crucial skills… possibly for the rest of their lives. Indeed, schools that rush to adopt AI without understanding its limitations risk creating problems that long outlast any potential benefits. In this article, we’ll explore the six biggest mistakes schools make when embracing AI, why these mistakes happen, and how to avoid them. Each represents a common trap that well-intentioned educators fall into, often because the pressure to innovate overshadows the need for thoughtful implementation. Understanding these pitfalls can help schools harness AI’s genuine potential while protecting what matters most: student learning, development, and wellbeing.

Adopting AI for technology’s sake

AI needs to address a real teaching or learning need, rather than being adopted for its own sake.

One of the most common mistakes schools make is rushing to implement AI simply because it seems like the fashionable thing to do. The pressure comes from multiple directions – edtech vendors hawking their latest products, other districts announcing their AI initiatives, or parents asking why their children aren’t using cutting-edge technology. Schools end up adopting AI for its own sake rather than to meet specific teaching or learning needs. Before anyone has asked whether teachers actually need these tools or whether students will benefit from them, the purchase orders are signed, and implementation begins. The result is expensive tools gathering dust or being disastrously misused while the original problems they were supposed to solve remain untouched.

Why does this keep happening? Part of it stems from how we’ve been conditioned to think about progress. For decades, schools have been told that technology adoption equals innovation, and innovation equals progress and improvement. AI has intensified this pressure because it carries such cultural heft – nobody wants to be the one who missed out on the next big thing. But education experts consistently emphasise that technology should serve educational goals, not define them. Adding AI to a classroom doesn’t automatically improve learning any more than adding a hammer to a toolbox automatically builds a house.

When schools implement AI without proper planning or evidence-based reasoning, the consequences go beyond wasted budgets. Teachers quickly grow frustrated trying to integrate tools that don’t align with their usual teaching methods. Students become confused by technology that complicates rather than clarifies their learning. Perhaps worst of all, the failed implementation makes everyone more sceptical of future technology initiatives, even ones that might genuinely help. 

Don’t buy the hype

Benjamin Riley, founder and chief executive of think tank Cognitive Resonance, warns educators not to buy the AI hype blindly. He points out that schools shouldn’t adopt AI just because it exists – especially given its well-documented limitations and tendency to ‘hallucinate’ plausible-sounding but incorrect information. What matters first is identifying a specific educational problem, then carefully evaluating whether AI tools actually offer a viable solution. If teachers struggle to provide timely feedback on student writing, then an AI writing assistant might indeed make sense. But if you’re buying the tool first and searching for a problem second, you’re already heading in the wrong direction.

The solution requires discipline that can feel uncomfortable in our fast-moving landscape. Schools need to base their AI decisions on evidence and research rather than vendor promises and hollow enthusiasm. Running small pilot programmes before committing to full implementation can help reveal whether a tool actually works in your specific context. Also, just because an AI tool worked for another school doesn’t mean it will also work for you – you won’t know until you test it with your teachers and students. Most importantly, any AI implementation needs to tie directly to curriculum objectives and measurable student outcomes. Acting quickly but strategically means maintaining focus on long-term student progress rather than short-term technological ‘achievements’. If you can’t clearly explain to parents how a specific AI tool will help specific students learn specific skills better, you probably shouldn’t touch it with a ten-foot pole.

Neglecting teacher training

To take full advantage of AI tools, teachers need to be properly trained on how to use them.

Another major pitfall is rolling out AI tools to classrooms without adequately training the teachers who will use them. No matter how sophisticated an AI application might be, it won’t fulfil its promise if educators don’t know how to integrate it into their instruction or feel uncomfortable using it. Unfortunately, many teachers have been left to figure out AI on their own. In the UK and US, roughly 70-76% of teachers report receiving little to no formal AI training from their schools. The consequences of this neglect play out in predictable ways. Teachers who feel anxious about AI tend to use it minimally or incorrectly. Some blindly trust whatever the system produces, while others view AI primarily as a threat to their careers, established teaching methods, or even academic integrity. Until proper training addresses these concerns, the technology remains more of an obstacle than an opportunity.

No room for professional development

So, why do schools consistently skip teacher training when implementing AI? Sometimes administrators fall into the trap of assuming that younger ‘digital native’ teachers will naturally pick up any new technology. After all, if they can use Instagram and TikTok, surely they can figure out an AI teaching assistant? Unfortunately, possessing basic digital skills doesn’t necessarily translate into understanding how to integrate AI into lesson planning or how to evaluate its outputs for educational value. Budget and time constraints push teacher training even further down the priority list. After spending thousands of dollars on the technology itself, there’s often little left for professional development. Districts find themselves taking DIY approaches or scheduling minimal training sessions that barely scratch the surface.

The scarcity of external experts who truly understand both AI and education compounds the problem. Many consultants know the technology but not the classroom, or vice versa. When teachers lack confidence with AI, the entire initiative can unravel. Tools that cost vast sums sit unused because nobody feels comfortable incorporating them into daily instruction. Worse, teachers who feel forced to use technology they don’t understand may actively resist or even sabotage the implementation. The backlash can poison attitudes toward future technology initiatives for years.

An open, ongoing conversation

The path forward requires treating teacher development as seriously as the technology itself. Professional development around AI needs to be an ongoing conversation, not a one-time event. Teachers need extensive hands-on experience with the actual tools they’ll be using, not just abstract, buzzword-laden discussions about AI’s potential. Starting with the basics – how AI actually works, what it can and cannot do, and the ethical considerations – gives teachers the foundation they need before moving to practical classroom strategies. Creating safe spaces for teachers to voice their scepticism matters more than many administrators realise. When a veteran teacher worries about job security, dismissing their concern as irrational doesn’t help. Similarly, when they fear students will use AI to cheat, telling them to ‘embrace change’ won’t address the underlying issue – especially when that concern is well substantiated.

These kinds of conversations need to happen openly, with concrete examples of how AI can enhance rather than replace teaching, and practical strategies for maintaining academic integrity. The most successful implementations often rely on teacher leaders who can bridge the gap between administration and classroom. When teachers see a respected colleague using AI to provide differentiated instruction or save hours on grading, they become curious rather than defensive. Moreover, teachers who understand the fundamentals of how AI works can adapt as new technologies emerge. They can make informed decisions about which tools serve their students’ needs and which are just expensive distractions. Most importantly, they can model for students how to approach new technology with both openness and critical thinking – a skill that will serve those students long after they leave the classroom.

Failing to ensure equitable access

If schools don’t take the necessary steps to ensure equitable access, AI tools will only deepen the existing digital divide.

Introducing AI in schools without planning for equitable access is a serious mistake that can deepen the already-problematic ‘digital divide’. AI tools typically demand reliable internet connectivity, up-to-date devices, and a baseline of digital literacy – resources that may not be equally available to everyone. When a school rolls out an AI-powered learning platform but half the students can’t connect from home, or when only certain classrooms get the necessary hardware, the technology becomes yet another advantage for students who already had an upper hand. Even in the US, where pandemic-era investments significantly improved device access, over one-third of teachers say their classroom internet remains too slow or unreliable to properly support online tools, while nearly 60% of students report connectivity issues at school.

Rural communities face particularly steep challenges. Students might have devices but no broadband access at home, forcing them to complete homework in library parking lots or – let’s be realistic – McDonald’s. One study found that students with a computer at home scored consistently higher on nationwide math assessments than those without, regardless of socioeconomic status. The computer itself didn’t magically improve math skills, but it enabled access to practice tools, educational videos, and yes, AI tutoring systems that helped students master concepts. Students without that access simply can’t compete, no matter how motivated or talented they might be.

Challenges preventing equitable access

Many schools today lack the bandwidth needed to run advanced educational technology. So, when they introduce AI without addressing these infrastructure gaps, they risk creating a two-tier system for learning. But equitable access goes beyond hardware and Wi-Fi speeds. AI resources also need to be inclusive and genuinely usable for all learners. An AI tutoring system available only in English leaves out students who speak other languages at home, while content that reflects a narrow cultural perspective can alienate students with diverse backgrounds. Tools designed without considering disabilities leave out students who need them most – the very students who might benefit tremendously from AI’s ability to provide personalised, adaptive support. If underserved groups cannot fully participate, AI becomes another technology that widens rather than narrows the digital divide.

An opportunity for all

Addressing these issues requires deliberate planning before AI tools ever reach classrooms. Schools need to audit their infrastructure honestly – not just whether they have Wi-Fi, but whether that Wi-Fi can handle what they’re planning to do with it. Can the network support thirty students streaming AI-generated content at once? Do students have reliable access to devices outside of school, or will the learning stop when they leave the building? Some schools address this by lending laptops and mobile hotspots for home use. And if most students have smartphones but limited computer access, a mobile-friendly AI application may be a better choice than desktop-only software. Moreover, offline capabilities become essential in areas where connectivity remains unpredictable.

But even with the right infrastructure and tools in place, some students and families will need additional support to use AI effectively. Digital skills workshops can help parents and students who feel uncertain about new technology make the leap successfully. If schools can demystify these tools for households across the socioeconomic spectrum, they will ensure that students in all communities understand how to use AI productively rather than feeling intimidated or left behind. Furthermore, teachers working with students who have special needs often require their own training and resources to help those learners get genuine value from AI-powered instruction.

Overlooking responsible AI use

The use of AI in educational settings raises a host of ethical concerns that need to be carefully addressed.

Troublingly, the schools most eager to implement AI often skip past the harder questions about whether they’re using it responsibly. The technology gets deployed quickly, and only later do administrators discover they’ve opened a Pandora’s box of problems they didn’t anticipate: algorithmic bias that harms certain students, privacy breaches that expose sensitive information, or ethical violations that erode trust between families and the school. AI systems learn from data, which means they absorb whatever biases exist in that data. If the training data reflects societal prejudices, the AI will too. In educational settings, this can play out in disturbing ways.

For instance, an AI essay grader might consistently score EFL learners lower, not because their ideas lack merit but because their English skills differ from the patterns in its training data. Behaviour monitoring systems might flag normal expressions of emotion differently based on a student’s race or gender, leading to disproportionate disciplinary actions for some groups. A recent Common Sense Media report revealed just how pervasive these biases can be: popular AI tools that many teachers today rely on, including MagicSchool, Khanmigo, Curipod, and Google Gemini for Education, all produced racially biased intervention plans when tested. When presented with identical scenarios about struggling students, these systems recommended harsher, more punitive responses for students with black-coded names while suggesting supportive, empathetic measures for those with white-coded names.

The ethics of AI

Privacy concerns inevitably add another layer of risk to the equation. Many of today’s AI tools work by collecting and analysing student data – everything from test scores and attendance records to essay content and behavioural patterns. Where does that data go? Who can access it? A teacher trying to save time might upload student information to a free AI service without realising the terms of service allow that data to be used for training the company’s models or shared with third parties. Even well-intentioned use can create serious vulnerabilities if the systems aren’t properly secured.

The ethical challenges extend beyond the obvious concerns about bias and privacy. How transparent are AI systems about how they make decisions? When an AI tutoring programme recommends that a student needs remedial work, what criteria led to that conclusion? If an AI gives harmful advice or makes a serious error that affects a student’s education, who bears responsibility – the school, the teacher who deployed it, or the company that built it? And where should schools draw boundaries around AI surveillance of students, balancing safety concerns against students’ reasonable expectations of privacy? Ignoring these considerations can have serious consequences. Parents might lose trust when they discover their child’s data was shared without clear consent. Students may disengage when they sense they’re being unfairly judged by systems they don’t understand. And teachers could become wary of tools that seem to undermine their professional judgment or put them in legally precarious positions.

Preparing the ground for AI implementation

To navigate this web of dangers, schools will need clear policies in place before AI tools go live in classrooms. Those policies should spell out what data can and cannot be shared with AI systems – for instance, no uploading of sensitive student records to external services without proper safeguards. The vetting process for new AI tools needs to include fairness audits, not just functionality tests. Can the vendor demonstrate they’ve tested for bias? What happens to student data after it’s processed? How is the information stored, and who can access it? Before adopting any AI software, schools should read the fine print on terms of service and privacy policies carefully, not just skim them. Data should be encrypted and stored securely, with regular security audits to catch vulnerabilities before they turn into devastating breaches.

Everyone in the school community needs to understand what responsible AI use looks like. Teachers should learn to recognise signs of algorithmic bias and know how to verify AI-generated information before using it with students. Students themselves can learn to spot bias in AI outputs and should have a clear process for reporting harmful or incorrect content when they encounter it. Building AI ethics into both professional development and digital citizenship curricula helps create a culture where people think critically about these tools rather than accepting their outputs at face value. When teachers and students understand how AI makes decisions, why human oversight matters, and what the consequences of misuse can be, they’ll be better positioned to use the technology thoughtfully and to speak up when something seems wrong.

Using AI in superficial ways

Time constraints, insufficient training, and scepticism towards AI result in the technology largely being used superficially.

Of course, having AI tools in the classroom doesn’t guarantee they’re actually changing how students learn. A 2022 national survey of US students conducted by Project Tomorrow, a leading education non-profit, found that most classroom technology use remains passive and skin-deep – students take online quizzes, watch videos, or maybe type up assignments on a computer instead of writing them by hand. Meanwhile, the deeper possibilities of technology for interactive or creative learning barely get touched. School administrators see the same pattern: over 70% report that teachers mainly use technology for activities that could just as easily happen with paper and pencil, and fewer than half say their teachers are using technology to enable learning activities that weren’t possible before.

Barriers to meaningful use of technology

Several factors push AI toward these superficial uses even when schools have the best intentions. Time constraints sit at the top of the list. Teachers report that insufficient training, scarce support materials, and too little planning time create real barriers to using technology meaningfully. Redesigning a lesson to genuinely integrate AI takes thought and experimentation – where does the tool fit into the flow of instruction, what role should students play versus the AI, and how does it connect to what came before and what comes next? Without dedicated time to work through these questions, teachers default to familiar approaches and bolt the AI tool onto existing practices rather than weaving it in thoughtfully.

Scepticism towards the technology only serves to compound the problem. In a 2023 Pew Research Center survey, only 6% of US teachers said AI tools do more good than harm in education, while around a quarter believed the opposite. That level of doubt means many teachers feel hesitant to trust AI systems with core teaching responsibilities. School leaders sometimes make the situation worse by focusing on procurement without clearly articulating how teachers should actually use what they’re buying. Under these circumstances, AI becomes just one more thing to juggle rather than an integral teaching aid. Many teachers, already overwhelmed and overworked, simply park it on the periphery where it won’t disrupt their already busy routines.

The disconnect between AI tools and the actual curriculum creates another barrier to substantive use. If students spend time with an AI app that doesn’t meaningfully connect to what appears on tests or what teachers emphasise in class, everyone quickly loses interest. Students wonder why they’re bothering with something that won’t affect their grades. Teachers see it as a distraction from ‘real’ learning. Parents question why their children are playing with computers instead of studying. The AI tool, no matter how sophisticated, becomes educational decoration – something to flaunt when the subject of funding comes up – rather than a functional part of learning.

How to ensure AI is used the right way

The most successful implementations start by aligning AI use with specific pedagogical goals and curriculum standards. Before any technology enters the classroom, schools need clear answers to basic questions: When will this tool be used? For which specific learning objectives? How will it enhance rather than replace good teaching? If the goal is improving writing revision skills, then the AI writing assistant needs to be woven into the revision process, not treated as an optional add-on that students might use if they have extra time.

Teachers need real support to redesign their instruction around AI. That might mean adjusting curricula to make room for new approaches, providing detailed example lesson plans that show how the AI tool fits into a unit, or giving teachers collaborative planning time to work through implementation challenges together. Additionally, focusing on depth makes more sense than trying to use many different AI tools superficially. Pick one or two high-impact applications and work on embedding them thoroughly into instruction rather than dabbling with a dozen platforms that never quite become part of daily practice. Then measure what’s actually happening: are students more engaged? Are they improving in the target skills? Are teachers finding the tool genuinely helpful? That feedback helps refine practice and shows whether the investment is truly paying off and worth sticking with.

Ignoring student voice and wellbeing

One of the biggest mistakes schools make when introducing AI is not involving students in the process or considering how the technology might affect their wellbeing.

At the end of the day, schools exist to serve students, which makes it particularly troubling when AI initiatives leave them feeling alienated, anxious, or cut off from the human relationships that make learning meaningful. The temptation to automate everything runs strong – AI can grade assignments, answer student questions, provide practice problems, and track progress. For cash-strapped districts, it could also save quite a bit of money. But leaning too heavily on these systems can hollow out what matters most about education. Increased reliance on AI in schools may result in a loss of human connection between students and educators, potentially damaging motivation, social development, and the overall learning experience.

Students consistently cite supportive relationships with teachers as crucial to their success. Learning happens through social and emotional processes as much as cognitive ones. When a student mostly interacts with an AI tutor instead of getting personal feedback from a teacher who knows their struggles and strengths, something essential gets lost along the way. The student might master the content through algorithmic instruction, but end up feeling less seen, less understood, and less motivated to push through difficult material.

The problem deepens when schools implement AI without asking students what they actually think about it. Top-down technology decisions risk missing how these tools affect learners’ daily experiences. Students will often notice the things adults overlook – when an AI grading system feels arbitrary or unfair, when a chatbot’s responses fall short of actually helping them understand, or when the constant presence of AI tools starts changing how they think and work. Additionally, some students report that heavy use of AI for schoolwork is affecting their own cognitive habits. Others express fear about an AI-centric future where algorithms make educational decisions without human judgment. They worry about scenarios in which AI alone decides who gets which opportunities, resources, and support. When schools brush aside or never solicit this feedback, they risk undermining student agency and potentially harming the mental growth they’re trying to foster.

Human voices still matter most

Avoiding these pitfalls requires keeping humans firmly in the loop. Teachers need to remain the heart of education – the people who inspire curiosity, recognise when a student needs encouragement or challenge, and create moments of connection that algorithms can’t replicate. AI should supplement these relationships, handling routine tasks that free teachers to spend more time on what humans do best. That means maintaining plenty of face-to-face discussion, mentorship, and collaborative activities so students continue experiencing the warmth and personal attention that make learning feel meaningful.

Students deserve a voice in how AI shapes their education. To this end, schools can include student representatives on technology committees, conduct surveys or focus groups about new AI applications, and invite students to test tools and share honest reactions before wide implementation. When students feel heard, they’re more likely to engage constructively with new technology and more willing to speak up when something isn’t quite working as intended. Schools should also watch out for signs that students feel overwhelmed by constant interaction with AI tools and balance screen time with real-world activities and social interaction. Teaching students to self-regulate their AI use matters too – helping them understand when to lean on AI assistance and when to challenge themselves without it. The goal is to enhance learning, not automate it to the point where students disengage or lose confidence in their own abilities.
