- How do teachers feel about generative AI disrupting education?
- Teachers’ concerns regarding students’ use of generative AI
- Educational institutions that are already implementing gen-AI
The advent of the Fourth Industrial Revolution has woven technology into the very fabric of our daily lives, from the way we communicate to the way we work. Not surprisingly, the educational sector has also been touched by this digital metamorphosis, and more specifically by generative AI, a technology that can produce content ranging from stories and articles to music, art, and even software code. But how does this technology impact the educational ecosystem? For students, the benefits are manifold. Generative AI can provide personalised learning experiences, adapting content in real-time based on individual performance and preferences. Struggling with algebra? The AI can break it down step-by-step. Need a deep dive into Renaissance art? The AI can curate an in-depth lesson tailored just for you. For teachers, this technology can act as an assistant, offering resources, generating test questions, or even suggesting different teaching methodologies based on individual or class performance. However, as with any innovative tool, there are potential pitfalls. Over-reliance on AI might result in reduced critical thinking skills among students, and the ‘human touch’ that makes education a transformative experience could be at risk. There’s also the valid concern of data privacy, with algorithms constantly analysing student behaviour and performance. Are we ready for such detailed scrutiny, and where do we draw the line? In this article, we will look at educators’ perspectives and highlight schools that are trailblazing the incorporation of generative AI in their curriculum.
It’s clear that the implementation of generative AI in education is a multifaceted issue, with educators worldwide oscillating between optimism for its potential and caution against its pitfalls.
How do teachers feel about generative AI disrupting education?
The education sector has seen a significant shift with the advent of AI, particularly through tools like ChatGPT, which are altering the engagement dynamics for students. Rather than sticking to traditional methods and ignoring the wave of change, it’s vital to understand the technology’s benefits and work out how best to adapt. The emergence of AI and ChatGPT offers a window to proactively engage with these innovations and to carefully consider their adoption in education, instead of apprehensively rejecting them. A survey by research firm McCrindle reveals that 62 per cent of classroom teachers are enthusiastic about implementing AI into their teaching methodologies, whereas 38 per cent approach it with apprehension. These numbers suggest that, overall, there’s a keen interest in understanding how best to implement these tools for the benefit of student success.
While educators are still figuring out how to seamlessly integrate this technology into the learning process, the future holds promising prospects. As we weigh the best ways to incorporate technology into educational processes and settings, it’s important to clarify the roles of the teacher, the tech tools, and the evaluation processes, and to recognise that any departure from what’s customary may spark initial reservations. While teachers are notably open to innovative changes, they consistently prioritise discernment when assessing new educational practices, often showing restraint towards the allure of the new, which reflects their commitment to evidence-driven teaching. A surge of unfavourable outcomes could easily sway their current, mostly positive perspective. It’s clear that the implementation of generative AI in education is a multifaceted issue, with educators worldwide oscillating between optimism for its potential and caution against its pitfalls. As the technology evolves and becomes more entrenched in educational settings, these perspectives will continue to develop and shift.
Teachers’ concerns regarding students’ use of generative AI
The advent of generative AI presents a double-edged sword. While it promises numerous benefits in personalised learning and resource generation, it also raises significant concerns among educators, especially related to academic honesty and the integrity of the learning process. Some of the most prominent gen-AI tools, such as ChatGPT, have come under scrutiny because of their potential misuse by students. Many educators fear that students might lean on these tools to write their assignments or complete homework, thereby bypassing the essential cognitive and analytical processes that such tasks are designed to instil. This apprehension isn’t limited to ChatGPT. Over the years, other resources like Google, Wikipedia, and YouTube have come under the spotlight. However, what distinguishes gen-AI from these platforms is its ability to generate content that may be indistinguishable from a student’s own work. If students rely too heavily on AI-generated content, they miss out on the iterative learning process that comes with researching, understanding, and articulating knowledge. The line between research and mere reproduction, then, becomes increasingly blurred.
Math and spelling tools further compound these concerns. While they are designed to aid in learning, in the hands of a student seeking shortcuts, they might just serve as a means to an end — finishing an assignment without truly engaging with the content. This presents educators with a conundrum: how do you provide students with tools to facilitate learning without enabling academic dishonesty? In an attempt to tackle this challenge, some school systems have responded by blocking student access to ChatGPT, while still allowing teachers to utilise the tool. This decision is grounded in the belief that educators, with their professional training and responsibility, are less likely to misuse the tool and more likely to leverage it to benefit the larger educational process. However, a widespread belief among educators is that a mere ban is not a comprehensive solution. This digital-native generation of students, which is adept with technology, will invariably find ways around these obstacles. Moreover, imposing restrictions might send the wrong message — for instance, that technology is inherently detrimental to learning — rather than emphasising the importance of using it responsibly. So, the heart of the matter isn’t about blocking access, but rather about education itself, and educational institutions need to invest time in understanding the nuances of these technologies.
Educators should not only be equipped to use these tools responsibly, but also instruct students on their ethical implications. Moreover, an in-depth understanding of gen-AI and its capabilities can aid teachers in identifying AI-generated content, allowing them to maintain the integrity of the grading process. Another point worth considering is adapting the curriculum to focus on skills that can’t easily be replicated by AI, such as critical thinking, problem-solving, and creative expression. By shifting the emphasis from rote learning to these core competencies, educators can ensure that students derive genuine value from their academic pursuits. While the concerns surrounding gen-AI in the educational context are genuine and valid, they also present an opportunity. By proactively addressing these concerns, educators can shape a future where technology and learning coexist harmoniously, each enriching the other.
Educational institutions that are already implementing gen-AI
While many educational institutions are still exploring the potential of gen-AI, a select few have already made significant strides, integrating it as a core component of their teaching methodologies. These schools understand that with strategic implementation, gen-AI can be more than just a supplementary tool — it can redefine the educational process. To understand this transformation better, let’s examine some schools that are setting benchmarks in the effective utilisation of gen-AI in their curricular framework.
Harvard introduces ChatGPT-powered teaching for its students
Harvard University, one of America’s premier institutions, is incorporating AI into its teaching for a widely attended coding course. The university is considering a ChatGPT-driven teaching assistant for its introductory CS50 coding class. Notably, CS50 isn’t just a hit on Harvard’s main campus; it also stands out on edX, a prominent online learning platform jointly developed by Harvard and MIT, and attracts roughly 1,000 students every semester. The move, according to professor David Malan, who oversees the course, aligns with the tradition of integrating the latest software into the curriculum. Malan says: “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually”. Course staff are currently experimenting with both the GPT-3.5 and GPT-4 models, and since the course has always incorporated new software, employing an AI teaching assistant is simply an evolution of that tradition. Professor Malan is optimistic that the AI will lessen the load of routine tasks for course staff, enabling them to have more meaningful, personalised sessions with students. Malan added: “We’ll make clear to students that they should always think critically when taking in information as input, be it from humans or software”. With this step, Harvard is setting a precedent that might reshape the landscape of higher education, blending traditional teaching methods with advanced AI-driven support.
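The general idea behind such a course assistant — constraining a chat model so it guides rather than solves — can be sketched in a few lines. The sketch below is purely illustrative and assumes the common chat-completion message format (a list of role/content pairs); the function name and prompt wording are hypothetical, not Harvard’s actual implementation:

```python
# Hypothetical sketch of a "guardrailed" teaching assistant: the pedagogical
# constraint lives in a system prompt that steers the model toward hints
# and questions instead of complete solutions. The message format follows
# the widely used chat-completion convention of role/content dictionaries.

def build_tutor_messages(student_question: str) -> list[dict]:
    """Assemble a chat transcript that nudges the model toward
    step-by-step, Socratic help rather than finished answers."""
    system_prompt = (
        "You are a teaching assistant for an introductory coding course. "
        "Guide the student toward the answer with hints and questions; "
        "do not write complete solutions for graded work."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": student_question},
    ]

messages = build_tutor_messages("Why does my loop never terminate?")
print(messages[0]["role"])  # the system prompt always comes first
print(len(messages))        # system prompt plus the student's question
```

In practice, this message list would be sent to a hosted model, and the course’s policy (no complete solutions for graded work) would be enforced both by the system prompt and by human oversight of the tool’s use.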
“We are on the cusp of an AI revolution. Its impact on how we educate and learn will be profound”.
Sunny Thakral, award-winning global teacher of the year 2018
Brighton College, Bangkok, integrates AI in its classrooms
UK-based educator, award-winning global teacher of the year 2018, and former director of ICT at The British School in Kathmandu, Nepal, Sunny Thakral, shares his experience with gen-AI, including some of the ways in which Brighton College Bangkok has started to integrate the technology into their classrooms. Thakral says: “We are on the cusp of an AI revolution. Its impact on how we educate and learn will be profound”. At the heart of the school’s AI strategy are three principles: lightening teacher workload, individualising student learning paths, and readying students for a symbiotic future with AI. Beyond just ChatGPT, the school also leverages Education Copilot in pioneering AI-driven pedagogies. Some exam-oriented classes have adopted ChatGPT as a tailored tool for the students, which offers them immediate interaction and a wealth of information at their fingertips. Comparing personal essays with those corrected by AI offers students insights into better writing. When engaging with ChatGPT, asking questions and seeking clarifications, even students previously challenged by lesson topics begin to grasp them more effectively. Gen-AI tools empower teachers at Brighton College to personalise instruction for every student. Plagiarism is a concern, but its root causes — like external (social) pressures or student confusion — are generally systemic and thankfully the school, like many other educational facilities, has established protocols to address these issues. The school also offers lessons in research methods, emphasising original thinking through techniques like linking ideas and preliminary writing, which boosts students’ confidence in their own writing style. Teachers at Brighton College blend AI into their lessons to keep personal interaction alive and prevent students from becoming overly dependent on the technology alone.
South Korea implements AI-based systems at public schools
Schools in South Korea are also increasingly implementing AI-based systems. According to an announcement by the education minister, each child will get access to a personalised AI tutor and an online learning platform. Learning assignments and homework will be tailored to students’ individual educational levels and learning behaviours. This transition to AI-based systems enables teachers to focus more on hands-on lessons and social-emotional connections. To ensure public schools can offer the same enriched and tailored educational experiences as private educational institutions and move away from the emphasis on memorisation, the education minister believes changes like these are essential. He expects a transition to assessing students in their day-to-day work and moving away from the traditional end-of-course exams. According to Asia Daily, the minister recently announced that AI digital books will be introduced in elementary and middle schools in 2025, in a bid to meet the growing demand for diversified educational content for students as well as educators. Digital books integrated with AI can instantly assess a student’s overall understanding and adjust the teaching to their needs. Moreover, AI can determine how quickly a student is advancing and suggest essential courses for those requiring added assistance. As AI may influence many jobs in the future, countries like China and South Korea believe in introducing AI education to children from a young age. This approach aims to equip the next generation with the necessary skills to navigate a future dominated by AI, ensuring they’re prepared to seize the opportunities and address the challenges that lie ahead.
With great power comes great responsibility. As we embrace these technological advancements, it’s paramount that we also remain vigilant about the possible pitfalls. Ethical concerns, such as data privacy and the potential for AI to perpetuate biases, must be forefront in our minds.
Generative AI’s growing presence in the educational sphere presents both unprecedented opportunities and challenges. Its capacity to reshape conventional teaching methods is undeniable: from automating tasks that once consumed hours of a teacher’s time to offering tailor-made assistance for every student, ensuring that each individual’s learning needs are addressed. But, as is the case with most cutting-edge technologies, there are downsides to consider as well. Students who lean too heavily on AI might not develop robust critical thinking skills, and the essence of personal connection in education may be jeopardised. Constant algorithmic tracking of students also raises data privacy concerns, and then there’s the potential for AI to perpetuate biases. So, how do we balance innovation with privacy and ethics, and where do we draw the line? Indeed, with great power comes great responsibility, and as we embrace these technological advancements, it’s paramount that we also remain vigilant about the possible pitfalls. The onus will be on educational institutions to craft rigorous policies and regular training sessions, ensuring that educators aren’t just passive users but informed and discerning adopters of AI tools. By tackling these potential issues, educators can work towards a future in which technology and education thrive side by side, amplifying each other’s strengths.