AI in Education: Big Tech’s Race for the Classroom

“Education is about knowledge, understanding—very different from the goals of big tech, which ultimately seeks market share and monetization.”

David Davies, Professor of Anthropology at Hamline University

As major tech companies accelerate their push into AI-driven education tools, Google’s latest expansion—integrating its large language model, Gemini, into Google Classroom, Chromebook features, and video creation tools—signals a critical shift in how AI is shaping learning. With Microsoft, Apple, and others also launching AI-powered education solutions, the competition to define the future of classroom technology is intensifying.

As AI becomes an unavoidable force in the classroom, educators, policymakers, and institutions must grapple with its implications. Will it serve as a tool for enhancing education, or will it undermine core academic principles? We spoke with an expert on digital anthropology who is integrating AI into his own teaching to learn more about AI-enhanced learning, classroom technology adoption, and the broader societal impact of this rapid transformation.

Meet the Expert: David Davies

David Davies, PhD, Professor of Anthropology at Hamline University

Dr. David Davies is a professor of anthropology at Hamline University, a small liberal arts college in Saint Paul, Minnesota. He previously served as the American co-director of the Johns Hopkins University-Nanjing University Center in China, in 2011 and again from 2016 to 2018.

While teaching and developing new classes for Hamline students takes up quite a lot of time, he has managed to maintain an active research agenda, publishing on a wide range of topics, from social memory and nostalgia for socialism in China to the rise of celebrity business people in China’s freewheeling market economy. Recently he published two chapters on Wal-Mart’s corporate culture and localization in China in a book of collected scholarly essays on Wal-Mart in China. He also teaches courses on digital anthropology.

Dr. Davies holds a PhD in socio-cultural anthropology from the University of Washington. He has more than 25 years of research and work experience in East Asia, with a primary emphasis on China.

The Changing Role of Educators in the AI Era

For centuries, the role of educators has been defined by their ability to distill knowledge, curate learning materials, and deliver lectures. This traditional model—the “sage on the stage” approach—places the teacher at the center of the classroom as the primary source of information. However, AI is increasingly challenging this framework. As powerful language models like Google’s Gemini become capable of generating detailed explanations, synthesizing complex concepts, and even responding to student queries in real time, the need for a human lecturer as the primary vessel of knowledge is fading.

David Davies, Professor of Anthropology at Hamline University, sees this shift as inevitable. “Why would you listen to a person at a podium when you can watch a YouTube video, read on your own?” he asks. “I think AI is ultimately going to be the end of that model.”

The reality, he suggests, is that students no longer need to rely solely on professors for content delivery. Instead, AI allows them to access vast amounts of information on demand, forcing educators to rethink their role in the classroom.

This transformation presents both challenges and opportunities. In large lecture halls at major universities, where hundreds of students passively absorb material, AI may fundamentally disrupt the format. Yet, in smaller, discussion-based settings—such as those at liberal arts colleges—AI can serve as a complementary tool rather than a replacement. Davies, who teaches at a smaller institution, describes how he envisions using AI to enhance classroom discussions.

“Students read things, they bring that information to class, we discuss it in class, and we arrive at a synthesis,” he explains. “AI will actually be great for students to practice, to make sure they got the text, so that when they come to class, they can more adeptly deploy the information.”

In this evolving landscape, the teacher’s role may shift from a lecturer to a facilitator—guiding students in analyzing and interpreting AI-generated content rather than simply delivering information. Rather than competing with AI, educators may need to focus on fostering deeper discussion, critical thinking, and contextual understanding—skills that remain uniquely human.

AI’s Impact on Student Learning and Academic Integrity

As AI-powered tools become more accessible, their influence on student learning is undeniable. While these tools offer personalized study aids, automated tutoring, and streamlined research assistance, they also challenge long-standing academic traditions—particularly the written essay, a cornerstone of higher education.

“AI is not going anywhere. AI is free right now. AI is easy to have access to anywhere,” Davies observes. “Students are increasingly busy… they will cut corners, even the ones that are like, ‘I want to learn, I want to learn.’”

Davies has already seen this shift firsthand. “By spring of 2023, I decided on no more papers. Why would I have somebody write a paper when the temptation to use AI is too great? It’s no longer useful.” His decision reflects a growing recognition among educators that traditional written assignments may no longer be effective measures of student comprehension. If AI can generate a well-structured essay in seconds, what does that mean for academic integrity?

This challenge extends beyond plagiarism. AI-generated content is not just a shortcut for students—it alters the very nature of learning itself. In the past, writing a paper required students to research, synthesize information, and construct original arguments. The process of writing was, in many ways, a tool for thinking. But now, with AI readily available to generate ideas, draft essays, and even mimic writing styles, students risk outsourcing the cognitive work that once defined academic rigor.

Yet, rather than banning AI outright, some educators are exploring new ways to assess student learning. Davies has shifted from written assignments to more creative, synthesis-driven projects.

“I had a student who painted because she liked to paint, another who composed music, and another who programmed in a language they were learning,” he explains. These projects require students to transform information across different media, making it much harder to rely on AI alone. “You still have to engage with the material,” Davies says. “You still have to think critically and explain your decisions in class.”

By shifting from text-based assessments to interdisciplinary, project-based learning, educators may not only preserve academic integrity but also encourage students to develop uniquely human skills—creativity, problem-solving, and self-expression. However, the question remains: Will higher education institutions adapt quickly enough to redefine learning in the AI era, or will academic standards erode under the weight of automation?

The Broader Role of Big Tech in Education

The integration of AI into classrooms is not just a pedagogical shift—it’s a high-stakes battleground for Big Tech. Companies like Google, Microsoft, and Apple are competing aggressively to dominate the education sector, embedding their platforms into the daily workflows of students and teachers. Whether through AI-powered learning assistants, cloud-based storage, or digital classroom management tools, these corporations are positioning themselves as indispensable to modern education.

David Davies sees this as part of a broader trend in corporate expansion. “Google, Microsoft, Apple—these companies are carving out pieces of higher education, just like they do with healthcare and government,” he notes. Their presence in education is not new—Google Drive, Microsoft Office 365, and Apple’s iPads have long been classroom staples. However, the integration of AI is accelerating Big Tech’s control over educational infrastructure, raising concerns about dependency, data privacy, and long-term influence over curriculum design.

At the heart of this competition is a fundamental tension: education, at its core, is a non-profit endeavor designed to foster knowledge, while tech companies operate with profit motives.

“Education is about knowledge, understanding—very different from the goals of big tech, which ultimately seeks market share and monetization,” Davies points out. While partnerships between universities and tech companies can provide valuable tools, they also risk shifting priorities away from student learning and toward corporate interests.

Beyond classroom software, AI-powered platforms could further commercialize education, leading to new disparities in access. If tools like Google’s Gemini or Microsoft’s AI tutors become essential for coursework, will schools—and students—be forced to pay for premium AI services?

“If ChatGPT starts charging an exorbitant amount and universities and students can’t get access to it, that would be very problematic,” Davies warns.

Without careful oversight, AI’s role in education may shift from an equalizer to a gatekeeper, where only those who can afford cutting-edge tools receive the full benefits of AI-driven learning.

As Big Tech embeds itself deeper into academia, institutions must carefully consider how much control they are willing to relinquish. If AI-powered tools become essential for learning, schools risk becoming customers rather than stewards of education. The challenge lies in leveraging AI’s potential while ensuring that corporate influence does not dictate the future of learning.

AI, Social, and Environmental Implications

The growing integration of AI into education also carries broader social and environmental consequences that are often overlooked. While AI promises to streamline learning and make education more accessible, it also raises concerns about inequality, sustainability, and the ethical implications of an increasingly automated academic system.

One of the most pressing concerns is the potential for AI to deepen existing educational inequalities. Wealthier schools and universities may have the resources to provide students with the latest AI-powered learning tools, while underfunded institutions struggle to keep up. This imbalance could create a two-tiered education system, where elite students receive highly personalized, AI-enhanced learning experiences, while others are left with outdated resources.

David Davies warns of this possibility: “I fear we’ll see a future where wealthy elites get people in classrooms while everyone else gets AI.” His concern is that AI, rather than democratizing education, could instead become a substitute for human instruction in less privileged environments. “If you can’t afford a brick-and-mortar education, why pay for a professor when ChatGPT will do it for you?” he adds. If AI becomes a default teaching tool in lower-income schools while wealthier students continue to benefit from direct human instruction, it risks exacerbating existing disparities rather than reducing them.

Beyond social inequality, AI’s environmental impact is an often-ignored consequence of its widespread adoption. Training and operating large-scale AI models require immense computational power, which translates to significant energy consumption and carbon emissions. AI-driven education tools, particularly those reliant on cloud computing and continuous processing, could contribute to the growing environmental footprint of digital infrastructure.

Davies has considered this issue in his own work. “Every time I use ChatGPT, I think—that query is generating more carbon.” This hidden cost of AI-driven education raises questions about sustainability, especially at a time when institutions are increasingly prioritizing climate-conscious policies. If AI becomes deeply embedded in daily learning practices, universities and schools may need to reassess their commitments to reducing energy consumption and carbon emissions.

And then there are the psychological and cognitive impacts of over-reliance on AI to consider. As students turn to AI for research, writing, and problem-solving, they risk outsourcing key cognitive processes that have traditionally been fundamental to education. The ability to struggle through a difficult concept, refine an argument, and independently synthesize information are skills that AI can easily short-circuit.

Davies reflects on this challenge through his own experience: “Good thinking comes through writing. AI can assist, but if students don’t develop those foundational skills, what are they left with?” He argues that while AI can be a valuable collaborator, students must still engage deeply with material to truly understand and apply it.

This raises important ethical considerations. If education increasingly shifts toward AI-driven automation, are students being equipped to think critically, or simply to operate within an AI-mediated learning environment? Schools and policymakers must consider safeguards that prevent over-reliance on AI while still leveraging its potential as a supportive tool.

The implications of AI in education go far beyond convenience and efficiency. Its integration has the power to reshape who has access to high-quality education, how knowledge is produced, and what it means to be an independent thinker. Without careful oversight, AI could reinforce class divisions, increase environmental strain, and erode key cognitive skills. Yet, with responsible implementation, it could also open new doors for accessibility and personalized learning.

The Long-Term Implications for Higher Education

The rapid adoption of AI in education is forcing institutions to rethink fundamental aspects of teaching and learning. While AI presents undeniable opportunities for efficiency, accessibility, and personalization, it also challenges the very purpose of higher education. At its best, AI can serve as an enhancement to human learning, providing tools that streamline administrative work and offer adaptive study resources. But at its worst, it risks undermining the development of essential intellectual skills, creating a generation of students who rely on automation rather than independent thought.

David Davies has spent significant time reflecting on this shift. He describes a recent experience where he used AI to refine a course syllabus based on an unpublished book proposal. “I spent two hours chatting with ChatGPT about what to include, how to structure the course. By the end, I had refined it into a twelve-week curriculum,” he explains. This kind of interaction shows AI’s potential as a collaborative tool for educators, helping them refine ideas and generate materials efficiently. But Davies acknowledges a key difference: he already had the knowledge, the expertise, and the ability to critically engage with AI’s suggestions. “I felt very happy that I learned all those skills before AI came along,” he admits. “Because I can benefit from it in a different way.”

That distinction raises concerns about students who are still developing foundational skills. If AI is introduced too early as a primary tool for learning, will students ever acquire the ability to generate their own ideas from scratch? “Good thinking comes through writing,” Davies emphasizes. “AI can assist, but if students don’t develop those foundational skills, what are they left with?” This question is at the heart of the long-term debate surrounding AI in education. Will universities continue to emphasize deep intellectual engagement, or will education become a process of managing AI-generated content?

Another challenge is the evolving role of universities themselves. The traditional higher education model has long been built on personal mentorship, critical discourse, and the production of new knowledge through research. If AI-driven platforms begin to replace certain elements of this process—especially in large lecture-based courses—will universities still provide the same value to students? The increasing presence of AI also raises practical concerns about how institutions will regulate and standardize its use. Some universities may embrace AI-driven coursework, while others may impose strict limitations, creating inconsistencies in how students across different institutions engage with the technology.

As AI continues to evolve, its role in academia remains an open question. It has the power to make education more efficient and personalized, but also to redefine what it means to learn, think, and create. Whether AI strengthens or weakens higher education will depend not on the technology itself, but on how institutions, educators, and students choose to engage with it.

Moving Forward

AI is no longer on the horizon—it is embedded in the classroom, transforming how students learn and educators teach. Google’s expansion of Gemini into education is just one piece of a larger shift, with Big Tech vying to define the future of learning. While AI offers efficiency and personalization, it also forces institutions to confront deeper questions about academic integrity, equity, and the preservation of critical thinking skills.

As AI automates tasks once central to education—information delivery, study aids, even writing—the role of educators is evolving. Rather than being the primary source of knowledge, teachers must become facilitators of deeper discussion, guiding students in critical analysis and problem-solving. However, this transition is not without risks. Over-reliance on AI could erode essential cognitive skills, making students passive recipients of information rather than active thinkers.

Beyond pedagogy, AI’s impact extends to access, ethics, and sustainability. If left unchecked, it could widen educational disparities, with wealthier students benefiting from human instruction while others rely on AI-driven learning. Its environmental footprint also raises concerns, as large-scale AI systems demand significant energy resources, posing a challenge to institutions prioritizing sustainability. These factors highlight the urgent need for universities and schools to shape AI’s role responsibly rather than allowing it to dictate the terms of education.

The future of AI in education is not predetermined—it will be defined by how institutions, educators, and policymakers choose to engage with it. The challenge is not whether AI belongs in the classroom, but how to ensure it serves as a tool for learning rather than a substitute for it.

Chelsea Toczauer

Chelsea Toczauer is a journalist with experience managing publications at several global universities and companies related to higher education, logistics, and trade. She holds two BAs, in international relations and in Asian languages and cultures, from the University of Southern California, as well as a dual-accredited US-Chinese MA in international studies from the Johns Hopkins University-Nanjing University joint degree program. Toczauer speaks Mandarin and Russian.