Understanding Generative AI’s Role in Higher Education: A Scientific Approach to ChatGPT

“The clearest pattern we’ve observed among students is that they are generally comfortable using and experimenting with AI, but they feel the need for guidance from faculty about what is allowed—and advisable—and what isn’t.”

Soulaymane Kachani, PhD, Senior Vice Provost, Professor of Industrial Engineering and Operations Research, Columbia University

The rise of generative AI tools like ChatGPT has sparked a transformative shift in higher education. These tools are reshaping how students learn, how instructors teach, and how institutions navigate the complex intersection of technology and pedagogy. But while much of the discourse has focused on challenges such as academic integrity, there is a growing need to move beyond surface-level debates and adopt a more scientific approach to understanding the impact of these tools.

How do students and faculty currently engage with AI? How can generative AI tools improve teaching and learning outcomes? And what ethical considerations should guide their integration into academia?

Columbia University’s Science of Learning Research Initiative (SOLER) is leading this charge, employing rigorous methodologies to explore how AI can enhance learning outcomes, guide ethical practices, and transform the educational landscape. We spoke with Columbia’s SOLER experts, including Dr. Adam Brown, program director, and Dr. Soulaymane Kachani, senior vice provost, to gain better insight into these questions and their implications for learning.

Meet the Expert: Adam Brown, PhD, Program Director of the Science of Learning Research (SOLER) Initiative at Columbia University

Dr. Adam Brown serves as program director for the Science of Learning Research Initiative (SOLER) at Columbia University. In this role, Brown facilitates and develops infrastructure for a variety of science of learning projects conducted in departments across the University. Working closely with faculty, graduate students, and colleagues at the Center for Teaching and Learning, Office of Teaching, Learning, and Innovation, Columbia University Information Technology, and the Institutional Review Board, he operates at the intersection of academics, research, data science, and administration.

Dr. Brown earned a PhD in computational neuroscience from the University of Chicago, where he also served as a Graduate Fellow at the Chicago Center for Teaching. He then came to Columbia as a postdoctoral science fellow, serving as a seminar instructor and curriculum developer for the College’s core science course, “Frontiers of Science.”

Meet the Expert: Soulaymane Kachani, PhD, Senior Vice Provost and Professor of Industrial Engineering and Operations Research

Dr. Soulaymane Kachani serves as the senior vice provost of Columbia University and a professor of industrial engineering and operations research. He oversees Columbia University’s initiatives in teaching, learning, and educational innovation, and led the establishment of Columbia+, Columbia University’s online platform to engage alumni and learners worldwide through non-degree, non-credit courses, events, and podcasts. He also focuses on propelling Columbia’s excellence as a global university, working to enhance existing international academic partnerships and develop new ones, and to make its campuses ever more welcoming to students and scholars from around the world.

Dr. Kachani received a PhD in operations research from the Massachusetts Institute of Technology. He also holds a master of science in operations research from MIT and a diplôme d’ingénieur in applied mathematics from École Centrale Paris.

The Role of a Scientific Approach

The complexity of integrating generative AI tools like ChatGPT into higher education demands more than anecdotal solutions or reactive policies. A scientific approach provides the foundation for understanding these tools in depth, ensuring that their use aligns with the principles of effective pedagogy and institutional integrity.

Dr. Brown underscores the importance of a data-driven strategy, explaining, “Artificial intelligence tools will continue to evolve quickly, and the way they are used will change. To keep up, we have to regularly collect and analyze new data. A systematic, scientific approach ensures we ask the right questions and address these shifts efficiently.”

SOLER’s research framework incorporates diverse methodologies, including controlled experiments, observational studies, and hybrid approaches. This comprehensive strategy ensures that no aspect of AI’s educational impact is overlooked. Dr. Brown further highlights the value of varied research methods, commenting, “We also need a good balance of different kinds of research methods—including surveys and focus groups of instructors and students—to ensure that we are asking the right questions.”

This rigorous approach not only identifies immediate challenges but also uncovers opportunities for AI tools to complement traditional teaching methods. By prioritizing empirical evidence, SOLER sets the stage for actionable insights that balance innovation with the core values of higher education: equity, academic rigor, and inclusivity.

Key Research Findings

Through observational research, SOLER has identified significant patterns in how students and faculty interact with ChatGPT. Students, generally open to experimenting with AI, often praise its efficiency in completing tasks. However, this enthusiasm is tempered by concerns about its ethical and academic implications. Dr. Kachani explains, “The clearest pattern we’ve observed among students is that they are generally comfortable using and experimenting with AI, but they feel the need for guidance from faculty about what is allowed—and advisable—and what isn’t.”

Without clear institutional policies, students are often left to interpret academic integrity standards on their own, leading to uncertainty and hesitation. Many students question whether relying on AI could undermine their learning, even if it adheres to the rules. As Dr. Kachani notes, “They feel uncomfortable about having to rely on their own judgment regarding whether their use of the tools constitutes a violation of standards of academic integrity.”

Faculty attitudes toward AI are far more polarized. While some instructors are optimistic about its potential and have adapted their courses to integrate AI meaningfully, others remain skeptical and have implemented restrictive measures to limit its influence. Dr. Kachani observes, “Among faculty who are pessimistic about AI, we find a whole spectrum, from those who have taken extensive steps to ‘AI-proof’ their courses…to those who simply tell students that AI is not allowed.”

This divergence between student expectations and faculty practices highlights a critical gap in the current discourse. Collaborative efforts between students and educators are essential to developing shared guidelines and fostering a cohesive understanding of AI’s role in academia.

Experimental Insights

SOLER’s experimental research offers further clarity on how AI tools like ChatGPT influence educational outcomes. In a study conducted within a real estate finance course, students who used ChatGPT to complete computation-based assignments underperformed in subsequent assessments compared to peers who received traditional instruction.

Dr. Brown reflects on the findings, sharing, “Students in the ChatGPT group reported that the tool helped them complete the assignment ‘efficiently’ but not necessarily ‘accurately,’ and they generally did not view the technology as a replacement for in-person learning.”

In contrast, another study focusing on advanced Chinese language learners revealed the benefits of generative AI in targeted applications. Students used a custom AI chatbot to practice interviewing skills in Mandarin, receiving tailored feedback that significantly boosted their confidence and proficiency. As Dr. Brown notes, “In another study, a custom AI chatbot helped students practice their interviewing skills in an advanced Chinese language course, significantly boosting their confidence.”

These experiments illustrate both the strengths and limitations of AI in education. While ChatGPT excels in skill-building and practice-oriented tasks, it cannot replace traditional instruction where deeper comprehension and critical thinking are required. The results emphasize the need for context-specific strategies to maximize AI’s educational potential.

Ethical Considerations and Broader Implications

The integration of generative AI into higher education raises pressing ethical questions, particularly around access, equity, and inclusivity. Dr. Kachani highlights one solution for addressing disparities, saying, “Providing enterprise licenses for AI tools is an increasingly common way to ensure that all students have access to the technology.” Institutions must ensure that AI tools are not only available but also designed to minimize biases and promote equitable outcomes.

Clear guidelines for responsible AI use are also critical. As Dr. Kachani notes, “Institutions—perhaps through programming developed in an institution’s teaching and learning center—should support educators with resources and workshops to explore integrating AI tools into their courses to enhance student learning.”

At the policy level, governments can play a pivotal role by funding research and encouraging collaboration between academia and industry. Dr. Kachani underscores this point, saying, “Policymakers can support these efforts by allocating funding for research on equitable AI use in education and for infrastructure to support its integration.”

By addressing these ethical considerations, institutions and policymakers can create a framework that ensures generative AI benefits all students while preserving the integrity of education.

Looking Forward

The rise of generative AI tools like ChatGPT represents both an extraordinary opportunity and a formidable challenge for higher education. SOLER’s work demonstrates that the integration of these tools requires a deliberate and evidence-based approach to maximize their potential while addressing their limitations.

Observational research and experimental findings reveal the dual nature of AI: its capacity to enhance learning in targeted applications and its limitations when used as a substitute for traditional instruction. As Dr. Brown aptly states, “The more data we have, the better positioned we are to guide students and faculty in using AI responsibly.”

By fostering dialogue, supporting faculty and students, and addressing ethical concerns, institutions can harness the transformative potential of generative AI. The future of AI in higher education is still unfolding, but initiatives like SOLER provide a roadmap for thoughtful integration—one that prioritizes innovation, equity, and the foundational principles of learning.

Chelsea Toczauer

Chelsea Toczauer is a journalist with experience managing publications at several global universities and companies related to higher education, logistics, and trade. She holds two BAs in international relations and Asian languages and cultures from the University of Southern California, as well as a double-accredited US-Chinese MA in international studies from the Johns Hopkins University-Nanjing University joint degree program. Toczauer speaks Mandarin and Russian.