
California Bars AI From Replacing College Instructors

On July 2, 2024, California Governor Gavin Newsom signed AB 2370 into law. This landmark legislation is designed to ensure that courses in the state’s community colleges are taught only by qualified human instructors, not by generative artificial intelligence chatbots like ChatGPT.

With a unanimous vote in the Assembly, the measure easily sailed through both houses of California’s legislature—even though it doesn’t appear that anyone had proposed that the California Community Colleges replace instructors with generative AI platforms. The legislation aims to “provide guardrails on the integration of AI in classrooms while ensuring that community college students are taught by human faculty.” That’s according to a statement from the bill’s author, Assemblymember Sabrina Cervantes, a Democrat from the Inland Empire college town of Riverside east of Los Angeles.

No Prohibition of AI Tools

The legislation accomplishes this objective not by prohibiting AI tools within the community college system. Instead, it specifies that the instructor of record for any community college course must meet the minimum qualifications to teach as a faculty member, which are set by the system’s Board of Governors. Because no artificial intelligence chatbot or platform can meet those minimum qualifications, AB 2370 ensures that only qualified human faculty teach students in the state’s community colleges.

Wendy Brill-Wynkoop, the president of the Faculty Association of California Community Colleges, which sponsored the effort driving the bill’s passage, told EdSurge’s Jeffrey Young that some legislators wouldn’t support an earlier version of the bill that explicitly blocked the use of AI. Here she describes how the FACCC overcame that challenge by proposing the compromise that won passage:

“We don’t even need the words AI in the bill, we just need to make sure humans are at the center,” she says. So the final language of the very brief proposed legislation reads: “This bill would explicitly require the instructor of record for a course of instruction to be a person who meets the above-described minimum qualifications to serve as a faculty member teaching credit instruction.”

“Our intent was not to put a giant brick wall in front of AI,” Brill-Wynkoop says. “That’s nuts. It’s a fast-moving train. We’re not against tech, but the question is ‘How do we use it thoughtfully?’”

And she admits that she doesn’t think there’s some “evil mastermind in Sacramento saying, ‘I want to get rid of these nasty faculty members.’” But, she adds, in California “education has been grossly underfunded for years, and with limited budgets, there are several tech companies right there that say, ‘How can we help you with your limited budgets by spurring efficiency.’”

In fact, Assemblymember Cervantes acknowledges that “there is room for artificial intelligence to contribute to community college classrooms.” For example, nothing in the legislation’s final language prevents a community college from using an AI platform for grading, creating education materials like study aids, performing administrative and clerical tasks, or providing student support.

Targeting the Threat of AI Replacing Workers

To see that the revised bill’s legislative intent was not to prohibit the colleges from using AI per se, one only needs to examine remarks by Cervantes during two events: her testimony before the Assembly’s Standing Committee on Higher Education, followed by her address to the Assembly advocating the measure’s passage.

During both events she tells her fellow lawmakers that although one objective of the bill is to address the rapid deployment of AI tools within California’s community colleges, there’s another purpose to the bill as well. “As with many sectors of our society, we must contend with the target threat of AI replacing human workers,” she says.

In that sense, AB 2370 might set a trailblazing precedent, paving the way for a string of bills to prevent artificial intelligence from eliminating jobs within California. At the time of this article’s publication in August 2024, 29 more AI bills were before the state’s legislature, and four of them appear to at least indirectly restrict AI’s replacement of human workers.

For example, one such bill (SB 1288) is another education measure that convenes a working group to evaluate artificial intelligence-enabled teaching and learning practices, similar to proposed state legislation in Minnesota, which we discuss below. Three other non-education bills mandate human safety operators in self-driving trucks over 10,000 pounds (AB 2286), prevent replacing workers with AI in state-funded call centers that manage public benefits (SB 1220), and limit the use of AI in self-serve retail checkout devices (SB 1446).

Three Illusory Instructional Opportunities for AI

During her remarks, Cervantes also references a May 2023 article in Digital Futures, the community college system’s professional development newsletter for faculty and staff. That article, “Transforming Education: The Rise of AI in the California Community Colleges,” apparently alarmed her because of its proposals to provide more personalized learning by integrating artificial intelligence into community college instruction.

Plus, Cervantes couldn’t have missed the article’s strange and ominous illustration depicting a small classroom with not one but two robots: one on a video screen and a second, standing robot that has replaced the instructor.

However, two of the AI opportunities proposed by that article—individualized instruction and class reviews—are traditionally among the most important responsibilities of college instructors. Although the third opportunity of tutoring may have attracted tremendous resources from Silicon Valley’s tech industry as potentially the first AI “killer application,” the likely benefits in practice from tutoring via AI remain controversial.

For example, in May 2024, OpenAI demonstrated using its new GPT-4o platform as a mathematics tutor in two separate videos. One of those clips featured an AI enthusiast who’s also the world’s most famous tutor: CEO Sal Khan of the Silicon Valley-based Khan Academy.

However, the New York Times later reported that the multimodal GPT-4o computer vision functionality portrayed in those demos is not publicly available. Khan also disclosed that he had learned from OpenAI that the functionality would not be available for at least another six months to a year, meaning that the public will not get access to this technology before November 2024 at the earliest.

Moreover, several authorities have recently questioned tutoring’s purported “two sigma” reputation for boosting student performance by two standard deviations above the average results for students educated in classrooms. That reputation, created in a famous 1984 essay by educational psychologist Benjamin Bloom of Northwestern University and the University of Chicago, was criticized in March 2024 by Dr. Paul von Hippel at the University of Texas at Austin for containing “elements of fiction.”

Dr. von Hippel cites several research studies demonstrating that, on average, tutoring in practice cannot even come close to replicating Dr. Bloom’s purported two-sigma gains. For example, a 2020 meta-analysis of 96 tutoring studies found only an average performance boost of about 14 percentile points. The authors considered those results “impressive,” but they were also far below two sigmas. Additionally, not a single study from that meta-analysis reported students performing two standard deviations better than classroom-educated students.
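For scale, a quick back-of-the-envelope conversion shows just how far a 14-percentile-point boost falls short of two sigmas. The sketch below assumes normally distributed scores, which is the convention behind such effect-size comparisons; the variable names are ours, not drawn from any of the studies cited:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution: mean 0, standard deviation 1

# A "two sigma" gain would move the average student (50th percentile)
# up to the percentile of a score 2 standard deviations above the mean.
two_sigma_percentile = nd.cdf(2.0) * 100    # about the 97.7th percentile
gain_in_points = two_sigma_percentile - 50  # about 47.7 percentile points

# The 2020 meta-analysis reported an average gain of roughly 14
# percentile points; convert that back to standard-deviation units.
effect_in_sd = nd.inv_cdf((50 + 14) / 100)  # about 0.36 standard deviations

print(round(gain_in_points, 1), round(effect_in_sd, 2))  # prints: 47.7 0.36
```

In other words, under this conventional assumption the meta-analysis’s 14-point average gain corresponds to roughly one-third of a standard deviation, well under a fifth of Bloom’s claimed two-sigma effect.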

And for an AI platform to create a performance boost equal to or greater than those 14 percentile points, that system would need to analyze a student’s problem-solving approach in ways similar to those of an experienced human tutor, asserts Glenda Morgan with the Scottsdale, Arizona-based edtech consulting firm Phil Hill and Associates. Morgan argues that the kinds of tutoring AI offers can’t possibly accomplish such a remarkable feat, because the current technology cannot observe students’ thought processes and identify the core issues they don’t understand.

Minnesota’s Proposed Legislation

California might be the first state to pass legislation precluding artificial intelligence platforms from functioning as instructors, but it’s not the only state where comparable legislation has been proposed. In Minnesota, State Representative Dan Wolgamott, a member of the Minnesota Democratic–Farmer–Labor Party from St. Cloud, has introduced a similar bill. His proposal would prohibit universities and colleges in the state from using a generative artificial intelligence platform as “the primary instructor for a credit-bearing course.”

Wolgamott’s proposed legislation has a far wider scope than California’s AB 2370 because his version applies to any college or university, not only community colleges. According to Axios Twin Cities, the bill also would convene a working group to “develop policies and procedures for the safe and ethical use” of generative artificial intelligence in higher education within Minnesota.

But what’s curious is that the language in Wolgamott’s bill doesn’t appear to block colleges from using AI as secondary instructors. The absence of such a provision would allow universities to use AI in place of the graduate teaching assistants who, within large lecture courses, teach discussion sections and grade assessments such as quizzes, tests and final examinations.

A Boston University dean’s suggestion that professors temporarily replace their striking teaching assistants with AI platforms in May drew fierce criticism from the union representing the striking TAs. University officials later claimed that they had no plans to use generative AI systems to replace those teaching assistants and do not appear to have taken any steps to do so.

The threat that artificial intelligence technology could result in reduced human interactions has surfaced not only in higher education but also as a concern among K-12 educators. On July 5, the National Education Association, America’s largest teachers union with 1.4 million members, voted to approve an educational policy statement on AI’s use in elementary and high schools. That language states that human teachers should “remain at the center of education.”

Douglas Mark

While a partner in a San Francisco marketing and design firm, for over 20 years Douglas Mark wrote online and print content for the world’s biggest brands, including United Airlines, Union Bank, Ziff Davis, Sebastiani and AT&T.

Since his first magazine article appeared in MacUser in 1995, he’s also written on finance and graduate business education in addition to mobile online devices, apps, and technology. He graduated in the top 1 percent of his class with a business administration degree from the University of Illinois and studied computer science at Stanford University.