If Dante had needed a 10th circle of hell, he might have chosen a faculty meeting. A half-dozen universities were already operating in Italy by the time his Inferno began circulating in the 14th century. Readers who have spent time in these gatherings — at any level — will likely know what I mean.
Yet, even faculty gatherings produce, on rare occasions, more than just bitter battles over small stakes. I attended one of these gems last week at the Fowler School of Law at Chapman University. Professor Mario Mainero gave a riveting presentation on artificial intelligence and legal education.
AI has been much in the headlines recently after the debut of ChatGPT, a relatively young iteration of software with untold room to improve. And yet: ChatGPT can produce passing answers to law school exam questions. With help from a son who graduated from Stanford with not one but two degrees in computer science, Mainero created trial questions and presented the unnervingly impressive answers from the entirely digital student.
The Chapman professoriate looked up from their devices and files and actually followed the PowerPoint slides. This is a game changer. Law students, like all students everywhere, score along a bell curve on almost every exam or set of papers. I suspect the same curve applies to undergraduates, high school students and even the average K-8 population.
As students become proficient with AI such as ChatGPT, the curve is going to change — but how, exactly? Are we on the verge of a war of robot brains, where the professors deploy their own AI to catch the students who let bots do their work for them? So far, Mainero told us, available AI detectors aren’t very good at ferreting out AI-assisted student work.
I left the meeting with my head slightly spinning. Clearly, campus handbooks must be rewritten — and fast. But what should the new guidelines say? It’s not plagiarism under the traditional definition to have a virtual assistant draft one’s papers. Nor is it copying from another student’s work. (Thankfully, Mainero did not propose a dreaded “special committee” to study the issue and return with recommendations.)
On a hunch, after the meeting, I asked Assistant Dean of Admissions and Diversity Initiatives Justin Cruz whether Chapman still uses personal essays as part of the application process. Yes, he reported — just like most, if not all, schools. Is AI a boon, then, for parents who will no longer have to guide their students through the “personal statement” portion of college applications? Maybe this levels the playing field for less-privileged kids, who can now feed a few prompts into the bot and receive a first-draft essay ready for a bit of polish — same as the rich kids.
What is true of college applications and law school exam questions will be true of most undergraduate papers, and even legal briefs. Even brand-name columnists will be looking over their shoulders once ChatGPT and its competitors evolve from the 3.5 version to 7.0 or 12.2.5.
I asked technologist and investor Peter Thiel how worried I should be about the future — and my own livelihood.
“I think you’re a little bit wrong to worry about it that much,” Thiel told me after I described Mainero’s presentation. AI “is probably somewhat overstated,” he continued. “Everything in Silicon Valley has been overstated for decades. … People have been, you know, overpredicting technology in one way or another.”
I recalled the Obamacare promise of 2011 that data would drive down health-care costs, and felt a bit better about the limits of our techno-overlords.
“But,” Thiel added, “at the same time, this is probably a big event. I think it’s probably the biggest event in tech since maybe the launch of the iPhone.” ChatGPT doesn’t change everything, in other words, but it might point to the change of everything.
Let’s hope that the change is liberating. Imagine high school English teachers so flummoxed by AI-written term papers that they stop assigning dreary old “Silas Marner” and engage students with writing as a pleasure instead of a chore. Imagine colleges forced to offer hands-on learning experiences for their exorbitant tuitions, instead of 1,000-word essay questions graded by overloaded graduate students.
Good writing will probably survive. Great writing will, for sure. But does a judge care in a routine case if routine lawyers file a ChatGPT-assisted brief? Does a news consumer care if a mundane story about weather or stock prices or the latest empty congressional posturing is written by AI?
Chances are: Not in the least.