
ChatGPT Can Pass the Bar Exam. Does That Actually Matter?

The question is what happens next.

Daniel Van Boom, Senior Writer
[Image: a phone screen showing "GPT-4" on a background of green and purple bars. Caption: ChatGPT got an update in mid-March that's made it much, much smarter. NurPhoto/Getty]

Back when I studied journalism, we had an assignment called News Day, designed to replicate a day in the life of a reporter. You arrived at university in the morning and were assigned a story to be filed by the end of the day. I've forgotten what my specific story assignment was -- it was 12 years ago -- but it had something to do with climate change. What I do remember, with painful lucidity, is an interview with an academic who'd agreed to help me. 

After about 10 minutes, he correctly intuited from my questions that I didn't understand the issue, whatever it was. He told me to call him back after I'd done more research. It's been 12 years and I still remember that incident every time I conduct an interview. Nothing like it has happened since, probably because fear of a repeat has compelled me to do more than the bare minimum of research.

Journalism units made up one third of my communications degree. These units were designed to be as practical as possible, to mimic actual journalism. Beyond being completely owned by that environmental science professor, I remember getting practical tips about who does and doesn't make a good source, how to structure long-form feature articles and how to pitch articles to an editor. 

On the flip side, I remember almost nothing from my communications classes, which were made up of impractically dense texts and essays on topics like "language and discourse." It's a blur. 


ChatGPT would have passed these communications classes with ease. In mid-March, artificial intelligence company OpenAI announced that, thanks to a new update, its ChatGPT chatbot is now smart enough to not only pass the bar exam, but score in the top 10%. 

It's easy to react to that with fear or awe. I certainly had a moment of undirected dread. But ChatGPT acing the exam that licenses humans to practice law is not necessarily a sign of the AI apocalypse. It may even be a good thing.

It pays to remember what the premise of a large-language-model AI like ChatGPT is to begin with. It's an artificial intelligence trained on enormous troves of data, from webpages to scholarly texts, which generates human-sounding text by predicting, one chunk of a word at a time, what's most likely to come next. It's amazing, and ChatGPT is impressive for being able to ace such a high-level challenge as the bar exam. But it's not the paradigm shift it may sound like.
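If "predicting what comes next" sounds abstract, here's a toy sketch in Python of the idea at its most stripped-down: count which words follow which in a training text, then pick the likeliest continuation. To be clear, this is an illustration of the general principle only; GPT-4 learns patterns with an enormous neural network trained on vast data, not a lookup table like this, and the corpus below is invented for the example.

```python
# A toy next-word predictor: tally which word follows which in a tiny
# "training corpus", then return the most frequent continuation.
# GPT-style models do something conceptually similar at vastly larger
# scale, with a neural network instead of a count table.
from collections import Counter, defaultdict

corpus = "the exam tests memory the exam tests dedication the exam rewards cramming".split()

# Build bigram counts: for each word, tally what tends to come next.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("exam"))  # -> "tests" (seen twice, vs. "rewards" once)
```

The point of the sketch: the model isn't reasoning about law when it answers a bar question; it's producing the statistically plausible next words, which happens to work remarkably well when the training data is big enough.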

Exams are performances. They don't test how knowledgeable students are -- they test how much students can cram into their brains and regurgitate for a few hours. Much of that information is subsequently forgotten, often with glee. Exams are closer to tests of dedication than they are of knowledge. Of course humans can't compete with software when it comes to aggregating information. But that's not the point of an exam.

Exams as a format are still relatively safe, especially if schools revert to the good ol' pen and paper. Homework and college essays are on flimsier ground.

ChatGPT has scared many educators. Students in New York and Los Angeles public schools are banned from using it, as are kids in many Australian high schools. Prestigious universities, including Oxford and Cambridge, have also banned your friendly neighborhood chatbot.

Educators fretting about new technology is an old story. In the 1970s some worried calculators would ruin math, but in fact they facilitated the teaching of more complicated equations. AI like ChatGPT can do a lot of homework, but so can Google. AI is flashier and more sophisticated, but it's hardly a new threat. Teachers are now asking kids not to cheat using ChatGPT, just like teachers told me not to use Wikipedia. 

I didn't listen. I used Wikipedia anyway. I still use Wikipedia. Every day. 

Asking me not to was unrealistic, just like it's unrealistic to expect students not to take advantage of ChatGPT and the flood of services that are about to be unleashed by Google, Microsoft and Meta. Even if such a rule could be enforced, it would be counterproductive. If AI is going to be part of life, it's better for students to figure out how to best work with it rather than against it. We've all got a bit of Luddite spirit within us, but the actual Luddites' fight was futile. 

After years of artificial intelligence hype, ChatGPT solidified the idea that AI probably will disrupt many industries. But disruption can mean change rather than destruction. In many cases, that change will be for the better. Education is a prime candidate for improvement. The industry is notoriously slow-moving, and AI can prod it along without the risk of mass layoffs, since teachers' jobs are typically more secure than most.

The question is whether high schools, colleges and universities will adapt to the challenge. Elon Musk tweeted that AI may usher in the end of homework. Maybe. But what if education adapts? Essays can be replaced by presentations or practical tasks relevant to the area of study. In the essay assignments that remain, students can be marked more on how persuasive they are than on their basic ability to relay course concepts.

If university courses, for example, can't be made more practical, maybe that's a sign they were never teaching anything practical in the first place. 

If I could time travel and use ChatGPT in my university course, it would only help me cheat on the assignments that were the least educational in the first place. It wouldn't have made a difference to the journalism assignments that were actually useful -- the ones that taught me to be prepared for interviews, to always use the rule of three and to end commentary articles with a nod to the opening paragraph. 

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.