On August 14th, UC Berkeley students received an email from the Office of the Executive Vice Chancellor & Provost announcing all-access service for Google’s AI large language model (LLM), Gemini. Vice Chancellor Hermalin and his colleagues crafted an email in two main parts: the first outlines the services Gemini will provide; the latter half is a lengthy statement on AI usage in classrooms. With no clear regulatory framework in sight, the Vice Chancellors expect staff and students alike to compromise rather than take accountability for their own endorsement. The email is an obituary for what learning once was.
Fundamentally, LLMs have no place in the undergraduate learning experience. It is imperative that undergraduates first sharpen their ability to craft work organically before defaulting to tools like AI. Just as grade schoolers aren’t allowed calculators before mastering arithmetic, undergrads just entering the throes of their professional careers should be learning unassisted. Owen Tortora, a recently graduated Graduate Student Instructor (GSI), echoes this sentiment: “The purpose of higher education [is] to help people get the skills they need to make it in the workforce.” Students can’t do this effectively if they rely on LLMs as “a crutch” to lean on. If you aren’t actively learning problem-solving through trial and error during the brainstorming process, or rerunning code for the umpteenth time, how will you tackle a real-world problem where ChatGPT isn’t there as a safety net? LLMs also still have a long way to go where complex problem-solving is concerned. When asked questions that stray from general knowledge, LLMs will confidently assert incorrect answers, a phenomenon known as “hallucination.” So even when used as an aid to problem-solving, chances are an LLM will serve you something almost entirely fabricated, reaffirming the importance of building these skills independently of technology. Reasoning is a powerful tool that outsmarts AI and its hallucinations time and time again. Think of all the times you might have used ChatGPT and it completely overlooked what you were asking. Now think of how many times you’ve had to restructure your questions for ChatGPT to understand you. That in itself is problem-solving toward an end goal: in this case, being understood. In all the time it took to get ChatGPT on the same page, it was your own analytical skills you relied on most. Even in a low-stakes environment, critical thinking is a necessity.
Yet when skills like reasoning aren’t practiced enough, the brain doesn’t get the stimulation it needs to grow. This concept of “cognitive offloading,” studied by Swiss professor Michael Gerlich, is a looming threat to cognitive development. The brain’s ability to reason, reflect, and interact with the world around it defines the human experience, and the overuse of AI inhibits an individual’s ability to ruminate deeply on complex problems. Once the brain can no longer reflect upon complex concepts, cognitive capabilities gradually decline. Creativity, revolution, and everything that makes humans unique become obsolete. At that point, there wouldn’t be much difference between students and the average pigeon we stumble upon when crossing the intersection of Bancroft and Telegraph Avenue, wandering aimlessly without an anchoring purpose. We would become shells of the predecessors who explored deep thought.
Nonetheless, AI is here to stay, primarily due to its praiseworthy successes. Advancements in medical research, finance, and data analysis are just a few examples of its benefits. Even in education, students have been using AI for personalized learning. Where private tutors charge outrageous hourly rates that most families can’t afford, LLMs offer a low-cost alternative. Students can converse with AI for hours, taking as long as they need to understand abstract concepts. At face value, there is cause for rejoicing; part of the solution for equitable education has been found. But before raucous celebration can ensue, consider the following: what about brainstorming concepts for a paper? How about an example thesis statement for the essay prompt that was just assigned? Or even a rough draft of what the essay should look like? AI integration sounds great until the slippery slope of convenience overpowers the desire for knowledge. Once that trade-off occurs, it is incredibly difficult to bounce back to a learning experience devoid of AI. LLMs are addictive in nature; their design encourages users to keep returning, so weaning off them takes an extensive amount of time. The process of using them becomes instinctual and subconscious, which is exactly what cognitive offloading looks like. The result is a breeding ground for misinformation and a broader degeneration of cognitive development, because the analytical skills that would catch these errors go unused. The undergraduate years are a particularly significant period of that development. If this growth is forsaken, we are accepting a lower-quality education, one in which thinking is no longer involved.
While we can close the coffin on the academia of the past, we must not forget to uphold its tenets of deep, analytical thinking as we move forward. When learning to write, early learners don’t skip from spelling to crafting lengthy essays; students are expected to practice skills in order to master them. A clear framework would hold that students at the formative level be prohibited from using AI tools as shortcuts. Foundational skills are taught because they need to be understood in their entirety before tools for efficiency are introduced. Once a skill is mastered without aid, the ability to think acutely is refined. Alas, without endorsement from those who lead institutions of higher learning, none of these regulations will come to fruition. By remaining complicit for liability’s sake, leaders discredit the education they claim to provide. By refusing to speak to the ever-changing world of academia, leaders at UC Berkeley are choosing to let AI kill off whatever is left of education. Releasing a statement carefully crafted to wipe clean the hands of the culprits in this homicide demonstrates a willingness to sacrifice integrity in the face of uncertainty. We must work together toward a new culture of academia, one where students’ learning meets a changing educational landscape without surrendering the thinking at its core.
Featured Image Source: Berkeley QB3