Everyone is cheating, nobody is cheating

Surprise, surprise: at a technical institute, people like technology. Here at Illinois Tech, nearly a third of students major in some form of computer science, including those studying artificial intelligence. It only makes sense that students would want to use it. I've seen students regularly use AI to write their assignments and answer questions for them. Professors are using AI, too, for everything from grading to writing assignments. For certain classes, my peers and I have had to pay for sites that grade using AI. Is AI a tool, or is it cheating? For students: Is it our friend, or is it working against us?

Since ChatGPT's release in 2022, students have been walking the line between what's blatant cheating and what's simply efficient. High schools and universities have had to scramble to establish AI use policies. As for Illinois Tech's policy, the handbook states: "It is vital to prepare students to critically and productively engage with new and innovative technologies—such as generative artificial intelligence—to be leaders and innovators in the future." The examples that follow define productive engagement: AI use should be clearly delineated within an assignment, and AI should only be used when the professor explicitly allows it. I'm sure most of us have seen people stretch those boundaries more than a bit by now.

So, is it actually helping? Most students have likely seen or heard about the now-famous MIT research study, which linked LLM (large language model) use to declining engagement with tasks and decreased cognitive performance. The group assigned to use an LLM first, then their own brains for a similar task, struggled to even quote their own work. Meanwhile, the group assigned to do the opposite, working with their own brains first and then receiving help from an LLM, performed noticeably better and showed higher cognitive engagement during the task. Put simply: if you're paying to go to college to learn, you're probably not absorbing much when AI does your assignments. In an interview with New York Magazine, Professor Troy Jollimore of Cal State Chico worries that "massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate," both literally and culturally.

The other stance, endorsing AI use in schools, has its arguments too. Some universities have embraced the LLM industry in the last few years. For example, Columbia University, a well-known Ivy League school, has embraced the expansion of AI by partnering with OpenAI, the company that created ChatGPT. In their words, they're hoping to "[pave] the way for innovative solutions to the complex challenges of modern education and research." While they support the development of LLM technology within university pedagogy, that doesn't mean they encourage using it to replace human learning. Proponents of AI use, especially in school environments, often endorse restrictions and caveats as well. Stanford University, another school working to develop and advance AI, suggests strict rules for professors, such as requiring detailed citations. And on the professors' side of things, there's a learning curve going the other direction, too.

While we might see our own AI use as efficient, students have felt demoralized upon discovering that their professors use it, too. In an interview with the New York Times, a student at Southern New Hampshire University found that her professor had used ChatGPT to give feedback on an essay she wrote. She was more disappointed than offended: she knew her professors had hundreds of students and "[s]he could understand the temptation to use A.I.," but she still felt wronged. After another professor seemed to do the same, the student transferred to a different college. I've had professors seem to do the same, and each time it feels like a personal slight, even if I know it's not. We have to ask ourselves: Is it fair if students use AI to guide their work, but professors can't?

Ultimately, it comes down to deciding where to draw the line. What's fair, what's productivity, what's a cop-out, what's cheating? In my opinion, AI is a tool, not a replacement for our own learning. It can break down how to solve math problems faster than a TA and revise your essay a bit less cryptically than a professor. But it shouldn't do your work for you; after all, we're paying to be here, and outsourcing our degrees feels like a rip-off. While you might feel like you're getting ahead, you might actually be falling behind.