
By Hanna Cheung
“I don’t want my doctor getting their knowledge from ChatGPT.”
Be it tight deadlines, confusing concepts, or unexplained class notes, many students find themselves turning to artificial intelligence to get through their schoolwork. Professors have caught on, though, and recent syllabi set limits on, or outright ban, the use of generative AI for course assignments.
Are Brown students breaking the rules?
In the Brown Opinion Project’s October 2024 poll, 38% of sampled students indicated that they had violated a course policy on artificial intelligence, while 47% reported that they had not.
Students in different concentrations reported different frequencies of illicit ChatGPT use. For some, functionality — not morality — was the main concern.
A senior in literary studies stated, “When I give [ChatGPT] a random problem, it is not the best.”
Humanities and arts concentrators were the least likely to report violating course policies with AI: only 34% said they had committed a violation, 56% said they had not, and 9% were unsure.
In contrast, STEM students showed a stronger reliance on ChatGPT. Students in the physical sciences reported the highest rate of illicit ChatGPT use: 43% said they had violated course policy, while just 39% had not.
One pre-med first-year explained, “I use ChatGPT to better understand things [in] the textbooks and professors' lectures that are hard to understand, to make things easier.”
Coding appears to be another common trigger for using AI, as another student stated that they use ChatGPT to “help me code, because I'm not really good at that.”
Interestingly, use of marijuana was positively correlated with use of generative AI: 76% of students who use marijuana daily have violated course policies. In contrast, just 31% of students who have never used marijuana have violated course policies.
Perhaps this is simply a measure of risk aversion.
45% of students who don’t lock their doors have violated AI policies, while only 33% of students who do lock their doors said the same.
These discrepancies raise an important question: are you at a disadvantage if everyone else in your cohort is using ChatGPT?
One senior commented, “I can recognize that people that do follow the rules are at a set disadvantage.”
However, a sophomore who had never used ChatGPT provided a different perspective: “I have thought about [feeling behind] before, because it is becoming so prevalent, and I'm wondering if other people are getting advantages.”
“People who use it don’t have to read their own readings and they don’t have to think of their own ideas and stuff, and they’re probably living an easier life. But I honestly think ChatGPT cannot come up with as good of essay ideas as real people, so I don't feel behind, and I think that my papers are probably better,” said the sophomore.
Despite the frequency of ChatGPT use, students recognized that academic integrity and their own education ultimately remain the priority.
One student explained, “I usually [use it] when I don’t know how to outline an essay. It will be writing an intro, and writing a paragraph talking about a particular aspect of the essay. Most of the time I will ignore most of it, but it’s a good place to get me started.”
Learning to coexist with AI seemed to be the most viable path forward.
“I think the rise of AI is really scary, and technology is advancing so much faster than legislation and policies around the control of AI are being made, so it’s being used for really nefarious reasons,” said one sophomore.
Students agreed on classifying AI as a “tool” for assisting learning, similar to a calculator.
One person elaborated, “You can use that tool in good and bad ways, but I think that Brown should recognize that. A lot of classes are just like, ‘don’t touch it.’ People are going to whether you want them to or not. Recognizing that it’s a tool and learning how to utilize that tool in some aspects would be possibly a better idea.”
That being said, one student put their foot down, saying, “I don’t want my doctor getting their knowledge from ChatGPT.”
In a world where we are already so reliant on artificial intelligence to optimize efficiency and performance, related policies are still a work in progress. Patterns in ChatGPT use also offer a curious window into students’ other identities. We will see how these discrepancies change over time.
Note: The Brown Opinion Project (BOP) conducted its October 2024 poll from October 8 - October 10. BOP solicited responses from Brown undergraduate students near Faunce Hall and Sciences Park. BOP representatives asked every person who passed their location if they were interested in taking a quick anonymous poll. Responses were anonymously collected using Google Forms. Over the course of the polling process, BOP collected 936 total responses from Brown undergraduate students.