Wednesday, 02 October 2024 04:44

I quit teaching because of ChatGPT - Victoria Livingstone


This fall is the first in nearly 20 years that I am not returning to the classroom. For most of my career, I taught writing, literature, and language, primarily to university students. I quit, in large part, because of large language models (LLMs) like ChatGPT.

Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.

In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I do. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. However, that knowledge did not stop my students from relying heavily on generative AI. Several students admitted to drafting their research in note form and asking ChatGPT to write their articles.

As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it.

In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.

My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of “duplicative language” are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing.

Students who outsource their writing to AI lose an opportunity to think more deeply about their research. In a recent article on art and generative AI, author Ted Chiang put it this way: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research.

Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English. Their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is “fix the grammar.” My students lacked the skills to identify and correct subtle shifts in meaning. I could not convince them of the need for stylistic consistency or the need to develop voices as research writers.

The problem was not recognizing AI-generated or AI-revised text. At the start of every semester, I had students write in class. With that baseline sample as a point of comparison, it was easy for me to distinguish between my students’ writing and text generated by ChatGPT. I am also familiar with AI detectors, which purport to indicate whether something has been generated by AI. These detectors, however, are faulty. AI-assisted writing is easy to identify but hard to prove.

As a result, I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.

So I quit.

The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries. They will find ways to encourage students to think critically and learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies.

However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.
