The Teaching of Writing in the AI Era (Updated)


“Every professor I know wasted countless hours of 2024 in the prevention or detection of AI-powered cheating. It is a miserable war of attrition that seems doomed to defeat. Perhaps the time has come, then, to declare a strategic withdrawal from writing as pedagogy?”

That’s Regina Rini (York) in the Times Literary Supplement.

She observes that “the problem is double-barrelled; writing has simultaneously become less valuable and much harder to teach.”

On the first point, about the perceived value of writing:

few students will need to compose essays after leaving school. Think of the adults you know [besides your fellow humanities academics]… When was the last time they needed to write an argument longer than a social media reply? Probably not more recently than the last time they did long division by hand.

Now try to persuade the arriving generation of college students—nearly 90 per cent of whom admit to using ChatGPT for “help” with high-school homework, according to a recent survey in the US—that writing is a skill they must internalize for future success. Brace for eyeroll impact. An ever-increasing share of adults will regard AI writing tools as just more productivity apps on their phone, no more sensible to abjure than calculators.

And on the difficulty of teaching writing:

As for the other side—the terrible cost for educators struggling to hold the line against AI cheating—at stake is the personal indignity of seeing one’s time treated as worthless…. Reading with the care of a surgeon, trying to get inside a student’s head and guess why they used that inappropriate word or missed this obvious argumentative strategy, calibrating advice for varietals of arrogance and fragility—none of this is easy or quick. And all is wasted on a student who cheats by submitting work they did not compose. A pedagogical future of thousands of hours analysing the semantic output of uneducable robots sounds like a particularly sadistic existentialist hell.

What to do? Are you still teaching writing? How are you doing it? For which kinds of classes? If not, what are you doing instead?

Does this seem like a big loss because it really is one, or because we (I?) lack the foresight or imagination to see the overwhelming benefits?

We discussed this about a year ago. I suspect that parts of that previous conversation will strike us now as naive, and that many teachers over the past year have had their eyes opened to the problems (opportunities? [grimace]) that ChatGPT / large language models (LLMs) / AI bring to the teaching of writing.

Of course, the problem is not just with writing, is it? It’s with reading, too. And with the thinking skills that develop during the difficult parts of writing and reading.

That’s the pessimistic view.

But… perhaps this is just a rough patch on our way to Matrix-level technology in which we can simply upload into our minds those (AI-powered?) thinking skills without any of that pesky work.

Of course, there’s the matter of whether anyone then would see the value of uploading such skills as one scrolls through the various options. I suppose that’s the really pessimistic view.


UPDATE: A recently published study on the impact of AI tool use on critical thinking has found that “higher usage of AI tools is associated with reduced critical thinking skills, and cognitive offloading plays a significant role in this relationship.” “Cognitive offloading” is “the delegation of cognitive tasks to external tools, thereby reducing the cognitive load on individuals.”

In “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” published in Societies, Michael Gerlich (SBS-Swiss Business School / London School of Economics) writes:

The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies… The findings underscore the importance of fostering critical thinking in an AI-driven world.

You can look over the study, published open-access, here.

59 Comments
Frank
3 months ago

Plato is having quite a year.

G.D.
3 months ago

I remember reading about the sea squirts: the larvae swim around until they find a suitable spot, attach themselves to the sea floor and then proceed to consume their own brain and sensory organs since these were needed for finding a good spot but no longer needed once attached (but energy-rich and nutritious).

At the time, it seemed like a quirky, striking and perhaps surprising example of evolution and adaptation. It seems somewhat more disconcerting these days.

M G
3 months ago

Do we have them write papers because we want them to be good at writing papers, or because going through the torture of writing a paper is a good way for them to learn philosophy (or otherwise become intellectually more able)?

To the extent that it’s the latter, then the (alleged) fact that they won’t need to write essays post-graduation isn’t so important.

I’m not sure how effective it is, but I tell my students that using an LLM to write your paper is like bringing a forklift to the gym. The weights are going up and down, but you’re not getting stronger. Of course I don’t know what percentage of them find this compelling.

L J
Reply to  M G
3 months ago

Fantastic example

Michael Kates
3 months ago

One contrary data point: My wife is a communications director for an environmental non-profit and she spends most of her time writing memos, talking points, etc. So, writing is definitely not something that only we academics do.

JDRox
Reply to  Michael Kates
3 months ago

Yeah, but isn’t AI pretty good at writing memos and talking points? I do think that being able to produce good/compelling writing will be an important skill for many of our students to have later in life; the problem is that they will almost certainly be doing it with the help of AI, and so if we really wanted to help them prepare for real life we’d be letting them write their essays with the help of AI. The problem is that that makes fair grading almost impossible, as far as I can see at least.

Michel
Reply to  JDRox
3 months ago

Before they can do a good job of writing essays with AI, they need to know how to write an essay for themselves. Similarly, in order to code with AI, you actually need to know how to code for yourself.

Nicolas Delon
Reply to  Michel
3 months ago

That’s what I’m trying to explain to them. I use AI because I find it helpful. But I know how to sort the wheat from the chaff. They don’t, and so what they and I can do with AI is completely different. This is consistent with the claim that they should also learn how to use AI well.

JDRox
Reply to  Michel
3 months ago

My main point is that writing-with-AI and writing-without-AI are significantly different skills, and I’m not sure how much proficiency with the latter you need to be proficient with the former.

Graham Harman
3 months ago

Eventually, writing will be something done by human/AI teams. That just seems too strange to us now, and we also don’t know how to distinguish the wheat from the chaff with such joint products. In any case, I’m taking a one-semester break from paper writing in my classes to give myself some time to think. Papers will be replaced with an in-class written midterm and a final oral exam (10 minutes per student, which I’ve found in the past is plenty of time to find out who’s really done the work).

Caligula's Goat
Reply to  Graham Harman
3 months ago

I think for many people in executive positions this is how writing has looked for decades: they lay out the underlying values or message they want to put out and they have paid staff whose job it’s always been to write these memos or public statements.

It’s unlikely, for example, that Biden, or Zuckerberg, or even the president of your university literally writes all of their public statements.

What matters is that the speaker *takes responsibility* for the speech written by others and, in doing so, it becomes theirs. I think this is how AI is also already being used (i.e., if I ask an LLM to write a letter explaining why a student failed an assignment, I take responsibility for that content when I send it, and it thus becomes MY communication).

The upshot of this, to me, is that what’s happening now is a democratization of delegated speech. What was once a form of speech available only to CEOs and administrators is now available to everyone. By itself this isn’t a bad thing. What we need to do is to teach people the importance of taking responsibility for speech and the skills necessary to distinguish good speech (that we should feel okay taking responsibility for) from bad speech.

Another Philosopher
Reply to  Caligula's Goat
3 months ago

That’s fine—and I can see a use case for the machines. The problem starts when you start outsourcing thinking. Not because the product will be worse or because it is difficult to say who should be responsible or get credit.

No, it’s just for the reason that you yourself will be a worse thinker. I think that’s bad.

People in the future, worse thinkers as they will be, will probably not agree with me. I’m not much moved.

Dan Barnett
Reply to  Justin Weinberg
3 months ago

A university student once told me “it’s not cheating if it’s GE.” Many students triage, and if AI writing can get them by in a General Ed class, that frees them to focus on major work. If GE has been cheapened even in instructors’ minds, making a compelling case for student responsibility becomes difficult indeed.

Stephen John
Reply to  Dan Barnett
3 months ago

I agree entirely with Justin that the comment from Caligula’s Goat is fascinating. I’ll add that the notion of delegated writing is a very good way of thinking about lots of academic writing in the sciences, where papers are credited to large teams, but the actual writing is often done by a much smaller subset of the team (often, more junior members). There’s a very interesting literature on this phenomenon, which raises questions at the intersection of philosophy of science and social epistemology (as a provocative starting point, try Quill Kukla’s ‘Author TBD’). It never occurred to me before that there must be a link between these debates and worries about the use of LLMs, but I think there must be one. It certainly seems more productive to view ChatGPT etc as democratising (potentially problematic) existing trends in academic writing than to engage in a kind of AI exceptionalism.

Paige
Reply to  Caligula's Goat
3 months ago

My little kid and my elderly father both gave me the same explanation for how and why they use AI to generate their personal communication: they agree with what was written, then hit “send.”

Matt L
Reply to  Caligula's Goat
3 months ago

It’s unlikely, for example, that Biden, or Zuckerberg, or even the president of your university literally writes all of their public statements.
What matters is that the speaker *takes responsibility* for the speech written by others and, in doing so, it becomes theirs.

If you’d use a real name, I’d say that you could buy me a beer some time and I’d talk to you about how a lot of judicial opinions get written. But, even knowing that, the idea of LLMs – certainly as they are now, but even as they might be later – being used to write such things would fill me with dread and despair.

Marc Champagne
3 months ago

I don’t understand. If we have no trouble calling it “artificial intelligence,” then that means what we have is “GENUINE intelligence.” How can that not be a SUPERIOR value — something worth keeping away from the pockets of Sam Altman et al.?

Geoffrey Bagwell
Reply to  Marc Champagne
3 months ago

The use of the term “AI” to refer to these large language models is pronominal. As far as I can tell, its usage does not indicate that someone regards the LLM as “genuinely intelligent” in any ordinary sense of that term.

Nicolas Delon
Reply to  Geoffrey Bagwell
3 months ago

I took Marc to be saying we humans have genuine intelligence, which is superior to artificial intelligence. (I don’t know how long that is going to be true but whatever.)

Geoffrey Bagwell
Reply to  Nicolas Delon
3 months ago

I too understood Marc to be saying that we have genuine intelligence while LLMs do not. I offered my comment as a way of underscoring his point from another angle; recent discourse around ChatGPT and other LLMs has extended the meaning of “Artificial Intelligence” to the point where the perceived distance between genuine and artificial has increased rather than decreased. The term “AI” now includes too much. I should have made the purpose of my comment clearer. Thanks!

Nicolas Delon
Reply to  Geoffrey Bagwell
3 months ago

I see, that makes sense. Thanks for clarifying.

Current Academic/Former Executive
3 months ago

As someone who arrived in academia late, after several successful years in corporate America, I find it glaringly obvious how many academics are clueless about life outside the academy.

To think that writing skills are not valuable in the workforce because people don’t “compose essays” is so short-sighted. Learning to think for yourself, create a compelling argumentative narrative, and pull disparate ideas into a single coherent line of reasoning has immense value in corporate life. Writing helps develop these skills even when the end product is not an essay. I secured several promotions and advanced quickly in my career due to my ability to weave a compelling narrative about my experiences and argue that they would result in future success if I were given the opportunity I was trying to acquire.

I was able to outperform others consistently specifically because I did not rely solely on the AI tools and pre-written prompts that were provided to me. I knew how to adjust my writing and presentation to my audience in ways that would resonate with whomever I was working with. Writing is important, critical thinking is important, the humanities are important.

We need to stop conceding to these arguments of “when will I ever use this?” They’re tired arguments that show one’s inability to look beyond immediate application. Turn the question around, “how COULD you use this?” Students need to learn how to take the things they are taught and FIGURE OUT how to make them useful. That is the skill that will take them the furthest.

Patrick Lin
Reply to  Current Academic/Former Executive
3 months ago

As someone with work experience beyond the academy, I was also going to say that we academics need to be in much, much closer touch with how other jobs and careers work in the outside world, if we’re going to counsel our students on what’s needed or not for their future jobs.

If all a student wants to do is to be a code-monkey, then great: no need for you to learn how to write well (putting aside critical-thinking development). But if you have any ambitions to be more than a cog or a middle-manager, you need to know how to persuade, inspire, defend, and think critically, and much of this is achieved through writing (incl. speeches).

The content AI/LLMs generate might be passable, but it’s average at best and doesn’t have much style, specifically your style and voice. If you rely on that, then you can be replaced by any LLM user.

Has anyone tried roping their Career Services into the classroom to help explain why it’s important to develop academic skills for life after graduation? Seems that they should be part of the convo, if we want to pull the “you need this for your future job” lever…

Tom Hurka
Reply to  Current Academic/Former Executive
3 months ago

Yes. My understanding is that technical skills (coding, accounting, etc.) are important in entry-level jobs in business but at higher levels communication, analytical, and other more abstract skills — the kind you might think are fostered by writing essays — are most important. So for those who want to aim higher …

Greg Guy
Reply to  Tom Hurka
3 months ago

My experience of corporate life was that at higher levels what mattered more was what college you went to, who mummy and daddy were, how wealthy your background was, which country club you belonged to, and so on. The idea that those higher on the corporate ladder have some kind of special and exceptional skills is the kind of naive theory you can only get in academia.

Nick
Reply to  Current Academic/Former Executive
3 months ago

“Learning to think for yourself, create a compelling argumentative narrative, and pull disparate ideas into a single coherent line of reasoning has immense value in the corporate life.”

Just want to point out that doing take-home essays on a computer is not the only way, nor even the best way, to develop these skills.

CallMeALudditeBut
3 months ago

I am becoming increasingly convinced that the cultural/intellectual costs of generative AI far outweigh its benefits.

Eric Steinhart
3 months ago

So make your students use these AIs and grade them on how well they use them. They’re just tools that can be used well or poorly. Most people play violins very poorly; a few play them brilliantly well.

This can be operationalized pretty quickly. It’s easy to get a bland Wikipedia summary from an AI. Give that a D or a C. Harder to get it to do real arguing. Give that a B. And very hard to get it to really do something original or creative. Give that an A.

For this to work, we have to set different test problems. These can now be much harder than they used to be. For example: give me an ontological argument for the existence of the iterative hierarchy of pure sets. I’ve challenged most LLMs with that very question. It’s easy to get crappy results. With some effort, you can start to get interesting results. But you need to know something about the ontological argument, and about the iterative hierarchy, to know whether you got anything interesting. And it’s not hard at all to grade these different outputs.

So start handing out really challenging problems, and let them go nuts with the robots. This sort of problem-solving is exactly the kind they’re going to have to do when they work with AIs in their careers, so there’s an incentive for them to learn how to do it.

Dave Burkett
Reply to  Eric Steinhart
3 months ago

I agree.

Employers don’t care what tools are used, whether sources are copied, how many hours are toiled on it, and don’t ask for footnotes. They simply want an effective proposal, summary, competitive comparison, documentation, customer response, or whatever. My employer is actually asking employees to use AI. They are going to want to hire graduates who “have the reps at the gym” as one commenter said above.

Here are a few ideas for teaching in the era of AI:
1) Allow free use of AI, but require attachment of every prompt and response. Give a portion of the grade on how they use AI to improve their work.

2) Give an assignment that would previously have been impossible (e.g., a 3-page essay with a 30-minute time limit). Grade on the same rubric that would have been used for untimed papers in the past.

3) Assign a topic that involves current events unknown to LLMs, so that any incorporation of AI must be creatively woven in by a human writer.

You must adapt or become obsolete.

Dave

cecil burrow
Reply to  Eric Steinhart
3 months ago

I’m pretty sure I couldn’t give an ‘ontological argument for the iterative hierarchy of pure sets’, and I work in philosophy of mathematics.

Eric Steinhart
Reply to  cecil burrow
3 months ago

Just take the iterative hierarchy to be the maximally perfect being. It’s a pretty direct translation. (I’m not saying this is a good or bad argument; it’s just a case of analogical reasoning.)

Animal Symbolicum
Reply to  Eric Steinhart
3 months ago

They’re just tools that can be used well or poorly.

Let me simply observe that this is a controversial assumption. There is an entire tradition of philosophical thinking about technology that questions the idea that a given artifact-type is axiologically neutral in this way. (Imagine trying to argue: “Television doesn’t stupefy people; people stupefy people.”)

David
Reply to  Animal Symbolicum
3 months ago

Do you have some pointers to good starting points in that tradition for someone interested?

jch
Reply to  David
3 months ago

A few starting points: anything by Ibo van de Poel, Helen Nissenbaum, Paul Thompson, Pieter Vermaas, Peter-Paul Verbeek (consider: “Moralizing Technology,” Chicago, 2011), Andrew Feenberg (one place to start – https://www.sfu.ca/~andrewf/books/critbio.pdf), or Shannon Vallor (“Technology and the Virtues,” 2016, or most recently “The AI Mirror,” Oxford, 2024).

Animal Symbolicum
Reply to  David
3 months ago

I second jch’s suggestions. And here are some canonical pieces I can think of off the top of my head:

Albert Borgmann, Technology and the Character of Contemporary Life (1984).
Langdon Winner, “Do Artifacts Have Politics?” (1980).
Lewis Mumford, Technics and Civilization (1934).
Ivan Illich, Tools for Conviviality (1973).
Jacques Ellul, The Technological Society (English trans. 1964).
For a contemporary overview, you might check out Peter Kroes and Peter-Paul Verbeek, eds., The Moral Status of Technical Artefacts (2014).

Another contemporary piece is Antón Barba-Kay, A Web of Our Own Making (2023), from which I shamelessly cribbed the quip above about television.

Last edited 3 months ago by Animal Symbolicum
Michel
Reply to  Animal Symbolicum
3 months ago

Also in the classic oldies, Dallas Smythe’s ‘The Audience Commodity’.

JDRox
Reply to  Eric Steinhart
3 months ago

I’ve thought a fair bit about this approach, but my worry is that this is going to make it very difficult for the less intelligent students. A student who doesn’t get it can still, if they work hard, produce a well-written essay that describes a debate more or less correctly and tries to make a point about it. That’s not a great paper, but I’m ok with giving it a B- or maybe a B. But a paper like that would not be getting a D or maybe at best a C, right? If you or anyone else has experience using this strategy I’d be interested in hearing how it went.

Eric Steinhart
Reply to  JDRox
3 months ago

I’m thinking that even the less intelligent student will use AI. But you’re right to point to possible adverse side-effects of this or any strategy. The strategy I mentioned is just one possible strategy. More generally, we really need to figure out new and positive ways to use these AI tools. Most of these new ways are going to be very strange.

ikj
3 months ago

this isn’t a writing related anecdote but it’s worth throwing out there.

as i often do, i’m teaching thoreau’s walden and i was reworking a slideshow yesterday with a section on thoreau’s allusion to chaucer’s “the nun’s priest’s tale.” i knew thoreau had written at least one essay on chaucer but had forgotten the name (it’s “homer, ossian, chaucer”), so i *googled* the following (sans quote marks): “thoreau canterbury tales.”

the google ai overview appeared first on my search query. it said, and i quote b/c i took a screenshot, “henry david thoreau was inspired by many things, including mythology, art, and architecture but there is no evidence that he read the canterbury tales by geoffrey chaucer.” a link was provided after this sentence. that link was to a site called “the art of manliness,” which apparently has a section called “the libraries of famous men.” on “the art of manliness’s” page, the canterbury tales are listed as being in thoreau’s library.

to recap: google’s bazillion dollar web-searching ai confidently made a false claim that was linked to a website with no credibility that actually said the opposite of what google’s ai said (and happens to be correct). this is three basic failures in one sentence and i think it illustrates the foolishness of “teaching ai” posted in this thread and elsewhere.

Nicolas Delon
Reply to  ikj
3 months ago

That’s not what I get when I do the same query (result copy pasted below). The claim about lack of evidence is not there. But Google’s AI search lab is notoriously bad. That’s not what most people talk about when they talk about using AI. Try recent models of Claude, Gemini or GPT with good prompts. It will spit out some BS but also some very interesting stuff. The good prompts part is important; AI’s bland results often reflect users’ lazy thinking; that’s where you have to train your students, but it’s the same as teaching them how to ask good questions and follow-ups in class discussions.

Henry David Thoreau was an American essayist, poet, and philosopher. He was inspired by a variety of sources, including mythology, Stoic texts, and the writings of other transcendentalists. Geoffrey Chaucer wrote The Canterbury Tales, a collection of stories set in the late 1300s. [1, 2, 3, 4]

Henry David Thoreau [5]

• Known for his book Walden, which reflects on simple living in nature [5]
• Also known for his essay “Civil Disobedience”, which argues for citizen disobedience against unjust governments [5]
• Inspired by mythology, Stoic texts, and the writings of other transcendentalists [2]

The Canterbury Tales [3]

• A collection of stories written in Middle English by Geoffrey Chaucer between 1387 and 1400 [3]
• The stories are told by pilgrims on a journey to the shrine of Thomas Becket in Canterbury, Kent [3, 4]
• The stories are prized for their portraits of the storytellers and the rich social fabric of 14th-century England [4]

Generative AI is experimental.

[1] https://www.britannica.com/biography/Henry-David-Thoreau
[2] https://www.artofmanliness.com/living/reading/the-libraries-of-famous-men-henry-david-thoreau/
[3] https://www.britannica.com/summary/Henry-David-Thoreau
[4] https://www.amazon.com/Selected-Canterbury-Tales-Geoffrey-Chaucer/dp/0486282414
[5] https://en.wikipedia.org/wiki/Henry_David_Thoreau

Last edited 3 months ago by Nicolas Delon
ikj
Reply to  Nicolas Delon
3 months ago

i guess you won the lottery, nic!

[screenshot attached: 2025-01-08, 1:44 PM]
Nicolas Delon
Reply to  ikj
3 months ago

Google search results are notoriously sensitive to previous queries. This is why we’re not getting the same results.

ikj
Reply to  Nicolas Delon
3 months ago

the fact that we get different results doesn’t bolster the case that ai is a valuable way of finding information!

Nicolas Delon
Reply to  ikj
3 months ago

No, but it does bolster my claim that we shouldn’t take google search results as indicative of what LLMs can do. You’re talking about something different.

ikj
Reply to  Nicolas Delon
3 months ago

i wrote a longer post about this that either got lost or not approved but i disagree with you here and with your claim below this. this search result is not the most robust ai example, true, but it indicates the kind of factual issues that persist in llms. in addition the notion of teaching the ai relies on expertise of the instructor—we can teach students to engineer prompts and media literacy but we can’t teach them to know the things about history and the world they need to know in order to know when the ai is wrong while at the same time teaching them how to use the ai to correct results.

Nicolas Delon
Reply to  ikj
3 months ago

It also bolsters the claim that we should teach students how to use AI. If their idea of research is typing three keywords into google, then it seems like they have a ways to go before using AI intelligently. Help them do that rather than basing your estimation of AI on the laziest, lower-bound version of ‘AI’.

ikj
Reply to  Nicolas Delon
3 months ago

this marks the second time you’ve been dismissive and condescending to me on this topic in d.n. threads. your assumptions about what i know about llms and how much i have used them are, like the llms themselves, factually incorrect. i would suggest that if you have a substantive view that llms are not error-prone and that they do more than algorithmically produce strings of text from training models, you actually support those claims and refrain from imputations of ignorance or bad faith.

Nicolas Delon
Reply to  ikj
3 months ago

1) When has googling stuff, without any filter or verification, been a reliable way of finding information? This predates LLMs. I still don’t understand how your anecdote is supposed to illustrate anything about LLMs. Students used to copy paste content from Quora. That they would do the same with AI says more about them than about AI. As you note, that’s a more general media literacy issue; I don’t think we disagree.

2) My original point is in large agreement with you: that Google’s AI search lab is bad and should not be trusted. I pointed out that it’s quite different from the other widely used chatbots that are under discussion, and so what lessons we can draw from your anecdote is at best unclear. I’ve learned stuff a hundred times more efficiently using Claude than Google. Both require caution and critical thinking. Again, not unique to LLMs.

3) I don’t assume anything about your knowledge of how LLMs work in part because I don’t know who you are. I believe I have been polite; I apologize if I haven’t. I’ll just say that I’m writing under my own name and that you implied that my own anecdote was analogous to winning the lottery (so presumably less representative than yours?), and called me Nic while hiding behind a pseudonym. Please don’t lecture me about etiquette.

4) You’re being extremely dismissive of people sharing their experience with AI on DN, under their own name, while hiding behind your pseudonym. You did it with Rebecca Lowe, you’re not doing it with Regina Rini and other commenters. Your OP ends with: “I think it illustrates the foolishness of “teaching ai” posted in this thread and elsewhere“. This is fine to an extent, but you shouldn’t be surprised when people disagree with you and act as if you’re the one being talked down to. I’d say to your face if I could that googling keywords is a poor version of doing research. That’s not condescending. And I’m sure that’s not what you really do, though I concede ‘lazy’ sounds bad and I’m sorry I used that word. I didn’t mean it to apply to you in particular.

Last edited 3 months ago by Nicolas Delon
Nicolas Delon
Reply to  Nicolas Delon
3 months ago

The comment appeared!

*now doing it (not ‘not doing it’).

Last edited 3 months ago by Nicolas Delon
Nicolas Delon
Reply to  ikj
3 months ago

I wrote a long reply that’s in limbo awaiting moderation. Maybe it’ll come out, if not, oh well.

ikj
Reply to  ikj
3 months ago

from “the art of manliness” linked

[screenshot attached: 2025-01-08, 1:50 PM]
ikj
Reply to  Nicolas Delon
3 months ago

worth noting i got this result twice over the course of two days. i forgot to take a screenshot the first time.

ikj
Reply to  Nicolas Delon
3 months ago

while it is true that different models are more or less successful at particular things, and that prompts do matter in querying llms, the fundamental issue remains that llm ai is not only capable of making fundamental errors but does so with regularity.

certainly when we know better we can make better prompts that will get us the results we want. the issue with students is that they do not know better. i can almost guarantee that none of the 40 200-level students i taught yesterday know that thoreau knew chaucer well. hell, they may not even know who chaucer was (not a judgement). as a result, they have no way of knowing that they need to engineer their prompts or that they are getting false information.

it has also been sufficiently demonstrated to me that media literacy is low with many students of the past 5-7 years, meaning that they often do not see that the material the llms look to and were trained on may or may not be reliable/credible. now you can argue that we ought to teach them all of this but questions arise here: 1) what does a humanities course have to do with teaching students good llm practices when all the information is available to them *without* the use of ai, 2) what is it exactly that ai is offering these students besides shortcutting the process of research, drafting, and revising, 3) if we teach research, drafting, and revising through the llm rather than the traditional way, are we actually saving time or doing a better job teaching these skills?

i’m not going to say that llms can’t improve student writing; they can. i’m also not going to say that llms can’t generate “insights” that humans can run with. but these are not game changers in my view and they are often and consistently demonstrably wrong—and this wrongness is baked into how they work. perhaps when you’re working purely abstractly, as alpha-fold did, ai can do amazing things. but if you are trying to link your teaching to “the real world” in terms of history of thought, it’s a tremendous failure as it stands.

further, my and your students will use it to get the kinds of search results i just got yesterday and will take those incorrect answers on faith as correct (we all know the power of the first google result). encouraging ai use in college (humanities) courses (symbolic logic excepted?) is to send someone to a library in which there is no distinction between fiction and non-fiction, popular and academic sources, fantasy and science.

William Lewis
Reply to  ikj
3 months ago

Yes, I got a similar result two days ago when I googled Althusser and Croce in order to (lazily) find a reference. The AI search result got that Althusser had mentioned Croce but then made up both the secondary article to which it had referred as well as what Althusser had said about Croce. Maybe this will get better, but I would not take responsibility for what AI writes.

Last edited 3 months ago by William Lewis
Dzogpa
3 months ago

The current educational paradigm has never been successful at achieving mass literacy. Nearly 2/3 never achieved ‘proficiency’. It can’t teach essay writing even over 12 years. Now AI dissolves this whole situation. The system’s focus is actually on training people for academic success within its own peculiar system. People learn to read and write when there are tangible benefits for them. Now we realize we have been in a literary illusion, in a Western culture that values a particular coded hierarchy. Most of humanity is rapidly moving beyond that into a multimodal system, often driven by algorithms. Education is totally out of sync with the digital world. It needs to focus on English as multimodal communications, which is a collaborative and networked cultural approach to reading and writing. Think: what makes anyone want and need to read and write in this world? There you find the means for students to own their literary development. Your job is to prepare and support most students to thrive in that environment. Yes, this environment finds means to ease cognitive load, and to abandon obsolete cognitive processes. That’s human nature. Teaching critical thinking is still essential. But writing literature and essays is not the most effective or sole means of exercising that. Where and how is critical thinking exercised in this world? You all are thinking through a literary lens. You need a new multimodal lens.

Patrick Lin
Reply to  Dzogpa
3 months ago

“Your job is to prepare and support most students to thrive in that environment.”

Nah, it’s to teach philosophy. If we can also prepare students for their future careers, then great, but that’s more the university’s mission (maybe), not individual instructors’ job.

Yes, there are probably innovative ways to teach critical thinking in the age of AI/LLMs, but most philosophy courses aren’t focused on logic or critical thinking, even if they depend on that. If you don’t already know this, then I suspect you’re not a philosopher and therefore don’t have informed opinions about how the discipline should adapt to AI.

Cralky
Reply to  Patrick Lin
3 months ago

OUCH! You totally destroyed Dzogpa. As the old expression goes: “On the Internet, nobody knows you’re a Dzogpa”.

Daniele Mezzadri
3 months ago

I’m wondering whether anyone has access to the article by Rini (it is behind a paywall). I’d like to read it in full.