A Student’s Guide to Not Writing with ChatGPT

2024-11-14

OpenAI has published “A Student’s Guide to Writing with ChatGPT”. In this article, I review their advice as a university researcher and teacher, and offer counterpoints. After addressing each of OpenAI’s 12 suggestions, I conclude by mentioning the ethical, cognitive and environmental issues that all students should be aware of before deciding whether to use ChatGPT. I also answer some of the more critical feedback at the end of the post. (French version: “Guide de l’étudiant pour ne pas écrire avec ChatGPT”.)

“1. Delegate citation grunt work to ChatGPT. AI excels at automating tedious, time-consuming tasks like formatting citations. Just remember to cross-check all source details against original materials for accuracy.”

That last sentence is probably there for legal reasons, because they know they can’t say ChatGPT will produce accurate results. Formatting citations and bibliographies means presenting metadata according to formal style rules. This is not natural language. ChatGPT will make errors, which will take time to track down and correct. Instead, use a reference manager, such as Zotero. It will format things reliably, exactly as expected. Just clean up each reference’s metadata as you collect it, and your bibliographies will never contain mistakes.
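To make the contrast concrete: a reference manager applies a citation style to structured metadata, so the same record always produces the same string. Here is a minimal sketch of that idea (illustrative only: this is not Zotero’s actual code, and the field names loosely mimic CSL-JSON):

```python
# Minimal sketch: deterministic citation formatting from structured
# metadata, in the spirit of what a reference manager does.
# Illustrative only; the field names loosely mimic CSL-JSON.

def format_apa(record: dict) -> str:
    """Render one reference, APA-style, from clean metadata."""
    authors = ", ".join(
        f"{a['family']}, {a['given'][0]}." for a in record["authors"]
    )
    return (
        f"{authors} ({record['year']}). {record['title']}. "
        f"{record['journal']}, {record['volume']}, {record['pages']}."
    )

record = {
    "authors": [{"family": "Doe", "given": "Jane"}],
    "year": 2023,
    "title": "On citation hygiene",
    "journal": "Journal of Examples",
    "volume": 12,
    "pages": "34-56",
}

# Same input, same output, every time: no invented page numbers.
print(format_apa(record))
```

The point is determinism: if the metadata is clean, the output is correct every time, which is exactly what a probabilistic text generator cannot guarantee.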


“2. Quickly get up to speed on a new topic. ChatGPT can jumpstart your research by providing a foundational understanding of a subject.”

ChatGPT is a human conversation simulator, not an information system or a knowledge base. It has no understanding of anything: it only outputs plausible responses. Do not ask an intermediary who has no capacity to understand information to explain it to you. Instead, go to your university library and look it up yourself, with the help of your local librarians. Actual information is contained in brains, documents and databases.


“3. Get a roadmap of relevant sources. ChatGPT can guide your research by suggesting relevant scholars, sources, and search terms. But remember: while it can point you in the right direction, ChatGPT isn’t a substitute for reading primary sources and peer-reviewed articles. And since language models can generate inaccurate information, always double-check your facts.”

(This is even more contentious than point 1, so we get two full sentences that are probably there for plausible deniability.) Because ChatGPT has no understanding of anything, it does not know what things like “a source” or “a true statement” are. Do not trust its directions. You will waste time and make mistakes. Again, ask a human or search for documents and data in a proper information system.


“4. Complete your understanding by asking specific questions.”

Because ChatGPT has no understanding of anything, it does not know actual answers to your questions, only plausible answers. It will generate true and false answers indiscriminately. This will set your learning back. Again, seek humans, documents and data directly instead of asking ChatGPT.


“5. Improve your flow by getting feedback on structure.”

Because ChatGPT has no understanding of anything, it does not understand what an “expected” or “improved” text structure is, even if you describe it. It can only upgrade your writing to middling quality, or downgrade it to that same level. Both will result in mediocre grades. To actually improve, ask a teacher, or join a group of students who give each other feedback; if such a group does not exist, get some people together and create it—this will be a useful experience by itself.


“6. Test your logic with reverse outlining.”

As an Australian study recently showed, ChatGPT does not know how to summarize, only to shorten. For now, summarizing remains something only humans do well, so you should learn it: take a summarizing course from an information skills program. (I prefer outlining first, then writing, but some colleagues tell me that the reverse method is useful too.)


“7. Develop your ideas through Socratic dialogue.”

This is one suggestion that is related to ChatGPT’s actual function: simulating human communication. However, Socratic dialogue implies that you are conversing with someone who has a superior understanding of the topic and who slowly brings you to their level. (Via Hypothesis, Marcello Vitali-Rosati retorts that Socrates always says he doesn’t know, so LLMs could, paradoxically, have Socratic value… But he adds that if you take into account their monumental costs, it may be better to converse with a pet!) And, unfortunately, ChatGPT is not Socrates. Using ChatGPT as a “sparring partner” will constrain you to its level: a machine which produces plausible human sentences. Instead, suggest this exercise to your teachers and fellow students, and do it with someone more knowledgeable than you.


“8. Pressure-test your thesis by asking for counterarguments.”

To improve your thinking, you must be able to come up with counterarguments, not just answer them. Using ChatGPT to do half the work will stunt your progress. Instead, come up with counterarguments yourself. And if you must ask for help, do not ask ChatGPT: it can only produce weak reasoning, so it will make you plateau into mediocrity. Ask someone who can create strong arguments to make you think harder.


“9. Compare your ideas against history’s greatest thinkers.”

ChatGPT can entertain you, but it has no ability to design such a complex exercise in a way that you can actually learn from it. Suggest this idea to a teacher instead. This is what they are trained to do.


“10. Elevate your writing through iterative feedback.”

This is a variant of point 5 about feedback. Again, using ChatGPT will constrain your work to a machine’s idea of the human average. Instead, go for feedback sessions with teachers and fellow students, and make those iterative if needed.


“11. Use Advanced Voice Mode as a reading companion.”

(“Advanced Voice Mode” means ChatGPT listens to you reading something out loud and tries to answer your questions about it.) This is a variant of points 2-4 about information. ChatGPT has no understanding of anything. It will not provide reliable interpretations of what you’re reading. Instead, look up the definitions of words you don’t know; find scholarly work that analyzes the text; ask another student working on the same text if you’re unsure of what you’ve read.


“12. Don’t just go through the motions—hone your skills. […] Try asking ChatGPT to suggest ways to develop your ability to think critically and write clearly.”

Again, ChatGPT has no understanding of anything. This includes “critical thinking” and “writing techniques”. Look these things up in your university library catalogue; read what you find; ask your teacher about it; and then practice, practice, practice.

Final words

ChatGPT is designed to simulate human conversation. Using a probabilistic model of language, it communicates for communication’s sake, to fool you into thinking it’s human. It’s a bullshit machine. It works as a novelty thing, for entertainment. But it’s not a reliable tool for learning, so I believe students should be wary of it.

Whenever students ask me about ChatGPT, I mention the following three issues:

  1. ethics: most of the models were built on stolen data;
  2. cognition: using it makes you more dependent and less smart, as studies have started to show (here’s a link to a French one);
  3. environment: the energy costs of generative AI are an order of magnitude greater than those of pre-existing technologies (and it’s not even profitable, so we’re burning fuel for nothing).

It’s usually enough to give most of my students pause. They’re creative young people, so they empathize with robbed creators. They want tools that help them, not hinder them. And a lot of them are (rightly) concerned about the environment, so they’re shocked to learn that ChatGPT takes ten times the energy Google does to answer the same question, usually with a worse answer (but Google is catching up, or down I should say).
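For what it’s worth, that “ten times” figure is easy to sanity-check against commonly cited public estimates (these numbers are assumptions drawn from published figures, not my own measurements): Google once reported about 0.3 Wh per search, and a widely cited 2023 estimate put a ChatGPT query at roughly 3 Wh.

```python
# Back-of-envelope check of the "ten times" claim.
# Assumed figures (public estimates, not measurements of mine):
#   ~0.3 Wh per Google search (Google's own 2009 figure)
#   ~3 Wh per ChatGPT query (a widely cited 2023 estimate)
google_wh_per_query = 0.3
chatgpt_wh_per_query = 3.0
print(f"{chatgpt_wh_per_query / google_wh_per_query:.0f}x")  # -> 10x
```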

The good news is that, as Jared White puts it:

“You can literally just not use it. […] you can be a fulfilled, modern, very online, technical expert & creator and completely sit out this hype cycle.”

If you need more information, I strongly recommend that you check out Baldur Bjarnason’s Need To Know. It’s a website that provides an accessible summary of his deep literature review of the risks of using generative AI. It’s a great starting point.

Further comments

The idea that ChatGPT has no understanding of anything is indeed debatable, depending on how you define “understanding”. Marcello Vitali-Rosati argues that meaning is not a human prerogative: humans have one form of intelligence and AI has another; it all depends on how we define different forms of intelligence. ChatGPT does things machines couldn’t do before, and fails at other things that simpler systems have been able to do for a long time. So it would be better to say that ChatGPT has its own specific form of intelligence, which is suited to some tasks and not others. For instance, Marcello mentions that ChatGPT can create convincing political speeches on the spot, whereas it will fail to list an author’s writings consistently.

I agree with all this. This leads me to clarify things a little: ChatGPT clearly understands how to communicate. How it arrives at this understanding is a form of intelligence. The problem is that this understanding is not the one that matters in a learning context, from my point of view as a teacher. Understanding how to communicate is different from understanding information. To me, this is fundamental. And that, in my opinion, is where the real debate lies. If you’re a business student, ChatGPT is undoubtedly an interesting tool for marketing-related things (although I still think it should be considered carefully, because of the ethical and environmental issues involved). But outside of the fields where the main material is bullshit (all value judgments aside), its usefulness is less obvious.

The tasks mentioned in the OpenAI “guide” are among those where ChatGPT fails to deliver reliable results. ChatGPT is not an information system: whether it goes from 90% accuracy to 99%, or even 99.99%, doesn’t change the fact that the risk of error is present due to the very nature of the tool. There are legitimate reasons for not wanting to take that risk. As for the improvement of skills, I think the only people who find ChatGPT impressive are those who are outclassed by it. Let’s face it: ChatGPT is only as good as its own model of the human average. (As George Carlin used to say: “Think of how stupid the average person is, and realize half of them are stupider than that.”) If you rely on this tool to get better, all you can do is plateau into average.
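A quick way to see why “mostly accurate” isn’t good enough here: even a small per-item error rate compounds across a whole document. A toy calculation, assuming each item is independently correct with probability p (the 50-item bibliography is just an illustrative assumption):

```python
# Probability that at least one error slips through, assuming each
# item is independently correct with probability p_correct.
def p_at_least_one_error(p_correct: float, n_items: int) -> float:
    return 1 - p_correct ** n_items

for p in (0.90, 0.99, 0.9999):
    risk = p_at_least_one_error(p, n_items=50)
    print(f"per-item accuracy {p:.2%}: {risk:.1%} chance of an error")
# 90.00% -> ~99.5% chance of at least one error in 50 items
# 99.00% -> ~39.5%
# 99.99% ->  ~0.5%
```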


Nov. 15 – This post on ChatGPT earned me a response… generated by ChatGPT! On LinkedIn, Francesc Pujol offers a “critique of my critique”, which he explains he generated via ChatGPT. I wasn’t convinced by what seems like a series of vague, somewhat incantatory statements about the tool’s supposed value. At the end of the text, the author (human or machine?) says that my comments illustrate a “reactionary” position, and suggests that people like me think that students don’t want to learn and teachers don’t want to teach. Charming!

Pujol adds that he will run a little experiment: he will have people read my objections to OpenAI and his “critique of my critique”, without telling them who wrote what, to see which of us, me or ChatGPT, produces the most “vague statements”… I’m skeptical, to say the least. My arguments are based on an awareness of the critical literature on AI. I link to a review of this literature by Baldur Bjarnason and his book The Intelligence Illusion. If you remove this and ask people to just evaluate a series of context-less statements, you turn a fact-based discussion into a blind wine tasting… This is problematic (and ironic) given that the very debate is about information, sources, and verifiable knowledge. I understand that, as a proponent of using AI in higher education, one feels the need to defend one’s choices. But there should be more respectful and intellectually sound ways to do it.


Nov. 17 – Two days after I published my post, Marcello wrote one about the creativity of LLMs. He explains the concept of “temperature”, a formal definition of creativity as implemented in a tool like ChatGPT. He insists that this is an epistemological debate:

“Don’t play the little game of saying: ‘no, creativity is more than that!’ If you can’t formally define what this ‘more’ is, then it simply means your ideas aren’t clear.” (Translated from the French.)
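For readers who haven’t met the term: “temperature” is the sampling parameter that controls how much randomness a language model injects when picking the next token. Logits are divided by the temperature before the softmax, so low values sharpen the distribution and high values flatten it. A minimal sketch of the mechanism, with toy logits (plain Python, not any model’s actual code):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/T, apply softmax, sample one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
# Low temperature: near-deterministic, almost always token 0.
print([sample_with_temperature(logits, 0.1) for _ in range(10)])
# High temperature: flatter distribution, more varied ("creative") picks.
print([sample_with_temperature(logits, 2.0) for _ in range(10)])
```

In that sense, “creativity” does get a formal definition here: a higher willingness to sample lower-probability tokens.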

For me, LLMs are not informative, at least not reliably enough in an educational setting: they understand what it is to communicate, not what it is to inform. On the other hand, they do belong to the creative realm. If you’re not in a context where the reliability of information is important, and if the issues of ethics and energy costs are addressed, then I’m not at all opposed to their use for creative purposes.