
Like many teachers at every level of education, I have spent the past two years trying to wrap my head around the question of generative AI in my English classroom. To my thinking, this is a question that ought to concern all people who like to read and write, not just teachers and their students. Today’s English students are tomorrow’s writers and readers of literature. If you enjoy thoughtful, consequential, human-generated writing—or hope for your own human writing to be read by a wide human audience—you should want young people to learn to read and write. College is not the only place where this can happen, of course, but large public universities like UVA, where I teach, are institutions that reliably turn tax dollars into new readers and writers, among other public services. I see it happen all the time.

There are valid reasons why college students in particular might prefer that AI do their writing for them: most students are overcommitted; college is expensive, so they need good grades for a good return on their investment; and AI is everywhere, including the post-college workforce. There are also reasons I consider less valid (detailed in a despairing essay that went viral recently), which amount to opportunistic laziness: if you can get away with using AI, why not?

It was this line of thinking that led me to conduct an experiment in my English classroom. I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an “abstinence-only” approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether AI could replace me.

What could go wrong?


In the weeks that followed, I had my students complete a series of writing assignments with and without AI, so that we could compare the results.

My students liked to hate on AI, and tended toward food-based metaphors in their critiques: AI prose was generally “flavorless” or “bland” compared to human writing. They began to notice its tendency to hallucinate quotes and sources, as well as its telltale signs, such as the weird prevalence of em-dashes, which my students never use, and sentences that always include exactly three examples. These tics quickly became running jokes, which made class fun: flexing their powers of discernment proved to be a form of entertainment. Without realizing it, my students had become close readers.
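(As an aside: the em-dash tell my students seized on is easy to quantify. The sketch below is purely my own illustration, not anything we used in class; it counts em-dashes per thousand words as a crude signal, and nothing more.)

```python
def em_dash_rate(text: str) -> float:
    """Crude heuristic: em-dashes (U+2014) per 1,000 words.

    A toy illustration of one 'telltale sign' of AI prose; a high rate
    is suggestive, never proof.
    """
    words = text.split()
    if not words:
        return 0.0
    return text.count("\u2014") / len(words) * 1000
```

A single number like this is obviously no detector on its own, but it captures how quickly a stylistic tic becomes legible once you start counting.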

During these conversations, my students expressed views that reaffirmed their initial survey choices, finding that AI wasn’t great for first drafts, but potentially useful in the pre- or post-writing stages of brainstorming and editing. I don’t want to overplay the significance of an experiment with only 72 subjects, but my sense of the current AI discourse is that my students’ views reflect broader assumptions about when AI is and isn’t ethical or effective.

It’s increasingly uncontroversial to use AI to brainstorm, and to affirm that you are doing so: just last week, the hosts of the New York Times’s tech podcast spoke enthusiastically about using AI to brainstorm for the podcast itself, including coming up with interview questions and summarizing and analyzing long documents, though of course you have to double-check AI’s work. One host compared AI chatbots to “a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time.”


Wplace is a website that takes its cue from Reddit’s r/place, a sporadic experiment in which users placed pixels on a small blank canvas every few minutes. On Wplace, anyone can sign up to add coloured pixels to a world map, with each user able to place one every 30 seconds. By internet standards one pixel every 30 seconds is glacial, and that is part of what makes it so powerful. In just a few weeks since its launch, tens, if not hundreds, of thousands of drawings have appeared.
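That pacing mechanic is simple enough to sketch. The following is a hypothetical illustration of a per-user 30-second cooldown; the names and structure are my own, not Wplace’s actual implementation.

```python
import time


class PixelCooldown:
    """Illustrative sketch of a Wplace-style placement cooldown.

    Each user may place one pixel; further placements are rejected
    until the cooldown (30 seconds by default) has elapsed.
    """

    def __init__(self, cooldown_seconds: float = 30.0, clock=time.monotonic):
        self.cooldown = cooldown_seconds
        self.clock = clock            # injectable clock, handy for testing
        self.last_placed = {}         # user id -> timestamp of last pixel

    def try_place(self, user_id, x, y, colour, canvas) -> bool:
        """Place a pixel if the user's cooldown has elapsed; report success."""
        now = self.clock()
        last = self.last_placed.get(user_id)
        if last is not None and now - last < self.cooldown:
            return False              # still cooling down
        canvas[(x, y)] = colour
        self.last_placed[user_id] = now
        return True
```

The slowness is a design choice, not a limitation: because each contribution is rationed, every large drawing visibly represents sustained, coordinated human effort.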

Scrolling to my corner of Scotland, I found portraits of beloved pets, anime favourites, pride flags, and football crests. In Kyiv, a giant Hatsune Miku dominates the sprawl alongside a remembrance garden where a user asked others to leave hand-drawn flowers. Some pixels started movements. At one point there was just a single wooden ship flying a Brazilian flag off Portugal. Soon a fleet appeared: a tongue-in-cheek invasion.

Across the diversity and chaos of the Wplace world map, nothing else feels like Gaza. In most cities, the art is made by those who live there. Palestinians do not have this opportunity: physical infrastructure is destroyed while people are murdered. Their voices, culture, and experiences are erased in real time. So, others show up for them, transforming the space on the map into a living mosaic of grief and care.

There is no algorithm and there are no leaders, yet on Wplace collective action emerges organically. A movement stays visible only because people choose to maintain it, adding pixels and repairing damage caused by others drawing over it. In that sense it works like any protest camp or memorial in the physical world: it survives only if people tend it. And here, those people are scattered across continents, bound not by geography but by a shared refusal to let what they care about disappear from view.