[Ed. note: While we take some time to rest up over the holidays and prepare for next year, we are re-publishing our top ten posts for the year. Please enjoy our favorite work this year and we’ll see you in 2026.]
“We shouldn’t be using ChatGPT for this,” said my colleague, glancing at the draft I’d just sent him.
“I agree. That’s my writing.”
“Oh.” He paused and read a bit. “Well, the em dashes and the structured paragraphs make this seem like AI slop, even if the content is there.”
“Thanks for the feedback,” I said. Then I flung my laptop across the room and leaped to my feet. “Those are my em dashes,” I growled, pounding the table. “And I always write in structured paragraphs. I’m an English major.” [Editor’s note: Em dashes are also house style at Stack.]
OK, no laptops were thrown and no tables were pounded, but I was a bit affronted. It was the first and only time someone had vocalized the assumption that my work was AI-generated, and it made me wonder if anyone else (incorrectly) perceives the content I write as AI slop.
Why did the idea that someone might label my work as AI-generated make me feel both icky and irritated? Why was I so eager to deny using a tool that hundreds of millions of people are using? Why did I slip in that defensive “incorrectly” a few sentences back?
Initially, I wanted to write about the supposed telltale signs of AI-generated text and whether those signs actually reveal anything. Simply put, I wanted to defend my em dashes, and I will. But as I thought more about the subject, a straightforward blog post became an ontological exploration into how we perceive, understand, and experience AI-generated content. As nearly anyone can tell you, even if they can't quite explain why, there's a significant contrast between how we experience AI art and how we experience art (visual, musical, or literary) created by humans. What is the nature of that contrast, and what does it tell us?
First, a disclaimer. I don't consider the articles and other content I write as part of my job to be art, necessarily. But that content is my body of work, the product of my effort and experience. Like every other marketing writer I've ever worked with, I take pride in my writing and its attribution. The line between work product and art is not always a bright one. For the purposes of this article, at least, please forgive some conflation between the two.
The suggestion that em dashes are a hallmark of AI writing doesn't come out of nowhere. Wikipedia's extensive field guide to patterns associated with AI-generated content covers style (overuse of em dashes, section headings zhuzhed up by emoji), language and grammar (overdependence on the rule of three, weasel words), and broader content issues. Those issues, less easily named but just as obvious when you're a Wikipedia editor reading a ton of AI slop, include superficial analysis, overly promotional language, and an undue emphasis on a subject's symbolic importance or coverage in the media.
The field guide leads with a crucial disclaimer: "Not all text featuring these indicators is AI-generated, as the large language models that power AI chatbots are trained on human writing."
“My main interaction with AI slop has (obviously) been on social media,” she said, “and it’s unfortunately reached the point where I see [AI content] and just skip. With more and more AI videos hitting my feed, my skin starts to crawl once I realize the images I’m seeing of a cute dog or bunnies on trampolines that at first gave me warm fuzzies are actually generated by a machine. The soul of it just leaves for me, and it feels unnatural.”
phoebe also reports, “One of the memes on TikTok now is for people to post ‘Is this AI, I can’t tell’ under videos that are clearly not AI as a joke.” It’s a level of irony and abstraction that would not have resonated just a couple of years ago.
I asked Ryan the same question: How do you feel when you recognize something you’re watching/reading/listening to as AI-generated?
“It depends,” he said. “If it’s something that’s ephemera around the thing I’m looking at, like a blog header or background in a game, I’m mildly disapproving, but I get it. If it’s the thing I’m looking at itself, then I feel betrayed and a little disgusted. I want to ride someone else’s brainwaves when I’m reading/viewing art/watching movies.”
“A lot of art is just product anyway,” Ryan acknowledged (thinking, I assume, of Tron: Ares), “so it’s soulless crap made by people, but AI outputs are by definition soulless. There’s no authorial intent, just stats.” He also referenced a question that’s become a familiar refrain in conversations about AI content: “Why should I bother to read something you didn’t bother to write?”
As soon as we had AI text generators, we had AI text detectors. These tools (powered, naturally, by AI) promise to determine how much of a given text is AI-generated. Approximately one minute later, we had AI humanizers like UnAIMyText, which promise to make your AI-generated text sound like something written by an actual breathing person.
As you’d expect, many users of AI detectors are teachers trying to determine whether their students actually did the homework. And many users of AI humanizers are students trying to get an AI-generated paper past those same teachers.
But many people, not least the students themselves, see a basic contradiction at play here. Even as school policies communicate to students that AI writing tools are to be avoided or, failing that, used surreptitiously, kids and young adults absorb the message that they must use AI to be competitive in a daunting job market. They’re aware of the ubiquity of AI tools and the paucity of entry-level roles; from their outlook, if they don’t work with AI, they’ll lose their future job to someone who does.
Tools built to detect or disguise AI reveal our societally mixed feelings about the technology itself. On one hand, AI tools promise that with no investment, no skills, and only a little time, you too can cre
People consistently prefer art labeled as human-made, even if they can’t tell the difference, according to new research. A study published in Computers in Human Behavior found participants favored artwork they believed was created by a human artist, perceiving it as more creative and awe-inspiring. “No matter which one is actually made by the human artist, people prefer the artwork that is labelled as human,” said coauthor Sheng Du.
The preference isn’t about the image itself, but the knowledge of its origin. As one Reddit user commented on a discussion of the study, “It’s not art.” The sentiment echoes Truman Capote’s famous criticism of Jack Kerouac: “That’s not writing. That’s typing.”
We value art, in part, because we understand someone invested time and effort in its creation. Knowing no one dreamed and labored over AI-generated work diminishes its appeal.
Cartoonist Matthew Inman, creator of The Oatmeal, offered a compelling analogy. He recalls being captivated by the dinosaurs in Jurassic Park, despite knowing they were computer-generated. He didn’t focus on the technology, but wondered, “How did somebody make this?” The dinosaurs represented human skill and imagination.
“Seeing AI art,” Inman writes, “I don’t feel that way at all.”
A separate study in Scientific Reports showed people “devalue” AI-made art even when they believe a human collaborated on it. There’s a clear distinction in our minds between humans using tools like CGI software and simply prompting an AI like Sora to generate video.
DC Comics president Jim Lee recently announced the company will not use AI-generated art or storytelling, stating, “not now, not ever.” He acknowledged a fundamental human reaction against it.
