The GeLLMann Amnesia Effect // BRXND Dispatch vol 109
AI can do every job except mine
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. Forward it to a friend; they’d like that, and so would we.
The GeLLMann Amnesia Effect
There’s a thing that happens sometimes where you read an article about your industry or area of expertise and find a million holes in the reporting and thesis. But then you turn the page, read about the Strait of Hormuz or China or synthetic biology, and are entranced by every word and the depth of the reporting. You’ve forgotten your skepticism from just a few minutes earlier and take every word on the page as gospel. That’s the Gell-Mann Amnesia effect.
Named after the famous physicist Murray Gell-Mann, the term was coined by Jurassic Park author Michael Crichton to describe that lapse in skepticism. Here’s Crichton explaining it in 2002:
You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward -- reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
There’s an interesting corollary happening in the world of AI that I’ve come to call the GeLLMann Amnesia Effect (couldn’t help myself). In the LLM edition, we have people who are absolutely sure AI couldn’t possibly replace the work they do (in which they have expertise), but when they consider another industry, they’re sure the models will wipe out all the jobs.
I remember first running into this in 2023, when serious software developers were, on the one hand, pooh-poohing GPT-4’s output as unfit for quality backend code, but, on the other hand, happy to let it loose to build a frontend. It couldn’t possibly do their job, but it could definitely replace the frontend engineer a few desks away.
Since then, the effect has only intensified, particularly as the job-replacement narrative snakes its way through society. Everyone is sure that AI is coming for every job but their own. In marketing, we see lots of folks defending strategy, writing, or art direction—AI can’t do “taste,” they claim—but they’re sure it’s great for operations, code, or whatever other discipline they don’t know about. Linear CEO Karri Saarinen put it well last week:
A common dynamic I observe with AI: it feels most impressive when you don’t know much about the subject, don’t care or don’t have a clear idea of what the [sic] you want.
This applies across design, code, legal, and more. If I don’t know code very well, every piece of code it writes feels very impressive.
Once you know what something should feel or look like, it becomes almost impossible to guide AI there. And you definitely can’t one-shot it.
Or, even more simply, from Can Duruk of Modal: “my job is very hard but everyone else’s job is easy to automate via ai.”
So what do we make of this attitude? I think there are a bunch of layers worth exploring. First, there’s the gatekeeping. Whenever a new technology comes along, gatekeepers want to explain why people wielding that technology aren’t doing as good a job as them. For reasons of personal psychology, gatekeeping drives me completely bonkers. The notion that people writing code with AI aren’t developers, or that using it to help you write disqualifies you as a writer, feels completely backward to me. I wrote specifically about this issue as it relates to music and AI a few months ago, but my basic take is that anyone can call themselves whatever they want, and that doesn’t cheapen the great work of others. I’m not saying those people are writing good code or prose, but I’m happy they’re doing it!
Second, there’s the very real economics of it. One of the simplest mental models I have for AI is that it’s an averaging machine. You give it all the written human knowledge, and you get a roughly median output—a 100 IQ in nearly every topic. No one, including those with IQs far above that, actually has a median IQ across every topic, so, of course, it’s going to be more helpful in some areas and less helpful in others. The net effect, though, is hugely beneficial for society, which now has capabilities it never imagined. Doctors and lawyers are pushing for laws that would limit these models’ advice, but I would argue that, although that advice may at times be incorrect, it does far more good than harm.
My wife’s father was a doctor, and she grew up feeling annoyed that every time she got hurt, he’d give her some acetaminophen or ibuprofen and tell her that if it still hurt tomorrow, he’d look at it. That’s a common story for the children of physicians. At work, though, malpractice exposure, patient expectations, and billing codes tend to drive doctors toward X-rays and MRIs. Overdiagnosis and the resulting over-treatment are real problems in medicine, and these models seem to offer something closer to the doctor-parent. They’ll make mistakes, and probably a catastrophic one at some point. But we need to measure societal technologies at the societal level.
Finally, the base case we are comparing these things to is almost always wrong. In the medical context, while doctors worry that the models’ advice may be worse than their own, the reality is that most people aren’t going to the doctor; they’re going to Google. And, as any of us who has Googled some symptoms can attest, everything is always “you probably have cancer, and you’re going to die.” (Not for nothing, but as we talk about AI slop, much of the medical content you find on Google is the result of pre-AI slop, a fact many seem to gloss over.) While it’s nice to imagine that every citizen has a doctor and a lawyer on call for free advice, the reality is they don’t. And if they’re able to move from 10th-percentile advice on some random website to 50th-percentile advice from ChatGPT, that is the real delta.
As with most things in AI (and elsewhere), we should be more humble about what it can do in our own work and more skeptical of what it does in everyone else’s.
If you have any questions, please be in touch. As always, thanks for reading.
— Noah