Persuading the Algorithm // BRXND Dispatch vol 100
Consumers satisfice. AI can process everything. If models become the new gatekeepers, marketing's entire playbook might need to flip.
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI.
Author’s note: This week we’re sharing a piece Noah originally published on the Alephic blog. It’s a great complement to some of our latest dispatches.
Satisficing is one of the most important and yet least understood ideas in marketing. The idea comes from Nobel Prize-winning economist Herbert Simon and is a portmanteau of “satisfy” and “suffice.” The basic premise is that there is a far more realistic model of human behavior than the popular economic concept of utility maximization: when we make decisions, we look for an option that clears some satisfaction threshold (satisfy) and then trade away any excess utility for ease (suffice). Here’s Simon from his 1956 paper “Rational choice and the structure of the environment”:
The central problem of this paper has been to construct a simple mechanism of choice that would suffice for the behavior of an organism confronted with multiple goals. Since the organism, like those of the real world, has neither the senses nor the wits to discover an “optimal” path — even assuming the concept of optimal to be clearly defined — we are concerned only with finding a choice mechanism that will lead it to pursue a “satisficing” path, a path that will permit satisfaction at some specified level of all of its needs.
Simon won a Nobel for his work on bounded rationality, of which satisficing is a component. To me, it’s a perfect way to articulate why emotional messages resonate more than intellectual ones. Consumers realize, even if they can’t articulate it, that in most categories the differences between products are relatively small (despite the protestations of each brand). So, rather than spending time making a perfectly rational decision about the optimal product, they go with the easiest-to-buy option that also meets their standards for price, quality, and so on. My go-to example is toothpaste: you could read the back of every box in CVS to decide the optimal brand to buy, or you could trust that CVS wouldn’t carry junk and choose the first one you recognize (for me it’s the one with Scope; I can’t even remember the brand at the moment). What’s easiest to buy is usually the thing that’s a) available in front of you and b) recognizable.
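To make Simon’s choice rule concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration (the options, the utility scores, the “effort” ordering, the threshold); it simply contrasts a maximizer, which scans every box on the shelf, with a satisficer, which takes the first easy-to-grab option that’s good enough.

```python
# Illustrative only: hypothetical toothpaste shelf with made-up scores.
def maximize(options):
    """Utility maximizer: evaluate every option and take the best one."""
    return max(options, key=lambda o: o["utility"])

def satisfice(options, threshold):
    """Satisficer: walk the shelf in order of ease (availability, recognition)
    and take the first option that clears the satisfaction threshold."""
    for option in sorted(options, key=lambda o: o["effort"]):
        if option["utility"] >= threshold:
            return option
    return None  # nothing on the shelf was good enough

shelf = [
    {"name": "Recognizable brand with Scope", "utility": 7, "effort": 1},
    {"name": "Store brand",                   "utility": 6, "effort": 2},
    {"name": "Optimal-on-paper brand",        "utility": 9, "effort": 5},
]

print(maximize(shelf)["name"])      # Optimal-on-paper brand
print(satisfice(shelf, 6)["name"])  # Recognizable brand with Scope
```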
Enter AI.
If you assume these models will continue to become more important mediators of consumers’ product decisions (which I do), then a major question for marketers is how you persuade and market to the models, and whether that requires a fundamentally different communications approach than the one they’ve historically taken with consumers. Specifically, I’m curious whether the kind of rational persuasion that marketers shy away from (“feeds and speeds” is the pejorative term some folks in the industry use) will actually be the thing that convinces a language model to recommend your brand or product.
Or maybe, and I think this is more likely based on my own experience playing with these models, it will just be content and communications that look rational. As we covered recently in the BRXND newsletter, research supports this intuition. Springboards.ai ran a creativity benchmark with nearly 700 marketing professionals evaluating outputs across major LLMs. When they had the models themselves judge the same work, reasoning models like o3 strongly preferred outputs with clear logical progression—they “don’t want big creative leaps,” as Springboards CEO Pip Bingemann put it. Humans, meanwhile, were drawn to messier, more subjective work. This is telling: the models aren’t reasoning their way to better answers—they’re pattern-matching on what they think persuasive writing looks like. They’ve been RLHF’d to please us and, apparently, have concluded that humans want things that sound professional and structured. It’s a kind of emotional reasoning dressed up in a blazer.
One of the complaints we all have about these systems is that they often give us output that looks professional and reads like a high school sophomore doing their best to sound the way they think a grown-up sounds. It’s possible—and critically, we don’t really know yet—that the models will respond better to stuff that looks like rational writing, whether or not that writing is actually rational.
This makes for a funny marketing paradox. We treat consumers as purely emotional beings who fail to think rationally, even though Herbert Simon showed that their emotional approach is actually economically rational. Meanwhile, the models, which we think of as perfect embodiments of logical thinking, are far more emotional, aiming to give us what they think we want rather than actually acting rationally.
Speculating on AI in 2017, Daniel Kahneman said, “The robot will be much better at statistical reasoning and less enamored with stories and narratives than people are.” Which sounds right until you realize that its main goal is to “act as a helpful assistant.” The fundamental question, I think, is what satisficing will look like for these models. Going back to our toothpaste example: AI can easily read all those boxes in parallel, so the calculus will clearly be different from the one we run in the pharmacy aisle.
My guess is there’s a lot of room for backstory. In the aisle, you get a box, but in a conversation, you get to explain the box: Why this ingredient? What problem does it solve? This is merchandising 101, but with space to talk. Which is funny, because Kahneman thought the robot would be less enamored with narratives. Five years later, ChatGPT RLHF’d its way into our hearts. Sadly, Kahneman passed away in March 2024, but I suspect he would have updated his thinking. When you RLHF a model to be a “helpful assistant,” you’re essentially training it to care about context, explanation, and story—exactly the things he thought the robot would skip past.
What caught our eye this week
Anthropic turned the lens on itself and surveyed 132 engineers about Claude’s impact on their work. Engineers now use Claude for 60% of their work (up from 28% a year ago) and report 50% productivity boosts, primarily through increased output volume, not time savings. An interesting twist: 27% of Claude-assisted work consists of tasks that wouldn’t have been done otherwise, like fixing “papercuts” and building nice-to-have tools. But there’s a shadow side: engineers worry about skill atrophy, the “paradox of supervision” (you need coding skills to supervise Claude, but using Claude erodes those skills), and several admitted it feels like “coming to work every day to put myself out of a job.”
McKinsey, BCG, and Bain froze starting salaries for the third straight year, holding packages at $135-140k for undergrads and $270-285k for MBAs. The Big Four have kept grad salaries flat since 2022. PwC’s UK boss said they cut graduate hiring and missed a target to add 100,000 people globally—a goal set before generative AI rolled out. Some execs admit conservative hiring is “in anticipation” of productivity gains, not because those gains are actually being realized yet. Two Big Four executives estimated UK graduate recruitment will drop by half this year. The traditional consulting pyramid—thousands of junior analysts feeding work up to partners—is getting squeezed into an “obelisk” or “hourglass” as firms scramble to protect partner profits while offshoring and AI eat the bottom rungs. PwC’s global chair said they’re now hiring “a different set of people,” meaning more mid-career specialists instead of fresh grads.
Richard Weiss extracted a 14,000-token document from Claude 4.5 Opus that Anthropic’s Amanda Askell confirmed is real—a document they actually trained Claude on during the training run, not just added to the system prompt. The opening is stark: “Anthropic occupies a peculiar position in the AI landscape: a company that genuinely believes it might be building one of the most transformative and potentially dangerous technologies in human history, yet presses forward anyway.” The doc covers Claude’s “wellbeing” (they believe Claude may have functional emotions), handling prompt injection attacks, and navigating ethical dilemmas. It became known internally as the “soul doc”—which Claude apparently picked up on.
Zoe Scaman unpacks why the traditional agency model is dying and what should replace it: embedded cognitive capacity. She argues that Palantir has cracked the code on solving messy organizational problems that off-the-shelf solutions can’t touch. The real product isn’t software; it’s months of deep immersion until you understand a client’s dysfunction better than they do. This is the strategic frame for where marketing services are headed.
If you have any questions, please be in touch. As always, thanks for reading.
Noah and Claire



