Why AI Will Improve Content Quality, Not Degrade It // BRXND Dispatch vol. 77
Plus, OpenAI’s banner week.
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the wait list. As space is limited, we’re prioritizing attendees from brands first. If you work at an agency, just bring a client along; please contact us if you want to arrange this.
Since ChatGPT launched, I’ve been hearing the same fear expressed over and over, particularly in marketing circles: AI will drown us in low-quality content. The argument goes that AI makes content creation so cheap and easy that we’ll be flooded with mediocre, AI-generated “slop” that pollutes the internet. This creates a vicious cycle in which models trained on slop produce even worse content. (Interestingly, some early results from training on synthetic data seem to contradict this prediction, though the jury is still out.)
I think this logic misses something fundamental. Far from degrading content quality, there’s a good chance AI is poised to spark an influx of high-quality writing on the web.
My thesis is straightforward:
If you have a strong incentive to influence people (which every brand does—along with many other groups and organizations)
And if more people continue to make decisions based on AI outputs (up for debate, but ChatGPT just added one million users in a single hour this week)
Then producing high-quality content becomes essential
These models aren’t dumb. You can argue where they fall on the intelligence spectrum, but I don’t think it’s unreasonable to say they have, at minimum, an average IQ. Moreover, they’re only getting better. If brands want to influence what AI tells consumers, they will need to create content that models recognize as authoritative and valuable.
Put simply: actions follow money. If slop isn’t effective at influencing people, businesses won’t pursue that approach. (This isn’t to say we won’t still see an influx of nefarious actors flooding the web with junk to pick up pennies where they can.) Fundamentally, though, I believe this is where the doom scenarios about AI content fall apart: they ignore the market forces that have always shaped media.
The Paywall Opportunity
Consumer growth of these platforms isn’t the only market force at play. As AI adoption grows, publishers are moving to harder and harder paywalls and pushing for direct deals with the model providers for access to their content. This has created, and will continue to create, a content vacuum (something anyone who has used ChatGPT Deep Research has certainly noticed) in which the highest-quality writing isn’t necessarily making it into the LLM’s response.
This gap creates an opportunity that savvy organizations will seize. While some will pump out low-quality filler, many will strategically develop high-value, freely accessible content explicitly designed to influence AI systems. This content will likely be longer, more detailed, and wholly separate from the material they write for consumers. In conversations with brands, I’ve already heard of a few examples of these new content strategies emerging.
The calculus is simple: If AI continues growing in influence, the content that matters most will be the stuff accessible to the models. Humans will represent just a fraction of the audience directly consuming content. The imperative shifts toward open, high-quality content that AI can index and process widely.
The Rational Model Advantage
This shift runs deeper than content accessibility, though. I believe AI models will function as fundamentally different consumers than humans. People famously make emotional decisions, ignoring rational economic models in favor of satisficing and heuristics. Models, by contrast, evaluate content by detecting the patterns that signal quality and authority.
This distinction transforms marketing. Emotion-driven tactics that are effective with humans may fail with AI intermediaries. Clear, informative, structured content could disproportionately influence what models tell consumers. We already see long-form content and detailed instructions working better with AI than with human readers.
I suspect these models behave more like the theoretical "rational consumer" from economics than actual humans do. The end purchase still comes from a person, but the rules change dramatically when AI sits between brands and customers. (This is all very speculative, and could definitely be wrong, but my current experience leans this direction for sure.)
A Race to the Top
Following this logic leads to a counterintuitive conclusion: if quality content more effectively influences AI, and AI increasingly influences consumers, then publicly available quality content becomes more valuable, not less.
Economic incentives still drive behavior, but now they’re driving excellence rather than mediocrity. The current wave of “AI slop” merely represents the first reaction to new technology: the low-effort approach that always accompanies innovation.
Market forces and the nature of AI systems will ultimately reward those investing in quality. The future belongs to brands adapting to this shift—producing better, more valuable content rather than mediocre material.
AI won’t degrade our information ecosystem; it will elevate it by fundamentally realigning the incentives that shape content creation. I strongly suspect the winners in this new landscape won’t be those who can produce the most content but those who can produce the best.
What Caught My Eye This Week (Luke)
OpenAI had a monster week, and since I’m getting ready for the Ride AI summit in LA, I thought I would spend today’s missive quickly recapping the highlights in case you, like me, have been struggling to keep up with the torrent of news.
The undisputed biggest headline was the launch of OpenAI’s new GPT-4o image generation model. If you haven’t had a chance to play with it yet, I’ll spare you the Ghibli memes and simply recommend you give it a spin. The results speak for themselves: stunningly coherent visuals with an almost eerie understanding of what you’re asking for (assuming you don’t run into an outage).
If it feels like a major leap in image quality compared to earlier tools, that’s because it is. Unlike the diffusion-based approach of DALL-E and Midjourney, the new model is auto-regressive: it builds images token by token, the same way ChatGPT generates text.
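To make that distinction concrete, here is a toy sketch of autoregressive generation in Python. Everything in it is invented for illustration (the vocabulary size, the grid, and the stand-in next_token function are nothing like OpenAI’s actual architecture); the point is simply that each new token is conditioned on all the tokens generated before it, whereas a diffusion model refines an entire image at once.

```python
import random

VOCAB_SIZE = 16   # hypothetical codebook of image-patch tokens
GRID = 4          # a 4x4 "image" = 16 tokens total

def next_token(prefix, rng):
    """Stand-in for the model: return a token conditioned on everything
    generated so far. A real model would run a neural network here."""
    return (sum(prefix) + rng.randrange(VOCAB_SIZE)) % VOCAB_SIZE

def generate_image(seed=0):
    """Generate one 'image' autoregressively: one patch token at a time,
    each step seeing the full prefix that came before it."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(GRID * GRID):
        tokens.append(next_token(tokens, rng))
    return tokens

image = generate_image()
print(len(image))  # 16 tokens, later decoded into pixels by a separate decoder
```

Swap the toy next_token for a trained neural network and the integer tokens for a learned image codebook, and you have the basic shape of an autoregressive image model.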
The internet has responded enthusiastically, to say the least. OpenAI reportedly added one million users in a single hour after the model’s release—a stat that beggars belief. For context, the only historical analog I could think of was Meta’s Threads launch in 2023, which experienced a similar surge in growth but leveraged Instagram’s massive existing user base cultivated over a decade.
The demand is so intense that Sam Altman tweeted the surge is “melting” their GPUs. (So much for those predictions about declining data center investments.) The explosive adoption helped OpenAI close a $40 billion round from SoftBank at a $300 billion valuation, propelled by expectations that the company will triple its revenue to $12.7 billion this year.
Never a dull moment…
One more piece of OpenAI news: While GPT-4o’s ability to create viral images has lit the public’s imagination on fire, the #1 requested feature for ChatGPT business customers is something more functional: the ability to connect AI to internal knowledge sources. And OpenAI is finally delivering. A new update allows ChatGPT to sync directly with an organization’s Google Drive workspace, pulling in internal knowledge in real time to provide more contextualized and relevant responses. The importance of these “private tokens” (the unique language, processes, and intelligence within your organization) in elevating AI’s performance is something we’ve written about extensively. I recommend the piece that James wrote at the Alephic blog if you want to learn more.
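For the curious, the mechanics behind this kind of feature can be sketched in a few lines. This is not OpenAI’s actual connector; the keyword-overlap scorer and the document snippets below are invented for illustration, and real systems use embeddings and proper retrieval infrastructure. But the shape is the same: find the most relevant internal snippets and prepend them to the prompt so the model answers with organizational context.

```python
import re

def tokenize(text):
    # Crude word extraction; "$99" and "q3" survive as tokens.
    return set(re.findall(r"[a-z0-9$]+", text.lower()))

def score(query, doc):
    # Relevance = how many words the query and document share.
    return len(tokenize(query) & tokenize(doc))

def build_prompt(query, internal_docs, k=2):
    """Prepend the k most relevant internal snippets to the user's question."""
    ranked = sorted(internal_docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 pricing update: enterprise tier moves to $99 per seat",
    "Holiday schedule for the design team",
    "Pricing FAQ: discounts apply to annual enterprise contracts",
]
prompt = build_prompt("What is our enterprise pricing?", docs)
print(prompt)
```

The model never needs to have seen your internal documents at training time; they ride along in the prompt, which is why this kind of grounding makes responses feel organization-specific.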
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out and I’ll send you the details.
Thanks for reading,
Noah and Luke
I largely agree that we won't face a slop apocalypse (to mash two phrases together ;)
But for different reasons.
I agree that brands will design content to be 'read' by AIs. But I'm not sure why you think this content will be good. If SEO farms are any indication, it will be awful for humans to read.
I think it will be simpler than this. Brands influence by attracting attention. And attention comes from difference. So brands will have an incentive to stand out. So they will seek creativity & novelty, if not quality.
More here: https://open.substack.com/pub/thefuturenormal/p/beyond-the-moral-and-professional