The Evenly Distributed Blindfold // BRXND Dispatch, vol 83
Plus, AI cannibalism, hiring in the age of LLMs, and LegoGPT.
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the waitlist. As space is limited, we’re prioritizing attendees from brands first. If you work at an agency, just bring a client along; please contact us if you want to arrange this.
“I keep having conversations where people speculate about when AI will be able to do things that AI can already do.” — Matthew Yglesias
“I don't think I’ve seen another subject where a cohort of people proud of their literacy are so proud of their ignorance.” — Nate Silver
Noah here. These comments have been stuck in my head for the last few days because they capture something fundamentally odd about our current moment.
I find myself having these surreal conversations where smart, accomplished people ask me when AI might be able to do things it’s already doing. This isn’t theoretical—it’s happening in boardrooms and coffee shops right now. A CMO or creative director will express genuine skepticism about whether AI can produce coherent research or quality copy. So I pull out my phone, open ChatGPT, and show them. They’re inevitably surprised, despite this technology being, literally, free.
We typically invoke William Gibson’s line here: “The future is already here—it’s just not evenly distributed.” But with generative AI, distribution isn’t the problem. Anyone with a phone can access tools more capable than most entry-level knowledge workers. ChatGPT reached 100 million monthly users just two months after launch, and OpenAI now reports over a billion registered users globally. These aren’t niche statistics—they’re mainstream adoption curves.
The tools aren’t hiding—they’re being ignored. The gap isn’t access; it’s acknowledgment. We’ve collectively constructed an invisible blindfold.
How We Quietly Moved the Goalposts
For decades, the Turing Test represented the watershed moment for artificial intelligence: if a machine could convincingly pass as human in conversation, we’d know we’d crossed a pivotal threshold. That’s happened. In fact, it happened so widely and so quietly that hardly anyone bothered to report on it. No major headlines. No philosophical reckonings. We simply shifted the criteria: Can it reason? Can it feel? Can it be trusted?
As Douglas Hofstadter observed in Gödel, Escher, Bach, “Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” This insight from over forty years ago perfectly captures our current pattern: each achievement doesn't satisfy our definition of “real AI”—it just forces us to move the definition elsewhere.
In retrospect, the Turing Test was never the actual benchmark—it was a proxy for surprise. A placeholder for “make me feel something weird.”
Why the Blindfold Might Stay On
What explains this collective blindness? Here are some theories about why we might struggle to see what’s right in front of us (helped along by many conversations with friends):
1. Narrative Lock-In
Perhaps our cultural scripts have told us real AI would arrive as sentient robots or omniscient voices from the ceiling. A text box generating legal briefs doesn’t match our Blade Runner aesthetic, so we might categorize it as “clever parlor trick” rather than “fundamental transformation.”
2. Self-Preservation
This may operate on two levels. Professionally, if you bill by the hour for research, drafting, or analysis, acknowledging a tool that compresses hours into seconds could threaten your economic model. But there might also be a deeper personal preservation at work—a psychological resistance to information that contradicts our understanding of the world. Like questioning religious beliefs, acknowledging AI’s true capabilities opens a “can of worms”: if this is possible, what else might be? What other foundational assumptions might crumble? It might be easier to avoid confronting the difficult questions altogether.
3. Hype-Scar Tissue
The wounds from recent technological bubbles—NFTs, crypto, metaverse—remain fresh. The safest posture might become reflexive skepticism. It could be easier to say “I’ve seen this movie before” than to admit this one follows a different script. Many people, particularly my peers, feel burned by the perceived ills of social media, with nearly half of U.S. adults aged 35-54 now saying big internet companies create more problems than they solve. I suspect this creates a reluctance to “sign up for the next new thing.” This skepticism might persist even when the evidence of transformation is clear—or perhaps especially then.
4. Invisible Interfaces
Truly transformative technology often disappears into the background. Autocomplete finishes our sentences; recommendation engines predict our preferences; spam filters perform industrial-scale pattern recognition. We may forget to be impressed by the extraordinary once it becomes ordinary.
5. The Threshold Myth
“Sure, it writes decent copy, but it still makes factual errors.” True—and new analysts still miss deadlines, and junior designers still create flawed mockups. Perfection was never the standard for human work; why might we demand it from silicon? Isn’t something that gets you from zero to forty percent complete in minutes unquestionably useful?
A Moment of Recognition
Consider Tesla’s Full Self-Driving (FSD) technology in 2025. While tech enthusiasts debate when “true autonomy” might arrive, Tesla vehicles are already navigating complex urban environments, handling intersections, changing lanes, and following navigation routes with minimal human intervention. There’s a striking perception gap: most people assume FSD can handle perhaps 30% of driving tasks autonomously—mainly highway miles and simple straight roads—when the reality is closer to 90% or higher across diverse driving conditions. I drove my first Tesla with FSD just last week and had to intervene only once during a 25-minute drive on both highways and busy city streets. This isn’t some future technology. It’s available right now to approximately 4 million Tesla owners with Hardware 3.0 or newer vehicles for $100 a month.
The Legitimate Reasons for Resistance
Of course, not all skepticism is unfounded. There are legitimate concerns behind the collective blindfold. People will lose jobs—and not just in the distant future, but imminently. I wince (and occasionally interrupt panel discussions) whenever I hear “you won't lose your job to AI; you'll lose it to someone using AI.” In a moment of great change, we need to stay humble about what we can predict. We simply don't know how this will unfold.
The fears of algorithmic influence, surveillance, and manipulation aren’t paranoid fantasies—they’re reasonable extrapolations from current trends. While historically new technologies have created more jobs than they’ve eliminated over the long term, the speed and scope of AI’s capabilities raise fair questions about whether this time really might be different. Concerns about these transitions deserve serious consideration, not dismissal.
But acknowledging these concerns doesn’t require denying what’s happening. In fact, meaningful engagement with the risks requires first seeing the technology clearly. Nothing is more jarring than hearing someone proudly declare they don’t use any AI tools, then proceed to expound for paragraphs about how terrible they are. We can’t have nuanced conversations about guardrails for tools we refuse to acknowledge exist.
The Reality in Front of Us
Part of this disconnect follows the familiar shape of the hype curve: initial excitement gives way to disillusionment when technologies don’t immediately deliver on their promises. After years of unfulfilled predictions, many simply stopped paying attention and assumed the technology would never arrive. Meanwhile, progress has continued in the background. Tech leaders have predicted AI’s transformative potential for years; the difference now is that it’s actually happening.
The curve of possibility has finally caught up with the predictions. Yet our collective resistance mechanisms remain fully engaged, preventing us from seeing what’s directly in front of us.
What Caught My Eye This Week (Luke)
All the speculation about ChatGPT killing Google just got a compelling new data point: An Apple exec says Google searches in Safari fell for the first time ever last month. (The news broke amid reports that Apple is developing its own AI-powered search engine.)
Related: Anthropic has added web search to its API, enabling Claude to access real-time information from the internet to enhance its responses with up-to-date insights and citations.
Legendary AI watcher Gwern makes the case that “AI cannibalism” (the seemingly paradoxical practice of training new LLMs on the outputs of older LLMs) can actually be useful.
PR people are discovering that the most effective way to influence chatbots is to… talk to journalists. It will be interesting to watch how the PR industry evolves as legacy media companies push back against LLMs training on their content. “Firms, whose services now often include regularly testing clients’ reputations with AI models, are finding that authoritative publications — including declining local news outlets and specialist trade journals — shape the results of chatbot queries about a given company far more powerfully than a social media campaign or Reddit thread could.”
Amazon unveiled a new AI tool for optimizing product listings.
Stripe built a transformer-based foundation model for payments. This approach dramatically improved card-testing attack detection (from 59% to 97%) and suggests that payments, like language, have latent structure that transformers can effectively model.
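To make that “latent structure” point a bit more concrete, here is a minimal, purely illustrative sketch. It is not Stripe’s actual architecture; the field names, vocabulary sizes, and dimensions are all assumptions. The idea is simply that a charge’s categorical fields can be treated like tokens, embedded, and passed through a small transformer encoder that learns how the fields interact before scoring the charge:

    # Hypothetical example only -- not Stripe's model. Each payment is a short
    # "sentence" of categorical field ids (e.g. card BIN, merchant, amount
    # bucket, country), and a transformer encoder contextualizes them before
    # a linear head emits a risk score for the charge.
    import torch
    import torch.nn as nn

    class PaymentTransformer(nn.Module):
        def __init__(self, vocab_sizes, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            # One embedding table per categorical field (all made-up features).
            self.embeddings = nn.ModuleList(
                [nn.Embedding(v, d_model) for v in vocab_sizes]
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True
            )
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, 1)  # one risk logit per charge

        def forward(self, fields):
            # fields: LongTensor of shape (batch, num_fields), one id per field
            tokens = torch.stack(
                [emb(fields[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
            )                                # (batch, num_fields, d_model)
            encoded = self.encoder(tokens)   # fields attend to one another
            pooled = encoded.mean(dim=1)     # simple mean pooling over fields
            return self.head(pooled).squeeze(-1)

    # Score two made-up charges, each described by four categorical field ids.
    model = PaymentTransformer(vocab_sizes=[1000, 500, 20, 200])
    charges = torch.randint(0, 20, (2, 4))   # ids kept below every vocab size
    risk_scores = model(charges)             # higher logit = more suspicious

The takeaway isn’t the specific layers; it’s that discrete transaction fields can be modeled with the same machinery that handles words, which is presumably why the approach transfers so well.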
A new feature from TikTok brings static photos to life as short-form videos.
Tech publisher and entrepreneur Tim O’Reilly addresses fears that AI will replace programmers. I like his twist on the familiar argument that lowering the barrier to coding doesn’t remove developers—it invites more people in and creates new specialties: “The cost of trying new things has gone down by orders of magnitude. And that means that the addressable surface area of programming has gone up by orders of magnitude.”
OpenAI has launched a new GitHub connector for its Deep Research feature, enabling the tool to analyze and answer questions about codebases.
Google introduced a new suite of AI-powered tools for search campaigns called AI Max.
There was a lot of news this week about AI and hiring: LinkedIn is rolling out AI-powered search features for job seekers, large employers are being inundated with impersonal job applications, and AI-generated recruiters are completely melting down.
LegoGPT, developed by Carnegie Mellon researchers, creates buildable Lego structures from text prompts.
New Jobs
Communications and Technology Strategy Director at Publicis Groupe, New York City (link)
Have a job you would like to share in the newsletter? Contact us.
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah and Luke