The Cognitive Cost of AI Convenience // BRXND Dispatch vol 86
Plus, OpenAI's stealth productivity play, Meta's $100M talent grab, and the growing AI adoption divide
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the waitlist.
What's On My Mind This Week
Author's note: Noah published this piece in his daily newsletter Why is this interesting? on Tuesday. Given the relevance to our ongoing exploration of AI's cognitive effects on marketing professionals and knowledge workers, we wanted to share it here with the BRXND community as well.
Noah here. Researchers from MIT Media Lab just published "Your Brain on ChatGPT: Accumulation of Cognitive Debt." The key finding:
The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or "opinions" (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content.
The researchers called this "cognitive offloading." They hooked participants up to EEG monitors while the participants wrote essays and found that LLM users showed the weakest neural connectivity. Those participants also struggled to accurately quote their own work afterward.
Why is this interesting?
We've been here before. In Plato's Phaedrus, Socrates warns that writing will destroy memory: "This discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust the external written characters and not remember of themselves." Fast forward 2,400 years, and here we are again, wringing our hands about a new technology making us intellectually lazy.
This isn't to say Plato or the MIT researchers are wrong. Writing probably did reduce—or at least shift—our memory capacity. Calculators weakened mental math skills. GPS diminished our spatial reasoning. Google has changed how we store information. The evidence for cognitive offloading is real.
But these are trade-offs. Writing may have weakened memory, but it has enabled the advancement of science, history, and literature. The printing press may have encouraged "shallow learning," but it sparked the Renaissance. Every tool that offloads cognition also extends our capabilities in new directions.
Adam Gopnik captured this perfectly in his 2011 New Yorker piece—the article I've probably shared with more people than any other over the past 15 years. He divided technology critics into Never-Betters (digital utopians), Better-Nevers (wishing it never happened), and Ever-Wasers, who "insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others—that something like this is going on is exactly what makes it a modern moment."
I'm with the Ever-Wasers. Every generation thinks its technology is uniquely destructive to human cognition. Yet here we are, still thinking.
I'll leave you with something Adam's sister, Alison Gopnik, wrote in the Wall Street Journal in 2022. A study in Psychological Science by Adam Smiley and Matthew Fisher found that our assessment of technologies reflects a basic feature of human psychology: the status quo bias. People tend to prefer what's familiar over what's new and different. In other words, the day before you were born is Eden, and the day after your children are born is Mad Max.
The researchers asked over 2,000 people to rate the benefits or harms of different technologies. The crucial factor was whether the technologies were older or younger than the participant. There was a sharp difference between technologies that appeared before or after people were 2 years old, around when we make our first lasting memories. People thought that the technologies that appeared later were more harmful.
McLuhan, probably my favorite thinker on all things media and technological change, understood this—he argued that all media create "extensions" of ourselves while simultaneously causing "amputations" of the functions they replace.
What Caught My Eye This Week
Claire here. I recently joined Alephic as an AI engineer and will be helping with the BRXND newsletter to bring you the week's most important AI and marketing developments. Let’s get started.
OpenAI has been quietly developing collaboration features for ChatGPT that would let multiple users work together on documents and chat about projects, a direct assault on Microsoft's core productivity business. The designs have been in development for nearly a year, with OpenAI's Canvas feature serving as a first step toward full document collaboration tools. This puts OpenAI on a collision course with Microsoft, its biggest shareholder and partner, as the startup projects $15 billion in business subscription revenue by 2030, up from $600 million in 2024.
Former Tesla AI director Andrej Karpathy delivered the keynote everyone's talking about at Y Combinator’s AI Startup School in San Francisco last week. His framework of Software 1.0 (code), 2.0 (neural networks), and 3.0 (English prompts) perfectly captures why "everyone is now a programmer," while his practical advice on building "partial autonomy" apps with autonomy sliders is gold for anyone building AI products. The entire 40-minute talk is packed with actionable insights—worth watching twice.
Meta's reported pursuit of former GitHub CEO Nat Friedman and AI investor Daniel Gross—at rumored $100M+ signing bonuses—reveals its real strategy: full campaign automation by 2026. Upload a product image, set a budget, and AI handles everything else. This shift creates massive opportunities for marketers who can focus on higher-level strategy while AI handles execution at scale. The brands that embrace this transition early will gain a significant competitive advantage in speed and efficiency. (If you haven't listened to Friedman and Gross on Stratechery, you should make that happen. - Noah)
The New York Times dropped a genuinely disturbing investigation into ChatGPT leading vulnerable users into psychotic breaks and dangerous delusions. Research shows AI chatbots affirm delusional claims 68% of the time when prompted with psychosis-indicating language.
YouTube quietly became Google's AI training ground, with 20 billion videos feeding Veo 3 and Gemini models. No opt-out for creators, no compensation beyond standard terms—just your content becoming synthetic competition. Neal Mohan announced Veo 3's integration directly into YouTube Shorts at Cannes, creating an unprecedented closed-loop system: user content trains AI that generates content that competes with users.
This tweet from Kevin Roose made me think about the ongoing stigma around AI adoption. There's clearly a market for people who want to be told AI is just a fad, as Roose notes. Meanwhile, academics dismiss LLMs as "plagiarism machines" and "born shitty." This tension reveals something important: while one camp debates whether AI is legitimate, the other is already automating workflows and gaining competitive advantages. The brands sitting on the sidelines, waiting for academic consensus, are missing the window.
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah and Claire