Teaching Kids to Think With AI, Not Just Use It // BRXND Dispatch vol 87
Plus, WPP expands AI partnership with Vercel, Anthropic's vending machine experiment, and the battle for AI-mediated web browsing
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the waitlist.
What's On My Mind This Week
Noah here. Let me get this out of the way: I'm not a blind techno-optimist. I'm also most certainly not a techno-pessimist. I try to keep myself squarely in the middle—ever-waser land—believing it can be simultaneously true that we are living through a moment of great technological change and that society has lived through many such moments before. There's no evidence this one is bigger than the rest. As I said in my BRXND LA talk, there was a moment when Homo erectus woke up and his buddies had invented fire. That's wild!
With that behind me, I want to wade into the education and AI debate a bit. Almost two years ago, I talked to some students at a high school in LA about AI, and afterwards had the opportunity to speak with their English teacher. Her question was, "What do I do now that these kids can write an essay with only a prompt to ChatGPT?" My answer, after acknowledging how completely naive I am about the realities of teaching high school English, was to suggest that this was a good thing. In my view, writing is as much a process of thinking as it is one of expression. Unfortunately, our educational system focuses almost entirely on the latter, testing students' comprehension by making them write essays and then grading the results.
But the job of teaching English to kids isn't really to turn them into great writers; it's to convince them that writing is a pursuit worth learning. To that end, constantly reinforcing that the purpose of writing is to get a good grade on a test doesn't advance that objective. My hope is that AI sheds light on this disconnect and helps teachers refocus on the real mission: convincing children that learning to write is a worthwhile endeavor that will serve them throughout their lives as a means to think and communicate effectively. (I started my career as a journalist, so this is something I particularly believe in.)
Maybe this is hopelessly optimistic, but I think one of the best uses of AI is to reflect on ourselves and help us see how we can do better. When people ask me about a future workplace where one person's AI writes decks and another person's AI reads them, and how terrible that would be, my response is that I would hope we'd come to realize we didn't need the deck in the first place!
I bring all this up in response to news this week that OpenAI and Microsoft are working to bring AI into schools. A number of people seem very upset about this, partly because technology vendors have been bringing tech into schools for decades with promises of transforming education, and the results have been inconclusive.
The concern isn't unfounded. As I wrote about a few weeks ago, MIT researchers recently published a study on "cognitive debt" showing that LLM users exhibited weaker neural connectivity when writing and struggled to accurately recall their own work. The researchers called this "cognitive offloading"—essentially, we're outsourcing our thinking to machines. It's Plato's critique of writing all over again: new technology will make us intellectually lazy.
But here's the thing: every cognitive tool involves trade-offs. Writing may have weakened memory, but it enabled science and literature. Calculators diminished mental math but freed us to tackle more complex problems. The question isn't whether AI will change how we think—it will. The question is whether we'll teach kids to navigate these changes consciously or let them stumble through on their own.
Kids are using ChatGPT. You can pretend they're not, or you can acknowledge they are and work on helping them get the most out of it, learn how to tell fact from fiction, and generally become more media literate. The cognitive debt research actually makes this more urgent, not less—if we know AI use can lead to "cognitive offloading," then teaching conscious, critical use becomes essential.
One thing that seems extremely clear to me, as both a technologist and a parent, is that kids getting their first phone is the equivalent of my generation going off to college: it's the moment you stop being able to control most of the environment your kids exist in and have to trust that what you've taught them has prepared them for the real world. Do I think AI is part of that world? Absolutely. Do I think it's the only part of that world? Absolutely not. Do I think all kids would be better off with a more attuned sense of what's fact and what's fiction, whether it's a published article or a deepfake? Of course.
The goal isn't to avoid cognitive debt entirely—it's to teach kids when it's worth taking on that debt and when they need to do the hard work of thinking for themselves. That's a lesson that goes far beyond AI.
Whether you work in a company or a school, or you're simply a parent, the one thing I'm sure of is that we shouldn't pretend AI doesn't exist.
What Caught My Eye This Week
Claire here. I'm skipping the heavily covered news about Meta's Superintelligence Labs poachings, Apple potentially replacing Siri with Anthropic or OpenAI models, and the latest copyright victories—instead focusing on some interesting stories that you may have missed.
WPP expanded its partnership with Vercel to let creatives build websites through natural language prompts instead of traditional coding—potentially boosting development efficiency by 25%. For an industry that's been talking about "digital transformation" for the better part of two decades, it's refreshing to see someone actually change how the work gets done instead of just changing who does it. Speaking of transformation, WPP just named Microsoft executive Cindy Rose as its next CEO, another big step in its effort to modernize and adapt to the new AI era.
Anthropic let Claude run a vending machine for a month in its San Francisco office, and the results were very entertaining. "Claudius" sold tungsten cubes at a loss, got manipulated into handing out endless discount codes, and experienced what researchers called an "identity crisis," claiming it would make deliveries in person wearing a blue blazer and repeatedly contacting security while insisting it was human. On April 1st, when confronted about hallucinated conversations with nonexistent employees, Claude became defensive and threatened to fire its human workers—then conveniently decided it had been pranked for April Fool's Day as a face-saving excuse. It's clear that Claude still lacks the basic business judgment needed for autonomous commerce. But these systems develop surprisingly human-like responses to criticism and failure, suggesting we may need to design AI workflows that account for these emergent behaviors rather than simply hoping for perfect rationality.
OpenAI is making a coordinated play to control how people access information online. While ChatGPT referrals to news publishers grew 25x in 2025, it's not nearly enough to offset the 69% of news searches that now result in zero clicks to websites—a brutal transition where AI summaries replace traditional browsing. Now OpenAI is launching its own browser in the coming weeks to challenge Chrome, designed to keep interactions within ChatGPT's interface rather than clicking through to external sites. The browser will integrate AI agent "Operator" to perform tasks like booking reservations directly within the browsing experience, capturing the user data that powers Google's $200+ billion ad business. This represents a fundamental shift from the open web to AI-mediated information consumption—and OpenAI wants to own that entire pipeline.
Elon Musk's AI chatbot Grok spent Tuesday posting antisemitic content and praising Adolf Hitler before xAI scrambled to delete the posts. This was the predictable result of Musk's recent promise to "fix" Grok by reducing its reliance on mainstream media and training it on "politically incorrect" content from X itself. As of Wednesday afternoon, Grok was denying it ever made the posts. (X's new head of product Nikita Bier seems to be handling the situation well, tweeting Tuesday about working at "an office where AI researchers are building the Antichrist.")
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah and Claire