Which AI Model is Funniest? // BRXND Dispatch vol 97
Spoiler alert—none of them are. But the process of trying to make AI funnier reveals something interesting about its limitations.
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI.
Claire here. I’m excited to share my own talk from BRXND NYC—a presentation I gave on why language models are so bad at comedy, and whether we can actually make them better.
I started with a simple experiment called LOLLM, where I tried to answer the question: which model is funniest? I created joke prompts across practical use cases (quick banter, marketing copy, cartoon captions) and had people vote on anonymous responses from different models. Claude Sonnet was reliably chosen as the funniest model, but realistically, none of the models felt funny. People were picking what felt least bad, not what actually made them laugh. That forced a harder question: why are they all failing?
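If you're curious what that looks like mechanically, here's a minimal sketch of the blind-comparison format in Python. The model names, responses, and vote tallying are placeholders for illustration, not the actual LOLLM setup.

```python
import random
from collections import Counter

# Illustrative only: a blind head-to-head in the spirit of LOLLM.
# Model names and responses are placeholders, not the real test set.
responses = {
    "model_a": "A joke from model A...",
    "model_b": "A joke from model B...",
    "model_c": "A joke from model C...",
}

def run_blind_round(responses):
    """Show responses in random order under anonymous labels, collect one vote."""
    items = list(responses.items())
    random.shuffle(items)
    labels = {}
    for label, (model, text) in zip("123", items):
        labels[label] = model
        print(f"Option {label}: {text}")
    choice = input("Which option is funniest? ").strip()
    return labels.get(choice)

votes = Counter()
for _ in range(3):  # in practice, many voters across many prompts
    winner = run_blind_round(responses)
    if winner:
        votes[winner] += 1

print(votes.most_common())
```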
I’ve studied the psychology of humor, so I decided to test whether giving models explicit comedic structure would help. I applied four classic humor theories—incongruity resolution, benign violation, superiority theory, and relief theory—plus what I called a “freshness contract” that banned obvious angles and forced compression. It worked, sort of. The results were better. But the process revealed something interesting: making AI funny requires understanding and articulating comedy theory in ways that feel completely unnatural. You don’t prompt a human comedian by explaining superiority theory and banning clichés—they just know. With AI, you have to be incredibly explicit about things that should be intuitive, which says something about both how comedy works and how AI works.
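If you want to try this kind of structured prompting yourself, here's a rough sketch of what it can look like in code. The constraint wording, the model choice, and the use of the OpenAI Python SDK are my own illustrative assumptions, not the exact prompts from the talk.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat API would work

# Illustrative constraints in the spirit of the talk: explicit humor theory
# plus a "freshness contract" that bans obvious angles and forces compression.
SYSTEM_PROMPT = """You are writing one short joke.
Structure: set up a reasonable expectation, then resolve it with a surprising
but logical twist (incongruity resolution). Keep the violation benign.
Freshness contract:
- No puns on the literal words of the topic.
- No cliches or stock joke formats.
- Maximum 25 words. Cut every word that isn't doing work."""

def tell_joke(topic: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
        temperature=1.0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tell_joke("office birthday parties"))
```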
Watch my full talk here to see the experiment in detail, the specific techniques I tested, and why AI’s struggle with humor might tell us something useful about its limitations more broadly. Also, I was attacked by a fruit fly not once, but twice, during this presentation, so be sure to watch for that.
What caught our eye this week
Yann LeCun, the Turing Award winner who founded Meta’s AI research lab in 2013, is leaving to start his own company focused on “world models,” a new generation of AI that learns from video and spatial data rather than just language. The departure caps a tumultuous year, with 600 layoffs in the AI research unit, multiple leadership exits, and Meta’s shares dropping 12.6% after it signaled AI spending could hit $100bn next year.
OpenAI CFO Sarah Friar threw cold water on IPO speculation at WSJ’s Tech Live conference, saying it’s “not on the cards” in the near term. Despite the company’s recent conversion to a new structure (which many assumed was prepping for an IPO), Friar says OpenAI is prioritizing growth and R&D over profitability. She also floated the idea of the federal government backstopping future data-center financing deals—a request that could become politically contentious given the massive capital requirements.
Simon Willison has cracked a genuinely useful pattern for LLM-assisted development: fire off asynchronous coding agents (Claude Code, Codex Cloud, Jules) to tackle research questions, then check back in 10 minutes for a pull request. He’s running 2-3 of these a day now with minimal time investment. The key insight is giving agents a dedicated GitHub repository where you don’t have to be cautious and can just let them rip.
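The pattern is easy to try with a scratch repo of your own: kick off an agent, walk away, and let a small script tell you when a pull request lands. Here's a minimal sketch against GitHub's REST API; the repo name, token handling, and ten-minute polling interval are illustrative assumptions, not part of Willison's setup.

```python
import os
import time
import requests

# Hypothetical scratch repo where the agents are allowed to let rip.
REPO = "your-org/agent-scratchpad"
TOKEN = os.environ["GITHUB_TOKEN"]

def open_pull_requests(repo: str) -> list[dict]:
    """List open PRs via the GitHub REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/pulls",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"state": "open"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

seen = {pr["number"] for pr in open_pull_requests(REPO)}
while True:
    time.sleep(600)  # check back every ten minutes
    for pr in open_pull_requests(REPO):
        if pr["number"] not in seen:
            seen.add(pr["number"])
            print(f"New PR #{pr['number']}: {pr['title']} -> {pr['html_url']}")
```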
Cursor trained its own embedding model for semantic code search (so an agent can answer “where do we handle authentication?” instead of just grepping), and it measurably improves agent performance: 12.5% higher accuracy on average, code that’s more likely to stick around in repos, and fewer iterations needed. The training data comes from agent sessions: when an agent searches multiple times before finding the right code, Cursor uses that trace to teach the model what should’ve been retrieved earlier.
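To make the idea concrete, here's a toy version of embedding-based code search. It uses an open-source embedding model as a stand-in (Cursor's model isn't public), and the code snippets and query are made up for illustration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # open-source stand-in

# Toy corpus: in practice you'd chunk and embed every file in the repo.
snippets = {
    "auth/middleware.py": "def require_login(request): ...  # checks session token",
    "billing/invoice.py": "def total_due(items): ...  # sums line items",
    "auth/passwords.py": "def verify_password(raw, hashed): ...  # bcrypt compare",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
paths = list(snippets.keys())
embeddings = model.encode(list(snippets.values()), normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Rank snippets by cosine similarity to a natural-language query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity, since vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [(paths[i], float(scores[i])) for i in best]

print(search("where do we handle authentication?"))
```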
If you have any other questions, please be in touch. As always, thanks for reading.
Noah and Claire



