Imperfect AI // BrXnd Dispatch vol. 42
On corporate AI adoption and the explore/exploit tradeoff.
You’re getting this email as a subscriber to the BrXnd Dispatch, a (roughly) weekly email at the intersection of brands and AI. Wednesday, May 8, was the second BrXnd NYC Marketing X AI Conference, and it was amazing. If you missed it, all the talks will be going up on YouTube, and I’ll have lots more content coming out soon. The fine folks at Redscout were kind enough to do a quick turnaround writeup the day after the event, so you can check that out for now.
One of the major points of conversation from my conference in New York was the relatively slow adoption by enterprise marketers. We assume this AI cycle will move faster than past technologies, but as someone who has deeply embraced AI, it's sometimes hard not to wonder what's taking so long. I’ve thought for a while that what makes this particular cycle so odd is that, even with all the hype, the capability of these models is absolutely extraordinary. Not at everything, of course, which leads lots of folks to pooh-pooh the whole thing, but there are entire classes of problems (like web scraping and data transformation) that will almost certainly never look the same again because of these new models. Because of the tech's fundamentally counterintuitive nature, I actually suspect we’re underhyping it, not overhyping it.
Among the complaints people have about these models are their mediocrity and inconsistency. And that’s true, at least to an extent. They do kick out median answers and, without good prompting, can be prone to hallucinations. Specific jobs, like writing marketing copy, might not actually be AI’s best use—and certainly not with out-of-the-box zero-shot prompts—despite its generative label. With that said, their ability to judge and classify seems to me to be at least at the level of humans—albeit with an almost free price tag and no chance of getting bored by the repetitive work.
One of the first use cases I discovered that truly opened my eyes to the possibilities of AI was turning unstructured data into structured data. Specifically, I realized that if I gave GPT-3 (the most advanced model at the time) a data structure in JSON, it could parse a website and return that data in the structure I requested. Having spent entirely too much time building brittle web scrapers over the years that relied on class names and other structural elements that might change tomorrow, this ability blew my mind.
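The idea is simple enough to sketch: instead of CSS selectors and class names, you hand the model the JSON shape you want back and validate what it returns. The schema, prompt wording, and helper functions below are illustrative, not the actual prompt I used with GPT-3:

```python
import json

# Hypothetical target structure -- an example JSON shape the model is asked
# to fill in, not a real schema from any particular scraping project.
SCHEMA = {"company": "", "headline": "", "products": []}

def build_prompt(page_text: str, schema: dict) -> str:
    """Ask the model to return the page's content in a fixed JSON shape."""
    return (
        "Extract the following fields from the page below and reply "
        "with JSON only, matching this structure exactly:\n"
        f"{json.dumps(schema, indent=2)}\n\nPAGE:\n{page_text}"
    )

def parse_response(raw: str, schema: dict) -> dict:
    """Validate the model's reply: parseable JSON containing every expected key."""
    data = json.loads(raw)
    missing = set(schema) - set(data)
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# What a well-behaved model response might look like:
reply = '{"company": "Acme", "headline": "We make anvils", "products": ["anvil"]}'
print(parse_response(reply, SCHEMA)["company"])  # Acme
```

The point is that nothing here depends on the page's markup: if the site's class names change tomorrow, the prompt still works, which is exactly what brittle selector-based scrapers can't promise.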
Bringing it back to the corporate adoption conundrum, one question that popped into my head is whether part of the disconnect with AI is a fundamental divide between entrepreneurs and corporate workers. Two traits evident in nearly every entrepreneur I’ve ever met are a tolerance for risk and a “hacker mindset,” which basically boils down to an ability to work with what you’ve got (for lack of a better description). As an entrepreneur, you learn to live with imperfect information and resources and need to find ways to squeeze the most out of what you’ve got. When I get an imperfect response back from AI, it isn't the end of the conversation; rather, it’s a place to either refine or think about where these kinds of imperfect responses can be used effectively.
That, of course, is not the corporate way. Part of the reason entrepreneurs find it hard to work inside (and sometimes outside) large companies is because we find it so strange to wait for the perfect answer. That’s a luxury you aren’t afforded when you’re running a startup.
None of this is to say there’s anything wrong with the corporate approach. I have spent enough time working with large brands to have a deep respect for the way they operate. Those companies have built what most entrepreneurs can only dream of creating. Also, the corporate reticence to dive into AI is at least partly a question of legality and liability, which are much more important when you’re operating a billion-dollar company than just a billion-dollar idea. Companies are struggling with questions of basic access to cutting-edge models, as they’re all afraid (rightly and wrongly, I think) of leaking sensitive data into the models. But it is hard not to wonder if part of the disconnect you feel when you talk to people who have gone deep with AI, particularly technology entrepreneurs, versus the corporate folks still keeping it at arm’s length is a fundamental difference between how they approach their jobs.
In computer science, there’s a concept called the explore/exploit tradeoff. The essence of the problem is the question of when you should have an algorithm stop searching for optimal answers and instead return the best thing it’s come up with.¹ The reason this matters is that in any search, you eventually hit diminishing returns, where the answers take longer and longer to get better by smaller and smaller increments. At some point, we have to make a decision about not just quality, but time, and so we cut off the algorithm’s search and settle on the best answer so far.
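The classic formalization of this tradeoff is the multi-armed bandit, and the simplest strategy for it, epsilon-greedy, makes the dial between exploring and exploiting a single number. A toy sketch (the payouts and the 0.1 exploration rate are made up for illustration):

```python
import random

random.seed(0)

# Two "arms" with hidden average payouts -- made-up numbers for illustration.
TRUE_MEANS = [0.3, 0.7]

def pull(arm: int) -> float:
    """Noisy reward centered on the arm's hidden mean."""
    return TRUE_MEANS[arm] + random.uniform(-0.1, 0.1)

def epsilon_greedy(epsilon: float, rounds: int = 2000) -> list[float]:
    """With probability epsilon, explore a random arm; otherwise exploit
    whichever arm looks best so far. Returns the estimated payouts."""
    totals = [0.0, 0.0]
    counts = [0, 0]
    for _ in range(rounds):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(2)  # explore
        else:
            # exploit: pick the arm with the higher observed average
            arm = 0 if totals[0] / counts[0] > totals[1] / counts[1] else 1
        reward = pull(arm)
        totals[arm] += reward
        counts[arm] += 1
    return [t / c for t, c in zip(totals, counts)]

print(epsilon_greedy(0.1))  # estimates converge toward the hidden means
```

Set epsilon to zero and you have the pure corporate posture: commit to the known process and never sample anything else. Set it high and you're a startup, burning rounds on options that mostly don't pan out.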
I think explore/exploit offers a useful model for understanding the difference in approach between entrepreneurs and corporations, particularly when it comes to adopting AI. Entrepreneurs, by nature, are more oriented towards exploration—they are willing to work with imperfect solutions and iterate quickly to find product-market fit. Corporations, on the other hand, are more focused on exploitation—they want to refine and optimize their existing processes and are more cautious about adopting new technologies until they are proven and reliable. They’ve already reached the top of the mountain, and their job is to stay there.
AI presents an interesting challenge for a typical corporate attitude (not that most of this newsletter's readers, particularly those who work for large companies, are typical corporate employees). It’s not perfect and definitely presents some risk. It’s not surprising that smaller companies are more willing to explore something so new and different and are much more willing to put up with its many imperfections. But the challenge, I think, is that this isn’t like the tech of the past. And while the models will certainly get better, hallucination is a feature, not a bug.
My experience building lots of AI stuff, both on my own and with companies large and small, is that the current state of this tech requires an attitude shift towards exploration. Even projects that come back as failures—we weren’t able to crack that copy problem with AI—are amazing learning opportunities, as they make you examine your process and guidelines and ask questions like, “What do we mean when we say something is ‘good’?”
A big thank you to all our 2024 sponsors, without whom I couldn’t have pulled off such an amazing event. Airtable enables any team, regardless of technical skill, to create apps on top of shared data and power their most critical and unique workflows. The Brandtech Group is a marketing technology group that helps brands do their marketing better, faster, and cheaper using the latest technology. Brandguard is the world’s #1 AI-powered brand governance platform; Focaldata is combining LLMs with qualitative research in fascinating ways; and Redscout is a strategy and design consultancy that partners with founders, CEOs, and CMOs at moments of inflection for their organizations. Plus, big thanks to McKinney, Inuvo, Dstillery, Canvas Worldwide, and Persistent Productions. If you’re interested in sponsoring a future event (or doing one internally at your brand or agency), please be in touch.
Thanks for reading, subscribing, and supporting. It was so amazing to see so many of you in person last week. As always, if you have questions or want to chat, please be in touch.
Thanks,
Noah
PS - Thanks to James & Mark for early reads and feedback.
¹ As I outlined in my WITI on explore/exploit, the answer is 37%. “If you know the exact time dimensions you’re working within (say you have one month to find a new apartment, for instance), you’d be best off spending 37 percent of it exploring before you go deep with the best option.”
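That 37 percent figure (it's 1/e, from the classic "secretary problem") is easy to check with a quick simulation. The strategy: look at the first chunk of options without committing, then take the first one that beats everything you've seen. The parameters below are arbitrary:

```python
import math
import random

random.seed(42)

def best_choice_rate(n: int, look_fraction: float, trials: int = 20000) -> float:
    """Simulate the look-then-leap strategy: observe the first look_fraction*n
    options without committing, then take the first option that beats them all.
    Returns how often this lands the single best option."""
    cutoff = int(n * look_fraction)
    wins = 0
    for _ in range(trials):
        options = [random.random() for _ in range(n)]
        best_seen = max(options[:cutoff], default=float("-inf"))
        pick = options[-1]  # forced to settle for the last one otherwise
        for x in options[cutoff:]:
            if x > best_seen:
                pick = x
                break
        wins += pick == max(options)
    return wins / trials

# Looking for ~37% of the search wins the best option about 37% of the time.
print(best_choice_rate(100, 1 / math.e))
```

Run it and the success rate hovers around 0.37, noticeably better than stopping much earlier or much later, which is the quietly remarkable part of the result: the optimal fraction of time to explore and the resulting odds of success are the same number.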