You’re getting this email as a subscriber to the BrXnd Dispatch, a (roughly) weekly email at the intersection of brands and AI. Last week, we announced that the next edition of the BRXND Marketing X AI Conference will be in LA on 2/6. If you’re interested in attending, you can leave your email here (the first set of ticket emails went out to previous attendees, and we’ll be releasing more soon), and if you’re interested in sponsoring, we’ve got a form for that as well. (Oh, and if you want to speak or know someone who should, let me know that too.)
Since ChatGPT came out, there’s been concern that the data you input could end up being used for training. Some of this had to do with OpenAI’s consumer terms of service, which allowed them, and still allow them, to train on the data you put into the system. But from the beginning, the terms on the API, and now on Team and Enterprise accounts, have been clear that anything you input is off-limits for training. Here’s the pertinent paragraph from their Business Terms:
3.2 Our Obligations for Customer Content. We will process and store Customer Content in accordance with our Enterprise privacy commitments. We will only use Customer Content as necessary to provide you with the Services, comply with applicable law, and enforce OpenAI Policies. We will not use Customer Content to develop or improve the Services.
Anthropic has fairly similar terms, though their consumer agreement says they won’t train on your data unless you provide them feedback (thumbs up/down) or it’s flagged for safety reasons. So why do I keep arguing with lawyers at brands and agencies about these services training on their data?
I have a few thoughts on this.
First, they haven’t read the terms. That’s okay, I guess, but not generally how things should work.
Second, they are not specifically worried about training but are more generally concerned about processing data through these services at all. That’s fair, and it’s why both companies will give you a Data Processing Addendum (DPA) if asked.
Third, their brand or agency has specific policies that bar them from using AI, and they’re hiding behind this training concern. Being clearer would be helpful here: I have heard of a few agencies that had to sign agreements with their clients saying they wouldn’t use any AI with their content (though even these often point back to the data security concerns).
Fourth, and I think most likely, there’s a fundamental misunderstanding of how these systems work.
That last one is also the most interesting to me, and it all popped back into my mind recently with the latest ChatGPT meme floating around social media:
If you go through the comments, you’ll find lots of folks blown away by how deeply ChatGPT was able to see into their soul. “One thing that stands out is your ability to seek deep connections between seemingly unrelated ideas,” one reply says. “One thing that stands out about you is your ability to bridge different worlds,” says another. It sounds repetitive but reasonable. The only problem? It’s nonsense. ChatGPT knows almost nothing about you unless you’re talking to it in a single long-running chat (something I’ve noticed lots of people do) or you have memory enabled. Memory picks up bits and pieces about you, but generally nothing that would give it that level of depth.
As Simon Willison pointed out in an excellent post, this is actually called the Barnum effect, wherein people take generalized statements about themselves and their personality to be true even though they could apply to almost anyone. This, of course, is also how horoscopes and fortune cookies work: give someone in the right mindset a vague enough statement about themselves, and they’ll find meaning in it. Companies love it, too. Just look at the role of Myers-Briggs, etc., in corporate culture.
The Barnum effect isn’t new, but it speaks to a much broader issue with most people’s understanding of how these models, particularly their chat interfaces, work: people assume they have memory. Here’s how Willison described it:
The meme implies that ChatGPT has been learning about your personality through your interactions with it, which implies that it pays attention to your ongoing conversations with it and can refer back to them later on.
In reality, ChatGPT can consult a “memory” of just three things: the current conversation, those little bio notes that it might have stashed away and anything you’ve entered as “custom instructions” in the settings.
Understanding this is crucial to learning how to use ChatGPT. Using LLMs effectively is entirely about controlling their context—thinking carefully about exactly what information is currently being handled by the model. Memory is just a few extra lines of text that get invisibly pasted into that context at the start of every new conversation.
At the end of the day, those conversations with the lawyers, and many others like them, come back to this fundamental misunderstanding of how these systems function. They’re stateless: every time you start a new conversation, the last one is gone (minus those little bits of memory described above). By and large, there’s no ongoing training or learning from what you tell them. Even when these services do train on your data (again, something they do not do if you’re on a business plan), that training would happen in a future training run of the model, not in the instant you input the data. Of course, they store those inputs in a database and can show you your old conversations, but so does every other service we use.
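To make that statelessness concrete, here’s a minimal sketch of what a chat interface does on each turn, written in Python against the OpenAI SDK (the model name and the memory note are placeholders, and real products add plenty of machinery on top of this): any “memory” is just extra text pasted into the context, and the full conversation has to be re-sent with every request because the model itself retains nothing between calls.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Memory" is nothing more than a few lines of text that get pasted
# into the context at the start of a conversation.
memory_notes = "User works in marketing and prefers short answers."  # hypothetical

# The whole conversation is re-sent on every request; the model keeps
# no state of its own between calls.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant.\n\nMemory:\n" + memory_notes},
    {"role": "user", "content": "What did we talk about last week?"},
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
)

# Unless last week's messages were included in `messages` above, the model
# has no way of knowing what they were.
print(response.choices[0].message.content)
```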
BRXND LA is coming up on 2/6. Tickets have started to be made available to previous attendees and will go out more broadly in the coming weeks. If you want to be the first to know about them, leave us your email. We are looking for sponsors to help us put on the event. If that’s of interest, please be in touch.
In my presentation at BRXND New York this year, I joked that AI doesn't pass the "duck test" when it comes to software. It looks like software, it feels like software, but it's not really software in the traditional sense.
When I say "software," I'm referring to the applications we've grown accustomed to using on our computers—almost entirely deterministic programs. With traditional software, you input the same data and get the same output every time.
Take ChatGPT as an example. The interface and the system that stores and retrieves your conversations are indeed traditional software. They use a conventional database architecture that hasn't fundamentally changed in decades. However, the core of ChatGPT—the part that generates responses—is different. It's a language model: a probabilistic system that can generate answers to a wide range of questions.
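As a toy illustration of that difference (not how any real model is implemented), the contrast looks roughly like this: a deterministic function returns the same output for the same input every time, while a probabilistic generator samples from a distribution, so repeated calls can return different answers.

```python
import random

def deterministic_add(a, b):
    # Traditional software: same inputs, same output, every time.
    return a + b

def toy_next_word(prompt):
    # Toy stand-in for a language model: sample the next word from a
    # probability distribution instead of computing one fixed answer.
    candidates = ["blue", "grey", "cloudless"]
    weights = [0.6, 0.3, 0.1]
    return random.choices(candidates, weights=weights)[0]

print(deterministic_add(2, 2))       # always 4
print(toy_next_word("The sky is"))   # may differ from run to run
```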
This fundamental difference explains why early versions of ChatGPT struggled with tasks like math. Mathematical operations are deterministic processes, but these models operate probabilistically. However, AI systems have evolved. Now, when you ask ChatGPT a math question, you will likely get the correct answer. This improvement isn't because the language model suddenly became adept at mathematics. Instead, it's because the system has been enhanced with the ability to recognize math problems, generate appropriate code to solve them, and then run that code in a deterministic environment.
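Here is a rough sketch of that pattern, heavily simplified (the prompt, model name, and use of Python's eval are illustrative only; this is not how OpenAI's actual code execution tooling works): the probabilistic model translates the question into code, and a deterministic runtime produces the answer.

```python
from openai import OpenAI

client = OpenAI()

def answer_math_question(question: str) -> str:
    # Step 1 (probabilistic): ask the model to turn the question into code.
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Reply with a single Python expression that computes "
                        "the answer. No prose, no code fences."},
            {"role": "user", "content": question},
        ],
    )
    expression = completion.choices[0].message.content.strip()

    # Step 2 (deterministic): evaluate the generated expression.
    # (A real system would sandbox this step far more carefully.)
    result = eval(expression, {"__builtins__": {}}, {})
    return f"{expression} = {result}"

print(answer_math_question("What is 17 times 43?"))
```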
This hybrid approach—combining the creative, probabilistic nature of language models with the precise, deterministic capabilities of traditional computing—represents a new paradigm in software. It's not just a duck, and it's not just software—it's a new species altogether, capable of both creative generation and precise calculation.
As with so much related to AI, I keep coming back to the critical importance of building intuition through hands-on experimentation. There's simply no substitute for rolling up your sleeves and diving in, especially when dealing with systems that blend deterministic software with probabilistic AI models. But beyond just playing, we need a new kind of literacy around these tools. It's not enough to know how to use them; we need to understand their fundamental nature. This education is vital to prevent misconceptions and to harness AI's true potential in marketing and beyond. These systems are genuinely extraordinary, capable of tasks that would have been ridiculous to consider a few years ago. But they're also profoundly weird.
I think that’s it for this week. Thanks for reading, subscribing, and supporting. As always, if you have questions, want to chat, or are looking to sponsor the conference, please be in touch. I hope to start sending emails with ticket offers to previous attendees in the next few days and open up further in the coming weeks.
Thanks,
Noah