Things I Think I Think About AI // BRXND Dispatch vol 90
Plus, answering another one of your anonymous questions!
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the waitlist. We plan to open up tickets publicly soon.
Noah here. This list started as a LinkedIn post and then expanded to be something I posted at Alephic.com. It’s a list of things I think I think about AI. It's incomplete, in no specific order, and probably controversial. But, it's born from a place of spending an inordinate amount of time using and building with these tools over the last few years (my estimate for time spent working on AI-backed software is ~2,400 hours since 2022, and that doesn't count the day-to-day use of ChatGPT, Claude, etc.).
So please: add to, argue with, and critique what I've got here (as long as you're a) actually playing with this stuff and b) not still solely using 4o 😉).
You don't need fancy techniques, classes, or a complicated prompt to learn AI. The best way to get up to speed is to pay $25 per month for a ChatGPT subscription and use it as much as possible.
If you're still using 4o, your opinions on anything happening with AI are nullified.
o3 is the best model from OpenAI, and you shouldn't ever really have to use anything else.
Claude is a better writer than ChatGPT.
ChatGPT is better at just about everything else.
There's nothing Perplexity does that ChatGPT doesn't do better.
Grok voice > ChatGPT and Gemini voice (mainly because it uses a better model).
ChatGPT's advanced voice mode will remain unusable until they switch off 4o.
Google Search is good for getting you to a website when you can't remember the URL or can't be bothered to type it in. AI is good for everything else.
The only way to be is a token maximalist. Always err on the side of too much context, not too little.
If you aren't exhausting your deep research queries in ChatGPT, you're using it wrong.
You should be writing with Claude Artifacts/ChatGPT Canvas.
The only time prompt engineering techniques really matter is when you're working on a shared prompt (app, GPT, project); otherwise, you should just work with the chat.
Sometimes you need to be rude to the AI to help it understand you really want it to do something.
You should never share something written by ChatGPT without reading it first.
You should never apologize for having used ChatGPT. WWGPTD.
Em-dashes are fine, everyone should chill out.
I have never found a use for fine-tuning that a prompt can't solve.
We will continue to be surprised to find out that, without any specialized training, transformer-based models can solve many problems that specialized models were built to solve.
People who say AI isn't useful for their job aren't trying hard enough.
People who say AI can't be creative aren't being creative enough.
One of the most important races in enterprise AI will be between the model creators trying to figure out how to build a rich text editor and the rich text editor companies (Google and Microsoft) trying to figure out how to integrate AI into their products in a way that isn't awful (like it is now).
People are the bottleneck in enterprise AI adoption, and it will stay that way for longer than anyone expects.
Most lawyers who worry about AI data protections haven't read the terms of service.
If models didn't progress from what we have today, we'd still have 10 years of runway to integrate their value into corporations and society.
When it comes to AI, don't trust anyone who sounds too confident.
If someone says, "You won't lose your job to AI, you'll lose your job to someone using AI," you should stop listening. It's not that it's wrong, but that we need a lot less certainty (and more humility) when it comes to this stuff.
The value of using AI isn't that it gives you great foresight into the future of how AI will evolve; it's that it gives you an uncanny ability to sniff out everyone else's BS.
AI is underhyped.
Anonymous AI Question of the Week:
As a reminder, we started an Anonymous AI Question submission form so you can ask all the questions you’re too afraid to ask. Submit yours now! Here’s this week’s question:
"When do we know if AI is hallucinating?”
Noah: My favorite answer to this question came from none other than Andrej Karpathy, former head of AI at Tesla and early OpenAI engineer. In December of 2023, he wrote on X:
I always struggle a bit when I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.
We direct their dreams with prompts. The prompts start the dream, and based on the LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.
It's only when the dreams go into deemed factually incorrect territory that we label it a "hallucination". It looks like a bug, but it's just the LLM doing what it always does.
No conversation about hallucinations should start without acknowledging that there is no AI as we know it without them. These systems don’t have a database of knowledge; they are just a set of weights developed by reading what is effectively all human writing ever produced. When they hallucinate, they’re often saying something that is directionally correct but factually inaccurate (my favorite example, from before ChatGPT had access to the web, is when a friend asked it for quotes from me and it offered a set of plausible but entirely made-up lines about creativity, curiosity, and technology).
To that end, I actually think what deserves more of our focus is how often these models manage to be factually correct with this entirely non-deterministic approach, not their factual inaccuracies.
Claire: The frustrating answer is that we sometimes don't know until it's too late. AI delivers made-up information with exactly the same confidence as real facts, which makes hallucinations particularly hard to spot.
Anthropic's recent research reveals something fascinating about how this happens. They found that Claude has internal mechanisms that normally make it decline to answer questions it doesn't know. But these safety features can misfire. When researchers asked Claude about a fictional person, it initially said it didn't know who that was. But when they added subtle cues suggesting this person was well-known, Claude suddenly began generating detailed biographical information that was entirely fabricated.
There are some patterns to watch for. AI tends to hallucinate suspiciously specific details about obscure topics, and statistics that end in 0 or 5 appear disproportionately often in false information. Asking the same question multiple ways can also reveal inconsistencies. But the deeper issue is that these models are, as researchers put it, "indifferent to the truth of their outputs." They are optimized for generating plausible-sounding text, not for accuracy.
This means we need to treat AI outputs with healthy skepticism. Think of it like working with a brilliant but unreliable colleague who occasionally presents fiction as fact with complete confidence. The technology is incredibly useful, but still requires oversight.
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah and Claire