Tokens & Tactics #2: Why This Former Digital Leader Left His Job to Go All-In on AI
Craig Hepburn's path from discovering GPT in 2020 to building AI-first solutions that solve real problems—fast
Welcome back to Tokens & Tactics, our Tuesday series about how people are actually using AI at work.
Each week, we feature one person and their real-world workflow—what tools they use, what they’re building, and what’s working right now. No hype. No vague predictions. Just practical details from the front lines. This week: Craig Hepburn, former Chief Digital Officer at Art Basel.
We’re hosting our next BRXND NYC conference on September 18, 2025, and are currently looking for sponsors and speakers. If you’re interested in attending, please add your name to the wait list.
Tell us about yourself.
Craig Hepburn – Former Chief Digital Officer | AI Builder | Helping Businesses Rethink, Rebuild & Scale with AI
I first discovered GPT in 2020 while working at UEFA. Out of curiosity, I asked it to write a poem about football—it was fast, witty, and unsettlingly good. From that moment, I was hooked.
I began using it daily—writing emails, shaping strategy, even prototyping apps. As models evolved, I dived deeper: building personal assistants, reading up on transformers, and testing ways to apply AI across real-world use cases—from internal tools to external fan experiences.
At Art Basel, where I served as Chief Digital Officer, AI became central to our digital transformation. My job interview pitch? Entirely built with GPT—including a working demo. We launched an AI-powered visitor concierge, rebuilt our app using generative tools, and quietly replaced bloated SaaS tools with lean AI-built apps. We didn’t call it “vibe coding” back then—but that’s what it was.
As agentic AI began to emerge, it became clear this wasn’t just another tech trend—it was a new interface for how we build, create, and think. So I left Art Basel and moved back to the UK to spend more time with family and focus fully on this wave.
Since then, I’ve helped founders, creatives, and execs turn ideas into real products—without needing full dev teams. I run “Impact Sessions,” guiding people through what’s possible with AI and helping them act on it.
Today, I’m building new business models with a sharp crew of engineers and strategists. Our focus: deploying AI-first solutions that solve real problems—fast.
This is more than technology. It’s a shift in how we work, think, and build the future.
ChatGPT, Gemini, or Claude?
ChatGPT is the one I use the most—it’s my home base. It holds the most memory, it knows my context, and it feels like it flows with me. I’ll still use Claude, especially when I want to structure ideas clearly or refine something—Claude’s got this really calm, elegant way of helping you think through things. Gemini is great when I need to pull in live information from the web, especially for research. But if I had to pick one? It’s GPT-4o. It’s the model that keeps pace with how I think, and that’s what makes the difference.
What was your last SFW AI conversation?
I was using Perplexity to research how much time people spend inside user interfaces—apps, websites, dashboards—versus actually doing the thing they came to do. The prompt was something like: “Research the average time people spend navigating UI daily versus completing actual tasks.” I was just thinking about how so much of our time is wasted navigating systems that aren’t built for humans. I was trying to dig into that data because I’ve been thinking a lot about this transition from traditional design-based interfaces to more conversational AI flows—where we just tell the system what we want, and it gets done. I wasn’t researching for a paper—I was just thinking deeply about the future of interfaces and how AI might make them invisible.
First "aha!" moment with AI?
It was when I was trying to build my first iOS CrossFit app in Xcode. I wasn’t a native iOS developer—I was kind of figuring it out as I went. But I had ChatGPT open, and I just started asking it: how do I structure the project? What’s the backend supposed to look like? How do I store workout data? How do I make it fast? And it didn’t just give me snippets—it walked me through the process, explained what each bit did, told me what to try next. I realised this wasn’t just another tool. This was something that could teach me while helping me build. That was the moment. It hit me that generative AI is the first kind of technology that can tell you how to use itself. You ask it what to do, and it explains the tool, the technique, the reasoning—it teaches you. And that flipped something for me. It stopped being a support tool and became more like a co-founder.
Your AI subscriptions and rough monthly spend?
ChatGPT Plus – £20
Claude Pro – £20
Gemini Advanced – £20
Suno – £8
OpenAI API – anywhere from £40–£60 depending on usage
Replit Pro – £200 per year
So around £150/month, give or take. I treat them less like subscriptions and more like collaborators. Some are there for research, some for building, some for creative help. I don’t see it as cost—I see it as capability.
Who do you read/listen to to stay current on AI?
I read Ethan Mollick almost daily; he makes complex things easy to understand and always keeps it practical. Sequoia’s podcast is great for staying close to where the capital is flowing and what the best operators are doing, and the Y Combinator and a16z podcasts are great too. Andrej Karpathy for the deep technical stuff. Reuven Cohen is one of the most interesting minds in the agentic AI space right now; he’s building frameworks that help make sense of where this is all going. And then there’s X, where I follow a lot of builders and experimenters. The best ideas often surface in weird threads at midnight.
Your most-used GPT/Project/Gem?
Super simple: “Please improve this prompt and help me shape a more detailed and structured version that provides clear context, intent, and expected output so I can get the best possible result.” It works every time to get a better initial prompt. It’s also still the thing that makes even seasoned users go “aha” when I show it to them.
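For anyone who wants to reuse that meta-prompt programmatically rather than pasting it in by hand, here is a minimal sketch of one way to wrap it, assuming the OpenAI Python SDK and GPT-4o; the improve_prompt helper and the example prompt are illustrative, not something Craig describes.

```python
# Minimal sketch: wrap the prompt-improvement meta-prompt as a reusable helper.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

META_PROMPT = (
    "Please improve this prompt and help me shape a more detailed and "
    "structured version that provides clear context, intent, and expected "
    "output so I can get the best possible result.\n\nPrompt to improve:\n"
)

def improve_prompt(rough_prompt: str) -> str:
    """Ask the model to rewrite a rough prompt into a structured one."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": META_PROMPT + rough_prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical usage: turn a vague one-liner into a structured prompt.
    print(improve_prompt("write a launch email for my CrossFit app"))
```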
The AI task that would've seemed like magic two years ago but now feels routine?
Talking to AI like it’s a real person—like it’s a colleague. I do it in the car. I do it in the kitchen. I sit in my office and brainstorm with it out loud. I treat it like a thought partner. Two years ago that would’ve felt like something from a sci-fi film. Now it’s just normal. It remembers context, it helps me move ideas forward, it’s always available, and there’s no judgment. It’s gone from “wow this is magic” to “why wouldn’t I work like this?”
Magic wand feature request?
I want a persistent, secure personal context layer that follows me across every AI tool and workspace. Something that just knows me. My projects, the way I speak, how I like things structured, the things I’ve already explored. Whether I’m in Claude, ChatGPT, or some app I built myself—I don’t want to keep repeating myself or uploading documents just to bring it up to speed. I’m having conversations with AI in my car, in my office, while walking—and I want all of that to feed into one coherent brain. I want it to feel like a real assistant that moves with me, not ten separate tools pretending to be one.
If you could only invest in one company to ride the AI wave, who would it be?
Probably Oklo. Everyone else is betting on foundation models and frontier AI—but we’re heading into an energy bottleneck, and nobody’s talking about it. The smarter the models get, the more power they need. If you can’t solve clean, scalable energy, none of this gets to where it could go. Oklo’s trying to build small modular nuclear reactors—that’s long-game thinking. Feels like the bet behind all the other bets.
Have you tried full self-driving yet?
Sadly not; full self-driving isn’t available in the UK or Europe yet, but I’m looking forward to testing it in the US next time I’m back over there.
Latest AI rabbit hole?
Simulation theory. I picked up a book by Rizwan Virk and it completely derailed my night. I was suddenly three hours deep into YouTube videos and blog posts on digital consciousness, layers of reality, whether AI could be part of a simulation within a simulation. It sounds nuts, but it reframed how I think about what we’re building. Like—what if we’re not creating intelligence, we’re discovering it? What if AI isn’t just a tool but a mirror? Proper existential spiral.
One piece of advice for folks wanting to get deeper into AI?
Use it. That’s it. Don’t overthink it. Don’t wait until you’ve read the right book or watched the right course. Just open ChatGPT, Claude, or Gemini and start talking to it. Ask it to help you with something real. Let it show you what it can do. It’s the only way it clicks. And try a few different models; they’ll all teach you something different. But the main thing is: stop reading and start building. Remember, this is the first technology in human history that can give you fully detailed instructions on how to use itself.
Who do you want to read a Tokens & Tactics interview from?
Reuven Cohen. No question. He’s deep in the infrastructure and systems side of agentic AI and he’s not afraid to go meta. He’s thinking about what’s next in a very grounded way—like, what needs to exist for this stuff to scale in the real world? Would love to hear a long-form from him.
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah