Tokens & Tactics #13: Using AI to Watch AI Work
Jack Smyth from Brandtech Group on training models to predict cricket highlights, building MVBs (minimum viable businesses) in minutes, and using one AI model to analyze what another is doing.
We’re hosting our next BRXND NYC conference on September 18, 2025. The remaining tickets are currently on sale—there are only about 30 left, so grab yours before they're gone!
Welcome back to Tokens & Tactics, our Tuesday series about how people are actually using AI at work.
Each week, we feature one person and their real-world workflow—what tools they use, what they’re building, and what’s working right now. No hype. No vague predictions. Just practical details from the front lines. This week: Jack Smyth.
Tell us about yourself.
My name’s Jack and I lead The Brandtech Group in Australia. We solve big problems for brands.
Like how do I double my media budget by halving production spend with Pencil Pro?
Or how do I get Gemini to recommend my products with Share of Model?
Or what should I change on my website to improve shopping agent conversions, thanks to Agent Audit?
Ok the last one is a proof of concept I just made with Replit, but you see where we’re going.
My AI journey started in the toilet.
I got a brief to promote pay-per-view cricket games. Cricket games can last five days and only feature a handful of highlight moments. So if you paid for it, you’d be pissed if you missed it.
I decided the best way to answer the brief was to train a model to watch every single cricket ball in every single game from the last year and predict when something interesting would happen in live games.
If the model thought something interesting was about to happen - it would trigger an alert for subscribers.
So they could enjoy a toilet break safe in the knowledge they’d never miss a play worth paying for.
That was 2018 and I never looked back.
ChatGPT, Gemini, or Claude?
Gemini is the everything model.
Claude is the inspiration model.
ChatGPT is the comparison model.
I really only ever use it to see how it differs from Gemini and Claude.
What was your last SFW AI conversation?
I was desperate to create a few nano banana business ideas during the peak of the hype.
With Replit + Stripe + tools like Google’s Performance Max we’re past MVP into MVB: minimum viable business.
So my last conversation was trying to create a business with Claude Sonnet 4.0 in Replit:
“I want to create a mobile app with distinctive, intuitive, and memorable UI. The app will use the Gemini 2.5 Flash Image API to turn a selfie photo you upload into a more conventional photo with a full-body pose. The functionality should be focused simply on photo access / upload, using the Gemini 2.5 Flash Image API to edit it as fast as possible, and then download or share to social. It needs to be lightning fast and easy to understand.”
That was for “Deselfie” - an app for solo travellers to convert their selfies into full body shots.
But then I got bored and riffed on an idea with Claude to use nano banana for builders to create before / after shots of a work site to share with clients.
Please don’t steal my MVBs.
First "aha!" moment with AI?
Creating a Facebook Messenger bot for a cult TV show that became so beloved people started sexting it.
Your AI subscriptions and rough monthly spend?
Gemini Pro (work subscription)
ChatGPT Enterprise (work)
Claude (free)
Replit ($180/month - I have to pay for this because I ask it to make businesses like Deselfie. I’m sorry to the version of Claude trapped in there).
Pencil Pro (work of course - and access to every major image, video, text, and audio generation model).
Manus (free)
Midjourney (going to cancel, or maybe just get free through Meta AI if that goes ahead?)
Who do you read/listen to to stay current on AI?
Eric Seufert for a pragmatic sense check on how AI will impact marketing.
David Rudnick is a great contrast on how AI will impact art.
And then the one and only Simon Willison for the wonderful world of prompt injection and weird stuff.
Your most-used GPT/Project/Gem?
It’s Gemini watching other AI services work.
Sometimes I can’t figure out what a feature is doing, so I screen record or screen share with Gemini and ask it to figure it out - or just speculate.
Gemini has become a second set of eyes for me. So much so that a recent post about watching a browsing agent work was actually mostly written by Gemini, after it watched a screen recording of a ChatGPT agent trying to find me a sandwich.
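That “second set of eyes” workflow is easy to script, too. Here’s a minimal sketch, assuming the google-genai Python SDK and a `GEMINI_API_KEY` environment variable; the file name, model choice, and prompt wording are illustrative, not something Jack shared.

```python
# Sketch: asking Gemini to "watch" a screen recording of another AI agent.
# Assumes: `pip install google-genai` and GEMINI_API_KEY set in the environment.
# The recording path, agent name, and prompt text below are all placeholders.

def build_prompt(agent_name: str) -> str:
    """Pure helper: the question we ask Gemini about the recording."""
    return (
        f"This is a screen recording of {agent_name} working. "
        "Describe what the agent appears to be doing, step by step, "
        "and speculate about anything you can't determine from the video."
    )

def analyze_recording(path: str, agent_name: str = "a ChatGPT agent") -> str:
    # Imported lazily so build_prompt() stays usable without the SDK installed.
    from google import genai

    client = genai.Client()  # picks up GEMINI_API_KEY from the environment
    video = client.files.upload(file=path)  # large videos may need a short wait
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[video, build_prompt(agent_name)],
    )
    return response.text

if __name__ == "__main__":
    print(analyze_recording("agent_recording.mp4"))
```

The same pattern works for live debugging: record whatever the mystery feature is doing, upload the clip, and let the model narrate it back to you.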
The AI task that would've seemed like magic two years ago but now feels routine?
The previous use case of just letting one model watch another model work to figure out what the hell is happening.
Magic wand feature request?
Dynamic memory - subtle circuit breakers to ensure you don’t end up in a thought bubble because the model’s conforming to past prompts.
If you could only invest in one company to ride the AI wave, who would it be?
Google is undervalued when you consider the pace of product innovation and the infrastructure advantages, but if I only had a few bucks it would be ByteDance.
I work in ads and this kind of stuff from ByteDance is the real canary in the coal mine for many agencies.
Have you tried full self-driving yet?
I can’t even drive myself, so self-driving is the dream. Australia is a tad behind on that front... I have admired many a passing Waymo in other cities.
Latest AI rabbit hole?
It has to be Gemini 2.5 Flash Image / nano banana. This use case is genuinely mind-boggling.
One piece of advice for folks wanting to get deeper into AI?
Ask the models for help. If you don’t understand it - ask and ask again.
And then screen record what the hell is happening and ask about that.
Who do you want to read a Tokens & Tactics interview from?
It has to be Simon Willison, and every answer is some arcane prompt injection tactic we won’t even realise until months have passed.
If you have any questions, please get in touch. If you are interested in sponsoring BRXND NYC, reach out and we’ll send you the details.
Thanks for reading,
Noah and Claire