Tokens & Tactics #16: Making AI Plan Before It Acts
Ian P. Cook on his journey through NLP and AI, building automation to manage information flow, and his core principle of making models document goals before starting work.
Welcome back to Tokens & Tactics, our Tuesday series about how people are actually using AI at work.
Each week, we feature one person and their real-world workflow—what tools they use, what they’re building, and what’s working right now. No hype. No vague predictions. Just practical details from the front lines. This week: Ian P. Cook.
Tell us about yourself.
My name is Ian P. Cook, and I’m currently the SVP of AI at Qloo, a startup that’s built the largest knowledge graph of culture and individual taste. Decades ago, I was doing quantitative policy research for the federal government. I found my way into a PhD, where I wanted to focus on communications, which then led to studying Natural Language Processing/Understanding and teaching myself Python. When the academic job market and I looked at each other a bit askance, I opted to take a role as a data scientist. From there, I worked for years as a model-builder and tool-developer at a number of companies, on products for both consumers and companies. Along the way, I started managing bigger projects, then teams, then whole divisions, until I found myself a CTO. More recently I’ve been getting back to some of my roots, digging deep into the new tools and methods of Generative AI. It’s been hugely rewarding working with some truly amazing people, all while getting to work in this field as it explodes.
ChatGPT, Gemini, or Claude?
Claude, hands down.
For developing, Claude Code has a better scaffold around the model: it’s not just writing correct code (syntactically), it’s better at focusing on a single issue, planning for execution, and explaining the choices it made. Codex is getting better, but it still doesn’t get rid of the over-eager, cleverer-than-thou, why-are-you-bothering-me smell most OpenAI models have.
For writing, Claude has a more human voice, and is better at adapting when I ask for revisions. Provided a sample, it will better capture the voice and style of the original, and usually has better suggestions.
I will admit to opening ChatGPT if I want to start learning more about a topic that is relatively recent. I have a suspicion that the training data is more up-to-date on the latest OpenAI models, and I’ve found that it’s often better at using the web to augment its answer.
Gemini? Clippy, but make it Google.
I’ll use a range of other models for more multi-modal work: Veo and Midjourney for visuals, Suno for sound, and a spin-the-wheel selection of things for making slides.
What was your last SFW AI conversation?
I talked to Nietzsche, but first I had to make him.
Some quick background: all the recent blathering about Stoicism annoys me. My short take is that it is just another form of victim-blaming, in this case for being too emotional. Tech bros don’t like dealing with other people’s emotions, and apparently think about Rome way too much, so Stoicism blew up in Silicon Valley, and now it’s everywhere. But is there a more principled argument against it? Yes; enter Friedrich Nietzsche.
I uploaded files of all his books to ChatGPT (yeah, I know what I said above, but OpenAI is the only one that has “GPTs” you can make like this), and gave the system a simple prompt to respond in his writing style, grounding responses in quotes with citations.
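For readers who want to build their own: the system prompt really can stay simple. A hypothetical sketch (the interview doesn't share Ian's exact wording, so every line here is my own invention):

```python
# Hypothetical persona prompt for a custom "GPT"; Ian's actual wording is not
# given in the interview -- this is only an illustration of the approach.
PERSONA_PROMPT = """You are Friedrich Nietzsche. Answer every question in his
writing style and voice.

Rules:
- Ground each substantive claim in a direct quote from the uploaded books,
  citing the work and section (e.g., Beyond Good and Evil, §9).
- If the uploaded texts do not support an answer, say so plainly rather than
  inventing a quote."""

print(PERSONA_PROMPT)
```

The citation requirement is what keeps the persona anchored to the uploaded texts instead of drifting into pastiche.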
How did it go? See for yourself (trimmed for space):
Me: What is your view of Stoicism?
NietzscheGPT: Ah, Stoicism! The philosophy of the stiff upper lip, of enduring fate as if the universe were some orderly bureaucrat filing away human misery with indifferent precision. You ask me, Friedrich Nietzsche, what I think of Stoicism? Let me speak plainly: I have grappled with the Stoics, and I have spat out their graveyard serenity.
In Beyond Good and Evil (§9), I say:
“The Stoics… want to live according to nature? Oh, what a heroic pose! To live — is not that precisely to want to be different than this nature? Is not living — estimating, preferring, being unjust, being limited, wanting to be different?”
The Stoic says: “Live according to nature.” I ask: Which nature? This so-called rational, harmonious cosmos of the Stoics — is it not a fiction? Nature, in truth, is not rational, it is not moral, it is not calm. It is will to power. It devours. It creates. It destroys. Nature is chaos — and we are her cruelest expression.
Would you worship such a nature? Or would you dare — like me — to say yes to it, in all its terror and splendor?
Conclusion: Stoicism is a noble lie. It is admirable — like a beautiful statue is admirable — but dead. It is the virtue of slaves who have learned to praise their chains. It is the mask worn by those too weary to rebel.
Now I have a better grounding for my annoyance with Stoics; I’m still unlikely to become an Übermensch. Even ignoring later distortion of his philosophy by Fascists, there’s plenty that gives me the existential ick – none of which I would have known without making my own little Friedrich!
First "aha!" moment with AI?
I can’t hide the fact that I’ve been working in this area for a while. The first time I really remember just sitting and staring at my screen in disbelief was when, after first learning methods to embed a massive amount of text data, I did my own version of the “king - man + woman = queen” calculation. Without triggering an army of NDA lawyers, I’ll say it had to do with executive staff at big corporations: I entered (roughly) “Company_Name + CEO - CEO_Name + Other_Company”, and got back the correct “Other_Company_CEO_Name”. Even knowing the math and the principles behind it, I had the sensation that this was close to magic. Suddenly a lot of things we were trying to do were possible through text, not just as basic data (counts, frequency analysis, etc.), but as conceptual linkages.
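The arithmetic behind that moment is worth seeing once. A toy sketch with made-up 3-dimensional vectors (real embeddings are learned from large corpora and run to hundreds of dimensions), finding the nearest neighbor of "king - man + woman" by cosine similarity:

```python
import numpy as np

# Toy, hand-written embeddings; real vectors are learned, not authored.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.1]),
    "queen": np.array([0.1, 0.8, 0.1]),
    "apple": np.array([0.5, 0.0, 0.9]),  # distractor word
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king - man + woman" should land closest to "queen".
target = emb["king"] - emb["man"] + emb["woman"]
candidates = [w for w in emb if w not in {"king", "man", "woman"}]
best = max(candidates, key=lambda w: cosine(target, emb[w]))
print(best)  # -> queen
```

The same nearest-neighbor lookup over real learned vectors is what made the company/CEO analogy above fall out of the data.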
Unlike others, I didn’t immediately raise a couple billion dollars to create a world-changing company around it; I continued on at various startups, watching as the technology got better and better until the GPT-3.5 big bang in November 2022. At that point, I’d worked on NLP enough not to be too shocked by the abilities of the technology, but I’ll admit to being stunned by the breadth of encoded information.
Your AI subscriptions and rough monthly spend?
These are my constants, the ones I’m using all the time:
Anthropic - $200/mo
Cursor - $20/mo
OpenAI - $20/mo
Poke - $20/mo
Lindy - $75/mo
Midjourney - $8/mo
Every - $288/year [Every offers some fun AI tools to subscribers, my favorite being Monologue for voice-to-text anywhere on my computer]
Then I have a rotating list of things that I’m trying out at any one time, meaning I’m balancing any number of free trials (until I forget about them and they show up in my credit card statement).
Who do you read/listen to to stay current on AI?
I subscribe to some unsustainable number of Substacks, Beehiivs, Ghosts, Mediums (Media?), and other newsletters. It’s gotten to the point where I can’t read them all, so I actually have an agent that goes through them, consolidates everything into one email with a one-line description of each story and a link to read more if I want. That agent tells me this is the list of things I’ve signed up for, and I think it missed a few:
AI/ML Focused Newsletters:
Core AI Publications:
AI Developer/Technical:
General Tech & Industry:
Data Science & Engineering:
Startup & Business:
Product & Development:
AI Podcasts
I listen to a lot of podcasts as well, across any number of topics. I included that qualifying clause because I don’t think speed-running 100 AI podcasts a week at 2.5x is all that helpful. Things change so fast, and there are so many people trying to capitalize on the podcast and “AI Advisor” boom that there’s a lot of noise around the signal.
For AI specifically:
But here are some great listens that I prize for their insights, creativity, and quality:
Your most-used GPT/Project/Gem?
My biggest use case for AI is code generation and personal work automation, so I’m rarely re-applying the exact same prompt. I spend 75% of my days with Claude Code open in Cursor, in a kind of platonic techno-/psychosocial threesome that lets me interact with repos, GitHub, Jira/Confluence, email, and any part of the web via whatever LLM I choose. I tend to be promiscuous with those, jumping from one to the next.
The biggest thing I find myself using, in some form or another, is the requirement to have the LLM create a written plan before beginning to work: I want to see reasoning, goal identification, tasks, and evaluations so it knows it has succeeded. I do some form of this in every project, and even in any significant chat where I need some output I’m going to share.
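In practice this can be as light as a wrapper that prepends the plan requirement to every task. A minimal sketch of the idea, with a `plan_first` helper name and wording of my own choosing (not Ian's actual prompt):

```python
def plan_first(task: str) -> str:
    """Wrap a task so the model must write a plan document before acting."""
    return (
        "Before doing any work, write a short plan document containing:\n"
        "1. Goal: the objective, restated in one sentence.\n"
        "2. Reasoning: why this approach over the alternatives.\n"
        "3. Tasks: an ordered checklist of small steps.\n"
        "4. Evaluation: how to verify each step succeeded.\n"
        "Wait for the plan to be approved, then execute it step by step.\n\n"
        f"Task: {task}"
    )

print(plan_first("Add retry logic to the payments client"))
```

The evaluation section is the part that matters most: it gives the model a written definition of success it can check itself against mid-run.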
The AI task that would've seemed like magic two years ago but now feels routine?
Language translation! I went to Japan for a couple weeks this summer, and used my phone to translate print and audio constantly.
Magic wand feature request?
Portability – I want to jump from one provider or model to the next without having to recreate things like agents, slash commands, and hooks. Yes, memory is one aspect of this, where conversation history and key bits of information are retained, but it goes beyond wrangling context windows. Small and tailored models are the future, and I want to be able to go from one to the next without having to reintroduce myself and build rapport all over again.
If you could only invest in one company to ride the AI wave, who would it be?
Microsoft. Azure is increasingly the doorway to LLMs for the enterprise. Once the fallout from the MIT report about “95% of AI initiatives failing” (which, however shaky the research, did make waves) has passed, enterprise is still where the vast majority of the work AI can support will get done. They have strong, strategic investments across the ecosystem, the cash to survive, a disciplined leadership that isn’t announcing deals it can’t ever pay for, and the largest installed base of software to deliver AI tools (even before considering GitHub).
Have you tried full self-driving yet?
Waymos are super fun. It’s a bit disconcerting at first, since it’s an IRL Twitch stream where you’re hoping the player doesn’t hit a bit they haven’t mastered yet. But once you settle in, you get that whole “living in the future” tingle down your neck.
Latest AI rabbit hole?
I’m going to be hosting a home poker game for people with diverse skill sets. I started asking about guidelines for home game rules that keep the game moving and fun without bankrupting anyone. Now I have a set of rules on dealing with a number of common issues, chip distribution, betting requirements, even how to shuffle to prevent exposing cards.
Hours later I was testing the ability of the LLM to read a “live” poker game I took pictures of and uploaded. While it was good with recommendations at each step, once again the lack of real memory meant that it couldn’t distinguish opponents as tight-aggressive, loose, a fish, or anything else by analyzing their history of play.
One piece of advice for folks wanting to get deeper into AI?
For both you and the tool: focus, and document. Frustration and disillusionment happen most often when 1) the AI fails at some huge one-shot request that actually requires a ton of smaller pieces, or 2) the AI forgets what it was trying to do because it lost context. Instead, pick a small goal, and have the LLM create a document that describes the goal and what it means to succeed. This gives the LLM a concrete target, and a way to check how it’s doing.
Who do you want to read a Tokens & Tactics interview from?
Mira Murati
Tim Robinson
Killer Mike
John Oliver
Margaret Atwood
Joel and/or Ethan Coen
Reese Witherspoon
If you have any questions, please get in touch.
Thanks for reading,
Noah and Claire