Tokens & Tactics #12: Turning Software Into a Medium
Paul Ford on his company Aboard's approach to AI-generated software, creating 20,000-word research briefings for every meeting, and why he wants "nutrition labels" for every AI platform.
We’re hosting our next BRXND NYC conference on September 18, 2025. The remaining tickets are currently on sale—there are only about 30 left, so grab yours before they're gone!
Welcome back to Tokens & Tactics, our Tuesday series about how people are actually using AI at work.
Each week, we feature one person and their real-world workflow—what tools they use, what they’re building, and what’s working right now. No hype. No vague predictions. Just practical details from the front lines. This week: Paul Ford.
Tell us about yourself.
Paul Ford, co-founder and President of Aboard, which uses AI to assist in building full-fledged business apps—a little different from vibe coding, in that (1) an LLM helps the user define the software architecture but doesn’t actually write code; (2) it then transforms the resulting blueprint into a complete business app. Try it! It works! (See also.)
Otherwise: I’m a lifelong technologist, and was the co-founder of a digital product consultancy called Postlight—sold a few years ago. I’m also a technology writer, for Businessweek, Wired, and lots of different magazines, although I stepped back for a year because the tech industry and the larger world stopped making any sense to me and I needed to get my bearings. I am always, always interested in how people are using tools, not just silicon-based tools.
ChatGPT, Gemini, or Claude?
ChatGPT 5 for exploring and understanding—but kindly see my critical review! Claude’s my default for coding, in the terminal, although I use ChatGPT’s code output too. I mess around with Gemini and Veo but not consistently. I check in on Mistral and DeepSeek as well. I was doing more local model exploration but that’s faded for now.
What was your last SFW AI conversation?
Perhaps this says something sad about me but I’ve never had an NSFW AI conversation. A few recent prompts:
“How can I integrate a guitar pedal into a Eurorack system?” (Worked)
“Could you make me a voter guide for the 2025 New York mayoral election? A table of policies would be good.” (Should have looked elsewhere.)
“We need a perfect, well crafted, very simple pitch deck that we can use and present in about 10 minutes on calls with VC fund reps who keep calling us. Many of the reps are VERY young, like probably into Pokemon, so it needs to be at the level a Stanford graduate can comprehend, using very simple words and charts with at most two colored lines. Can you outline that deck?” (Helpful, got me started.)
“Are Europe’s moves to expand its nuclear arsenal driven by a loss of trust in the US?” (Useful.)
“What does it mean when binaries collapse into spectra?” (Fun concept exploration.)
“Create a flat image of a ribbon award that reads YOU'RE THE MOST CORRECT.” (Fine.)
“Blues scale.” (Helpful.)
First "aha!" moment with AI?
About two years ago I fed ChatGPT some COBOL code and had it turn it into Java, and it was a credible start. Whether it worked perfectly or not didn’t matter—what I saw was that projects that frequently take years to start could start right now, in the next five minutes. Enormous amounts of bureaucratic friction could be erased.
Your AI subscriptions and rough monthly spend?
Nothing too wild. I pay for the “pro” tier for ChatGPT, Claude, and Perplexity, and have some credits floating around with DeepSeek. There are probably other fees in there; they keep creeping in. My combined spend is probably $150–$200 a month, but I’d probably just choose one, like Claude, if I weren’t doing it for work and in learning mode. We spend a lot on API access to various LLMs for work, but nothing too shocking either, because, critically, we’re not vibe coding, so we don’t have the same kinds of dependencies on upstream LLMs. We use Google/Gemini transcription tools a lot at work. I should look at the credit card bill soon, frankly.
Who do you read/listen to to stay current on AI?
Wired (best it’s ever been)
404Media (gets really surprising stories)
Fast Company (under new management, very good)
MarkTechPost as a kind of raw feed
I get a lot on Bluesky, especially critical, broadly anti-AI stuff
I should point out that I write an AI newsletter, too!
Your most-used GPT/Project/Gem?
Brace yourself, for it’s a long, long prompt, and it produces sometimes 20,000 words of text. It’s a detailed Deep Research prompt. You need to max out whatever thinking mode you’re using and let it search the web. I use this for every meeting and pitch. It’s incredibly valuable, and it helps me be more helpful and thoughtful to the people I’m meeting. I read these on the train the night before a meeting.
I’m very interested in [[FIRM NAME]], an [[INDUSTRY TYPE]] company ([[URL]]).
First, please describe their market sector, their size, and the industries to which they are connected.
Can you create a detailed timeline and overview of every significant event related to the firm and its leadership? At the same time, note major events in the industry and the markets at large that might have influenced the firm’s development. Probe deeply and also note anything interesting or novel. Start at the founding. Note all leadership events and bios. Include major industry changes and regulatory changes.
List the ten greatest risks to a business like this.
Who are the firm’s five biggest competitors? I want you to write brief corporate profiles of each company. Age, leadership, etc. At the end, make a table of their offerings by state. Also add the original firm back in.
Help me understand their supply chain. Make sure I understand their customers and how they make money. From whom do they buy and to whom do they sell?
Write a report on the overall information technology needs of firms in this space.
I want to know where companies build versus buy.
I want to know the different kinds of software that they use for a variety of uses.
I want to know the kinds of tools that they use for interacting with their different customers, for regulatory approval, and for data management.
How do they manage liability risk?
What are common data formats, and common APIs and protocols that are used throughout the industry?
What kind of technical debt do firms like this typically carry?
At the end of the report, I would like you to make forward-looking statements about the future of the IT industry, and how that might affect the technology development in these industries. Create an appendix on the idealized structure of the firm’s technology organization—organized for future maximum effect. How many people should be on the team, and how should the teams be structured?
When discussing build-vs-buy, go into depth on specific vendors around all systems where possible. Be exhaustive and explain the unique selling propositions for each vendor—make sure to list at least five differentiators where you can. Don't forget to focus on internal R&D. A U.S.-centric view is fine.
Make a list of the key software vendors listed above and describe them, explain their size, and list their five major features for firms in this space.
Make a list of the five main potential acquirers for these kinds of firms. Who are the biggest players in the space?
Produce a full briefing with all of the information above. At the beginning, add a well-organized executive briefing with summary.
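If you want to reuse the prompt above, one low-effort approach is to fill the `[[...]]` placeholders programmatically before pasting the result into a Deep Research session. A minimal sketch—the `fill_prompt` helper and the example firm are my own illustrations, not part of Ford's workflow, and the template here is truncated to the prompt's opening lines:

```python
# Fill the [[PLACEHOLDER]] slots in the research-prompt template.
# Only the placeholder names ([[FIRM NAME]], [[INDUSTRY TYPE]], [[URL]])
# come from the prompt in this post; everything else is illustrative.

TEMPLATE = (
    "I'm very interested in [[FIRM NAME]], an [[INDUSTRY TYPE]] "
    "company ([[URL]]).\n\n"
    "First, please describe their market sector, their size, and the "
    "industries to which they are connected."
)

def fill_prompt(template: str, values: dict[str, str]) -> str:
    """Replace each [[KEY]] placeholder; fail loudly if any slot is left."""
    out = template
    for key, value in values.items():
        out = out.replace(f"[[{key}]]", value)
    if "[[" in out:
        raise ValueError("unfilled placeholder remains in prompt")
    return out

# Hypothetical example firm, for illustration only.
prompt = fill_prompt(TEMPLATE, {
    "FIRM NAME": "Acme Logistics",
    "INDUSTRY TYPE": "freight brokerage",
    "URL": "https://example.com",
})
print(prompt.splitlines()[0])
```

The loud failure on unfilled slots matters more than it looks: a 20,000-word report generated from a prompt that still says `[[FIRM NAME]]` is a wasted run.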
The AI task that would've seemed like magic two years ago but now feels routine?
Those deep research reports are miraculous but most of what I do at work touches AI. At work our tool generates software, which increasingly feels normal, but it generates software with context and history. A few days ago I made a business management tool for beekeepers and it already “knew” about queens, hives, hive types, bee genetics, and honey grading—like in the dropdown menus. It’s so small, such a bunch of little details, but it’s so, so, so much easier to show a smart stakeholder something like this and get them to critique it than it is to get the information from them over interviews, turn that into documents, and wait for design and implementation, then get their feedback. Months into minutes. Software may not be perfect at the prompt—it may be kind of bad, actually, and that includes what we generate!—but it can increasingly be worthy of critique and easier to improve. This is such a stunning change to witness later in my career—the transition of software from a bureaucratic side-effect that often never shipped to a medium unto itself—that it sometimes feels like I’m dreaming.
Magic wand feature request?
I’m going to grind my axe: If the wand was truly magic, I would add a global regulatory framework across the entire AI industry that made every platform offer the equivalent of nutrition labels: I want disclosures of LLM sources, information on energy usage per training run and per query, data around anti-bias filtering, and funding for preserving the spidered commons as well as for public education as to what LLMs are, what their limits are, and where they can help and harm. This sounds like axe-grinding but the global “AGI is almost here and we care about safety!” narrative has obscured the fact that these tools are relentlessly extractive at scale and speed, and until we get a better framework for using this software, and education about risks, we’re going to have more and more stories like the one about 16-year-old Adam Raine’s relationship to ChatGPT as he planned his suicide. AI firms can’t do this, because boosters cannot be guardians. Will it happen? I’m tired of being cynical about our current world order. It should happen, and I’d gladly help.
If you could only invest in one company to ride the AI wave, who would it be?
I’ve invested enough in Aboard, I hope! That said, if I could make a portfolio that hedged against the big AI platforms, I would. As Moore’s law keeps zigging and zagging, physics be damned, this technology yearns to be cheap, localized, application-specific, and run on small machines. Ten or fifteen years ago, the scale and range of applications of something free and tiny like SQLite would have seemed magical. Big LLM products today are mostly bad web interfaces to sloppy cloud databases. They’re not products for people yet. Will Google, OpenAI, Microsoft, and Anthropic be able to generate product experiences at the rate necessary to capture this entire space? I would bet on humans coming up with millions of applications on billions of devices, and less on six or seven giant platforms being able to capture all the value.
Have you tried full self-driving yet?
I rode in a Waymo with my kids last year in SF. I hate cars, don’t drive, bike wherever I can, and I absolutely loved it in every possible way. I just saw a Waymo as I rode a bike to the Manhattan Bridge, which was shocking in a “here we go” kind of way. I’m sorry but I’m going to beat the same drum: These are society-changing technologies and we live in a society. We deserve a public referendum on self-driving cars in NYC and comprehensible regulation. It’s powerful, it’s an unbelievably accessible technology with infinite uses, it could be incredibly useful to millions of people, and it should be possible for us to talk about how we want it to work instead of having it thrust upon us. I really want it, but I want it with guardrails, guarantees, and for some of the value it generates to come back to the people who live here.
Latest AI rabbit hole?
I actually try not to go down too many AI rabbit holes. I try to think through my prompts and execute them, then improve them, and avoid too much chatting with LLMs. Chats tend to last much longer than they should, and go on well after you've achieved a goal. I try to bring a software mindset, I guess.
One piece of advice for folks wanting to get deeper into AI?
I try to explain to people that an LLM is not answering a question like a conscious human but rather translating a set of words in question form to a set of words in answer form. An LLM is a database with unusual contours. It is not a consciousness. You need to understand that your prompts are not questions but queries. And so your work with this tool is not to “ask the right question” but rather figure out how to put guardrails on the LLM, so that you eventually push it to generate artifacts that are trustworthy enough for you to use them and take action upon them. Look for repeatable processes that scale! This message doesn’t always land.
Who do you want to read a Tokens & Tactics interview from?
I always like to know what Clay Shirky is up to! Or Perry Hewitt! Or maybe my business partner Rich Ziade!
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out, and we’ll send you the details.
Thanks for reading,
Noah and Claire