Linting, AI, and Brands // BRXND Dispatch vol. 75
Plus, OCR-to-code, YC startups’ AI habits, and the "fast foodification" of software.
You’re getting this email as a subscriber to the BRXND Dispatch, a newsletter at the intersection of marketing and AI. We’re hosting our next BRXND NYC conference on September 18, 2025 and currently looking for sponsors and speakers (including cool demos!). If you’re interested in attending, please add your name to the wait list. As space is limited, we’re prioritizing attendees from brands first. If you work at an agency, just bring a client along—please contact us if you want to arrange this.
Noah here. Over the last few weeks, I, like many others, have been deep in the world of coding agents. It hasn’t been unusual for me to be running Cursor, Claude Code, and Devin simultaneously, bouncing across multiple projects and tasks. Each has its own specific pros and cons, and all three plot an interesting course for where agents might head and how humans will interact with them. But I try not to speculate too much around here, so I want to talk about the fundamental way these agents have gotten so good at writing code and how that might impact brands.
Before we jump into linting, here are a few things to keep in mind about code and bugs (I promise this will come back around to marketing, so try to bear with me):
Ensuring code has zero bugs is impossible. As Alan Turing showed with the halting problem, no program can determine with certainty that an arbitrary piece of code it’s evaluating has no bugs.
But engineers want to drive down bugs. To do that, there are a whole host of techniques, such as doing code reviews (having another engineer review the code you wrote) and writing tests, which run the actual code against a specific input scenario to ensure that any changes did not affect the expected output. If you have a function that adds numbers together, you might write a test to make sure it’s working. With the input of 2 and 3, it expects the function to return 5, and if it doesn’t return that answer, the test fails. (This is obviously a pretty dumb test, but you get my drift.)
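That toy test, written out in TypeScript (the `add` function and the expected value are the illustrative ones from above):

```typescript
// A trivial function under test: adds two numbers.
function add(a: number, b: number): number {
  return a + b;
}

// A minimal test: run the actual code against a known input
// and check that the output matches what we expect.
const result = add(2, 3);
if (result !== 5) {
  throw new Error(`expected 5, got ${result}`);
}
console.log("test passed");
```

If someone later changes `add` in a way that breaks this expectation, the test fails and the change gets caught before it ships.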
While it’s great to have tests run the actual code against pre-planned scenarios, that’s both computationally expensive and dependent on the engineer imagining every edge case. That’s where “linting” comes in. Also known as static analysis, linting is a tool engineers use to analyze their code without running it, checking it against the programming language’s rules to catch basic oversights before they cause bugs. For instance, if a function requires you to pass it a number but code somewhere else is passing it a string, the linter would catch that before it causes a bug in production.
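The string-into-a-number-slot example looks like this in TypeScript (the `double` function is a made-up illustration); note that the bad call is flagged by static analysis without the program ever running:

```typescript
// This function only accepts numbers.
function double(n: number): number {
  return n * 2;
}

double(21); // fine: returns 42

// The line below never needs to run to be caught. The type
// checker flags it statically:
//   Argument of type 'string' is not assignable to
//   parameter of type 'number'.
// double("21");
```

That’s the core trade: linting can’t prove the program is correct the way a test run can, but because it never executes anything, it’s cheap enough to run on every keystroke.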
Where linting gets interesting is that you can define rules that go beyond the programming language’s foundational rules, creating specific instructions and styles you want to follow in your codebase. While the example with the string being passed to a function that accepts numbers is true across any programming language, there are lots of other rules that aren’t hard and fast but encourage cleaner, more maintainable code over the long run.
For instance, we do most of our programming at Alephic in TypeScript. TypeScript is just like JavaScript, but it adds static types, which let you say explicitly whether a function accepts strings, numbers, or both. Linters help you decide how strictly you want to enforce the use of types, including whether coders are allowed to declare that a function accepts “any,” which is an easy way out when all else fails. Our linting rules don’t allow the use of “any,” so when you’re stuck you have to find a better solution.
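In ESLint (the standard linter in the TypeScript world), banning “any” is a one-line rule. A sketch of the relevant slice of an `.eslintrc.json`, assuming the `@typescript-eslint` plugin is installed:

```json
{
  "plugins": ["@typescript-eslint"],
  "rules": {
    "@typescript-eslint/no-explicit-any": "error"
  }
}
```

With this in place, any occurrence of `any` in the codebase is reported as an error, whether the author is a human or a coding agent.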
I want to focus on linting because it’s the cheapest of all these options to run, and therefore can be running pretty much constantly. Cursor, for instance, has an option to run lint after every update it makes to see whether it has made any mistakes and followed your rules. Here it is, for example, getting told that “any” is not acceptable.
So, to recap: linting runs some code against the code that was written but doesn’t actually run the program. It looks for fundamental issues with syntax or structure, but it also enforces the rules and stylistic preferences you define. It’s an extensible system: anything you can describe in code can become a rule.
So, what does any of this have to do with brands and marketing?
Well, one of the things that keeps floating around my head as I use these tools is that marketers will need something similar to linting to run across all their AI-generated content and agents to ensure everything stays on brand. In the early days of this newsletter, I wrote a bit about “brand APIs,” a concept that has been on my mind for years:
For several years, I’ve had an idea pinging around in my head about “brand APIs.” An API is an “application programming interface” and represents a set of protocols for interacting with another computer. GPT-3 and DALL-E have APIs that I use to programmatically generate brand collabs. Twitter famously has an API that, until last week, was used by a bunch of different apps to provide access to users. Your operating system has tons of APIs that allow all the different software you run to do the things it needs by accessing OS X/Windows/Linux functions. These perform various tasks rather than requiring every software developer to reinvent the wheel every time they need to allow a user to copy something to the clipboard or take a screenshot.
I think this linting concept takes things further. These agents work much more effectively when you have a set of rules that you can frequently run against the content being produced to provide feedback to the human or AI author. For a brand, this could range from very simple things, like always capitalizing a name or never saying “consumer,” to concepts that are more focused on tone or even specific logo usage/colors. Lots of folks are thinking about this—Rob May spoke about some of these concepts in his talk about Brandguard two years ago at the BRXND event—but I think there’s an opportunity to push this idea further, particularly by extending the linting analogy, and start building some standards and putting them into practice for brands. More to come. If you’ve got some ideas or are working on this (or know someone who is), get in touch.
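To make the analogy concrete, here’s a minimal sketch of what a “brand linter” might look like. Everything here is hypothetical: the `BrandRule` shape, the rule names, and the sample rules are invented for illustration, and a real system would need model-assisted checks for tone and visuals, not just pattern matching:

```typescript
// A hypothetical brand-linting rule: a name plus a check that
// returns a violation message, or null if the text passes.
interface BrandRule {
  name: string;
  check: (text: string) => string | null;
}

// Sample rules, analogous to a code linter's rule set.
const rules: BrandRule[] = [
  {
    name: "no-consumer",
    check: (text) =>
      /\bconsumer\b/i.test(text)
        ? 'Say "customer," never "consumer."'
        : null,
  },
  {
    name: "capitalize-brand",
    check: (text) =>
      // Flags the brand name written in lowercase.
      /\bbrxnd\b/.test(text) ? "Always capitalize BRXND." : null,
  },
];

// Like a code linter, this runs cheaply over any draft, human-
// or AI-written, and reports every rule violation it finds.
function lintCopy(text: string): string[] {
  return rules
    .map((rule) => rule.check(text))
    .filter((msg): msg is string => msg !== null);
}
```

Running `lintCopy("Our consumer loves brxnd")` would flag both rules; clean copy returns an empty list. The point isn’t the regexes, which are the least interesting part; it’s that brand rules, once described in code, can be run constantly and cheaply against everything an agent produces.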
What Caught My Eye This Week (Luke)
A few weeks ago, I wrote about how the rise of Deep Research could set off a content marketing tsunami as brands and creators vie to make their knowledge more discoverable to AI. At the time, I cited paywalls as the most obvious barrier that needed to come down in order for this to happen, but as Andrej Karpathy recently pointed out on X, the challenge of LLM training is more fundamental than that. Almost all information on the internet is currently trapped in formats designed for human consumption, like web pages, PDFs, images, videos, and audio. That is probably going to start to change soon because, as Karpathy notes, “99.9% of attention is about to be LLM attention, not human attention.”
As AI continues to make massive strides in coding, industry leaders predict 2025 will be a turning point. OpenAI CPO Kevin Weil believes AI will outperform top engineers within the next 12 months; Anthropic CEO Dario Amodei goes further and says AI will be writing pretty much all code by then. You don’t need a crystal ball to see this transformation already happening: About a quarter of current YC startups use AI to write 95% of their code, some reaching $10M in revenue with teams of fewer than 10 people. The traditional tech career ladder is getting completely upended. How many young software engineers who couldn’t find a job in FAANG will instead build their own business with a small team as a result of AI? The gap between “I have an idea” and “I built a business” is narrowing dramatically.
RIP 10 blue links? New research from Adobe confirms the growing importance of AI in channeling traffic to retailers, with AI search referrals surging 1,300% during the 2024 holiday season compared to 2023. Not only that, but users referred by AI search versus Google or Bing tended to visit the site for longer, browse more pages, and bounce less often.
On using document analysis for coding: Mistral’s new OCR API goes beyond just extracting text from PDFs and images. It also converts that text into Markdown, which can be easily turned into HTML or other formats.
It’s been a while since I did a news roundup and I’d be remiss if I didn’t mention that Gemini has been shipping new features like crazy recently. The latest batch includes tools for image generation, robotics, embedding, and more—not to mention Gemma 3, a new suite of ultra-efficient AI models that can deliver top-tier performance while running on a single GPU. Quite the turnaround at Google. (And, as a follow-on to Noah’s writeup about linting, Gemini Code Assist recently began offering users 180k free code completions per month—way more than GitHub Copilot’s free tier.)
To close, I enjoyed this short analogy from Jack Pearkes about AI as the potential “fast foodification” of software. It’s something to chew on.
You can create small pieces of software so rapidly, of mixed quality, that it is almost instantaneously available to a business. I believe this will erode the value of SaaS services that create generic solutions and cause the rapid adoption of simpler, purpose-built software.
Business workflow consultants using these new tools will blossom. I am short small mom-and-pop SaaS services and long Ronald McDonald software consultants. Hawking their wares on every street corner – high calorie, minimal nutritional value, instantly satisfying – possibly creating significant maintenance headaches if you overindulge.
If you have any questions, please be in touch. If you are interested in sponsoring BRXND NYC, reach out and I’ll send you the details.
Thanks for reading,
Noah & Luke