AI & The Enterprise with James Cham // BrXnd Dispatch vol. 052
James Cham of Bloomberg Beta on AI, cycles, and how opinionated enterprise software should be.
Hi everyone, I have another fun interview edition of the newsletter for you. This one is a conversation with one of my favorite AI sparring partners, James Cham. James is an investor with Bloomberg Beta and a deep thinker about the space. He and I have had a recurring call over the last year—every two months or so—where we share our latest thinking. After our last conversation, I asked if I could record one and write it up. Like all my interviews, this is lightly edited for readability, but it’s basically the conversation in full. I hope you enjoy it.
Noah Brier: Give me your latest on what's interesting to you as of today, July 29th.
James Cham: Okay, so it's July 29th. Where are we in the cycle? We kind of all know that there's going to be a collective freakout whenever OpenAI releases GPT-5, right? I feel like I go through this cycle in my life of freaking out about what OpenAI just did and having existential dread and thinking there's no purpose in life, and then waiting a few weeks or months before figuring out the outlines of what actually happened, and then thinking, “Okay, here's what we can do now, and here's what the valuable thing is.” And so I feel like that's kind of been my life for the last three years.
We can all be a little more level-headed and clear-thinking for a little while before the freakout. And I think the thing that is relatively clear is that the architecture of what these things are good at is increasingly in focus. And then, when that's true, the real question is, what are the boring applications, user interfaces, and business models that will actually end up being useful here?
And then there's this other sociotechnical dynamic, that everything is a diffusion-of-knowledge problem, right? And so there's this question of how long it takes for knowledge to get through the system. And we live in a weird world right now where I kind of thought—everyone kind of thought—“Oh, now that everyone sees these foundation models are valuable, no one will talk to each other.” That was the thing that everyone told me. But what's actually happened is everyone talks. Everyone talks on Discord, everyone talks on Twitter—things get released, and then everyone freaks out.
And so the dynamic is a little bit different than I expected. But the thing that people don't seem to talk about is that inside companies right now, there aren't really incentives for people to describe what actually works. My impression is that right now, a CEO somewhere got a pitch from Satya Nadella and said, “Yes, I will spend $50 million to work with Microsoft so I can be a forward-thinking AI-powered company.” And then, at the same time, the actual person doing the work is being told by the head of HR that they aren't allowed to use any AI tools. And then, at the same time, some other person is secretly using some AI tool or made a little process, and they've got no incentives to share that. They actually have anti-incentives because their lawyers will get mad at them, or their boss’s boss’s boss’s lawyers will get mad at them.
So we have this really weird ecosystem where lots of information is being passed really quickly on things that I thought would stay secret. At the same time, fairly straightforward things that are practical and productive for individuals have kind of remained hidden.
And then the other strand is this open question: what's the durable product that people will build? I hate using the word “moats,” but there is a real question about what's going to be durably valuable for people to build. And I've got a couple of points of view about that.
I have a friend of mine at Microsoft who says, “Pixels are free.” Not literally right now, but very soon, generating pixels, generating user interfaces, and all that stuff will be basically close to free. And then the question is, what's valuable? And I think what's valuable is perspective. Perspective, as encapsulated either in opinions about business processes, encapsulated by database schemas, or by domain-specific languages, actually represents a perspective on how business should be done. And then that's, I think, where the value is going to be.
NB: Why don't we start with that? That idea is pretty counter to what we've seen in enterprise software over the last 15 years, which is a move towards unopinionated software. The big winners in the world of software-as-a-service (SaaS) have primarily been the platform companies that offer what is effectively a data modeling service. Salesforce is basically a giant database.
JC: That's more like late-stage decay, right? Early on, Salesforce had a relatively distinct point of view. I think, actually, the early winners all had pretty strong points of view.
NB: That's sort of interesting. So, do you think it's like an enterprise version of Zawinski's Law? Zawinski's Law holds that every sufficiently complicated program expands until it can read email. I feel like you're offering a corollary to that in enterprise software, which is that any sufficiently successful enterprise software company will eventually just become a database.
JC: That's right. I buy that.
NB: That would suggest there’s some kind of cycle to software, right? You start more opinionated, and eventually, you become less opinionated. But what does it mean to be opinionated in AI? Is it a good thing or a bad thing?
JC: Well, I think we're early in the cycle, and there's still a lot of work to be done finding the existing patterns. For instance, enough people have to build their own email marketing tools before anyone realizes what the outline of the problem is. So part of it is that we're all trying to speedrun that, or skip past it. But sometimes it just takes a little while to see enough to say, “Oh, this is a pattern.”
So think about the consulting work you're doing now. You work closely with some great company, and you're like, “They finally figured out this thing.” And then maybe that thing is specific to their company because they have some weird corporate culture or some weird processes. But if you see it three times, you're like, “Oh, maybe this is going to be a thing that's actually a new category.”
NB: AI is also so much lower in the stack than SaaS, right? The models themselves, I mean. I've had a bunch of brands and marketers ask me how I think about the value of AI, and I sort of don't know what to say. I honestly can't compute the question. It has so quickly become a piece of technology that I can't really imagine life without anymore. The value is just kind of universal.
JC: But that's because you're an extreme early adopter, right? But also, critically, you're something else: an extreme early implementer, which is different from being an early adopter. You've done a good job not just saying, “I will use the tools,” but also building tools, and you did it earlier than you should have. When you were working on this stuff two years ago, the models were not really ready. And so I feel like that's part of the reason why you're different from most people.
NB: Maybe, but I would argue that even a year and a half ago, when I was playing with this stuff, the models were ready, and continue to be ready, at the one thing they're very best at: transforming data from one format to another. And that is a very universal problem inside the enterprise. So when a CIO asked me about the enterprise value of AI, I just didn't even know how to answer. Because, in a way, everything you do with software in the enterprise is about taking data in one format and putting it into another that you can report on and work with. Salesforce exists to take the unstructured data from the conversations salespeople have with prospects and clients and turn it into a structured format that companies can report on, right?
In some ways, if we were to describe what's going on with AI and programming, I think a decent explanation is that it has enabled heuristic-based programming. And that's just so different. Because you can start to do these qualitative-quantitative exercises, and you can start to merge these worlds, and that's a lot of the work I've been doing with brands lately.
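To make that concrete, here's a minimal sketch of what “heuristic-based programming” over unstructured data can look like: freeform sales notes go in, structured records come out. The `extract()` function here is a hypothetical, hand-written stub standing in for a model call; in practice you would prompt an LLM to return JSON against a schema.

```python
# Sketch: turning unstructured sales notes into structured records.
# extract() is a hand-written stub standing in for an LLM call.

import json

def extract(note: str) -> dict:
    """Hypothetical stand-in for a model that returns structured fields.
    A real version would send the note plus a target schema to an LLM."""
    text = note.lower()
    return {
        "sentiment": "positive" if "excited" in text else "neutral",
        "next_step": "demo" if "demo" in text else "follow-up",
    }

notes = [
    "Spoke with Dana at Acme. She's excited and wants a demo next week.",
    "Left a voicemail for the CFO; no response yet.",
]

# The qualitative notes become rows you can filter, count, and report on.
records = [{"note": n, **extract(n)} for n in notes]
print(json.dumps(records, indent=2))
```

The interesting part isn't the stub's logic; it's the shape of the program: a fuzzy, qualitative judgment sitting where a deterministic function used to be.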
JC: Right. Doing the qualitative at the speed and scale of quantitative.
Do you know Daniel Rock? Daniel is a professor at Wharton and a friend of mine, and he's on the cutting edge of the economic opportunities for using LLMs. And so, for example, he can do things like take all the jobs and then classify the jobs in different ways using LLMs, and then that just gives them radically different precision. And suddenly, there are all these things that were never really computable by economists that now they can do because of these models. You’re describing another version of what he's talking about.
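The job-classification idea sketches out the same way: label a corpus with a qualitative judgment, then aggregate the labels like any other column. The `classify()` function below is an illustrative stub standing in for an LLM call, and the categories are made up for the example, not Daniel Rock's actual taxonomy.

```python
# Sketch: qualitative classification at quantitative scale.
# classify() is a stub standing in for a model-based classifier.

from collections import Counter

def classify(job_title: str) -> str:
    """Hypothetical stand-in for an LLM that labels a job title."""
    title = job_title.lower()
    if any(w in title for w in ("engineer", "developer", "analyst")):
        return "technical"
    if any(w in title for w in ("sales", "marketing", "account")):
        return "commercial"
    return "other"

jobs = [
    "Software Engineer", "Account Executive", "Data Analyst",
    "Marketing Manager", "Chef",
]

# Once every row has a label, it's ordinary quantitative work.
counts = Counter(classify(j) for j in jobs)
print(counts)  # Counter({'technical': 2, 'commercial': 2, 'other': 1})
```

Swap the stub for a real model and the same two lines of aggregation suddenly cover questions economists couldn't previously compute.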
JC: Okay, I need to go catch a plane now.
NB: Okay, go catch your plane. Thank you.
I think that’s it for this week. If you like this, let me know, and I’ll keep doing it. As always, thanks for reading, subscribing, and supporting. If you have questions or want to chat, or if you’re at a brand and want to go deeper with AI, please be in touch.
Thanks,
Noah