Chain of Thought // BrXnd Dispatch vol. 33
The last dispatch of the year ... plus an announcement for 2024.
You’re getting this email as a subscriber to the BrXnd Dispatch, a (roughly) weekly email at the intersection of brands and AI. I am planning some corporate AI events as well as my own conference in NYC this spring. For the NYC conference, I am starting to look for speakers and sponsors, so if you think you fit into one of those buckets, please be in touch.
I wasn’t really planning on sending out another email this year, but I got inspired enough to write a WITI (my other newsletter) about AI and hospitality, and I thought it was worth sending out here as well. I also have a bit of an announcement …
That’s right, the BrXnd Marketing X AI conference is coming back to NYC this Spring. I hope to have a date in the next few weeks. I haven’t quite figured out how many people I’ll be able to have this time, but I’ll be releasing lots more details as I have them. For now, since you’re on this email list, you’ll be getting all the updates. And, of course, if you want to sponsor or speak, please let me know.
Ok, onto the actual content. I’ve been spending a lot of time reading prompting papers lately, and an article I read made me think it was worth spending a few minutes writing down some thoughts. Here’s a repost of the full Why is this interesting? email with a few small additions.
Noah here. A few years ago, Jann Schwarz wrote a WITI about omotenashi, a Japanese approach to anticipating someone’s needs and providing a level of service that goes beyond expectations—for example, the Park Hyatt Tokyo remembering how he’d left his hotel room at the end of his first stay, and quietly recreating it when he checked back in for his second. “Like many things Japanese,” he wrote, “it is not easily defined or translated but evolves around bringing your full, authentic self to serve a guest and doing so in a humble, non-ostentatious way. It is about a lack of pretense and showing no expectation of reciprocity.”
Since then, I’ve run into the term all over the place, most recently in a Nikkei article titled “Japan's 'omotenashi' culture can offer an edge in the AI age.” The article’s argument is that while other companies, particularly in the hospitality space, are focused on finding transactional moments to integrate AI, omotenashi is a uniquely human approach that AI won’t be able to replicate quite so easily.
Why is this interesting?
If you’ve spent ten minutes on LinkedIn or Twitter in the last twelve months, you’ve surely been inundated with AI talk, maybe even a never-ending stream of advice about prompting, complete with PDFs and cheat sheets of the Ten Prompts You Should Never Leave Home Without™.
The truth is actually a lot simpler, though. When it comes to AI prompting, for instance, understanding the general approaches (zero-shot, few-shot, chain-of-thought, and so on) is a lot more powerful than any specifically worded prompt. To quickly explain: zero-shot is when you just ask a question; few-shot is when you include a handful of solved examples in your prompt for the large language model (LLM) to mimic; and chain-of-thought is when your prompt also walks through the problem-solving process itself (your “chain of thought”), step by step, rather than just the answers.
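To make the distinction concrete, here’s a minimal sketch of what the three styles might look like for the same toy question. Everything here is invented for illustration (the café question, the worked examples); feed any of these strings to whatever model you happen to use.

```python
# Illustrative only: the toy question and the worked examples are made up
# for this sketch; swap in your own domain and your own model call.

QUESTION = (
    "A cafe sells 5 pastries in the morning and twice as many in the "
    "afternoon. How many pastries does it sell in total?"
)

# Zero-shot: just ask the question, nothing else.
zero_shot = f"Q: {QUESTION}\nA:"

# Few-shot: show a couple of solved examples for the model to mimic.
few_shot = (
    "Q: A shop sells 3 coffees and 4 teas. How many drinks is that?\n"
    "A: 7\n\n"
    "Q: A stand sells 6 lemonades on Saturday and 9 on Sunday. How many over the weekend?\n"
    "A: 15\n\n"
    f"Q: {QUESTION}\nA:"
)

# Chain-of-thought: the examples also spell out the intermediate reasoning,
# so the model imitates the step-by-step process, not just the final answer.
chain_of_thought = (
    "Q: A shop sells 3 coffees and 4 teas. How many drinks is that?\n"
    "A: It sells 3 coffees and 4 teas. 3 + 4 = 7. The answer is 7.\n\n"
    "Q: A stand sells 6 lemonades on Saturday and 9 on Sunday. How many over the weekend?\n"
    "A: It sells 6 on Saturday and 9 on Sunday. 6 + 9 = 15. The answer is 15.\n\n"
    f"Q: {QUESTION}\nA:"
)

if __name__ == "__main__":
    for name, prompt in [
        ("zero-shot", zero_shot),
        ("few-shot", few_shot),
        ("chain-of-thought", chain_of_thought),
    ]:
        print(f"--- {name} ---\n{prompt}\n")
```

Same question in all three cases; the only thing that changes is how much of the process you show the model before asking.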
That last technique came out of Google Research in 2022 and was found to produce much better results on reasoning-heavy tasks than standard prompting. Here’s an image from that paper that helps illustrate the technique:
The basic idea is that if you give the LLM a few examples of how to work through the process of answering a question, it’s better able to approximate that process in its completion. It’s not really following that process, per se, but more like imitating it, which leads the model to return better results. Last month, a team at Microsoft built on the success of chain-of-thought, showing how better prompting techniques can actually help general models perform as if they were specially tuned for a specific discipline (in Microsoft Research’s case, medicine). The results were pretty wild and speak to the amazing power of these models. Basically, with nothing more than fancier prompting, GPT-4 was able to outperform a model that had been specifically tuned to answer medical questions.
What does all this have to do with omotenashi? Reading these papers recently, I’ve been noodling on whether a key part of getting great results out of these systems will actually be the self-awareness and process understanding required to write great chain-of-thought-like prompts. By that, I mean that if an organization is able to describe its “chain of thought” in detail, that may lead to better outputs. While there’s lots of talk about sharing prompts inside companies, I wonder if the real thing to share isn’t the “chains of thought” and “few-shot” examples those prompts are built from.
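To make that a bit more concrete, here’s a hypothetical sketch of what a shared “chain of thought” might look like for a hospitality team. The process steps, the guest scenario, and the build_prompt helper are all invented for illustration, not drawn from any real hotel or tool.

```python
# Hypothetical sketch: a team writes its service "chain of thought" down once,
# then reuses it as a shared prompt prefix. The steps, the worked example, and
# build_prompt are invented for illustration.

SERVICE_CHAIN_OF_THOUGHT = """\
You are drafting a reply to a guest request. Work through it the way our team does:
1. Restate what the guest actually needs, including anything implied but unsaid.
2. Recall what we already know about this guest (past stays, preferences, context).
3. Propose one quiet step beyond the literal request that would delight them.
4. Write the reply: warm, specific, and with no expectation of anything in return.

Example:
Guest request: "Can I get a late checkout on Sunday?"
Reasoning: The guest leaves Sunday, probably with an evening flight. On their last
stay they asked about luggage storage. Grant the late checkout, and also offer to
hold their bags and arrange a car to the airport.
Reply: "Of course, we've extended your checkout to 2pm. We're also happy to hold
your bags afterward and arrange a car to the airport whenever you're ready."
"""


def build_prompt(guest_request: str) -> str:
    """Combine the shared reasoning process with a new guest request."""
    return f'{SERVICE_CHAIN_OF_THOUGHT}\nGuest request: "{guest_request}"\nReasoning:'


if __name__ == "__main__":
    print(build_prompt("Is the pool open early tomorrow morning?"))
```

The code is the easy part; the hard part is the numbered list at the top, which is really just the organization knowing and articulating how it actually works.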
In the end, it seems to me that what comes with an omotenashi approach is a deeply considered process for service that might actually lead to better results than the more transactional approaches the technology is being considered for.
Hope you all are having an excellent holiday and a happy new year. As always, feel free to be in touch if there’s anything you want to talk about or I can help with.
— Noah