How to Talk So Language Models Will Listen // BrXnd Dispatch vol. 45
On psychology, language models, and saying "please" and "thank you"
You’re getting this email as a subscriber to the BrXnd Dispatch, a (roughly) weekly email at the intersection of brands and AI. Wednesday, May 8, was the second BrXnd NYC Marketing X AI Conference, and it was amazing. If you missed it, all the talks are up on YouTube, and I’m writing a series of posts summarizing a few of them.
Tim Hwang was back at this year’s conference with more mind-bending provocations. (If you missed last year’s talk, Hallucinations! (for Fun and Profit), it’s worth a watch.) This year, he dug into the strange quirks of how we interact with AI and what they reveal about making these systems work for us.
Tim started his talk by relaying the strange experience of finishing a ride in a Waymo self-driving taxi:
And what's so interesting is actually not this technological wonder that was so striking about the experience. Instead, it was this really funny moment at the very end of the trip where it pulls up to the curb, the door opens up, and it shows you how much you've been charged for your trip. And then as you're getting out, there's like a little voice in your head is like, “You should say thank you on the way out.” And I was like, “Should I do this? This is really dumb.” And then I said, “thank you.” And of course, this is a completely insane thing to do because there's just no one in the car except for you.
This, of course, is a common situation we all run into when interacting with these models. We don’t quite know what they are or how to address them. We know they’re not human, but we don’t quite know how to deal with something that feels this human. And so we do these weird human things, like saying “thank you” to an empty car or saying “sorry to bother you” to ChatGPT.
What’s going on here? That’s what Tim was exploring in his talk. Is it just that humans are stupid? Are we like the monkeys in Harry Harlow’s ethically awful experiments, happy to bond with anything mother-like as long as it was soft? Or is it the more sinister explanation, illustrated by the meme below, where the machines are manipulating us? Are they, as Tim put it, “these Lovecraftian beasts of linear algebra?”
The rest of Tim’s talk offers a third option: maybe it’s not that humans are dumb or these things are supernaturally gifted at manipulating us, but rather that in the process of training them, we have accidentally also imbued them with our own quirks and irrationalities. Here’s Tim again:
And this is actually leading to a very funny outcome if you believe these systems are going to become a more and more important part of our technological ecosystem going forward. Because suddenly you have a computer where some of the programming primitives, the things you can actually call in that computer are our emotions, right? You can change the system's behavior based on your understanding of these deep psychological principles that we're familiar with on a daily basis.
Maybe, as Tim puts it, “the hottest new programming language is psychology.”
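To make the idea concrete: if emotions really are programming primitives, you can imagine treating psychological framing as a parameter you pass alongside a task. Here's a minimal, hypothetical sketch of what that might look like (the framings and the `build_prompt` helper are my own illustration, not from Tim's talk):

```python
# Hypothetical sketch of "psychology as a programming language":
# the same task, wrapped in different psychological framings, as if
# the framing were a callable primitive of the system. The framings
# below are illustrative assumptions, not examples from the talk.

FRAMINGS = {
    "neutral": "{task}",
    "polite": "Please help me with the following. Thank you! {task}",
    "stakes": "This is very important to my career. {task}",
    "encouraging": "Take a deep breath and work through this step by step. {task}",
}

def build_prompt(task: str, framing: str = "neutral") -> str:
    """Wrap a task in a psychological framing before sending it
    to a chat model. The claim is that the framing alone can
    change the model's behavior."""
    return FRAMINGS[framing].format(task=task)

print(build_prompt("Summarize this contract in plain English.", "polite"))
```

The framed prompt would then go to whatever chat-model API you use; the interesting (and strange) part is that the "emotional" wrapper, not the task, is the variable being programmed.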
Thanks for reading, subscribing, and supporting. As always, if you have questions or want to chat, please be in touch.
Thanks,
Noah