The First Ad Turing Test // BrXnd Dispatch vol. 021
Test your skills at distinguishing between human and AI-generated ads.
You’re getting this email as a subscriber to the BrXnd Dispatch, a (roughly) bi-weekly email at the intersection of brands and AI. We are officially coming to San Francisco this Fall for the BrXnd Marketing X AI Conference. Right now, we’re giving all sponsors who sign on before we lock down date+venue 20% off. If you’re interested, please be in touch.
As part of the conference, we put on what I believe is the world’s first Ad Turing Test. For those unfamiliar, the Turing test is based on Alan Turing’s idea that one way to gauge a machine’s intelligence is to have a human converse via text with either another person or a computer and then guess which one they’ve been communicating with. We extended this idea to ads by recruiting teams of marketing students to make an ad the human way and AI teams to make one entirely with machines. We then presented the results to an esteemed panel of industry experts, with some 300 years of experience between them, to see whether they could guess which ads were human-made and which were AI-generated.
The rules for the humans were pretty straightforward: follow the brief and make an ad however you make ads. The AI teams, however, had stricter controls:
To ensure we are getting an accurate view of the technology, there are additional rules we are expecting our AI participants to follow:
Your system needs to use AI to both generate all the components of the ad and assemble the layout. You are not allowed to generate the pieces independently and plug them together using other technology or to handcraft the final layout.
Generation needs to be an end-to-end single process that only uses the assets you are given by us as inputs. You should be trying to get as close to “push a button, and an ad comes out” as possible. If you are using systems in a similar fashion to hand-crafting an ad, you are out of bounds.
Any other assets (images, copy, or otherwise) used in the ad need to be synthetically generated using an LLM, Stable Diffusion, etc. Systems that curate or copy media from elsewhere online for creating the ad are not permitted.
No post-processing of the ads may be done. This includes but is not limited to adding text, placing a logo, rearranging the logo, choosing a font, any kind of cropping, color grading, filtering, normalizing, or otherwise once your system generates the ad. Basically, you can’t just generate stuff and then stitch it together using ImageMagick.
As part of the submission, you will be asked to record a short video showing how the ad is generated.
The teams got a brief and a set of assets and were on their way. They had about five weeks to produce their ads. In the end, ten teams returned ads.
Rather than show you the ads or the results, I will just let you take the test.
I’ll be back next week with the results, including creative testing and lots more info on the test and how the AI teams approached the problem.
Thanks for reading. If you want to continue the conversation, feel free to reply, comment, or join us on Discord. Also, please share this email with others you think would find it interesting.