Imagine you’re talking to someone and they suddenly start to add advertising to the exchange. What might that look like?
In a 1965 episode of the classic sitcom I Dream of Jeannie, the protagonist uses her magical powers to conjure fake parents for herself in order to impress a date. She crafts them to be “just like the people on television commercials.” Her synthetic “parents” appear friendly and normal until they start talking, reciting ads verbatim for products like “streak-away” for gray hair, dish soap, “Grippo” denture adhesive, and deodorant. They have so much to say, yet communicate nothing at all.
Something similar might happen if OpenAI goes forward with its rumored plans to add advertising to ChatGPT. Last December, an article in Futurism, citing internal sources at OpenAI, suggested ad adoption could be near. More recently, The Information reported that the company is hiring “digital advertising veterans” and plans to deploy a secondary model that evaluates whether a conversation “has commercial intent” before relevant ads are offered up in the chat responses.
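If the reporting is accurate, the mechanics would amount to a classifier gating an ad-matching step. Here is a minimal sketch of what such a two-stage design could look like; every name, keyword list, and threshold below is an invented assumption for illustration, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of the reported two-stage pipeline: a secondary model
# scores a conversation for "commercial intent," and an ad is attached to the
# reply only when the score clears a threshold. All names, keywords, and
# numbers here are invented assumptions, not OpenAI's design.
from dataclasses import dataclass

@dataclass
class Ad:
    sponsor: str
    copy: str

AD_INVENTORY = [
    Ad("Local Hardware Co.", "20% off all rope and cordage this week."),
    Ad("Grippo", "Denture adhesive that holds all day."),
]

# Stand-in for the rumored secondary model: a crude keyword-density score.
COMMERCIAL_KEYWORDS = {"buy", "price", "deal", "store", "shop", "discount"}

def commercial_intent_score(conversation: str) -> float:
    """Fraction of words in the conversation that look commercial."""
    words = conversation.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in COMMERCIAL_KEYWORDS)
    return hits / len(words)

def maybe_attach_ad(conversation: str, reply: str, threshold: float = 0.05) -> str:
    """Append a sponsored line to the reply only if intent clears the threshold."""
    if commercial_intent_score(conversation) < threshold:
        return reply
    ad = AD_INVENTORY[0]  # a real system would rank inventory; this just picks one
    return f"{reply}\n\n[Sponsored: {ad.sponsor}] {ad.copy}"

print(maybe_attach_ad("where can I buy rope at a good price", "Here are some options..."))
```

Note that everything consequential, such as whether a mention of rope signals a shopping errand or a crisis, is buried inside that scoring step, which is exactly where word-level statistics fall short.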
Annoying ads within ChatGPT could be for things as banal as a grocery product, a local destination to visit, or a handyman service. But they could also be something far more dangerous. Given ChatGPT’s track record, some poor soul might pour their heart out to the chatbot, only to be advised of a special on rope at their local hardware store. I’m not making light of that scenario; it could happen. There is no way to exercise true oversight over LLM output. And that’s only one of their problems.
Context is the Holy Grail
OpenAI’s advertising move is a bold, even brilliant, yet crude and potentially terrible attempt to automate contextual “understanding”: the missing link in the push to combine big data with surveillance.
For a long time, newspapers and radio stations were local and distributed. As transportation connected us and technology improved, it became possible to distribute more centralized news from single, larger sources. Television began with a few channels and concentrated programming that was the same across broad regions. This ushered in a heyday for advertisers, who sponsored TV content and could show a single ad to millions of viewers.
As a distributed technology, the internet disrupted many forms of traditional media, and advertisers have been scrambling to reach us in new ways. While technology has enabled advertisers to use our location in an attempt to home in on what might appeal to us, internet ads are often not contextually relevant to what we want or need.
What OpenAI intends to do with advertising, via ChatGPT’s self-reported 900 million weekly users, is to synthesize the local, distributed model and enable the platform to reach into our homes the way mass television once did. It’s an attempt to unify, and bypass, the interfaces of the phones and computers we currently use. In the process, OpenAI will be creating a “super platform” for informational use and processing.
The algorithms don’t know us
Within its current platform, ChatGPT offers a conversational medium of interaction and query; each chat captures how we use language, detailed descriptions of the problems we seek to solve, and many of our needs. For OpenAI, having platform control, access to our inner thoughts, and the surveillance capability to compile these into individually targeted ads amounts to the ultimate goal of advertising: to really reach us, deep inside our own heads.
However, this outcome is unlikely. The problem with this model is that it still relies on computational compiling and sorting. The algorithms won’t know us or form relationships with us. Because of that, they can’t actually recommend true advertising solutions to our problems, just as they can’t solve our problems now. But their results can mimic helpfulness, just like Jeannie’s synthetic parents.
While collecting and compiling our online data has brought advertisers closer to what they think we need, what has been missing is an understanding of what those actions mean to us in context. Qualitative research, which uncovers the how, why, and what of interaction, has been pushed aside in the rush to embrace big data.
The LLMs that feed chatbots are not magical: they are statistical models that predict and rank likely sequences of words based on the sources they were trained on. An LLM “listening to our conversations” will not “understand” context the way human qualitative researchers can. Thus, the ads ChatGPT suggests from our conversations may look like a match, but they’re unlikely to offer anything contextually substantial.
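To make that limitation concrete, here is a toy example (all data and scoring invented for illustration): a bag-of-words match scores two very different messages against the same hardware ad, and both “match,” because tokens carry no sense of the speaker’s situation.

```python
# Toy illustration (invented data): token-overlap matching cannot distinguish
# a shopping errand from a cry for help, because it compares words, not meaning.
def overlap_score(message: str, ad_keywords: set[str]) -> int:
    """Count how many of the ad's keywords appear in the message."""
    return len(set(message.lower().split()) & ad_keywords)

hardware_ad_keywords = {"rope", "ladder", "hardware", "sale"}

practical = "i need rope and a ladder to fix my fence"
distressed = "i feel like i am at the end of my rope"

print(overlap_score(practical, hardware_ad_keywords))   # 2: "rope" and "ladder"
print(overlap_score(distressed, hardware_ad_keywords))  # 1: "rope"
```

Both messages trigger the ad; only a reader who understands context can tell that serving it to the second person would be a catastrophe. That judgment is what qualitative researchers supply and what token statistics do not.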
Another idea OpenAI has floated is that sponsored results could get preferential treatment. Subscribers might get better-matched ads, but, again, because this is all based on word matching, it may not matter much. (It hasn’t been revealed whether there will be an option to avoid such advertising completely.)
The trust is an illusion
An OpenAI spokesperson told The Information: “People have a trusted relationship with ChatGPT, and any approach would be designed to respect that trust.” But there’s a big difference between having a social relationship with someone and having one that is genuinely trusted.
Many of us are trained to fill in social gaps when we interact with others who are trying to communicate with us. In that context, we may project sociability, and thus trust, onto them. By seeming to respond to us with a point of view and a chat style that feels personal, ChatGPT perpetuates that illusion of sociability and trust. By leveraging our innate social behaviors, it also leverages that behavioral goodwill. But that sociability and “trust” exist only in our heads. They aren’t real; on the other end is just an algorithm.
ChatGPT is merely an interface through which OpenAI’s LLM output is presented to us. Is it trustworthy, or social, to siphon people’s knowledge and work to train a model? If OpenAI were a person, we’d say no, pointing out that it’s akin to a sociopath stealing our ideas and work and presenting them to others. But because we “converse” with ChatGPT, we project onto it a trust it cannot earn, because it is not human.
OpenAI adding advertising to ChatGPT seems like an inevitability. If we use this tool, we need to remember that we cannot form bonds with it, that it cannot have a relationship with us, and that all it can do is match words. Any ad it serves us will be based on what we tell it, but it can’t “think” about all we tell it and propose an ad that speaks to us the way a trusted friend who knows us would. That is best kept in mind as these tools evolve to seemingly “understand” us.
OpenAI as a company could try to earn its customers’ trust by discovering what those customers want and need through qualitative research, rather than foisting its advertising decisions upon us. Even so, the idea that this advertising model will scale and deliver contextually relevant advertising to “900 million weekly users” seems unrealistic. Context, especially as delivered by LLMs that already have issues with “slop, hallucinations, and outright lies,” is a hard thing to guarantee for advertisers, who need reliable recommendations to protect the integrity of their brands and reputations.
Without real trust formed between the parties, everyone risks being played: OpenAI, which believes its algorithms will deliver what it promises; the advertisers, who trust that their ads will accurately match users’ context and interests; and the people who use ChatGPT, trusting a service that, in fact, seems intent on using them for revenue instead.
Source: https://www.fastcompany.com/91472395/openai-chatgpt-advertising-television-trust