
Marketers can’t predict what you’ll buy—even if they use A.I.

Popular media has been warning us about the ability of unsavory marketers and other bad actors to predict and even control our choices using the latest in tracking and artificial intelligence technologies.  

In the 2019 Netflix documentary The Great Hack, for instance, the case is made that the data analytics firm Cambridge Analytica scraped social media to gain deep insights into individuals’ psyches. Using these insights, the filmmakers argue, the firm was able to design carefully targeted ads to manipulate the 2016 U.S. presidential election in favor of Donald Trump. In discussing the events depicted in the film, the well-known technology investor Roger McNamee averred that technology companies “have a data voodoo doll, which is a complete digital representation of our lives. With it, they can manipulate our behavior.”

Likewise, the Harvard psychologist Shoshana Zuboff recently warned of digital marketers, “The idea is not only to know our behavior but also to shape it in ways that can turn predictions into guarantees… the goal now is to automate us.”

The idea that bad people do bad things, and are adept at it, clearly resonates, and it is consistent with the public’s inclination toward conspiracy theories. But as Stanford marketing professor Itamar Simonson and I discuss in a recent article in Consumer Psychology Review, a closer examination suggests that these claims are grossly exaggerated.

There is no question that advances that go under the label of A.I. (mostly machine learning methods) are enabling revolutions in many domains, including image recognition and language translation. However, predicting people’s choices (and human behavior generally) is quite unlike the tasks where A.I. shines. Unlike the targets of those tasks, preferences for specific products and attributes do not exist to be predicted but tend to be formed at the time decisions are made.

To elaborate, while people are likely to have general product preferences (for uniqueness, for ease of use, for quality, for a favorite color), they usually do not have precise, well-defined preferences for specific products, or for how they would trade off one product attribute against another.

For example, before buying a toaster, people are unlikely to have a preference for a particular model or configuration. Likewise, they are unlikely to have a clear preference for how much extra they would be willing to pay for a somewhat more attractive toaster until they are in the process of making a purchase decision. That is, such preferences do not exist to be predicted but are “constructed” in the process of making a decision, on the basis of many, largely unpredictable factors.

This is particularly the case in the current consumer information environment, where many of the key determinants of choice (e.g., expert and user reviews, product recommendations, new options) are increasingly encountered by the consumer for the first time at or near the moment a decision is being made, and therefore cannot be anticipated ahead of time. For example, in the process of shopping, a consumer might encounter a product review that highlights the benefits of a seemingly insignificant feature the consumer had not previously considered, and this might substantially affect the consumer’s choice. The influence of such just-in-time information makes our choices increasingly difficult, not easier, to predict.

To be sure, in some cases consumers do have strong, precise, stable preferences for particular products or attributes. For instance, some people prefer to buy a latte every morning. In such cases, making a prediction is relatively easy and requires little sophistication in data or methods.

Likewise, in some cases, certain variables will predict differences in preferences between consumer groups. For instance, consumers who bought an Xbox are likely to be much more receptive to ads for Xbox games than consumers who bought a PlayStation. Insofar as more of what we do (purchases, “likes,” visits, etc.) is tracked today, more such “easy” predictions can be made. 

However, even with extensive consumer data for targeting, the ability to predict who is likely to buy a product in an absolute sense remains low. In a recent Facebook campaign, for instance, where millions of users were shown ads for a beauty product that were targeted to their personalities (based on their history of Facebook likes), on average only about 1.5 in 10,000 people who viewed the ads bought the product.

Granted, this result was about 50% higher than for people who saw the ad but were not targeted based on their personality. In other words, targeting based on personality increased the likelihood that someone who saw an ad would buy the advertised product from about 1 in 10,000 to about 1.5 in 10,000. Such a change in the success rate might be economically meaningful (depending on the cost of the ads and the product’s profit margins), but it is a far cry from having a “data voodoo doll” to manipulate consumers or to “automate” them. 
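
To make the economics concrete, here is a minimal back-of-envelope sketch. The two conversion rates (roughly 1.0 and 1.5 purchases per 10,000 viewers) are the figures reported above; the cost per thousand impressions and the profit per sale are purely hypothetical numbers chosen for illustration, not figures from the actual campaign.

```python
# Back-of-envelope check of whether the reported targeting lift is economically meaningful.
# Conversion rates (1.0 vs. 1.5 per 10,000 viewers) come from the campaign described above;
# the ad cost (CPM) and profit-per-sale figures below are hypothetical.

def campaign_profit(viewers, conversion_rate, cost_per_thousand, profit_per_sale):
    """Net profit of showing an ad to `viewers` people at a given conversion rate."""
    ad_cost = viewers / 1000 * cost_per_thousand
    revenue = viewers * conversion_rate * profit_per_sale
    return revenue - ad_cost

VIEWERS = 1_000_000          # impressions bought (hypothetical)
CPM = 5.00                   # hypothetical cost per 1,000 impressions, in dollars
PROFIT_PER_SALE = 40.00      # hypothetical profit per unit sold, in dollars

untargeted = campaign_profit(VIEWERS, 1.0 / 10_000, CPM, PROFIT_PER_SALE)
targeted = campaign_profit(VIEWERS, 1.5 / 10_000, CPM, PROFIT_PER_SALE)

print(f"Untargeted: {untargeted:,.0f} dollars")  # 1,000,000 * 0.0001  * 40 - 5,000 = -1,000
print(f"Targeted:   {targeted:,.0f} dollars")    # 1,000,000 * 0.00015 * 40 - 5,000 =  1,000
```

Under these assumed figures, the 50% lift is enough to flip the campaign from a small loss to a small profit, which is the sense in which such targeting can be economically meaningful while still influencing only a tiny fraction of the people who see the ads.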

In other contexts, the use of highly sophisticated machine learning (deep learning) methods has shown limited ability to improve predictions of people’s choices over basic statistical methods. For instance, recent research found that more sophisticated models yielded only very slight improvements over a simple model in predicting people’s credit card choices, so slight that, given the cost involved, the extra sophistication was likely a waste of effort.

As another example of the limited ability to predict consumer preferences, consider (the lack of) advances in recommendation engines, like those used by Netflix or Amazon to steer viewers toward new shows or products based on what they’ve already watched or purchased. Two recent reviews referred to most of the claimed gains in predictive accuracy from increasingly sophisticated methods as “phantom progress.” Simple methods, they found, tended to perform as well as more sophisticated ones, with one review concluding that “progress seems to be still limited… despite the increasing computational complexity of the models.”

For consumers and policymakers, marketers’ limited ability to predict, and thereby influence, individual choices should be somewhat comforting. On the other hand, both groups need to be vigilant about the manipulation of the reviews and other information that, in the current information environment, consumers increasingly depend on to construct their preferences and make choices.

In other words, we should be less concerned that marketers will know exactly what we want (or exactly what buttons to push to manipulate us) and more concerned about the integrity of the information we increasingly rely on to make choices. 

David Gal is professor of marketing at the University of Illinois at Chicago. Follow him on Twitter at @realDavidGal. 
