Ethical Issues with Generative AI in Retail Marketing


The ethics of generative AI in retail is almost as polarizing as politics these days. The conversation has shifted: fewer consumers are vocalizing worries about a technological singularity (i.e., Vernor Vinge’s 1993 prediction of a theoretical future event when computer intelligence surpasses that of its human creators), while the real-world implications of AI can already be seen and felt across industries.

Whether AI is used to perpetuate unrealistic (meta-human) beauty/body ideals, create fake influencers with no obligation to the communities they target, or produce product images that look a whole lot different than the item itself, retailers need to tread carefully.

For example, the Netflix reality TV competition The Circle has an AI player this season. The irony is that after the first few episodes, the AI player is the only one nobody suspects of being artificial intelligence. But that’s just the oxymoronic reality of reality TV. The eerie impacts of AI are far more pervasive than a fake social media profile pic with a smiling canine and some on-the-nose one-liners.

Generative AI in Retail

Let’s bring it back to retail. In a 2024 study, most marketers (53 percent) said they believe AI will “significantly enhance” the way shoppers are targeted and served ads. Yet what happens when retail marketers use AI-based imagery to create the ad itself? More than a few ethical complications can arise, along with issues of brand equity.

The Wall Street Journal recently reported that AI facsimiles are now posing as models on online apparel brand shopping sites. And AI twins are now available for focus groups, programmed to think and act like current or potential customers. “AI systems can take in data on a person’s individual characteristics—such as appearance and shopping preferences—then predict how they would look in an item of clothing.”

Think about that for a moment. With that in mind, here are three key AI ethical issues to consider.

  1. Is AI Imagery Destructive for Women and Girls?

AI-enhanced imagery is nothing new, and its use is not limited to the retail industry. But there’s a big difference between a world in which a few elite professional photographers use Photoshop and one in which every smartphone-owning person feeds AI-filtered imagery to the hive mind on social media. We’ve already crossed that threshold and seen its impact on teen girls: A whopping 85 percent of women and girls report being exposed to “harmful beauty content online.”

As a consumer base, Gen Z is known for valuing authenticity above all else. The irony is that they’re coming into buying power in a time when AI-generated imagery is the new go-to for the brands they know and love, from Levi Strauss to Unilever.

Critics of AI in retail advertising say that it could lead to even more unrealistic, unachievable self-image standards for women and girls. And it’s the Unilever-owned beauty and skincare brand, Dove, that’s leading the charge––refusing to use AI-generated or photoshopped images in any of its marketing campaigns. (Does the irony of modern branding ever cease? Methinks not. It’s just another example of a subsidiary taking a radical stand on an issue when the parent company takes the opposing viewpoint.) AI has long been used to create photoshopped images and, more recently, AI-enhanced image filters have graced the visages of celebrities, influencers, stay-at-home moms, and even the Pope. But AI-generated imagery––images created from text prompts on apps like DALL-E 2 with no human baseline––brings unique ethical considerations to the forefront.

  2. An Environment Ripe for Diversity-Washing

We also need to talk about how the rise of digital-only influencers and imagery could actually harm the minority communities they’re meant to serve. Levi Strauss, for instance, partnered with Lalaland.ai to utilize digital-only AI models. We’re seeing “virtual influencers” model for the likes of Prada, Dior, and Calvin Klein; AI influencers are debuting music videos at Lollapalooza, posting unboxing videos to TikTok, and getting their likeness featured on The Cartoon Network. It’s a different world.

But how will the retail companies and brands using AI-generated images reflect the communities that those images were made to represent? Diversity-washing is real. What if a retail company with a primarily white, Judeo-Christian C-suite decides to purchase imagery representing a “non-binary,” “Native American” AI influencer to attract a diverse customer base? And then the company that creates that AI-generated persona is also staffed with primarily white, hetero-normative folks. Now we have two groups representing communities that they are not a part of. They’re targeting and selling to marginalized communities, but it’s a veneer, as they are typically disconnected from understanding who those consumers actually are. See the ethical issue? Not to mention the authenticity problem?

It could misrepresent the brand to prospective consumers, projecting an image of in-group affiliated influencers that they’ve taken no action to earn — diversity-washing in a nutshell. This brings up another ethical flashpoint. If companies are promoting artificially generated images of influencers from a range of marginalized communities, who is sharing in the proceeds? There are no proceeds-sharing policies when AI influencers present as members of those marginalized communities instead of brands hiring human models and influencers from those groups.

  3. Generative AI Hawking Products that Don’t Exist

Have you ever been enticed by a gorgeous ad on social media to order a package that, when it arrives, is but a shabby nod to the product in the original image? Then you probably have firsthand experience with the catfishing liabilities of GenAI.

AI-generated imagery applications like DALL-E 2 use text prompts to create images from scratch. It’s fun to play with, but more importantly, it’s catching on like wildfire in the retail industry. No longer are false product representations from GenAI limited to overseas manufacturers hawking their wares on discount apps. Today, many major online third-party marketplaces are selling to a global consumer base with fabricated imagery.

There’s a prime reason that GenAI imagery is so popular: the bottom line. WPP’s CEO told Reuters that savings from generative AI can be “10 to 20 times” that of genuine product photography––savings that can then be used to better compete in the digital landscape. That said, the widespread use of GenAI brings brand reputation and consumer trust into question.

AI in Retail: Proceed with Honesty

No brand or retailer wants to be seen as dishonest or misleading. As such, when deploying AI in retail, we need to proceed in a way that ensures customers are never blindsided or, even worse, duped. Whether AI is used to perpetuate unrealistic (meta-human) beauty/body ideals, create fake influencers with no obligation to the communities they target, or produce product images that look a whole lot different than the item itself, it’s critical to tread carefully. It may be the Wild West of AI, but plenty of folks died during the Gold Rush… and plenty of brands will, too.

One path forward is to implement “digital watermarking”––adding a label to AI-produced photos so customers don’t confuse them with the real thing. OpenAI recently introduced a tool to detect images created by its own models, so fans of transparency can hope that labeling AI imagery becomes the industry standard. I’m talking about complete transparency about AI-generated imagery, AI-altered ads, and AI influencers, every step of the way. To maintain consumer trust, brands and retailers need to proceed with honesty––so that when the dust settles, their brands still have legs.
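As a rough sketch of what that labeling discipline might look like inside a marketing asset pipeline, here is a minimal Python example. The `AdAsset` record, the label text, and both helper functions are hypothetical illustrations, not any platform’s actual API: the idea is simply that an AI-generated asset cannot pass a publish check until its caption carries a disclosure.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AdAsset:
    """Hypothetical record for one marketing image in a campaign pipeline."""
    filename: str
    caption: str
    ai_generated: bool  # True if the image came from a generative model

# Hypothetical disclosure label; a real pipeline might also embed
# provenance metadata (e.g., C2PA Content Credentials) in the file itself.
LABEL = "[AI-generated image]"

def with_disclosure(asset: AdAsset) -> AdAsset:
    """Append the disclosure label to the caption of AI-generated assets."""
    if asset.ai_generated and LABEL not in asset.caption:
        return replace(asset, caption=f"{asset.caption} {LABEL}")
    return asset

def ready_to_publish(asset: AdAsset) -> bool:
    """An AI-generated asset may only ship if its caption discloses it."""
    return not asset.ai_generated or LABEL in asset.caption

raw = AdAsset("denim_model.png", "Our new spring line", ai_generated=True)
print(ready_to_publish(raw))                   # False: no disclosure yet
print(ready_to_publish(with_disclosure(raw)))  # True
```

The gate is deliberately one-way: nothing strips a label, and an undisclosed AI asset simply fails the publish check rather than shipping silently.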
