
As Gen AI Gains Traction, Potential Risks Become Impossible to Ignore

As the value of gen AI becomes clearer, so do its possible risks, including copyright infringement and model drift.

Businesses across industries have embraced generative AI to power efficiency and performance. In fact, 79% of business leaders said they expect generative AI to transform their organizations within three years, according to a Deloitte survey of more than 2,800 executives whose organizations are piloting or implementing the technology.

In a recent special report, Retail TouchPoints outlined how, within retail especially, generative AI is gaining significant traction, with brands of all sizes and across categories using the technology to scale operations and enable one-to-one personalized experiences. With use cases spanning the entire customer journey, the value of gen AI is becoming clearer, but every new program or initiative also uncovers possible risks and challenges, especially related to content moderation, copyright infringement and drift.

In many scenarios, brands and retailers are using their own proprietary product data and customer insights to create more relevant experiences. For example, Bloomingdale’s was able to enrich product data so it was more detailed and aligned to consumers’ search behaviors and vocabularies, while also automating the product attribution process on the backend. This move ultimately made the Bloomingdale’s assortment more searchable, accelerating and streamlining product discovery.

Gen AI also can help personalize content creation and delivery to support the middle and bottom stages of the funnel. “Personalization is something we have talked about in retail for a long time, but generative AI can help me get messages based on the brands I like, letting me know when something new comes in or goes on sale,” said John Harmon, Managing Director of Technology Research at Coresight Research. “It’s a huge productivity saver. Gen AI acts as a co-pilot or accelerator for people to do that — create a higher quantity of high-quality content.”

Brands like Kroger are eagerly testing these capabilities, implementing a digital shelf optimization solution that combines gen AI and real-time data capture to create comprehensive product listings for shoppers — and provide rich data and insights to guide the company’s content creation. Other brands, such as Adore Me, are even exploring how gen AI can enable product customization and more sustainable manufacturing practices. However, with these more robust use cases come greater risks for which brands need to prepare.

Building a Moderation Engine for AI-Powered Creativity

In May 2024, the lingerie brand launched “AM by You,” a platform that allows consumers to enter detailed prompts, including colors, patterns and imagery ideas, to receive a custom design for a bralette and panty set. In the early testing phases of the platform, Adore Me saw significant engagement, with 70% of users generating more than one prompt and spending approximately four minutes on the platform each session.

“From the beginning, we knew this technology could allow anyone to visually create something beautiful, exciting and personal,” said Ranjan Roy, VP of Strategy for Adore Me, in an interview with Retail TouchPoints.

But the Adore Me team also put a lot of work into developing custom-trained image generation models to ensure the best results, as well as building a proprietary process and model for content moderation. With a history at the Financial Times, Roy knows how challenging content moderation at scale can be. That’s why, in the early stages of the platform, the Adore Me team (including Roy himself) manually reviewed and approved each design. Now, the team pushes every creation through an image API to look for “red flags,” including the use of copyrighted images and characters, profanity, obscenity and more.

“We’ve defined lots of rules ourselves, which has created a lot of internal debate,” Roy admitted. “And from there, there’s a risk that’s associated with every single prompt that’s generated, or every product that has gone through printing. Anything that presents even a low risk, we still have manual human review over.”
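A moderation flow like the one Roy describes, rule checks first and escalation of anything with non-zero risk to a human, can be sketched roughly as follows. The rule lists, risk tiers and function names here are illustrative assumptions, not Adore Me's actual system:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    NONE = 0
    LOW = 1
    HIGH = 2

# Hypothetical rule sets: terms that trigger a flag.
BLOCKED_TERMS = {"mickey", "pikachu"}   # stand-ins for copyrighted characters
PROFANITY = {"damn"}                     # placeholder profanity list

@dataclass
class ModerationResult:
    risk: Risk
    reasons: list
    needs_human_review: bool

def moderate_prompt(prompt: str) -> ModerationResult:
    """Run rule checks over a design prompt, then escalate anything
    with non-zero risk to manual human review."""
    text = prompt.lower()
    reasons = []
    if any(term in text for term in BLOCKED_TERMS):
        reasons.append("possible copyrighted character")
    if any(word in text.split() for word in PROFANITY):
        reasons.append("profanity")
    if not reasons:
        return ModerationResult(Risk.NONE, [], needs_human_review=False)
    risk = Risk.HIGH if "possible copyrighted character" in reasons else Risk.LOW
    # Per Roy: anything that presents even a low risk still gets human review.
    return ModerationResult(risk, reasons, needs_human_review=True)
```

The key design choice mirrors the article: automation only filters and prioritizes; it never gives final approval on flagged content.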

Even though gen AI is marketed as a technology designed to automate manual processes and accelerate creativity, Roy believes that humans should play an integral role, especially if the technology is used to facilitate creativity or content creation. “As a tech-first company, everyone thinks pure automation and scale, but we recognize that this is something that always needs to have some kind of human oversight, and we need to establish rules that we are proud of and develop a system that is as foolproof as possible.”

Adore Me even went through an extensive “red teaming” process, with Roy and other employees essentially trying to test the platform’s limits and create different prompts that could uncover possible risks or moderation gaps.

People and Process: Tackling AI Drift and Other Risks

Recent retail implementations and product launches paint an exhilarating picture of how gen AI can drive industry efficiency and innovation. But Roy and the Adore Me team underscore some of the technology’s inherent risks, the most significant being that AI models are only as accurate, valuable and trustworthy as their inputs. Models must be continually fed fresh, high-quality data for users, both internal and external, to keep getting valuable information.


The more data is fed into a model, the more it evolves and changes over time, which ultimately creates a phenomenon called “drift,” in which LLMs “behave in unexpected or unpredictable ways that stray away from the original patterns,” according to ZDNET. In some cases, drift happens when a company tries to improve parts of an AI model, only to make other parts worse.

“The more you use it, the more it changes, and you can get to a point where the results just aren’t usable anymore,” Harmon explained. “That’s when you have to go back and retrain the model or find another model [to build upon]. We already know of a lot of issues with gen AI models, such as toxicity and hallucination, where the models either give you poor answers or completely wrong answers.”
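One common way teams quantify the kind of drift Harmon describes is to compare the distribution of a model quality metric (for example, output relevance scores) against a historical baseline. The Population Stability Index (PSI) is a widely used score for this; the sketch below is a generic illustration, not tied to any specific retailer's tooling:

```python
import math

def psi(baseline: list, current: list, bins: int = 10) -> float:
    """Population Stability Index between two samples of a model metric.
    By common convention, PSI < 0.1 is treated as stable and PSI > 0.25
    signals drift worth investigating (thresholds are rules of thumb)."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def hist(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = len(values)
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    b, c = hist(baseline), hist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

When the score crosses the team's chosen threshold, that is the point Harmon describes: go back and retrain the model, or find another one to build upon.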

This is where people come in. Organizations need to develop a governance strategy and train internal team members to monitor AI models and make corrections as needed, according to Harmon. This is especially critical because most consumers (67%) have concerns about retailers using gen AI, according to CI&T, and 53% of those respondents are most concerned about retailers having access to their personal information.

While this concern is nothing new, especially with personalization engines becoming more sophisticated, having established teams, systems and processes dedicated to data security will help brands win, and keep, consumer trust. “This isn’t a toaster; you can’t just plug these things in,” Coresight Research’s Harmon said. “You need trained people to monitor them, or you need a platform to do it. You want to strip out [that personal data] before you hand it to the model, and you have to monitor the outputs again for hallucinations, toxicity and drift.”
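The "strip out personal data before you hand it to the model" step Harmon mentions can be as simple as pattern-based redaction, though production systems typically rely on dedicated PII-detection services. A minimal illustrative sketch, in which the patterns and placeholder labels are assumptions:

```python
import re

# Minimal illustrative patterns for emails and US-style phone numbers.
# A real system would cover names, addresses, card numbers and more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def strip_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before
    the text is handed to a gen AI model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting on the way in is only half the loop; as Harmon notes, outputs still have to be monitored separately for hallucinations, toxicity and drift.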
