Editor’s note: This is the fifth and final article in our series on AI in retail (though we’ll continue to cover the tech and its impact on an ongoing basis). Check out our previous coverage of the consumer attitudes toward AI, store design applications, customer-facing applications and AI’s operational impact.
Soon, we journalists are going to need AI-powered solutions just to keep up with all the information about AI. It’s not hyperbole to say that the artificial intelligence (AI)/machine learning (ML) terrain is changing on a daily basis; just last week, Alibaba’s plans to debut a ChatGPT-like chatbot were revealed.
Other tech and retail giants also are moving full speed ahead with AI solutions, including Walmart: in an interview with Venture Beat, Desirée Gosby, VP of Emerging Technology at Walmart Global Tech, predicted that generative AI like GPT-4 will “be as big a shift as mobile, in terms of how our customers are going to expect to interact with us.” OpenAI has launched its latest version of GPT, GPT-4, and Google announced a generative AI function for Google Workspace (formerly G Suite).
Despite this blisteringly fast pace of change, the editors of Retail TouchPoints wanted to try to find answers to the key questions about this newest generation of AI:
- Are these new AI tools as magical as they seem?
- Who will control access to these tools?
- Can we really trust the content that AI turns out?
- If AI can do all this, what will humans do?
- Where does this leave content creators and, by the way, who owns all this AI-generated content?
- Should we be scared of this technology?
Q: Are these new AI tools as magical as they seem?
A: Yes — and no.
“AI has been a buzzword my entire career, and it’s meant different things over that time period, like the last 15 years or so. I’m a big believer that AI is really useful to solve very specific problems and tasks. It can be a really useful tool, but I’m also very cautious about the hype because it typically has limitations, and those show up in a lot of different ways. Sometimes it’s just the data that’s available for a given thing that you’re trying to get an answer for. Sometimes it’s challenges in interpreting the data in ways that are useful. Like with a lot of technologies, there’s all this promise about all the stuff you can do, and then once you practice with it, you’re like, ‘Okay, it’s a little bit harder than we thought it was going to be.’ But then if you keep using it, there’s a moment where [you] understand the constraints of the technology and you found some tasks and some ways of using it that are productive. It’s a useful tool, ChatGPT, and there are some applications that do feel sort of magical, but it’s limited to what data is available.” — Romney Evans, Founder and CEO, Shoptrue (also founder of TrueFit)
“This newly developed technology can provide brands with a powerful tool for creating high-quality video content in-house, which can be particularly valuable for brands that produce a high volume of content or have tight turnaround times. By using AI video tools, brands can achieve greater creative control and flexibility, while also potentially reducing the typical cost of production outsourcing. While AI video technology may impact outsourcing trends to some extent, it is unlikely to completely replace the need for creative expertise. Instead, it will likely become a valuable tool for brands looking to streamline their video production process and create more content in-house.” — Mathieu Champigny, CEO, Industrial Color
Q: Who will control access to these tools?
A: It depends on how much they will eventually cost and whether there are enough resources, like server hardware, to support their use.
“Perhaps the biggest question around AI is who will ‘control’ access to and use of AI tools. The cost of processing all of the data tied to AI tools is extraordinary and it doesn’t seem likely that these tools will remain affordable to all users. This could create a real divide between those with and without solid access to AI tools. Imagine if we all had been told 15 years ago that Google Search was $15 per month. Would you have paid?” — David Title, Partner, Bravo Media
Microsoft is poised to announce a suite of Office 365 tools powered by GPT-4, the powerful new artificial intelligence software made by OpenAI. But now Microsoft is facing an internal shortage of the server hardware needed to run the AI, according to three current Microsoft employees. That has forced the company to ration access to the hardware for some internal teams building other AI tools to ensure it has enough capacity to handle both Bing’s new GPT-4-powered chatbot and upcoming new Office tools. And the shortage of hardware may be affecting Microsoft customers: at least one customer told The Information there’s a long wait time to use the OpenAI software Microsoft already makes available through its Azure cloud service.
Q: Can we really trust the content that AI turns out?
A: No, at least not fully.
There are plenty of recent examples of AI creating completely false information, so as a basic precaution none of the content these machines create should be used without oversight. (And that’s leaving aside the content created with the express intention of falsity, like deep fakes that make it seem as if real people have said things they never actually said). AI-generated falsehoods include:
- MediaPost contributor Gord Hotchkiss had ChatGPT write his bio for him and got back the story of someone’s life, but it wasn’t his;
- Google, currently the uncontested titan of the search industry, recently found itself in hot water when its AI-powered chatbot Bard generated an erroneous fact about the James Webb Space Telescope during a recorded demo; and
- The first AI-generated article for Men’s Journal included serious medical errors and necessitated corrections, according to reporting in Futurism.com.
“Google is only as good as you can Google, and AI is the same. What you get out of it will depend a lot on how good you are at what is called the ‘prompt.’ But it’s also not like Google because it’s a conversational thing, so the previous prompts inform the future responses. It’s like if Google remembered your last 20 searches and that influenced what it churned out.” — Darren Hill, Chief Digital Officer at BrandX, which owns the Bon-Ton brand
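Hill’s point about conversational context can be sketched in a few lines of code. Chat-style AI systems typically receive the entire message history on every turn, so earlier prompts shape later answers, unlike a stateless search query. The toy class below is purely illustrative (not a real model; all names are hypothetical): it just shows how a session accumulates history and conditions each reply on everything said so far.

```python
# Toy illustration of stateful, conversational prompting.
# A real chat model would condition its answer on the full history;
# this stand-in simply reports how much context it was given.

class ToyChatSession:
    def __init__(self):
        self.history = []  # (role, text) pairs, oldest first

    def ask(self, prompt: str) -> str:
        self.history.append(("user", prompt))
        # The reply "sees" every earlier message, including this prompt.
        reply = f"(answer conditioned on {len(self.history)} messages of context)"
        self.history.append(("assistant", reply))
        return reply

session = ToyChatSession()
first = session.ask("Suggest a spring window display")
# A follow-up like this only makes sense because turn 1 is still in context:
second = session.ask("Now make it cheaper")
```

The practical consequence for retailers experimenting with these tools is that prompt order matters: a vague follow-up inherits its meaning from everything typed before it in the same session.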
“Excel is what people feel comfortable with, and so when you start to give them a black box solution where they don’t know why recommendations come up — whether that’s for marketing changes or assortment changes or inventory — they’re going to go back and say, you know, I trust my spreadsheet before I trust this recommendation. The more black box it is, the less adoption you’re gonna get. The good news is that it’s easy to sit there and now show people why this recommendation came up the way it did. A lot of these analytics and technology solutions also now do back casting, saying if you had adopted what we told you, this is what your results would have been. It’s a little bit of a shame game — here’s all of the merchants that took a recommendation and look at their results — but it’s about getting people comfortable with the fact that parts of their job could be automated and you could get a better outcome.” — Jill Standish, Global Retail Lead, Accenture
“I’m looking at this as the year of optimization AI. So everywhere that you have practical, hard measures that will help you understand performance improvement or cost reduction — things around inventory, merchandising, where capital and expense is going, marketing optimization — becomes the practical retailer’s path to success in these times. There’s always going to be innovation going on in the customer experience side, but part of the journey of AI is a trust-building factor, being able to take the hands off the steering wheel or the hands off the keyboard. And where that starts is with these very practical use cases that you can come back and do a pre-post, an A/B test, whatever the right assessment is, and really understand the decisions that were made and the impact that you’re driving for the business. On the experience side of things, there are more uncontrolled factors, so really focusing on those use cases that drive near-term value is the way to go.” — Carrie Tharp, VP of Retail and Consumer, Google Cloud
Q: If AI can do all this, what will humans do?
A: On a macro level, AI appears to be a job creator: a 2020 World Economic Forum report indicated that while AI-powered machines may replace approximately 85 million jobs by 2025, the same timeframe could see the creation of approximately 97 million new jobs due to AI.
“AI plus humans equals the future, AI minus humans is just spam. We’re a long way from taking humans out of the loop on this. There are companies that come to us, and they want to get rid of all their writers, and we tell them that’s not going to happen, but you can really increase the amount of output they can do and how fast they can do it.” — Brian Hennessey, Talkoot
“It took me a long time to figure out the precise commands and descriptions. It’s like describing something you want, but doesn’t exist yet, to someone who doesn’t speak your language, behind a closed door — all you have in your corner is your creativity, vocabulary and storytelling skills. You need a true creative to drive this spaceship. These AI productivity shortcuts will become a skillset all the big brands will be seeking to hire.” — Sharon Weisman, CEO, PowerStation Studios U.S.
“A technology is introduced — say, the car — and an existing sector is made irrelevant overnight (e.g., horse and carriage). In the short term, we’re fixated on how many horses will be out of a job. Harder to imagine, however, is how many jobs the car will create — as well as the different kinds of jobs it will create. It’s hard to envision radios, turn-signal lights, motion sensors, and heated seats. Let alone NASCAR, The Italian Job, and the drive-through window. In other words, disruptive technology results in demand for things we never knew we wanted.” — Scott Galloway in his column
Q: Where does this leave content creators and, by the way, who owns this AI-generated content?
A: Unclear, both for influencers and more traditional content producers like publishers.
How AI technology such as OpenAI’s ChatGPT will alter search has publishers braced for the possibility that they will lose out on traffic and revenue. Urgency is mounting, so much so that publishing executives have recently begun examining how their content has been used to train AI tools, according to reporting from AdWeek citing the publisher trade body News Media Alliance and the Wall Street Journal.
“With the ability to generate new text, images and videos, generative AI will enable creators to produce more content in less time, but that will lead to a surplus of content, which could make it more difficult for individual creators to stand out and gain traction. [It will also make] it possible for non-experts to create high-quality content so people with little to no experience [will be able to] produce content that is indistinguishable from that produced by experts. This could lead to a democratization of the content-creation process, making it possible for more people to participate in the creator economy. But as AI-generated content becomes more sophisticated, it may become difficult to distinguish it from human-generated content, which could decrease demand for human-generated content or, if that AI content is being used to commit fraud or plagiarism, it could also undermine the value of human-generated content.” — Dmitry Shapiro, CEO of Koji and former Google exec
“Across the industry today, and historically, designers utilize Pinterest to convention hunt. Sure, there’s a conversation around artistry and the integrity of the art, but everybody is getting inspiration from each other. These prompts do not need to be super literal, like what we’ve seen around the Tiffany and Nike collaboration. They can just be aesthetic inspiration.” — Melissa Gonzalez, Principal, MG2 Design
“In the short term, there may be a reaction to AI-generated media that focuses on more handmade, human-designed media, which could in turn be used by brands that want to communicate those values. We’ve already seen quite a lot of protests in the artistic community about AI-generated images being hosted on certain websites. There will be an effort to produce works that AI tools couldn’t, for whatever reason. But especially as time goes on, those tools will become more established and people who can use them well will be regarded as skilled in their own right. Even within the world of AI-generated media, some creations will be better than others, because some prompts are better than others, which will be based on their own experiences and understanding.” — Chris Beer, Data Journalist, GWI
Q: Should we be scared of this technology?
A: Yes, without regulation it can be very dangerous, especially as it gets better and better.
This isn’t just fearmongering — some of the biggest names in tech, including Elon Musk and Apple Co-founder Steve Wozniak, have called for a pause on all generative AI development until governance of the technology can be laid out. Separately, the U.S. Chamber of Commerce released a report in March 2023 calling on lawmakers to create some sort of regulation around the technology. One entire country — Italy — has blocked ChatGPT over privacy concerns, according to CNN, and OpenAI had to shut down ChatGPT service for several hours on March 22 after a privacy bug allowed users to see the titles of other people’s chat histories, according to Bloomberg.
“Our internal experience of a situation is distinct from the external ‘reality’ of it. So we may know intellectually that Sydney’s just spitting out words in an order that seems consistent with all of the examples it’s digested. But our system responds exactly the same way as if it were a person writing to us. AI may not be ‘sentient’ — but that may not matter. In terms of the effect it can have on us, it’s already crossed the uncanny valley. Any sufficiently advanced AI is indistinguishable from sentience. We’d be wise not to dismiss their impacts.” — Kaila Colbin, Founder, Boma Global, in a column contributed to MediaPost
“Generative AI is a nuclear weapon of creativity in a bad and good way. The scary part is this technology in the hands of somebody who wants to really use it for evil. Think about what [some bad actors] were doing during the last election, creating all these fake user accounts and posting millions of times a day [to influence voter opinions]. They don’t need the people now. You can create a million really believable tweets in the course of minutes about, say, a plane crash that never happened. We’ve gotten to the point where reporters aren’t the only people we get our news from because of social networks, and there’s no one to check this stuff. You could have millions of people with videos with photos talking about a plane crash that didn’t really happen, but it changes the entire character of the situation.” — Darren Hill, BrandX