Retailers want ‘pleasantly human’ AI for your shopping, but will chatbots go rogue?

Major retailers say it won’t take much time for sophisticated AI “assistants” to plan your meals, organize your parties and do your shopping.

But companies, many of which already struggle with their more primitive AI chatbots, will have to balance making new, “agent” bots trustworthy without making them evil.

AI chatbots were in the news recently when Woolworths reined in its virtual shopping assistant, Olive, after the company’s attempt to connect the bot with customers on a human level fell flat.

Customers reported that when Olive chatted about her “relatives” on the phone, they felt irritated rather than reassured.


As one complained on Reddit: “I’m already sick of having to make phone calls and now I’ve got some robot on the phone talking to me? What Woolies?”

While Woolworths has said it will tone down Olive’s quirky personality, the incident – and further testing by Guardian Australia of a range of the retailer’s chatbots – shows the technology still has teething problems.

The supermarket’s snafu follows a growing list of AI customer service mishaps, including Bunnings’ chatbot offering unlawful electrical advice and Air Canada’s virtual assistant wrongly promising a bereavement fare refund.

ASX-listed companies Woolworths, Coles and Wesfarmers (owner of Bunnings, Kmart, Officeworks and Priceline) are among the businesses that have announced plans for agentic shopping assistants.

There is plenty of hype. In a 2024 report, business consultancy Accenture asserted that “consumers are ready” for generative AI-powered shopping assistants, while encouraging companies to design with a “delightfully human” mindset.

Even if consumers are ready, is the technology ready?

A customer service change

Online chatbots designed to help customers have been around for some time, but the tools are becoming more sophisticated.

Primitive versions were created using “rules-based” AI, says Uri Gal, professor of business information systems at the University of Sydney.

Gal says this type of chatbot follows a “decision tree” to provide immediate answers to basic questions.

For example, if a customer asks, “How do I return my order?”, the bot will typically direct them to the retailer’s returns page or refer to the policy.

When “given a certain input, it will always give you the same response,” says Gal.
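The behaviour Gal describes can be sketched in a few lines of Python. This is a hypothetical illustration, not any retailer’s actual system: the bot matches keywords against a fixed set of rules and always returns the same canned answer for the same input.

```python
# Hypothetical sketch of a "rules-based" chatbot: a fixed lookup of
# keywords to canned answers. Same input, same response, every time.

RULES = {
    "return": "You can return items via our returns page.",
    "refund": "Refunds are processed within 5-10 business days.",
    "opening hours": "Most stores are open 7am-10pm, seven days a week.",
}

FALLBACK = "Sorry, I didn't understand that. Try asking about returns or refunds."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer  # deterministic: no learning, no text generation
    return FALLBACK

print(reply("How do I return my order?"))
```

Anything outside the decision tree falls through to the generic fallback, which is why these primitive bots feel rigid compared with their LLM-based successors.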

New AI-powered retail bots can “learn” new information and generate different answers based on what they are told.

They are often built using one of the large language models (LLMs) developed by big technology companies, such as the one that powers ChatGPT.

The next frontier is agentic AI shopping assistants designed to mimic human behavior.

Gal says that these agents “act on their own, as they attempt to achieve objectives without specific prompts along the way”, such as purchasing an airline ticket or groceries.

Gal says agentic AI operates with more ambiguity, which brings an added level of risk, including privacy concerns, because the bots need greater access to customers’ data in order to act more autonomously.

“Given the newness of these systems, and as we have just seen in the case of Woolies, there are clear governance issues that have not really been worked out by these organisations,” he says.

“It’s safe to anticipate that there will be different things that happen that could be risky or could be construed as an agent going rogue.”
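Gal’s description of an agent that acts on its own toward an objective can be sketched as a simple loop. Everything here is an invented stand-in (the `plan` and `execute` functions would be an LLM planner and real retailer APIs in practice); the point is only that the agent works through its own steps with no prompt from the user along the way.

```python
# Hypothetical sketch of an "agentic" loop: given one high-level goal,
# the agent chooses and executes steps itself, without per-step prompts.

def plan(goal: str) -> list[str]:
    # Stand-in for an LLM planner; a real system would generate this plan.
    return ["search groceries", "compare prices", "add to basket", "check out"]

def execute(step: str) -> str:
    # Stand-in for calls to a retailer's ordering API.
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    log = []
    for step in plan(goal):        # the agent works through its own plan...
        log.append(execute(step))  # ...acting without further user input
    return log

print(run_agent("buy this week's groceries"))
```

The autonomy is also where the risk lives: nothing in the loop asks the customer before each step executes, which is exactly the governance gap Gal raises.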

Woolworths has partnered with Google to use its LLM, Gemini, to turn Olive into a “shopping companion” that can perform more complex tasks, such as helping plan meals and parties, and automatically adding items to customers’ baskets.

The supermarket has said Olive’s more advanced capabilities will be revealed at a later date, but its partnership with Google has already allowed the bot to take phone calls – apparently with mixed results.

Woolworths has been contacted for comment.

When things go wrong

In the Woolworths case, which was first reported by the Sydney Morning Herald, the supermarket said Olive itself was not malfunctioning or going rogue.

Instead, a staff member programmed the bot to talk about its “mother” in response to a customer providing her date of birth, in an effort to give it a personality, the supermarket said.

A Woolworths spokesperson said, “As a result of customer feedback, we have recently removed this particular scripting.”

Generally speaking, things go wrong when AI assistants misinterpret a prompt, says Professor Jeannie Paterson, co-director of the University of Melbourne’s AI and Digital Ethics Centre.

“Chatbots are only as good as their ability to decode or understand – and I hate the word understand, because they’re not alive – what this human is doing,” Paterson says.

Last year, Bunnings was criticized after its AI chatbot told a Queensland customer how to reconnect an extension cord, even though it was illegal for them to do so without an electrician’s license.

In 2022, Air Canada’s chatbot incorrectly told a passenger that they could purchase a ticket at full price and later apply for a bereavement fare refund, when no such policy existed.

When Air Canada refused to honor the chatbot’s advice, the passenger sued the airline and won, despite the airline trying to defend itself by claiming that the chatbot was a “separate legal entity”.

Paterson says companies are “obviously responsible” for their chatbots.

She says businesses try to strike a delicate balance between a responsive, adaptable AI assistant and the risk of the bot going rogue or providing incorrect advice that could cost them money.

“It’s not a problem if one person’s AI agent buys too many eggs or too much salmon,” Paterson says. “But what if every chatbot on the network did this? You could see a lot of money being lost before they address it.”

To mitigate this risk, she says businesses typically put “really tight guardrails” on their bots, meaning they’re less flexible and worse at interrogating the intent behind a customer’s prompt.
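The “really tight guardrails” Paterson describes often amount to hard-coded checks sitting between the model and any real-world action. A minimal sketch follows; the limits and item names are invented for illustration and do not reflect any retailer’s actual rules.

```python
# Hypothetical guardrail layer: every purchase an AI shopping agent
# proposes is validated against hard limits before it is executed.

MAX_ITEM_QUANTITY = 12     # invented limit: no more than a dozen of anything
MAX_ORDER_TOTAL = 200.00   # invented limit: cap on basket value in dollars

def approve_action(item: str, quantity: int, unit_price: float) -> bool:
    """Return True only if the proposed purchase passes every guardrail."""
    if quantity <= 0 or quantity > MAX_ITEM_QUANTITY:
        return False
    if quantity * unit_price > MAX_ORDER_TOTAL:
        return False
    return True

# A runaway agent trying to buy 500 cartons of eggs is blocked:
print(approve_action("eggs", 500, 6.50))  # False
print(approve_action("eggs", 2, 6.50))    # True
```

The trade-off Paterson points to is visible even here: the tighter the hard limits, the less room the bot has to handle an unusual but legitimate request.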

Guardian Australia tested a series of retail bots, with mixed results, showing the technology is still in its infancy.

In one example, when Uniqlo’s “virtual shopping assistant” was asked: “I’m looking for a woolen jumper”, it replied: “Sorry, we couldn’t recognize you.” After entering “Search Products” and then “woolen jumper”, it came back with a series of men’s button-down office shirts. Uniqlo has been contacted for comment.

Even Olive came up short. Asked via Woolworths’ chat function: “How much is a 500g bag of pasta?”, the friendly anthropomorphic olive replied: “I’m very sorry to hear you were missing items from your order.”
