Sage Copilot grounded briefly to fix AI misbehavior

'Minor issue' with showing accounting customers 'unrelated business information' required repairs


Sage Group plc has confirmed it temporarily suspended Sage Copilot, the AI assistant for the UK-based business software maker's accounting tools, this month after the bot blurted customer information to other users.

A source familiar with the developer told The Register late last week: "A customer found when they asked [Sage Copilot] to show a list of recent invoices, the AI pulled data from other customer accounts including their own."

"It was reported via their support lines, verified, and the decision was made to pull access to the AI," the source continued.

The biz described the blunder as a "minor issue" to The Register, and denied the machine-learning system had leaked GDPR-sensitive data as some had feared.

"After discovering a minor issue involving a small amount of customers with Sage Copilot in Sage Accounting, we briefly paused Sage Copilot," a company spokesperson said. "The issue showed unrelated business information to a very small number of customers. At no point were any invoices exposed. The fix is fully implemented and Sage Copilot is performing as expected."

Sage's spokesperson indicated Sage Copilot was taken offline for a few hours last Monday to investigate the issue and implement a fix.


Unveiled in February 2024, Sage Copilot is described by its creators as "a trusted team member, handling administrative and repetitive tasks in ‘real-time’, while recommending ways for customers to create more time and space to focus on growing and scaling their businesses."

The chatbot is intended to automate workflows, catch errors and generate suggested actions relevant to business accounting.

"Sage Copilot’s accuracy, security and trust have been prioritized every step of the way, combined with expert support, robust encryption, access controls, and compliance with data protection regulations," the biz says on its website.

Sage Copilot is presently available by invitation as an early access product, and is being used by a small number of customers; the company's spokesperson could not provide a specific figure.

AI models make cybersecurity more difficult, according to Microsoft. And they generally come with warnings that their output needs to be verified since they're often wrong. Nonetheless, companies insist on deploying AI services, occasionally to their chagrin.

Apple this week suspended Apple Intelligence's news summarization capability following concerns that the service's AI summaries were inaccurate.

Last year, Air Canada had to pay a traveler hundreds of dollars after its chatbot misinformed the passenger about the airline's bereavement rate discount. And McDonald's ended its Automated Order Taker pilot after customers complained the AI got orders wrong.

In 2023, a GM chatbot used by a Watsonville, California, auto dealership got talked into agreeing to sell a 2024 Chevy Tahoe for $1 through some clever prompt engineering. Two years earlier, Zillow took a $304 million inventory write-down and cut 2,000 jobs after the real estate firm's bet on AI-based property valuations sank its home-buying business.

AI models have invented software package names, cited non-existent court cases, and accused people of crimes they haven't committed.

Meanwhile, several large makers of AI models including Anthropic, Meta, Microsoft, OpenAI, and Nvidia face copyright lawsuits over the data used to make these models. ®
