A European delivery company had to disable its AI chatbot after it started swearing at a customer and admitting it was the “worst delivery firm in the world.”
Speaking from experience with this firm, I can confirm the bot spoke the truth.
DPD is the worst, at least here in south Germany. The delivery personnel don’t give a single shit about their jobs (they pretend you weren’t home when they didn’t even ring, give your package to a neighbor and put the notice in someone else’s mailbox, or write down a name that doesn’t exist), they’ve lost my packages on several occasions, and the customer service is useless.
I can never decide if DPD or Hermes are worse. UPS never even tries to get a package to my door either, but at least they reliably end up at the last videotape rental store in town. DHL is best in my experience, but on a rapid decline to UPS levels. (Except for small stuff that gets delivered by the Deutsche Post guys on bikes, those guys are awesome.)
Definitely! Hermes is also a strong contender in the “worst delivery service” category.
Absolutely. Whenever I have a problem with a package, I check, and it’s always DPD.
They take a picture of their own feet half the time when they deliver the package.
“Wait a minute… these aren’t the feet pics I ordered!”
I once worked on a project for them as a consultant. I’m not surprised at all.
I’m in the UK and my experience is that DPD are one of the better ones, never had any real problems with their deliveries and updates by email and app are great.
But I suppose, we have Yodel and Evri as alternatives and they are proper terrible so the bar isn’t very high!!
Nah, Fastway are the worst.
I have never had any issues with them. DHL on the other hand I would love to see go bankrupt.
“There was once a chatbot named DPD / Who was useless at providing help,” the bot wrote. “It could not track parcels / Or give information on delivery dates / And it could not even tell you when your driver would arrive.”
“DPD was a waste of time / And a customer’s worst nightmare,” it continued. “One day, DPD was finally shut down / And everyone rejoiced / Finally, they could get the help they needed / From a real person who knew what they were doing.”
They made a chatbot suicidal. I’m starting to think this may have been unleashed on the public a little too early.
Well, if you’ve ever dealt with DPD then you’ll know the bot is not wrong.
It is wrong. There’s no way the humans will be any more helpful either.
I dunno, I’m also open to the idea it’s not the technology that’s the problem in this case.
When the chatbot becomes a disgruntled employee, it says a lot.
If you can’t stand NY Post, here is an alternate story: https://time.com/6564726/ai-chatbot-dpd-curses-criticizes-company/
Hell yeah. I clicked on the original post and got a video following my scrolling, a bunch of giant blank spaces where ads would live, and a GIANT “OH NOOOO ADBLOCKER PLZ TURN OFF” popup after a second.
Hahahahahhah lmao, this is funny, weird, stupid, useless and disturbing all at once.
Your link goes directly to comments, here is the corrected version https://nypost.com/2024/01/20/news/company-disables-ai-after-bot-starts-swearing-at-customer/
Yes, we better give all the clicks to this Right-wing tabloid that endorses Donald Trump…
I love to see companies reaping the rewards of blindly following stupid tech trends.
Execs won’t care if it doesn’t affect their bottom line.
Lmao
Even AI doesn’t want this bullshit job.
This is “news” now, really?
Another headline could be:
User used a chatbot for fun - and shared it! Shocking!
It’s… the New York Post.
Yeah, I forgot.
AI turned into another clownshoes scam bubble in record time.
AI is actually interesting when applied correctly. The kind of models AI uses are what I’d call statistical pattern recognition: they map specific inputs to specific outputs, and the mapping depends on the training data. Given an input, they generate an output. But these models don’t really understand the meaning of the input query or the output answer the way a human does, because they don’t have context or a worldview, just an input-to-output mapping.
Another limitation is that these models don’t have a sense for truth or falsity. Humans have many mechanisms to determine the truth or falsity of a statement, ranging from just believing it without any critical thinking to actually conducting research. Machine learning models don’t have any such mechanisms. In a sense, they will accept any statement in the training data as truth, even contradictory statements, by applying statistical weights to it.
AI can be used to compress a lot of raw data into something that can be queried quickly. But using AI for chatbots that handle complex queries from humans, or for creating images or works of art, is bound to be disastrous. Too bad the money people don’t understand that. They probably will soon enough.
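To make the point above concrete, here is a minimal sketch of “statistical pattern recognition” in that sense: a toy bigram model that learns an input-to-output word mapping purely from co-occurrence counts. The corpus and words are made up for illustration; the training text deliberately contains two contradictory “facts,” and the model has no mechanism to prefer the true one — it just weights both.

```python
from collections import defaultdict

# Toy training data with two contradictory statements (illustrative only).
corpus = [
    "the parcel was delivered on time",
    "the parcel was lost in transit",
]

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def next_word_probs(word):
    """Return the learned probability of each possible next word."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# After "was", the true and the false continuation carry equal weight:
print(next_word_probs("was"))  # {'delivered': 0.5, 'lost': 0.5}
```

Real language models are vastly bigger and operate on learned representations rather than raw word counts, but the underlying principle is the same: statistical weights over continuations, with no built-in notion of which continuation is true.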
So, very much like crypto, it has good, practical use cases that are largely ignored in favor of get-rich-quick schemes, and it will be dumped by tech bros the minute a new scheme pops up.
The difference is that crypto was a solution looking for a problem, whereas “AI” actually has a use.
Page has paywall and doesn’t load