I should further add: don’t fucking use it in places where it’s not capable of functioning properly and then try to deflect the blame from yourself onto the AI, like Air Canada did.
When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.
Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.
According to a civil resolution tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline argued the chatbot was a “separate legal entity that is responsible for its own actions”, and that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.
The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.
That is a tiny fraction of a rounding error for a company that size. And it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.
There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.
…what kind of brain damage did the rep have to think that was a viable defense? Surely their human customer service personnel are also responsible for their own actions?
It makes sense for them to try it; it’s just standard evil-company logic.
If they lose, it’s some bad press and people will forget.
If they win, they’ve begun setting precedent to fuck over their customers and earn more money. Even if it only had a 5% chance of success, it was probably worth it.
https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
Sure, but there was no way that defense wouldn’t have been thrown out.