I think this is a great example of the caution needed when using AI chatbots. We are still in the early stages of adoption, and with AI hallucinations being quite common, more "misunderstandings" are sure to occur.
On Fri, 16 Feb 2024 at 13:54, Ron / BCLUG via talk <talk@gtalug.org> wrote:
D. Hugh Redelmeier via talk wrote on 2024-02-16 08:25:
> <https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416>
>
> There's surely more to this story.
Standard operating procedure - "it wasn't us, guv, it was a contractor".
I particularly like this part:
> Air Canada argued Moffatt could have found the correct information
> about bereavement rates on another part of the airline's website.
>
> But as Rivers pointed out, "it does not explain why the webpage titled
> 'Bereavement Travel' was inherently more trustworthy than its
> chatbot."
>
> "There is no reason why Mr. Moffatt should know that one section of
> Air Canada's webpage is accurate, and another is not," Rivers wrote.
Also, CBC looked for similar cases on CanLII:
> A survey of the Canadian Legal Information Institute — which maintains
> a database of Canadian legal decisions — shows a paucity of cases
> featuring bad advice from chatbots; Moffatt's appears to be the first.
We can bet everything we own that it won't be the last...
rb