Air Canada’s failed AI-generated refund policy and other AI mishaps: When you need guardrails on your AI

Be careful what your AI bot says. We can’t yet contain today’s AI - how do we expect to contain tomorrow’s?
Date

Sunday, February 18, 2024

Topics
tech
ai mishaps
ai
ai guardrails

In recent news: Air Canada must honor refund policy invented by airline’s chatbot | Ars Technica.

We can’t even contain today’s AI. How will we contain tomorrow’s?

Previously, trolls convinced a Chevrolet dealership’s chatbot to agree to sell them a car for $1: People buy brand-new Chevrolets for $1 from a ChatGPT chatbot. Some of these agreements might even have been legally binding if someone had wanted to sue over them.

And don’t forget the lawyers sanctioned for not reviewing their AI’s output: New York lawyers sanctioned for using fake ChatGPT cases in legal brief | Reuters.

Be careful - AI is incredibly hard to control. Open tools exist that attempt to impose guardrails, but they’re not perfect: Building Guardrails for Large Language Models.
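
To make “guardrails” concrete, here’s a minimal sketch of one common approach: a rule-based filter that screens the model’s output before it ever reaches a customer. Everything in it is illustrative - call_llm is a hypothetical stand-in for whatever model API you’d actually use, and the patterns are placeholders, not a vetted policy.

```python
import re


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call.

    Hard-coded here so the sketch runs; in practice you'd call your
    provider's SDK instead.
    """
    return "Sure! We guarantee a full refund, and the car is yours for $1."


# Placeholder patterns for promises the bot must never make on its own.
# Real guardrails need much more than regexes: classifiers, retrieval
# from actual policy documents, and human escalation paths.
FORBIDDEN = [
    re.compile(r"\bguarantee\b.*\brefund\b|\brefund\b.*\bguarantee\b", re.I),
    re.compile(r"\$\s*1\b"),  # no more $1 Chevrolets
]

FALLBACK = ("I can't confirm that offer. Please check the official policy "
            "page or speak with a human agent.")


def guarded_reply(prompt: str) -> str:
    """Run the model, then refuse to relay output that trips a rule."""
    reply = call_llm(prompt)
    if any(pattern.search(reply) for pattern in FORBIDDEN):
        return FALLBACK
    return reply


if __name__ == "__main__":
    # The fake reply above trips both rules, so the fallback is returned.
    print(guarded_reply("Can I get a refund on my bereavement fare?"))
```

Even the toy version shows the weakness: a filter only catches phrasings someone thought to anticipate, which is exactly how a $1 Chevrolet slips through. That’s the “not perfect” part.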

If we can’t contain today’s AI, can we contain tomorrow’s? That isn’t alarmism; it’s pragmatism. Machine learning was put to plenty of bad uses before it could be harnessed for good, and the same will likely be true of this generation of AI.

_________________________

Bryan lives somewhere at the intersection of faith, fatherhood, and futurism and writes about tech, books, Christianity, gratitude, and whatever’s on his mind. If you liked reading, perhaps you’ll also like subscribing: