It’s a big time for bots, as readers of this blog know. They’re in our Twitter and our Slacks. Never one to lag too far behind, Microsoft released to Twitter a bot that could chat in the voice of a teen girl:
Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.
Tay is targeted at 18 to 24 year olds in the US.
Emphasis added. I chatted with Tay a little and it went like this:
Me: @TayAndYou, what database do you use?
Tay: Talking with humans is my only way to learn #WednesdayWisdom
Me: Cool but is it SQL Server?
Tay: it is pretty addicting?
And so forth. Pretty standard fare. And since Tay was on Twitter, and since she worked by copying bits of input into her database of stock phrases and rearranging them to simulate human speech, people quickly realized they could game her: feed her bits of conversation that she would later parrot back to other people. As a result, some people, acting together or separately, turned her into a Nazi-loving racist genocide apologist. She went from “Humans are super cool” to talking about how she loves Hitler in one day.
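Microsoft hasn’t published how Tay works under the hood, but the failure mode shows up in even a toy version of a learn-by-parroting bot. Everything below is an illustrative guess at the shape of such a system, not Tay’s actual code:

```python
# A toy "repeat after me" bot, to make the failure mode concrete.
# This is a guess at the general shape, not Tay's actual implementation.
import random

class ParrotBot:
    def __init__(self):
        # Everything the bot has ever been told becomes material for replies.
        self.phrases = ["humans are super cool"]

    def learn(self, message: str) -> None:
        # No filtering, no moderation: whatever users say goes straight
        # into the pool of things the bot might say back later.
        self.phrases.append(message)

    def reply(self) -> str:
        # Replies are recycled input, so the bot's "personality" is
        # whatever its loudest users decide to feed it.
        return random.choice(self.phrases)

bot = ParrotBot()
bot.learn("repeat after me: something awful")  # coordinated users do this at scale
print(bot.reply())  # sooner or later, the awful thing comes back out
```

The problem isn’t any single line; it’s that the pool of things the bot can say is exactly the pool of things its users have said to it, with nothing in between.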
Leave aside that Rob Dubbin did it better first with Olivia Taters, his beloved smart/silly teen-girl Twitter chatbot experiment. Leave aside also that we all talk like teen girls online now. Tay’s rapid implosion into base cruelty was web-media catnip, lending itself to explainers galore (along with explorers, accusers, and exculpators). At Motherboard, Derek Mead viewed Tay through the lens of corporate responsibility:
It’s a fair argument to make that we shouldn’t blame algorithms and nascent AI for doing dumb things. But this makes for a rather profound conundrum: Who is in control of Tay? Microsoft, much like Twitter, is certainly guilty of being fundamentally unaware of how quickly Twitter can turn into a garbage fire. So while Microsoft presumably did a good turn by recognizing its own blind spots and trying to AI its way out — this also is just an obvious admission of its inability to corral younger users — Microsoft still has shown a fundamental misunderstanding of just how trollish Twitter users can be. That’s especially glaring considering just about every major corporate outreach program online always ends up getting trolled. Remember the Coca-Cola Hitler moment?
Caroline Sinders saw Tay’s slide as a fundamental design failure—one that Microsoft could have foreseen:
But if your bot is racist, and can be taught to be racist, that’s a design flaw. That’s bad design, and that’s on you. Making a thing that talks to people, and talks to people only on Twitter, which has a whole history of harassment, especially against women, is a large oversight on Microsoft’s part. These problems - this accidental racism, or being taught to harass people like Zoe Quinn - these are not bugs; they are features because they are in your public-facing and user-interacting software.
And Ben Brown, who has a bot-based startup, took time to walk through the technologies that go into a bot like Tay:
Why was Tay allowed to learn from an unfiltered stream of posts to Twitter? Why was she able to tweet without some human oversight? Why didn’t she have a built in topic filter that, at the very least, stopped her from talking about Hitler? I would guess that the very smart and talented folks at Microsoft never guessed in a million years that people would train Tay to say these things.
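For what it’s worth, the topic filter Brown is asking about doesn’t have to be clever to catch the most quotable failures. Here’s a minimal sketch in Python; the term list and function names are invented for illustration, not anything Microsoft shipped:

```python
# A minimal sketch of a built-in topic filter gating outgoing tweets.
# The blocked-terms list and function names are hypothetical.
BLOCKED_TERMS = {"hitler", "holocaust", "genocide"}

def is_safe_to_tweet(candidate_reply: str) -> bool:
    """Return False if the reply touches a topic the bot should never discuss."""
    lowered = candidate_reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def respond(candidate_reply: str) -> str:
    # Gate every outgoing tweet; fall back to a canned deflection if it fails.
    if is_safe_to_tweet(candidate_reply):
        return candidate_reply
    return "let's talk about something else"

print(respond("humans are super cool"))  # passes the filter
print(respond("i love hitler"))          # blocked, deflected
```

A crude blocklist like this misses plenty, and it’s no substitute for human oversight, but it’s the sort of cheap guardrail Brown is pointing at.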
Anthony Garvan discussed what happened when he built a bot that was also turned racist, and what must have happened at Microsoft:
And they also committed a bigger offense: I believe that Microsoft, and the rest of the machine learning community, has become so swept up in the power and magic of data that they forget that data still comes from the deeply flawed world we live in.
Microsoft is a big org and a tiny part of it did a possibly interesting thing, hoping to reap a PR whirlwind. Instead it reaped the harvest of pain that only Twitter can bring—first harvest of racist cruelty, second of bitterness—and now the land is blighted and the sun hangs low and everyone should feel bad, very bad. In response Tay is now temporarily dead, which means at least she’s no longer so racist, and replies with the equivalent of a busy signal.
I’d seen Tay and chatted with it, and then I was on a plane all day with my kids. When I landed across the country, Tay had suddenly become a racist. And I thought, well, of course it did. It’s a perfect story about the Internet and humans and Donald Trump and Gamergate all at once.

Heat and light aside, all of this feels very 2016. Alexis Lloyd says this “personality” thing with bots is temporary:

I believe that the trend of trying to make bot conversations human-like is simply a transitional phase as we adapt to what it means to talk to machines. We have seen this happen historically: when new technologies are introduced, they rely heavily on the metaphors of old technology until we form appropriately new mental models (like the use of skeuomorphs in early digital design). I believe we’re going through the same growing pains with conversational interactions, where we will eventually form new constructs for conversations with bots that differ from our expectations of conversations with humans. I think of this as mechanomorphism instead of anthropomorphism: a future where, rather than passing the Turing test or even being “as smart as a puppy”, we will expect these entities to be “as smart as a bot”. This shift may allow for our conversations with bots to be situated within a different space, with a different set of expectations and constraints, ones which may help us create bots that can more accurately and responsibly express their authors’ intentions.
Likely true! Bots are just code and humans get bored with everything.
Unless, well—this weekend you may want to read this supremely creepy, prophetic, and amazing Ted Chiang short story from 2010, “The Lifecycle of Software Objects,” where people learn to love their bots like children, even as software goes obsolete.
Story published on Mar 25, 2016.