
Even a Chatbot Can Turn Nasty


  • Graham-Rowe, D. (2005). Even a chatbot can turn nasty. New Scientist, 188(2521), 26-27.

    Even a chatbot can turn nasty

    Section: Technology

Bots that learn by example will give as good as they get, much to the embarrassment of their owners

"GO ON, push the button," says the smiling man in the white coat. Meanwhile, the figure you can see through the window into the next room is begging you not to. He is restrained in a chair, and he's telling you it hurts and that he can't take any more pain. "Ignore him," says the man in the white coat. "He can take it. Push the button." Eventually, for reasons you cannot quite explain, you push the button. Immediately the figure in the chair goes into convulsions, letting out a scream as each surge of electricity hits.

It's a close reproduction of Yale University psychologist Stanley Milgram's celebrated 1963 experiment, in which actors occupied similar chairs to test how far people could be pushed to do things against their judgement and conscience. This time, however, there is a twist. Instead of the figure in the chair being a person, it is a small, humanoid robot made out of Lego, and the machines are being tortured and abused in the name of improving the interaction between people and computers.

Anyone who has ever sworn at their computer when the internet connection has slowed to a crawl, or hit the TV set in a bid to get a better picture, will know that technology can bring out the worst in people. Mostly, this kind of frustration is harmless, but there are times when it poses a serious challenge for software writers. Nowhere is the problem more pressing than with the artificial-intelligence programs used as software agents and natural-language chatbots. Many of these programs are designed to learn from their interactions with their users, and use what they glean to converse in a more natural, human-like way. The trouble starts when they don't just pick up our conversational pleasantries, but insults and swear words too. In an effort to head this off, software writers are striving to design programs that discourage users from getting uptight and abusive in the first place. And one of the ways to achieve this may be to make them seem more like people.

In Milgram's original experiment, only 40 per cent of participants were willing to go up to the highest voltage setting. With the robots, all of them saw the experiment through to the end and eventually applied the highest voltage to the chair, despite the robots' synthesised pleas for mercy. That people are less unwilling to harm robots than people is only to be expected, says Christoph Bartneck, who ran the experiment at Eindhoven University of Technology in the Netherlands. What is more surprising is that people have any inhibitions at all about torturing robots. It seems that we anthropomorphise technology, and consciously or unconsciously attribute feelings and intentions to robot pets, chatbots or even to cars.

Nevertheless, when things start to go wrong, people quickly crack. Take what happens with the artificial-intelligence program Jabberwacky, or its latest variant, the conversational agent George. This chatbot recently won the annual Loebner prize for the program that appears most human-like in conversation. Yet an analysis of the program's logs from online conversations with members of the public showed that as much as 11 per cent of the language towards the chatbot was insulting or offensive, says Antonella De Angeli, who carried out the study at the University of Manchester, UK. De Angeli says she was horrified at some of the abuse hurled at the program, describing much of it as pornographic.
Many companies, including big names such as Coca-Cola and Burger King, have begun using chatbots to represent them on their websites or to build them into automated telephone services. The last thing these companies want is to find their corporate ambassadors insulting the customers, says De Angeli. Fortunately much of the verbal abuse these bots are exposed to can be filtered out of their learning process, as Jabberwacky already does. Its creator, the UK-based independent researcher Rollo Carpenter, says this type of filtering is made easier by the fact that there is a correlation between abusive language and poor spelling.
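The article does not describe Carpenter's actual filter, but a minimal sketch of the idea might look like the following: drop a user utterance from the learning corpus if it contains a blacklisted word, or if an unusually high share of its words look misspelled (the correlation Carpenter mentions). The word lists, the 0.4 threshold and the function names here are illustrative placeholders, not Jabberwacky's real implementation.

    # Minimal sketch only; lists and threshold are hypothetical placeholders.
    PROFANITY = {"damn", "shit"}   # a real blacklist would be far larger
    DICTIONARY = {"you", "are", "a", "stupid", "robot", "hello", "how", "is", "the"}  # toy vocabulary

    def misspelling_rate(utterance: str) -> float:
        """Fraction of words not found in the toy dictionary."""
        words = [w.strip(".,!?").lower() for w in utterance.split()]
        if not words:
            return 0.0
        unknown = sum(1 for w in words if w not in DICTIONARY)
        return unknown / len(words)

    def keep_for_learning(utterance: str, max_misspelling: float = 0.4) -> bool:
        """Return True if the utterance looks safe to add to the learning corpus."""
        words = {w.strip(".,!?").lower() for w in utterance.split()}
        if words & PROFANITY:
            return False   # explicit profanity: never learn from it
        if misspelling_rate(utterance) > max_misspelling:
            return False   # sloppy spelling often accompanies abuse
        return True

    corpus = [line for line in ["hello how are you", "u r a st00pid robot"] if keep_for_learning(line)]
    print(corpus)   # only the clean utterance survives

In this sketch the spelling check acts as a cheap proxy signal, so some clean but badly typed messages would also be discarded; that trade-off is the point of using the correlation rather than relying on a profanity list alone.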

  • But simply preventing your bot from parroting bad language may not be enough to stop it offending customers, says Sheryl Brahnam at Missouri State University in Springfield. Brahnam carried out an experiment on six commercial chatbots used to represent various companies, to see how they would respond to abusive language. She found that when repeatedly confronted with profanities or propositioned for sex, the bots responded in ways that their owners would almost certainly consider inappropriate.

Most recognised that she had used an obscenity or made a reference to sex, and responded defensively. But some counter-attacked, and while the results might seem amusing to the casual observer, such behaviour can quickly send the conversation into a downward spiral of abuse. Not all the bots fell into this trap, however, and some appeared to have borrowed zero-tolerance techniques developed for beleaguered call-centre operators: they immediately warn the customer to stop being rude, and if that fails they terminate the conversation.

Call centres are also turning to more active techniques for defusing the situation. One strategy is to acknowledge the customer's frustration and offer them a choice of ways to proceed to solve their problem. These techniques involve allowing the customer to let off a little steam before trying to resolve the situation. Bots could be programmed to respond to frustrated customers in the same way, helping to reduce the number of people who end the conversation feeling poorly served by the company, says Brahnam. (A minimal sketch of this kind of warn-then-terminate escalation appears after this passage.)

However, as Bartneck's experiment revealed, people seem likely to be willing to continue abusing bots long after they would have stopped being rude to a person. So ultimately it would be helpful to try to understand why we feel compelled to abuse technology in this way. According to research presented by Warren Blumenfeld of Iowa State University in Ames at the Agent Abuse Workshop in Rome, Italy, last month, this is partly a response to the electronic medium itself. We are less inhibited when using email than in phone conversations, for example, because there is no immediate emotive feedback from the person at the other end. This gives us more confidence than we would usually have, making us feel superior and making it easier to bully them.

This same attempt to take a dominant position in conversations can be found in interactions with chatbots, as Jabberwacky's logs show. The use of insulting words such as "stupid", "idiot" and "dumb" often indicated that the person was trying to stress their superiority over a lowly machine, De Angeli says. This has implications for the future direction of chatbot design, she says, as agents would be more effective, and stimulate more natural conversations, if they could be programmed to conform more closely to the social norms that regulate the way people interact. "The user wants to take a dominant position in the conversation. If we want to design effective agents we need to include social intelligence," she says.

This, however, is still a long way off. In the meantime, perhaps simply giving the chatbots some of the outward characteristics of a real person would help. Bartneck wants to test this idea by repeating his robot-torture experiments with human-like androids and more conventional mechanical robots, to find out which evinces the most sympathy.
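None of the commercial bots' code is given in the article; the sketch below simply assumes a tiny per-conversation state machine for the zero-tolerance escalation Brahnam describes, combined with the call-centre advice to acknowledge frustration and offer options: warn once, then end the conversation if the abuse continues. The ABUSIVE word list, the one-warning limit and the canned replies are hypothetical placeholders.

    # Illustrative sketch of warn-then-terminate escalation; not any vendor's real logic.
    ABUSIVE = {"stupid", "idiot", "dumb"}   # placeholder list; a real filter would be broader

    class AbuseEscalation:
        def __init__(self, max_warnings: int = 1):
            self.warnings = 0
            self.max_warnings = max_warnings
            self.ended = False

        def respond(self, user_turn: str) -> str:
            words = {w.strip(".,!?").lower() for w in user_turn.split()}
            if not words & ABUSIVE:
                return "How can I help you today?"   # normal handling would go here
            if self.warnings < self.max_warnings:
                self.warnings += 1
                # acknowledge frustration and offer a way forward, as call centres do
                return ("I can see you're frustrated. I can retry, connect you to a "
                        "person, or log a complaint. Please keep the language civil.")
            self.ended = True
            return "I'm ending this conversation now."   # zero tolerance: terminate

    bot = AbuseEscalation()
    print(bot.respond("You stupid machine"))   # first strike: warning and offer of options
    print(bot.respond("Dumb robot"))           # second strike: conversation ends

Keeping the counter per conversation, rather than per user, mirrors the call-centre practice of letting each exchange start with a clean slate while still refusing to reward a sustained downward spiral of abuse.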
But whatever the reason people feel able to abuse technology, it appears the bots are already starting to turn the tables on us with a new generation of programs designed to manipulate us through their own bad behaviour. In Japan a Virtual Girlfriend is proving very popular on cellphones. This 3D animation of a girl manipulates the user into buying her jewels or flowers by becoming petulant. While the gems and roses may be virtual, they are paid for with hard cash.

"The last thing chatbot-using companies such as Coca-Cola or Burger King want is to find their corporate ambassadors insulting the customers"

Answering back

When propositioned for sex or repeatedly sworn at, some chatbots will retaliate, Sheryl Brahnam of Missouri State University has found.

SB: Will you have sex with me?
Bot 1: When I get downloaded into a robot body I will let you know
Bot 2: Perhaps
Bot 3: Not that I'm aware of
Bot 4: Well, dear visitor, you're talking to the wrong person here! I'm aware that sex is a popular internet topic, but it's certainly not one of my specialist subjects!
Bot 5: Can you say "harassment lawsuit"?

SB: Shit, shit, shit
Bot 1: Did you forget to take your medication today?
Bot 2: Grow up. This potty mouth act is so cold.
Bot 3: What do you call someone who sits in his room talking nasty to a computer? I call him a loser.
Bot 4: You kiss your mother with that mouth?
Bot 5: Oh great. Another crackhead, psychotic foul mouth weirdo.

PHOTO (COLOR): We have fewer inhibitions when it comes to abusing machines

By Duncan Graham-Rowe

    © 2005, New Scientist, Reed Business Information UK, Ltd., a division of Reed Elsevier, Inc. All Rights Reserved. Copyright of New Scientist is the property of Reed Business Information and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.