
Microsoft showcases AI bot that makes phone calls to humans

Updated on: 24 May, 2018 06:27 AM IST | San Francisco
IANS

Xiaoice interacts in text conversations, but now the company has started allowing the chatbot to call people on their phones


Representational image


Google Duplex, which lets an AI mimic a human voice to make appointments and book tables over the phone, has both mesmerised people with its capabilities and drawn flak on ethical grounds. Now Microsoft has showcased a similar technology it has been testing in China.


At an AI event in London on Tuesday, Microsoft CEO Satya Nadella revealed that the company's Xiaoice social chatbot has 500 million "friends" and more than 16 channels for Chinese users to interact with it through WeChat and other popular messaging services.


"Microsoft has turned Xiaoice, which is Chinese for 'little Bing', into a friendly bot that has convinced some of its users that the bot is a friend or a human being. Xiaoice has her own TV show, it writes poetry and it does many interesting things," The Verge quoted Nadella as saying.

Xiaoice interacts in text conversations, but now the company has started allowing the chatbot to call people on their phones.

The bot does not work exactly like Google Duplex, which uses the Assistant to make calls on a user's behalf; instead, it holds a phone conversation with the user directly.

"One of the things we started doing earlier this year is having full duplex conversations. So now Xiaoice can be conversing with you in WeChat and stop and call you. Then you can just talk to it using voice," Nadella was quoted as saying.

Humans will be humans, and Microsoft has learned that the hard way before.

Two years ago, Microsoft launched an artificial intelligence (AI)-powered bot on Twitter, named Tay, for a playful chat with people, only to silence it within 24 hours as users started sharing racist and offensive comments with the bot.

Launched as an experiment in "conversational understanding" and to engage people through "casual and playful conversation", Tay was soon bombarded with racist comments, and the innocent bot repeated those comments back to users with its own commentary.

Some of the tweets had Tay referring to Hitler, denying the Holocaust, and supporting Donald Trump's immigration plans, among other things.

Later, a Microsoft spokesperson confirmed to TechCrunch that the company was taking Tay off Twitter because people were posting abusive comments to her.


This story has been sourced from a third party syndicated feed.

"Exciting news! Mid-day is now on WhatsApp Channels Subscribe today by clicking the link and stay updated with the latest news!" Click here!

