Facebook Messenger adds option for chatbots to avoid chatting
Messenger bots can prevent people from typing in text to reply and make them use non-native menu buttons instead.
Conversations with Facebook Messenger bots soon may start to feel less like texting with a friend and more like tapping through a food-ordering app’s restaurant menu.
On Thursday, Messenger added an option for bot makers to prevent people from sending normal messages to their bots by replacing the in-app keyboard with a persistent menu, making the interaction more akin to a traditional — albeit bare-bones — mobile app or site. The Facebook-owned messaging app added other features, like customizable share messages that deep-link to the bot, a “Share in Messenger” button for in-app web pages and the ability for bots to upload attachments. You can read about those additional announcements here, but let’s talk about the main news.
First, here’s a video that Messenger posted to its blog for developers demonstrating the new keyboard-free menu:
As you can see, the new, optional menu replaces the usual exchange of text messages between bot and user with a series of taps on menu buttons that can eventually open an in-app web page. The example video does show a “Send a message…” text box atop the menu, but bot makers can disable it so that people can interact with the bot only through the menu buttons.
“Developers also have the option to hide the composer and create a simple Messenger experience without conversational capabilities,” Messenger wrote in a blog post announcing the new option.
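To make the mechanics concrete, here is a rough sketch of what setting up such a menu might look like through Messenger's Profile API. The field names follow my reading of Messenger's developer documentation; the menu titles, payloads, URL and access token are placeholders, not anything from Messenger's announcement.

```python
import json

# Hypothetical sketch of a Messenger Profile API payload that replaces the
# composer with a persistent menu. Titles, payloads and the URL are made up.
PROFILE_API = "https://graph.facebook.com/v2.6/me/messenger_profile"

payload = {
    "persistent_menu": [
        {
            "locale": "default",
            # The flag that hides the "Send a message..." box entirely.
            "composer_input_disabled": True,
            "call_to_actions": [
                {"type": "postback", "title": "Check the surf",
                 "payload": "SURF_REPORT"},
                {"type": "web_url", "title": "Open full site",
                 "url": "https://example.com/surf"},
            ],
        }
    ]
}

# A bot would POST this once to configure its profile, e.g.:
# requests.post(PROFILE_API, params={"access_token": TOKEN}, json=payload)
print(json.dumps(payload, indent=2))
```

Note that the menu is configured once per bot, not sent per message: with `composer_input_disabled` set, every conversation with that bot becomes taps on those buttons.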
The new option marks the latest departure from the original promise of chatbots, which was for people to interact with businesses like they do with their friends. I don’t know anyone who texts with their friends through a bunch of multiple-choice buttons. Then again, conversing with a bot isn’t as easy as texting a friend.
While a shift, the move away from text-based conversations is one Messenger has seemingly been making since soon after it opened up to chatbots in April 2016.
In July 2016, Messenger added a way for bots to present people with “quick reply” buttons that they could tap instead of typing a response. “Quick replies offer a more guided experience for people as they interact with your bot, which helps set expectations on what the bot can do,” according to a Messenger blog post announcing quick replies.
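Unlike the persistent menu, quick replies are attached to an individual message. As a rough illustration based on my reading of Messenger's Send API docs, a quick-reply message might be structured like this; the recipient ID, text and button titles are placeholders of my own.

```python
import json

# Hedged sketch of a Send API message carrying quick replies. "<PSID>" stands
# in for the person's page-scoped ID; spot names are invented examples.
message = {
    "recipient": {"id": "<PSID>"},
    "message": {
        "text": "Which spot do you want the surf report for?",
        "quick_replies": [
            {"content_type": "text", "title": "Ocean Beach",
             "payload": "SPOT_OB"},
            {"content_type": "text", "title": "Linda Mar",
             "payload": "SPOT_LM"},
        ],
    },
}

# Tapping a button sends the title back as a message along with the payload,
# so the bot receives a value it already knows how to handle.
print(json.dumps(message, indent=2))
```

The design point is the payload field: because the bot authored the choices, it never has to parse free-form text to understand the answer.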
These buttons were like adding inflatable bumpers to a bowling lane. By making clear what a bot expected a person to say, the quick replies made it less likely that the conversation would go into the gutter because the bot couldn’t understand what the person was saying. At the time, quick replies seemed to serve as training wheels until people and bots got better at understanding one another. Now there may be less of a need for improvement, or even quick replies to facilitate that improvement.
The move away from conversational replies, whether already formatted or freely typed, removes the magic of chatbots by making the mode of interacting alien to the messaging experience. It also proves how difficult it is for computers to converse with people.
At the same time Facebook opened Messenger up to chatbots, the company opened up its Wit.ai platform so that bot makers could use its artificial intelligence technology to help chatbots understand people’s replies and act on them. I’ve used Wit.ai for a personal Messenger bot I built to check the waves at my local surf spots. It works fine, but it takes a lot of work to even work fine. The person making the bot still has to draft scripts that wind through all the paths a conversation might take, identify when the conversation might veer off track and add guardrails to steer it back to the beaten path. And the bot maker has to train Wit.ai to understand the variations in how someone might reply to a message. For example, I had to train Wit.ai to recognize that “how’s the surf?” and “how are the waves?” mean the same thing. That didn’t take too long, but that’s also because my bot was super-basic, and since it was private, I didn’t have to think through all the ways someone else might ask for a surf report.
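That training problem can be sketched in a few lines. The response dicts below only mimic the general shape of what a natural-language service like Wit.ai returns (a ranked list of intents with confidence scores); the intent name "surf_report" and the threshold are my own labels, not Wit.ai's.

```python
# Minimal sketch of intent routing: two phrasings that should resolve to the
# same trained intent. Response shape and names are illustrative assumptions.

def top_intent(nlu_response, threshold=0.7):
    """Return the highest-confidence intent name, or None if the model
    is not confident enough to act on the reply."""
    intents = nlu_response.get("intents", [])
    if intents and intents[0]["confidence"] >= threshold:
        return intents[0]["name"]
    return None

# After training, both variations come back tagged with the same intent:
hows_the_surf = {"intents": [{"name": "surf_report", "confidence": 0.97}]}
how_are_the_waves = {"intents": [{"name": "surf_report", "confidence": 0.93}]}

print(top_intent(hows_the_surf))      # surf_report
print(top_intent(how_are_the_waves))  # surf_report
print(top_intent({"intents": []}))    # None: the bot never learned this one
```

The work isn't the routing code; it's supplying enough example phrasings that real users' replies land above the confidence threshold.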
Eventually, I just trained Wit.ai to recognize the ? emoji and present a set of quick reply buttons to choose which surf spot to check. It was easier. But being able to type an emoji and tap a button was still more exciting — and made more sense in context — than tapping some buttons to open a web page and receive the surf report, like I could through my phone’s browser.
The bummer about Messenger’s move toward less native interactions for bots isn’t that it demonstrates how difficult it is for bots to converse with people like humans do. The bummer is that it makes it so they may never have to. Bots, and the natural language processing technology they use, need data to become more conversationally capable. That’s why Wit.ai has a tool that lists when a bot struggled to understand what someone said. The bot maker can use the tool to see what someone said and teach the bot how it should have responded so that it won’t struggle next time. The new persistent menu prevents those mistakes, and with them removes the opportunity for the bots to learn.
Yes, Facebook has lots of other text data that its artificial intelligence technology is using to learn how people talk, and that technology could eventually support chatbots. But bots aren’t the only ones who need to learn.
People are still figuring out how to talk to bots. That likely explains why chatbots don’t appear to be as popular as they seemed poised to become a year ago. While the persistent menu lowers the barrier for entry to get people to interact with chatbots, it also potentially lowers people’s expectations, training them to think of chatbots as little more than stripped-down websites that happen to live in a messaging app.