Facebook’s DeepText has “near-human” understanding of people’s posts

Facebook's DeepText uses artificial intelligence techniques to understand text with "near-human" accuracy and quickly learn new languages and slang.



Facebook is getting even closer to a human-level understanding of what people are saying.

Facebook has developed DeepText, a new way to parse text using artificial intelligence processes that’s quicker at picking up new languages and slang than traditional approaches.

In a company blog post published on Wednesday, three members of the company’s applied machine learning team — Ahmad Abdulkader, Aparna Lakshmiratan and Joy Zhang — announced the technology that’s already being used across Facebook and Facebook Messenger.

DeepText is able to churn through “several thousands of posts per second” across more than 20 languages and understand what’s being communicated with “near-human accuracy,” according to the announcement post.

Facebook’s ability to comprehend what people are saying on its platform isn’t new. The company has been doing that for years in order to pick out which posts to show in people’s news feeds, which ads to show them and, more recently, which posts to show in its search results.

The new part is how good Facebook is getting at understanding what people are saying and, as importantly, how quickly it’s going to be able to get even better.

With DeepText, Facebook isn’t taking the traditional approach to computationally understanding text. The traditional approach basically entails writing a combination of the Oxford English Dictionary, Encyclopedia Britannica and a grammar textbook that a computer can reference when it’s processing words, sentences and paragraphs, then repeating that process for each language the computer needs to learn. That human-dependent process limits the computer’s knowledge base to what humans are able to teach it. It would be better for the computer to teach itself, and that process, called machine learning, is exactly what Facebook’s team has adopted.

Facebook’s team took inspiration from a research paper published last year by members of Facebook’s artificial intelligence team that eschewed the traditional word-based approach for a character-based one. From what I can understand of that research paper, the character-based approach means that a computer doesn’t need a human-compiled dictionary to start learning what words mean and how they relate to one another; it can figure out those meanings and relationships on its own by starting from scratch at the character level.
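To make the character-based idea concrete, here is a minimal Python sketch — my own illustration, not Facebook’s code — of how text can be fed to a model as character indices rather than dictionary words, so a brand-new slang term never falls outside the vocabulary. The alphabet and padding scheme here are invented for the example.

```python
# Minimal sketch of the character-level idea: instead of looking up whole
# words in a hand-built dictionary, the model sees text as a sequence of
# character codes, so unseen slang still maps to valid input.
# The alphabet, index scheme and max_len are illustrative assumptions.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_TO_INDEX = {ch: i + 1 for i, ch in enumerate(ALPHABET)}  # 0 = unknown/pad

def encode_chars(text, max_len=32):
    """Map a string to a fixed-length list of character indices."""
    indices = [CHAR_TO_INDEX.get(ch, 0) for ch in text.lower()[:max_len]]
    return indices + [0] * (max_len - len(indices))  # pad with zeros

# A brand-new slang word encodes just as easily as a dictionary word:
print(encode_chars("on fleek", max_len=10))
```

A character-level model trained on such sequences can then learn word meanings and relationships from the raw stream, without a human first compiling a lexicon.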

That is to say, DeepText is able to pick up slang and new languages super-quickly without being limited by humans’ ability to teach it those words. Or put another way, DeepText is able to learn languages like Keanu Reeves was able to learn kung fu in “The Matrix,” except it doesn’t have to first get its butt kicked by Morpheus.

What Facebook is able to do with this improved text parsing may be obvious: the same things it was already able to do, but better and quicker. If DeepText had been around a couple of years ago, it probably would have known what “on fleek” meant before Beyoncé did.

“DeepText has the potential to further improve Facebook experiences by understanding posts better to extract intent, sentiment, and entities (e.g., people, places, events), using mixed content signals like text and images and automating the removal of objectionable content like spam,” according to Facebook’s blog post.

Facebook’s team outlined a few ways DeepText can be put to work. For example, it can pick out which comments on a post are the “most relevant or high-quality comments.” That should come in handy for threads attached to celebrities’ or publishers’ posts or Facebook Live streams. And DeepText is already being used on Messenger “to help realize that a person is not looking for a taxi when he or she says something like, ‘I just came out of the taxi,’ as opposed to ‘I need a ride,’” according to Facebook’s blog post.
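As a rough illustration of that intent distinction, here is a toy rule-based stand-in for the trained classifier Facebook describes. The cue lists and function name are invented for the example; a real system would learn these signals rather than hard-code them.

```python
# Hypothetical sketch of the intent distinction described above:
# the same topic words ("taxi"/"ride") can signal different intents
# depending on surrounding context. A production system would use a
# trained classifier; this toy version keys on a few invented cues.

PAST_CUES = ("just came out of", "got out of", "already took")
REQUEST_CUES = ("i need", "can i get", "call me")

def wants_ride(message):
    """Return True if the message looks like a ride request."""
    text = message.lower()
    if any(cue in text for cue in PAST_CUES):
        return False  # talking about a past ride, not asking for one
    return any(cue in text for cue in REQUEST_CUES) and ("ride" in text or "taxi" in text)

print(wants_ride("I need a ride"))                # True
print(wants_ride("I just came out of the taxi"))  # False
```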

DeepText can also recognize when someone is trying to sell something through a post and extract information about it, “such as the object being sold and its price, and prompt the seller to use existing tools that make these transactions easier through Facebook.” Facebook could use that to get more people using its buy button and other newish shopping features, and it could also use the ability to prompt — if not require — those people to run those posts as ads in order to get them in front of more people.
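A simplified sketch of that kind of extraction might look like the following — again my own illustration, not Facebook’s implementation. The “selling” cues and price pattern are assumptions for the example.

```python
import re

# Illustrative sketch (not Facebook's code) of spotting a sale post and
# pulling out the price, the kind of extraction described above.

PRICE_PATTERN = re.compile(r"\$(\d+(?:\.\d{2})?)")

def extract_sale(post):
    """Return (is_sale, price): whether the post looks like a sale, and the $ amount."""
    match = PRICE_PATTERN.search(post)
    is_sale = "for sale" in post.lower() or "selling" in post.lower()
    price = float(match.group(1)) if match else None
    return is_sale and price is not None, price

print(extract_sale("Selling my old bike for $75"))  # (True, 75.0)
```

A detected sale could then trigger the prompt to use Facebook’s shopping tools.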

And DeepText is only going to get better at understanding text, thanks to the sheer volume of words people have published, and continue to publish, on Facebook, Messenger, WhatsApp and Instagram, as well as the other data Facebook can use to cross-reference with that text.



For example, DeepText can be used with Facebook’s image-recognition technology to determine that if someone captions a photo of a baby with the text “Day 25,” the caption most likely means the baby is 25 days old. And it can use contextual data, like information about a Facebook Page, to refine the meaning of the posts on that page. As a result, if the Pittsburgh Steelers write “pigskin” in a post to the NFL team’s page, DeepText can learn that “pigskin” is another way to say “football,” which will allow Facebook to target the people who liked that post with ads for replica game balls and not pork rinds.
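The “pigskin” example boils down to using Page context to pick a word sense. Here is a toy sketch of that disambiguation step; the sense table and category labels are made up for illustration.

```python
# Toy sketch of using page context to disambiguate a word, as in the
# "pigskin" example: the same term maps to different senses depending
# on what kind of Page the post appears on. All data here is invented.

SENSES = {
    "pigskin": {"sports": "football", "food": "pork rind"},
}

def disambiguate(word, page_category):
    """Resolve a word to a sense given the category of the Page it was posted on."""
    senses = SENSES.get(word.lower(), {})
    return senses.get(page_category, word)  # fall back to the word itself

print(disambiguate("pigskin", "sports"))  # football
print(disambiguate("pigskin", "food"))    # pork rind
```

In practice the context signal would be learned from many posts rather than looked up in a hand-written table, but the input/output shape is the same.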




About the author

Tim Peterson
Contributor
Tim Peterson, Third Door Media's Social Media Reporter, has been covering the digital marketing industry since 2011. He has reported for Advertising Age, Adweek and Direct Marketing News. A born-and-raised Angeleno who graduated from New York University, he currently lives in Los Angeles. He has broken stories on Snapchat's ad plans, Hulu founding CEO Jason Kilar's attempt to take on YouTube and the assemblage of Amazon's ad-tech stack; analyzed YouTube's programming strategy, Facebook's ad-tech ambitions and ad blocking's rise; and documented digital video's biggest annual event VidCon, BuzzFeed's branded video production process and Snapchat Discover's ad load six months after launch. He has also developed tools to monitor brands' early adoption of live-streaming apps, compare Yahoo's and Google's search designs and examine the NFL's YouTube and Facebook video strategies.
