What do Google and a toddler have in common? Both need to learn good listening skills.
Contributor and patent explorer Dave Davies reviews a recently-presented paper that suggests Google is grouping entities and using their relationships to listen for better answers to multipart questions.
At the Sixth International Conference on Learning Representations, Jannis Bulian and Neil Houlsby, researchers at Google AI, presented a paper that shed light on new methods they’re testing to improve search results.
While publishing a paper certainly doesn’t mean the methods are being used, or ever will be, highly successful results do increase the odds. And when those methods also align with other actions Google is taking, one can be almost certain they are in play.
I believe this is happening, and the changes are significant for search engine optimization specialists (SEOs) and content creators.
So, what’s going on?
Let’s start with the basics and look topically at what’s being discussed.
A picture is said to be worth a thousand words, so let’s start with the primary image from the paper.
This image is definitely not worth a thousand words. In fact, without the words, you’re probably pretty lost. You probably picture a search system looking more like this:
In the most basic form, a search system is:
- A user asks a question.
- The search algorithm interprets the question.
- The algorithm(s) are applied to the indexed data to produce an answer.
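The basic flow above can be sketched in a few lines. This is a toy illustration only: the tiny index and the interpret/answer functions are stand-ins I've invented for this article, not how any real search engine works.

```python
# Toy sketch of the basic search flow: question in, interpretation,
# algorithm applied to indexed data, answer out. All names are
# illustrative stand-ins, not a real engine.

index = {
    "paris": "Paris is the capital of France.",
    "berlin": "Berlin is the capital of Germany.",
}

def interpret(question):
    # Step 2: reduce the user's question to lookup terms (toy tokenization).
    return [word.strip("?.,").lower() for word in question.split()]

def answer(question):
    # Step 3: apply the "algorithm" to the indexed data and return an answer.
    for term in interpret(question):
        if term in index:
            return index[term]
    return "No answer found."

print(answer("What about Paris?"))  # -> Paris is the capital of France.
```

Note that one question produces exactly one answer, which is the key contrast with the method the paper describes.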
What we see in the first image, which illustrates the methods discussed in the paper, is very different.
In the middle stage, we see two parts: Reformulate and Aggregate. Basically, what’s happening in this new process is:
- The user asks a question, which goes to the “Reformulate” stage of the active question-answering (AQA) agent.
- The “Reformulate” stage takes this question and, using various methods discussed below, creates a series of new questions.
- Each of these questions is sent to the “Environment” (we can loosely think of this as the core search algorithm as it exists today) for an answer.
- An answer for each generated query is provided back to the AQA at the “Aggregate” stage.
- A winning answer is selected and provided to the user.
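The five steps above can be sketched as a single loop. To be clear, this is my own simplified illustration of the process the paper describes: the reformulation, answering, and scoring functions below are invented stand-ins for the learned models, not Google’s actual implementation.

```python
# Hypothetical sketch of the active question-answering (AQA) loop.
# Every function here is a stand-in for a learned model in the paper.

def reformulate(question):
    """Stand-in for the Reformulate stage: the real system uses a
    trained sequence-to-sequence model to generate rewrites."""
    return [
        question,
        f"what is {question}",
        f"{question} explained",
    ]

def environment_answer(query):
    """Stand-in for the Environment (the core QA system): returns an
    (answer, confidence) pair for one query. Faked here with a lookup."""
    canned = {
        "capital of france": ("Paris", 0.9),
        "what is capital of france": ("Paris, France", 0.7),
        "capital of france explained": ("The capital of France is Paris", 0.6),
    }
    return canned.get(query, ("unknown", 0.0))

def aggregate(candidates):
    """Stand-in for the Aggregate stage: select the winning answer
    from the candidates returned for the reformulated queries."""
    return max(candidates, key=lambda pair: pair[1])[0]

def aqa_agent(question):
    queries = reformulate(question)                        # step 2
    candidates = [environment_answer(q) for q in queries]  # steps 3-4
    return aggregate(candidates)                           # step 5

print(aqa_agent("capital of france"))  # -> Paris
```

The design point is the fan-out and fan-in: one user question becomes many queries, and only the winning answer ever reaches the user.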
Seems pretty straightforward, right? The only real difference here is the generation of multiple questions and a system figuring out which answer is best, then providing that to the user.
Heck, one might argue that this is what goes on already with algorithms assessing a number…
Opinions expressed in this article are those of the guest author and not necessarily MarTech.