Google today announced what it considers to be “one of the biggest steps forward in Search history.” By introducing a neural network-based technique known as BERT, the search engine can gain a better understanding of real, everyday questions.
The company framed the move by acknowledging that Search's language understanding can still fall short on complex or conversational queries.
Google explained that it is applying a "neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers," or BERT, to let people ask questions in a natural way.
Rather than processing a query one word after another, this model analyzes each word in relation to all the other words in the search term. That lets Search pick up on nuance and subtlety and work out what the user is actually asking for, essentially getting the full context. These improvements should surface more relevant results and Featured Snippets.
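To make "in relation to all the other words" concrete, here is a minimal sketch using the publicly released bert-base-uncased checkpoint and the Hugging Face transformers library; it illustrates the underlying technique, not Google's production Search stack. It shows that BERT gives the same word a different vector depending on the words around it, because every token attends to every other token in both directions.

```python
# Minimal sketch of BERT's bidirectional, context-dependent encoding.
# Uses the public bert-base-uncased checkpoint via Hugging Face transformers;
# Google's production Search models are not publicly available.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_for(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector BERT produces for `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

# The word "bank" gets a different vector in each query, because the
# encoder reads the whole sentence at once rather than word by word.
a = embedding_for("can you stand on the bank of a river", "bank")
b = embedding_for("how do i open a bank account", "bank")
similarity = torch.cosine_similarity(a, b, dim=0).item()
print(f"cosine similarity between the two 'bank' vectors: {similarity:.3f}")
```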
In quantifiable terms, Google says BERT will help Search better understand one in 10 queries. It is currently limited to English-language searches in the U.S., but will expand to more languages and countries over time. Behind the scenes, this approach is complex enough that Google is using Cloud TPUs for machine learning to serve Search results.
Particularly for longer, more conversational queries, or searches where prepositions like “for” and “to” matter a lot to the meaning, Search will be able to understand the context of the words in your query. You can search in a way that feels natural for you.
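As a rough illustration of why a single preposition matters, the sketch below (again using the open-source bert-base-uncased checkpoint, not anything inside Google Search) masks the preposition in a travel-style query and asks BERT to fill it in; the model has to weigh the words on both sides of the blank to make its guess.

```python
# Sketch: BERT reads context on both sides of a blank, so it can recover
# a preposition like "to" from the words that surround it.
# Uses the public bert-base-uncased checkpoint, not Google's Search models.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Mask the preposition in a travel-style query; the model must use the
# words on BOTH sides of the blank to guess it.
for prediction in fill("brazil traveler [MASK] usa needs a visa")[:3]:
    print(f"{prediction['token_str']:>6}  score={prediction['score']:.3f}")
```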