Google today announced what it considers to be “one of the biggest steps forward in Search history.” By introducing a neural network-based technique known as BERT, the search engine can gain a better understanding of real, everyday questions.
Google framed the move around a known limitation: complex or conversational queries can still exceed Search's ability to understand language.
Google is applying a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, which lets people phrase questions naturally.
Better known as BERT, this model considers each word in relation to all the other words in a search term, rather than one word after another in order. Search can thus pick up on nuance and subtlety and work out what the user is actually asking, essentially capturing the full context of a query. These improvements will surface more relevant links and featured snippets.
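To make the "every word in relation to every other word" idea concrete, here is a minimal, illustrative sketch of bidirectional self-attention, the core mechanism inside BERT. This is not Google's implementation; the embeddings are random toy values, and a real model adds learned projection matrices, multiple heads, and many stacked layers.

```python
import numpy as np

def self_attention(X):
    """X: (num_words, dim) word embeddings; returns contextualized vectors.

    Simplified single-head attention with no learned weights: each word is
    scored against all words in the query at once (bidirectionally), not
    read left to right.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                   # word-vs-word similarity scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over all positions
    return weights @ X                               # each output mixes the whole query

# Toy 4-word "query" with 3-dimensional embeddings (illustrative values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
out = self_attention(X)
print(out.shape)  # each of the 4 words now carries context from the full query
```

Because the attention weights span every position, the vector for a word like "to" can differ depending on the rest of the query, which is exactly the kind of nuance the announcement describes.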
In quantifiable terms, Google says BERT will help Search better understand one in ten English-language queries. It's only available in the U.S. currently, but it will expand to more languages and locales over time. The processing behind the scenes is complex enough that Google is using its Cloud TPUs, chips built for machine learning, to serve Search results.