Google is expanding the scope of its Lens service by adding ‘Multisearch’, a feature it previewed in September last year.

With Google Lens Multisearch, users can search with both an image and a text input to better find relevant products. The feature uses machine learning to interpret the two inputs together and combine them into a single, more precise search across the web.

Multisearch in Google Lens

To improve search, Google introduced image search quite some time ago. This was later merged into Google Lens, which can also extract text from an image for copying and editing. Google later even added the ability to identify songs from a simple hum.

And now, building on this, Google has announced Multisearch in Lens. Initially previewed in September last year, Multisearch lets users search with both an image and text. For example, you can upload a picture of a t-shirt to Lens and add the text “polka dots” to the query to find polka-dotted t-shirts.

Available to US users now, the feature initially targets shopping-related searches, Liz Reid, vice president of Google Search, told CNN Business, although it is not limited to that, she added.

To try it, tap the camera icon (Google Lens) in the search bar, upload a picture, tap the “+” sign to open the text box, and enter a phrase; the results will reflect both inputs. Google said the feature is powered by a machine-learning model called the Multitask Unified Model (MUM).

MUM could later be trained to handle other combinations, such as text with songs. For now, it combines natural language processing to understand the text input with computer vision to interpret the image, then scours the web to surface relevant results in seconds.
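Google has not published MUM’s internals, but the general idea of fusing a text embedding with an image embedding for retrieval can be sketched with the open-source CLIP model. Everything below is an illustrative assumption, not Google’s implementation: the model choice, the simple averaged fusion, and the local product catalog are all stand-ins.

```python
# Illustrative sketch only: MUM is not public, so this uses the open-source
# CLIP model as a stand-in for combined image+text retrieval.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_catalog(query_image_path: str, query_text: str, catalog_paths: list[str]):
    """Rank catalog images against a combined image-plus-text query."""
    query_image = Image.open(query_image_path)
    catalog_images = [Image.open(p) for p in catalog_paths]

    # Embed the text query and all images (query image first) in one pass.
    inputs = processor(
        text=[query_text],
        images=[query_image] + catalog_images,
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        out = model(**inputs)

    # Normalize embeddings so cosine similarity is a plain dot product.
    image_embeds = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_embed = out.text_embeds[0] / out.text_embeds[0].norm()

    # Naive fusion (an assumption): average the two query embeddings;
    # production systems fuse the modalities inside the model itself.
    query = image_embeds[0] + text_embed
    query = query / query.norm()

    # Score every catalog image against the fused query, best match first.
    scores = image_embeds[1:] @ query
    return sorted(zip(catalog_paths, scores.tolist()), key=lambda s: s[1], reverse=True)
```

Mirroring the article’s example, `rank_catalog("tshirt.jpg", "polka dots", ["a.jpg", "b.jpg"])` would rank the hypothetical catalog images by how well each matches both the uploaded t-shirt photo and the “polka dots” refinement.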
