Google’s new multisearch feature allows users to search using text and images at the same time via Google Lens. Multisearch uses artificial intelligence to help people find what they’re looking for through more intuitive means. Google said in a blog post: “We’re introducing an entirely new way to search: using text and images at the same time. With multisearch on Lens, you can go beyond the search box and ask questions about what you see.”
Google’s multisearch feature will help you find things you cannot describe
One example from the announcement: take a photo of an orange dress and add “green” to the search query to look for the item in that color. Multisearch will not only find images of similarly styled green dresses, but also surface purchasing options from retailers. To get started, open the Google app on Android or iOS, tap the Lens camera icon, and search using an image, text, or both.
The multisearch feature also enables users to ask a question about an object in front of them or refine search results by attributes such as color or brand. It could be especially useful for queries with a strong visual component that is hard to describe in words alone. Google said: “At Google, we’re always dreaming up new ways to help you uncover the information you’re looking for — no matter how tricky it might be to express what you need.”
AI improvements to Search
Recently, Google introduced improvements to its AI models to make Google Search a safer experience and one that’s better at handling sensitive queries on topics such as suicide, sexual assault, substance abuse, and domestic violence. Google is also improving its AI technologies to remove unwanted explicit or suggestive content from search results when people aren’t specifically seeking it out.
Source: Google’s Blog