Google may soon combine text and image search results.
Google is testing a new multisearch feature, currently available to Search beta users only. According to the tech giant, multisearch in Lens will let you go beyond the search box and ask questions about what you see. You can now ask questions about the objects in front of your eyes, or refine your search by color, brand, or another visual attribute. With the new feature enabled, tap the Lens camera icon within the Google app to search with any image from your gallery or camera. To add text, swipe up and tap the “+ Add to your search” button.
You can then refine the search by adding words to it. For example, you could take a picture of a fashionable orange dress and add “green” to find it in another color, or snap a photo of your dining room set and add “coffee table” to find a matching table.
Google explained in a blog post that the feature draws on its latest advances in artificial intelligence (AI) to help users understand the world around them in more natural and intuitive ways. The company also revealed that it is exploring ways to enhance the feature with its Multitask Unified Model (MUM).
The tech giant unveiled MUM at Google I/O last May. The model is trained across 75 languages and built on the T5 text-to-text framework, which allows it to develop a deeper understanding of information. MUM is also multimodal, meaning it can understand information across both text and images. At its Search On event in September 2021, the company announced plans to bring MUM-powered features to Lens and Search.