Google has announced that it is making visual search more natural with multisearch, a new tool for searching with images and text simultaneously.
The company introduced multisearch earlier this year as a beta in the US, and will now expand it to more than 70 languages in the coming months.
"We're taking this capability even further with 'multisearch near me,' enabling you to take a picture of an unfamiliar item, such as a dish or plant, then find it at a local place nearby, like a restaurant or gardening shop," said Prabhakar Raghavan, Senior Vice President, Google Search.
The company will begin rolling out "multisearch near me" in English in the US this fall.
People use Google to translate text in images more than 1 billion times a month, across more than 100 languages.
"We're now able to blend translated text into the background image thanks to a machine learning technology called Generative Adversarial Networks (GANs)," Raghavan said.
With the new Lens translation update, people will see translated text realistically overlaid on the underlying image.
"Just as live traffic in navigation made Google Maps dramatically more helpful, we're making another significant advancement in mapping by bringing helpful insights --like weather and how busy a place is -- to life with immersive view in Google Maps," the company announced.
This story has been sourced from a third-party syndicated feed and agencies. Mid-day accepts no responsibility or liability for the reliability or accuracy of the text. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove the content (without notice), at its absolute discretion, for any reason whatsoever.