Google AI will power Search, Maps and Translate
Artificial intelligence is arguably the most advanced technology of the moment. AI helps individuals, organizations and communities realize their full potential by assisting in the earlier diagnosis of diseases and enabling access to knowledge in a person’s native language, and it creates new possibilities that could greatly improve the lives of billions of people. After announcing Bard, its new ChatGPT-like artificial intelligence technology, Google has revealed how it will be integrated into its tools and platforms, including Search, Google Maps and Translate.
OpenAI’s ChatGPT has shaken up artificial intelligence technology, showing how natural dialogue can enhance search engines and make it easier to find information online. Microsoft, an OpenAI partner, has announced that the technology will be integrated into Edge, its web browser, and Bing, its search engine; the new experience is already open for testing and is said to be more powerful than ChatGPT itself. Google, for its part, has unveiled Bard, its own conversational AI built on LaMDA, its Language Model for Dialogue Applications.
Intuitive AI systems
Technology companies are taking on the task of exploring new AI systems, but with a commitment to the ethical principles they have already defined, and the first practical applications of the technology are beginning to emerge. Google says it wants to make exploring information more visual, natural and intuitive, and it is already moving ahead with the first platforms to receive its technology: Search, Google Maps and Translate.
In Search, it will be possible to access Google Lens features directly from the smartphone screen, without opening the camera. Multisearch will let you search with images and text at the same time, “opening up new ways of expressing yourself”, Google stresses. You will also be able to use multisearch on any image on the web or in your photo gallery.
Lens is also starting to identify objects in a video or photo, whether on Instagram, YouTube, TikTok or another social network. With multisearch, for example, you can ask a question about the object in front of you and then refine the results by color, brand or other visual attributes. You can snap a picture of a dress you like in orange and ask for it in green to find the right shade, ask for the brand of a table at a friend’s dinner and where to buy it, or photograph a plant and quickly look up how to care for it.
Google Maps will also benefit from the new AI technology, with the promise of more immersive and sustainable ways to get around. Immersive View, a feature announced last year, lets you plan a walk or a visit to a restaurant and see the traffic, the weather or how busy a place is at a given time of day, using predictive AI models to help users anticipate conditions at their destination. For now the technology covers only Los Angeles, New York, London, San Francisco and Tokyo, but in the coming months Google says it will add four cities in the EMEA region: Florence, Venice, Amsterdam and Dublin.
AI and augmented reality will also power Live View, which helps users find things around them, including ATMs, restaurants, parks and public transport stations. Google explains that users simply point their smartphone at what is around them on the street to get useful information, such as a store’s opening hours or user ratings. Electric vehicles will get new features too, as long as they have Google Maps integrated into their systems.
Finally, Google Translate will gain new context options in translations, along with more features that adapt to how you use it. The app will get a new design and will let you translate results found with Lens AR. For now, the new contextual features will support English, French, German, Japanese and Spanish, and will be released in the coming weeks.