“The features announced at Google I/O 2019 will start rolling out as early as this month”
At the Google I/O 2019 developer conference, all eyes were on Android Q and the new Pixel 3a series phones. However, Google also announced a bunch of new features that will have a major impact on the way we use its products. From a more powerful Lens to a smarter Assistant and features to help the differently abled, here’s a look at some of the major announcements made at Google I/O 2019.
Starting off with Google Lens: the camera-centric product will not only glean relevant information from the text it is pointed at but also surface related information around it. For example, it will be able to highlight popular dishes when pointed at a menu and show details of a dish when it is tapped. Furthermore, users can have Google Lens calculate the tip and even split the total, simply by pointing it at a restaurant receipt. Soon, users will be able to point at recipes in magazines and see the page come alive, showing them how to prepare the dish.
In addition, Google said the Google Go search app for entry-level phones is getting Lens integration in the camera app. When users point the camera at text, Google Lens will translate it into a language they are familiar with and also read it aloud. The translate feature currently supports languages including English, Hindi, French, Italian, Dutch, Portuguese, Chinese, and Japanese, among others. These updates will start rolling out later this month.
Google Duplex also received an update at the I/O 2019 event, going beyond voice calls to carry out tasks on the web. For now, Google Duplex for the web will be limited to car reservations and movie ticket bookings. When you ask Google Assistant to book a car for you, it will open the relevant website, fill out the form, and provide the date and time for the pickup. You will only have to confirm the details with a tap, after which the Assistant will even select the car for you and finalise the reservation.
Assistant will no longer depend on internet connectivity, as Google says it has shrunk the computational models and associated data down to roughly 500MB, small enough to run on-device. This eliminates the need to keep the virtual assistant connected to the internet for tasks that can be performed offline. Moreover, the Assistant now processes requests faster, so it can carry out a number of tasks one after another almost instantly, without the need to say “Hey Google” each time. Assistant can also switch between apps while performing tasks, making it easier to, say, send images from the Photos app via Gmail. All these upgrades will be coming to “new” Pixel phones later this year, the company said at the event.
Federated Learning is a new technique coming to Google’s Gboard shortly, and to other products thereafter. Federated Learning, Google explains, enables many devices to collaboratively train a shared, global machine learning (ML) model without centralising their data. Each phone computes an update to the model locally and sends only that update to the cloud, never the data itself. For example, Gboard will be able to learn new words that gain traction among its millions of users without compromising privacy or inspecting any specific user’s Gboard data.
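The core idea behind federated learning can be sketched in a few lines. The toy linear model, the function names, and the sample data below are illustrative assumptions for clarity, not Google’s actual implementation:

```python
# Minimal sketch of federated averaging: each "phone" computes a model
# update on its own private data; the server averages only the updates.
# The toy model (y = w * x) and all names here are hypothetical.

def local_update(w, local_data, lr=0.1):
    """One gradient step on a single device, using only its own data."""
    grad = 0.0
    for x, y in local_data:
        grad += 2 * (w * x - y) * x   # derivative of squared error
    grad /= len(local_data)
    return w - lr * grad              # only this update leaves the device

def federated_round(global_w, all_device_data):
    """Server step: average the updates sent back by the devices.
    The raw (x, y) samples never reach the server."""
    updates = [local_update(global_w, data) for data in all_device_data]
    return sum(updates) / len(updates)

# Three "phones", each holding private samples of roughly y = 2x.
devices = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.5, 3.0), (0.5, 1.1)],
    [(3.0, 6.2), (2.5, 4.8)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, devices)
print(round(w, 2))  # converges near the true slope of 2.0
```

After 50 rounds the shared weight settles close to the underlying slope even though no device ever shared its raw samples, which is the privacy property the Gboard use case relies on.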
Live Caption, Live Relay
Live Relay and Live Caption are Google’s new assistive features to help the differently abled. The former is an assistive chat feature that types out what the caller says and lets you type in your responses, all in a chat window during the phone call. The feature will help people who are deaf or hard of hearing, or who prefer not to speak, take a call using captions and respond via smart replies. The conversation over Live Relay happens completely on-device and remains private, says Google. There isn’t an exact timeline yet for when Live Relay will make its way to consumers.
Live Caption, on the other hand, transcribes the video or audio on your screen in real time. The feature, as per the Mountain View company, works on-device, so there are no delays. Live Caption can be used for videos, podcasts, or footage you capture at home. “For 466 million deaf and hard-of-hearing people around the world, captions are more than a convenience — they make content more accessible. We worked closely with the deaf community to develop a feature that would improve access to digital media,” Google said in a blog post. The feature is built into Android Q and can be enabled from the Accessibility settings.