Google has announced the rollout of nine new features for Android devices, including several based on generative artificial intelligence (AI), on the first day of the Mobile World Congress (MWC) 2024. The highlight among them is the integration of Gemini, the company’s latest AI-powered chatbot, into the Messages app.
Gemini is a conversational agent that generates natural language responses to prompts such as questions, commands, or requests. It can also create original content, such as poems, stories, code, or lyrics, based on the user’s input. Google claims that Gemini is the most advanced AI model in the world, capable of producing coherent, diverse, and creative text.
According to Google, users who have enrolled in the beta testing programme will soon be able to chat with Gemini within the Messages app. The feature adds a separate chat box where users can converse with Gemini, ask queries, have it write or rewrite messages, and more. With this update, Gemini is available as a standalone app, as a virtual assistant, and within the Messages app. It is not yet clear whether the feature will be included in the Gemini Advanced plan once it is out of testing.
AI is also being integrated into apps beyond Messages. For example, Android Auto will gain a feature in which generative AI summarizes long texts or group chats and reads them out while the user drives. It will also suggest relevant responses and actions that can be triggered with a single tap.
An accessibility feature is also being added to the Lookout app. The app is designed for visually impaired users and can identify objects, read text, and more through the smartphone’s camera. It will now automatically create AI-generated captions for images seen online or received through messages and read them out to users. The feature is rolling out globally in English.
In the same vein, Google Maps is getting an update to Lens in Maps, which was first added in October 2023. Users can now point the phone’s camera at their surroundings, and the TalkBack feature will read out information about the place, including its business hours, rating, and directions on how to get there.
Google is also introducing some new features that are not AI-based. One is the ability to annotate Google Docs on Android with handwriting or drawings using a finger or a stylus, which can help users take notes on their documents. Another is an improvement to the Spotify output switcher, which lets users choose the device that plays the audio; this can now be done directly from the home screen.
Google’s Health Connect app is also getting an update for Wear OS users. It will let them sync data from various wearables and apps, such as Fitbit, Oura Ring, AllTrails, and MyFitnessPal. The app will display comprehensive health and fitness metrics and offer suggestions based on them.
Google has also added two minor but handy features for Wear OS users. They can now access their Google Wallet passes, such as boarding passes or tickets, from their smartwatches. They can also get public transit directions in Google Maps for Wear OS, which displays information such as when to leave, where to go, and how to get there. Users can also sync these directions from their phone to their watch.
Google said that these new features are aimed at improving users’ productivity and convenience, and that it will continue to innovate and bring more AI-powered features to its products and services in the future.