Google has significantly enhanced Project Astra, its AI assistant that understands the world around you. Slated to debut alongside Gemini 2.0, Astra now offers improved language comprehension, supporting multiple languages and accents and even handling them mixed within a single conversation. These updates make Astra more conversational and give it contextual awareness of its surroundings by drawing on Google Lens, Google Maps, and Google Search.
Memory and latency upgrades
Project Astra’s conversational flow has received a major boost: a 10-minute in-conversation memory lets it hold context without asking for repeated clarifications. Its long-term memory has also been refined, and reduced latency brings its responses closer to the natural pace of human conversation.
Smart glasses on the horizon
Google teased wearable integration for Project Astra, positioning it against competitors such as Meta’s Ray-Ban collaboration and Samsung’s rumored 2025 smart eyewear. The demo video showcases Astra’s hands-free capabilities, suggesting how naturally the assistant would complement smart glasses, a much-anticipated addition to Google’s lineup.
Gemini 2.0: The next frontier for Google AI
Google CEO Sundar Pichai described Gemini 2.0 as the company’s “most capable model yet.” Where Gemini 1.0 focused on organizing and understanding information, Gemini 2.0 emphasizes practical utility: features like AI Overviews in Google Search will tackle complex queries, including advanced math and coding problems.
A chat-optimized version, Gemini 2.0 Flash, will soon be available in the Gemini app on mobile and desktop, with broader functionality expected in early 2025.