AI Revolution: Google's Latest Breakthroughs Unveiled

Published on Sunday, May 26, 2024


It was while I was inside an AI sandbox, a makeshift room to experience Google’s new AI virtual assistant, Project Astra, that the company’s co-founder Sergey Brin walked in. Sporting a wind-weary messy hairdo, Brin looked like a typical Californian. Casual, easygoing, and concerned about how hot it was in the room. “Why don’t you turn on the aircon?” he suggested to one of the Google employees in the sandbox, showing his concern for the eight journalists from across the world who were in the demo to witness Google’s latest AI capabilities.

I was attending Google I/O, the company’s annual developer conference, at Mountain View’s Shoreline Amphitheatre, right next to the company’s headquarters. Security helicopters flew overhead as hundreds of developers, journalists, and employees from across the world headed to the conference, while millions joined online.

A Pivotal I/O for Google

It was a pivotal I/O for the company. Last year, Google’s executives put on a scrambled I/O after OpenAI had upended internet search by launching ChatGPT, its prompt-based chatbot. Everyone who attended this I/O wanted to know one thing: What’s Google doing next in AI?

Three hours before I headed into the AI sandbox, the keynote opened with a hilarious act by TikTok celebrity DJ Marc Rebillet, wearing a robe, who created a song using MusicFX, Google’s experimental AI music-making tool. A few minutes later, CEO Sundar Pichai came on stage to announce AI integration across existing Google products like Search, Chrome and Workspace.

AI Integration and Updates

With the AI race heating up, announcing these integrations had become a matter of survival for the company – get on board or get left behind as users drift towards prompt-based search. The writing was on the wall for Google, but the company did add its own signature to it.

OpenAI might have had the first-mover advantage, but Google still has a loyal user base of over two billion people worldwide, he seemed to say. Pichai announced Gemini 1.5 Pro, a multimodal AI model that can reason across text, images, video, code and more, as well as a lighter model, Gemini 1.5 Flash, which is optimised for “narrow, high-frequency, low latency tasks.”

The next two hours were filled with a succession of announcements about AI integration. You can now dig deeper into your Google Photos archive, point your phone’s live camera at anything around you to decode or analyse it, and talk to Search to get answers across multiple media (photos, videos and text).


With a paid Google Workspace account, you also get a live virtual assistant that can fetch content from the depths of your Google account, turn it into a spreadsheet and work across Gmail, Drive, Docs and Chrome – using the content it has been scraping for a decade now.

Project Astra - The Next-Generation Assistant

Everyone who has ever tried to get work done with a virtual assistant (like Siri or Google Assistant) sighed with relief when Pichai announced Project Astra. An AI-driven assistant, Astra is a big upgrade over the last generation of virtual assistants we’ve been struggling with.

Amid the two hours of announcements about AI integrations and updates, there was a delightful little cameo from a pair of Google AR glasses. In a recorded video played at the venue, an employee from Google’s AI research arm, DeepMind, showed off various capabilities of Project Astra.

Google AR Glass and Future Tech

“Did they just show us Google Glass?” exclaimed a journalist sitting beside me. I could understand his excitement. The video had surreptitiously shown us AR glasses reminiscent of one of the most iconic Google I/Os ever.

Now, with advances in AR/VR technology and the launch of products like the Ray-Ban Meta Smart Glasses and Meta Quest 3, we’re ready as users to add AR to our eyes and carry our smartphones on our noses. So it’s exciting to see that Google is going to merge AI with smart-glasses technology.


AI Beyond Expectations

After a group selfie with Brin, I headed out in search of a demo that no one had heard of: a 3D holographic video-chat product by Google, where the person you’re talking to is projected as a hologram right out of the screen, making it feel like you’re sitting across from them.

As I walked out of Google I/O, I reflected on the oncoming change that technology companies are forcing onto our everyday products – all thanks to advances in AI. Even the people who build AI don’t completely understand it, yet people like Demis Hassabis (who heads Google DeepMind) and Sam Altman (CEO of OpenAI) relentlessly pursue the idea of AGI as the pinnacle of AI technology: a machine that can reason and think like a human. But then, have they ever stopped to wonder whether we need it?