Excited about GPT-4o? Now Check out Google AI’s New Project ‘Astra’: The Multimodal Answer to the New ChatGPT

On May 13, OpenAI held its Spring Update event, unveiling a number of innovations including GPT-4o. Just a day later, Google held its own event, Google I/O ’24. There, Google introduced and improved many things, including Ask Photos, expanded AI Overviews in Search, Gemini 1.5 Pro in Workspace, AI agents, and much more. However, the announcement that stole the show was Project Astra.

So what is Project Astra? According to Google, it is a universal AI agent designed to help with everyday tasks, a true AI assistant. Google has made significant progress with its Gemini family of AI models, constantly pushing their boundaries, and it uses its most capable models to teach and train production-ready ones. Project Astra appears to be no different.

Project Astra is an advanced AI agent that can see and talk much as we do, understanding and responding to a complex, dynamic world the way a human would. The assistant can remember what it sees and hears, and conversations with it flow without noticeable lag or delay. Building such a multimodal AI system is no small feat, and Google has laid the groundwork for it: Gemini has been multimodal since its launch.

Google has been working on this project for years, and now that it is on display, the effort shows. Project Astra is an engineering feat that runs not only on a smartphone but also in other form factors, such as Google Glasses.

Yes, Google Glasses. During the keynote demonstration of the new AI assistant, Google also showed an improved version of its glasses, a product many had assumed was discontinued.

These innovations show how much work Google has done behind the scenes over the years. Across different form factors, Google has steadily improved its AI models, which can now take in richer inputs, reason more effectively, and converse more naturally, delivering something much closer to a true personal AI assistant.

In Conclusion:

It’s fascinating to watch the rapid advancements in AI from both OpenAI and Google. The introductions of GPT-4o and Project Astra showcase remarkable progress toward more responsive and versatile AI models. GPT-4o is undoubtedly impressive, but Google’s announcements may have overshadowed it. Project Astra looks well ahead of the competition, though we can only judge which model is best after thorough independent review. It will be exciting to see how these developments continue to shape the future of AI and its impact on our daily lives.

The post Excited about GPT-4o? Now Check out Google AI’s New Project ‘Astra’: The Multimodal Answer to the New ChatGPT appeared first on MarkTechPost.
