Gemini 2.0 Has Arrived (for everyone) šŸš€!

Also: Figure drops partnership with OpenAI, while OpenAI finally unveils agent for deep research šŸ§ !

Time really does wonders, wouldnā€™t you say? A couple of months back, OpenAI pretty much dominated the AI headlines. And now? Competitors like Gemini, Claude, and DeepSeek are all sharing the spotlight. 

Forward thinkers, welcome to another edition of the Neural Frontier šŸ˜Š

A pretty exciting week so far, donā€™t you think? Gemini 2.0 is here (yes, for everyone šŸ˜), while Figure pretty much canceled their partnership with OpenAI, in favor of their in-house models. 

And to cap it off, OpenAI finally released its agent for deep research. Our thoughts? 

Only one way to find out! šŸƒšŸ’Ø

In a rush? Here's your quick byte: 

šŸ¤– Gemini 2.0 has arrived (for everyone)!

šŸ¤ Figure drops partnership with OpenAI.

šŸš€ OpenAI finally unveils AI agent for deep research!

šŸŽ­ AI Reimagines: Iconic offices from your favorite fictional worlds! 

šŸŽÆ Everything else you missed this week.  

āš” The Neural Frontierā€™s weekly spotlight: 3 AI tools making the rounds this week.

Source: Google 

Google has officially rolled out Gemini 2.0, making some of its most powerful AI models publicly available. This marks a major milestone in AI accessibility, as top-tier reasoning models, once limited to paid tiers, are now free for anyone with a Google account via the Gemini app.

Hereā€™s what you need to know: 

šŸš€ The Gemini 2.0 Lineup: The lineup consists of three models: 

  •  2.0 Flash Thinking Experimental ā€“ Enhanced reasoning with app integrations like YouTube, Maps, and Search

  •  2.0 Pro Experimental ā€“ The most powerful coding & reasoning model yet, with a 2M-token context window

  •  2.0 Flash-Lite ā€“ A cost-effective, developer-friendly multimodal model

This release indicates that Google is doubling down on AI democratization. By making advanced AI widely accessible, Google is challenging competitors like OpenAI and DeepSeek.

šŸ¤” Whatā€™s New in Gemini 2.0? For one, Gemini 2.0 Flash is now available to everyone. It features high-speed multimodal reasoning (1M-token context window) and is optimized for high-frequency, large-scale AI tasks. 

Flash was initially an experimental AI modelā€”now, itā€™s mainstream, serving two categories of users. Developers can use it to build powerful AI applications, and regular users can integrate it into daily life.

šŸ› ļø Googleā€™s answer to GPT-4o: This is undoubtedly Gemini 2.0 Pro, Googleā€™s best AI yet for complex tasks. It works great for coding, reasoning, and world knowledge, with a massive 2M-token context window (twice as large as most competitors). This allows it to provide elite-level reasoning while handling massive amounts of data in a single session.

šŸ’° What about 2.0 Flash-Lite: This is Googleā€™s high-performance, low-cost model, faster & more efficient than 1.5 Flash. With multimodal support, it can analyze text, images, and videos, and even generate captions for 40,000 photos at under $1 in Google AI Studio. This makes it the affordable go-to solution for businesses, startups, and developers.
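To put that price point in perspective, here's a quick back-of-envelope calculation implied by Google's claim (the dollar and photo figures come straight from the claim above; this is not official per-request pricing):

```python
# Back-of-envelope check on the Flash-Lite cost claim:
# captioning 40,000 photos for under $1 implies a per-caption
# cost of at most $0.000025 — a fortieth of a cent per photo.
TOTAL_BUDGET_USD = 1.00   # claimed ceiling from the Google AI Studio example
NUM_PHOTOS = 40_000       # photos captioned within that budget

cost_per_caption = TOTAL_BUDGET_USD / NUM_PHOTOS
print(f"Implied cost per caption: ${cost_per_caption:.6f}")  # $0.000025
```

At that rate, even a million captions would come in around $25 — the scale of saving that makes it attractive to startups and high-volume developers.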

šŸ›”ļø Googleā€™s AI Safety & Responsibility Approach: Google is adopting a three-pronged approach to ensure safety:  

  • Gemini 2.0 models critique their own responses for accuracy

  • Automated red-teaming detects risks like indirect prompt injection

  • Built-in security safeguards prevent malicious AI misuse

Overall, Googleā€™s move goes to show that the era of advanced AI being solely for power users might be over. As such, we can expect more free AI tools in Search, Docs, and other Google Workspace apps. 

Figure AI has officially cut ties with OpenAI, opting to develop its own AI models to power its humanoid robots. CEO Brett Adcock calls this a ā€œmajor breakthroughā€, claiming that the company has built something revolutionary that will be revealed within 30 days.

As always, hereā€™s the lowdown: 

šŸ” Why Figure Split from OpenAI

Initially, Figure used OpenAIā€™s models to enable natural language understanding in its humanoid robots. However, Adcock argues that:

  • Embodied AI needs full vertical integrationā€”meaning AI and hardware must be built together.

  • General-purpose AI models (like OpenAIā€™s) arenā€™t optimized for physical robots, which require physics-based learning.

  • Outsourcing AI limits scalabilityā€”custom, in-house models allow better real-world adaptation.

šŸ† The Competitive Tension with OpenAI: OpenAI may have played a role in this split, as it is rumored to be developing its own humanoid robots. Just last week, the company filed a trademark application mentioning:

  • ā€œUser-programmable humanoid robotsā€

  • ā€œCommunication and learning capabilitiesā€

OpenAI is also backing 1X, a Norwegian robotics startup working on AI-powered home assistantsā€”potentially a direct competitor to Figure.

šŸš€ Figureā€™s Rapid Momentum: Despite the OpenAI split, Figure remains one of the most well-funded robotics startups, having raised $1.5 billion to date, including a $675M round last year that valued the company at $2.6 billion.

It recently expanded into a larger Bay Area office and has already deployed its Figure 02 robots on BMW production lines. The companyā€™s goal? To scale up to 100,000 humanoid robots across various industries.

šŸ”® Whatā€™s Next? With Figureā€™s self-built AI set to debut in a month, the robotics industry is watching closely. If Adcockā€™s ā€œbreakthroughā€ delivers, it could mark a major leap forward for humanoid robotsā€”one where they donā€™t just execute pre-programmed actions but adapt and learn in real-time.

Source: OpenAI

OpenAI has unveiled Deep Research, a new AI-powered agent designed to handle complex, long-form research within ChatGPT. Unlike standard chatbot responses, this feature analyzes multiple sources, compiles data, and generates well-cited reportsā€”targeting users in finance, science, policy, and engineering.

Hereā€™s how:

šŸš€ How Deep Research Works: To use it, simply select "Deep Research" in the ChatGPT composer, enter a query, and attach relevant files or spreadsheets if needed. Unlike standard chatbot responses, Deep Research takes its timeā€”ranging from 5 to 30 minutesā€”to analyze information and generate a response. Users will receive a notification when the process is complete.

Right now, Deep Research is available to ChatGPT Pro users, allowing them to conduct more thorough investigations with 100 queries per month. OpenAI plans to expand access to Plus, Team, and Enterprise users soon, with query limits expected to increase over time. In addition, Deep Research is currently web-only and supports only text-based outputs. 

šŸŽÆ Why This Matters: Deep Research is not just a search engineā€”it processes and interprets vast amounts of text, images, and PDFs, pivoting dynamically based on new insights. This makes it valuable for:

  • Academics & Scientists ā€“ Conducting peer-reviewed research with sources and citations.

  • Finance & Policy Analysts ā€“ Gathering market trends, economic indicators, and legal updates.

  • Engineers & Developers ā€“ Analyzing technical documentation, patents, and product research.

  • Serious Shoppers ā€“ Making informed decisions on big purchases like cars, appliances, and electronics.

šŸ“Š How Accurate Is It? OpenAI trained Deep Research using a special version of its o3 "reasoning" model, optimized for web browsing and data analysis. The model was tested on Humanityā€™s Last Exam, an advanced AI benchmark, where it scored 26.6%ā€”outperforming competitors like Gemini Thinking (6.2%), Grok-2 (3.8%), and OpenAIā€™s own GPT-4o (3.3%).

However, OpenAI acknowledges limitations:

āš ļø It can make mistakes or misinterpret sources.
āš ļø It may not always indicate uncertainty in its answers.
āš ļø Citation formatting may be inconsistent.

The key question is whether users will double-check sources or blindly trust AI-generated reports. As Deep Research rolls out, weā€™ll see whether it truly changes how people gather, analyze, and validate information online.

Source: u/TheEphemeric via Reddit

Yes, Potterheads, thatā€™s the Ministry of Magic! But this post isnā€™t just for the Harry Potter fans in our midst. Itā€™s for lovers of fictional worlds overall, and thereā€™s something for everyoneā€“we mean it! 

Spoiler alert: a little birdie says thereā€™s something Star Wars-themed here, as well. Enjoy! 

šŸŽÆ Everything else you missed this week.

Source: Freepik

šŸ—£ļø Meta unveils a new programā€”with UNESCOā€”to improve AI models for speech and translation,  

šŸŽØ OpenAI unveils new logo, amidst big rebranding effort. 

šŸ“œ Adobe Acrobatā€™s new AI contract-intelligence capabilities let it decipher and summarize complicated contracts for users. 

šŸ† This AI-restored Beatles song won a Grammy for Best Rock Performance! 

āš” The Neural Frontierā€™s weekly spotlight: 3 AI tools making the rounds this week. 

Source: Freepik

1. āœļø Hoppy Copy is an AI-powered email marketing platform that helps users create compelling email campaigns 10x faster. With over 100,000 users, the platform combines AI writing assistance with practical tools like spam checking and competitor analysis. 

2. šŸŽ¬ Reap leverages AI to transform long-form video content into engaging social media shorts. The platform offers AI-driven clip curation and optimization, auto speaker detection and framing, multi-format aspect ratio adjustment, and transcript-based editing. 

3.  āœ’ļø QuillAI positions itself as an AI-powered content creation platform focused specifically on generating SEO-optimized content that ranks. The platform stands out with AI content generation with brand customization, built-in SEO optimization tools, and automated keyword research.

All in allā€¦

Excitement never seems to be in short supply when AI tech is involved. From Gemini 2.0ā€™s release to Figureā€™s big decision to go in-house, itā€™s been a week full of updates, twists, turns, and everything in between. 

If this is any indicator of the future, weā€™re in for a ride next week. 

And as always, you can count on us for your weekly roundup of the whoā€™s who, and whatā€™s what in the AI space. 

Till next week, Aloha (yes, this means goodbye too)! šŸ‘‹

PS: While youā€™re at it, please hit that Subscribe button šŸ˜