Gemini 2.0 Has Arrived (for everyone) 🎉!
Also: Figure drops partnership with OpenAI, while OpenAI finally unveils agent for deep research 🧠!
Time really does wonders, wouldn't you say? A couple of months back, OpenAI pretty much dominated the AI headlines. And now? Competitors like Gemini, Claude, and DeepSeek are all sharing the spotlight.
Forward thinkers, welcome to another edition of the Neural Frontier 🚀!
A pretty exciting week so far, don't you think? Gemini 2.0 is here (yes, for everyone 🎉), while Figure pretty much canceled its partnership with OpenAI in favor of its in-house models.
And to cap it off, OpenAI finally released its agent for deep research. Our thoughts?
Only one way to find out! 🏃💨
In a rush? Here's your quick byte:
🤖 Gemini 2.0 has arrived (for everyone)!
🤖 Figure drops partnership with OpenAI.
🔍 OpenAI finally unveils AI agent for deep research!
🎨 AI Reimagines: Iconic offices from your favorite fictional worlds!
🎯 Everything else you missed this week.
⚡ The Neural Frontier's weekly spotlight: 3 AI tools making the rounds this week.
Source: Google
Google has officially rolled out Gemini 2.0, making some of its most powerful AI models publicly available. This marks a major milestone in AI accessibility, as top-tier reasoning models, once limited to paid tiers, are now free for anyone with a Google account via the Gemini app.
Here's what you need to know:
🚀 The Gemini 2.0 Lineup: The lineup is pretty much made up of three models:
2.0 Flash Thinking Experimental – Enhanced reasoning with app integrations like YouTube, Maps, and Search
2.0 Pro Experimental – The most powerful coding & reasoning model yet, with a 2M-token context window
2.0 Flash-Lite – A cost-effective, developer-friendly multimodal model
This release indicates that Google is doubling down on AI democratization. By making advanced AI widely accessible, Google is challenging competitors like OpenAI and DeepSeek.
🤔 What's New in Gemini 2.0? For one, Gemini 2.0 Flash is now available to everyone. It features high-speed multimodal reasoning (1M-token context window) and is optimized for high-frequency, large-scale AI tasks.
Flash was initially an experimental AI model; now it's mainstream, serving two categories of users. Developers can use it to build powerful AI applications, and regular users can integrate it into daily life.
🛠️ Google's answer to GPT-4o: This is undoubtedly Gemini 2.0 Pro, Google's best AI yet for complex tasks. It works great for coding, reasoning, and world knowledge, with a massive 2M-token context window (twice as large as most competitors'). This allows it to provide elite-level reasoning while handling massive amounts of data in a single session.
💰 What about 2.0 Flash-Lite? This is Google's high-performance, low-cost model, faster and more efficient than 1.5 Flash. With multimodal support, it can analyze text, images, and videos, and even generate captions for 40,000 photos at under $1 in Google AI Studio. This makes it the affordable go-to solution for businesses, startups, and developers.
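As a quick sanity check on that sub-$1 claim, here's a back-of-envelope sketch in Python. The per-token prices and token counts below are illustrative assumptions for a pay-per-token model, not Google's published Flash-Lite rates:

```python
# Back-of-envelope estimate of what 40,000 photo captions might cost
# with a pay-per-token model. Prices and token counts are assumptions
# for illustration only, not Google's published Flash-Lite pricing.

PRICE_PER_M_INPUT = 0.075   # assumed $ per 1M input tokens
PRICE_PER_M_OUTPUT = 0.30   # assumed $ per 1M output tokens

def caption_cost(num_photos: int,
                 input_tokens_per_photo: int = 50,
                 output_tokens_per_caption: int = 40) -> float:
    """Estimated dollar cost of captioning num_photos images."""
    input_cost = num_photos * input_tokens_per_photo * PRICE_PER_M_INPUT / 1e6
    output_cost = num_photos * output_tokens_per_caption * PRICE_PER_M_OUTPUT / 1e6
    return input_cost + output_cost

print(f"~${caption_cost(40_000):.2f} for 40,000 captions")
```

With those assumed numbers the estimate comes out around $0.63, comfortably under the $1 figure; swap in the real published rates to get an exact number.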
🛡️ Google's AI Safety & Responsibility Approach: Google is adopting a three-pronged approach to ensure safety:
Gemini 2.0 models critique their own responses for accuracy
Automated red-teaming detects risks like indirect prompt injection
Built-in security measures prevent malicious AI misuse
Overall, Google's move just goes to show that the era of advanced AI being solely for power users might be over. As such, we can expect more free AI tools in Search, Docs, and other apps in Google's Workspace.
Figure AI has officially cut ties with OpenAI, opting to develop its own AI models to power its humanoid robots. CEO Brett Adcock calls this a "major breakthrough," claiming that the company has built something revolutionary that will be revealed within 30 days.
As always, here's the lowdown:
💔 Why Figure Split from OpenAI
Initially, Figure used OpenAI's models to enable natural language understanding in its humanoid robots. However, Adcock argues that:
Embodied AI needs full vertical integration, meaning AI and hardware must be built together.
General-purpose AI models (like OpenAI's) aren't optimized for physical robots, which require physics-based learning.
Outsourcing AI limits scalability; custom, in-house models allow better real-world adaptation.
🥊 The Competitive Tension with OpenAI: OpenAI may have played a role in this split, as it is rumored to be developing its own humanoid robots. Just last week, the company filed a trademark application mentioning:
"User-programmable humanoid robots"
"Communication and learning capabilities"
OpenAI is also backing 1X, a Norwegian robotics startup working on AI-powered home assistants, potentially a direct competitor to Figure.
🚀 Figure's Rapid Momentum: Despite the OpenAI split, Figure remains one of the most well-funded robotics startups, having raised $1.5 billion to date, including a $675M round last year that valued the company at $2.6 billion.
It recently expanded into a larger Bay Area office and has already deployed its Figure 02 robots on BMW production lines. The company's goal? To scale up to 100,000 humanoid robots across various industries.
🔮 What's Next? With Figure's self-built AI set to debut in a month, the robotics industry is watching closely. If Adcock's "breakthrough" delivers, it could mark a major leap forward for humanoid robots: one where they don't just execute pre-programmed actions but adapt and learn in real time.
Source: OpenAI
OpenAI has unveiled Deep Research, a new AI-powered agent designed to handle complex, long-form research within ChatGPT. Unlike standard chatbot responses, this feature analyzes multiple sources, compiles data, and generates well-cited reports, targeting users in finance, science, policy, and engineering.
Here's how:
🔍 How Deep Research Works: To use it, simply select "Deep Research" in the ChatGPT composer, enter a query, and attach relevant files or spreadsheets if needed. Unlike standard chatbot responses, Deep Research takes its time, anywhere from 5 to 30 minutes, to analyze information and generate a response. Users will receive a notification when the process is complete.
Right now, Deep Research is available to ChatGPT Pro users, allowing them to conduct more thorough investigations with 100 queries per month. OpenAI plans to expand access to Plus, Team, and Enterprise users soon, with query limits expected to increase over time. In addition, Deep Research is currently web-only and supports only text-based outputs.
🎯 Why This Matters: Deep Research is not just a search engine: it processes and interprets vast amounts of text, images, and PDFs, pivoting dynamically based on new insights. This makes it valuable for:
Academics & Scientists – Conducting peer-reviewed research with sources and citations.
Finance & Policy Analysts – Gathering market trends, economic indicators, and legal updates.
Engineers & Developers – Analyzing technical documentation, patents, and product research.
Serious Shoppers – Making informed decisions on big purchases like cars, appliances, and electronics.
📊 How Accurate Is It? OpenAI trained Deep Research using a special version of its o3 "reasoning" model, optimized for web browsing and data analysis. The model was tested on Humanity's Last Exam, an advanced AI benchmark, where it scored 26.6%, outperforming competitors like Gemini Thinking (6.2%), Grok-2 (3.8%), and OpenAI's own GPT-4o (3.3%).
However, OpenAI acknowledges limitations:
⚠️ It can make mistakes or misinterpret sources.
⚠️ It may not always indicate uncertainty in its answers.
⚠️ Citation formatting may be inconsistent.
The key question is whether users will double-check sources or blindly trust AI-generated reports. As Deep Research rolls out, we'll see whether it truly changes how people gather, analyze, and validate information online.
Source: u/TheEphemeric via Reddit
Yes, Potterhead, that's the Ministry of Magic! But this post isn't just for the Harry Potter fans in our midst. It's for lovers of fictional worlds overall, and there's something for everyone (we mean it!).
Spoiler alert: a little birdie says there's something Star Wars-themed here as well. Enjoy!
🎯 Everything else you missed this week.
Source: Freepik
🗣️ Meta unveils a new program, in partnership with UNESCO, to improve AI models for speech and translation.
🎨 OpenAI unveils a new logo amid a big rebranding effort.
📄 Adobe Acrobat's new AI contract intelligence capabilities allow it to decipher and summarize complicated contracts for users.
🏆 This AI-restored Beatles song won a Grammy for Best Rock Performance!
⚡ The Neural Frontier's weekly spotlight: 3 AI tools making the rounds this week.
Source: Freepik
1. ✍️ Hoppy Copy is an AI-powered email marketing platform that helps users create compelling email campaigns 10x faster. With over 100,000 users, the platform combines AI writing assistance with practical tools like spam checking and competitor analysis.
2. 🎬 Reap leverages AI to transform long-form video content into engaging social media shorts. The platform offers AI-driven clip curation and optimization, auto speaker detection and framing, multi-format aspect ratio adjustment, and transcript-based editing.
3. ✍️ QuillAI positions itself as an AI-powered content creation platform focused specifically on generating SEO-optimized content that ranks. The platform stands out with AI content generation with brand customization, built-in SEO optimization tools, and automated keyword research.
All in all…
The excitement never seems to let up, especially when AI tech is involved. From Gemini's release to Figure's big decision to go in-house, it's been a week full of updates, twists, turns, and everything in between.
If this is any indicator of the future, we're in for a ride next week.
And as always, you can count on us for your weekly roundup of the who's who and what's what in the AI space.
Till next week, Aloha (yes, this means goodbye too)! 👋
PS: While you're at it, please hit that Subscribe button 👇!