Google’s Gemini 3 Flash is here ⚡
Also: OpenAI rolls out improved ChatGPT Images, while Zoom releases AI Companion 3.0.

And the product releases keep on…keeping on? 😆
Forward thinkers, welcome to another issue of the Neural Frontier!
In this week’s issue, we’re unwrapping (pun intended) a new model from Google and two pretty hefty product updates from OpenAI and Zoom.
Ready? Let’s dive in!
In a rush? Here's your quick byte:
⚡Google’s Gemini 3 Flash is here!
🎨 OpenAI rolls out improved ChatGPT Images.
🤖 Zoom releases AI Companion 3.0!
🎭 AI Reimagines: The Maiden of the Sword!
🎯 Everything else you missed this week.
⚡ The Neural Frontier’s weekly spotlight: 3 AI tools making the rounds this week.

Source: Google
Google is turning up the pressure on OpenAI. The company has released Gemini 3 Flash, its fast, low-cost model built on last month’s Gemini 3 foundation — and immediately made it the default model across the Gemini app and AI Mode in Search.
It’s a loud signal: speed, scale, and everyday usage now matter as much as frontier reasoning.
🧠 What Gemini 3 Flash is designed for: Flash is Google’s workhorse model — optimized for high-volume, repeatable tasks where latency and cost matter more than maximum depth.
While it’s positioned as “fast and cheap,” Google says the jump from Gemini 2.5 Flash is substantial. On several benchmarks, Gemini 3 Flash now competes directly with top-tier models like Gemini 3 Pro and GPT-5.2.
On Humanity’s Last Exam (without tool use), it scored 33.7%, nearly matching GPT-5.2 (34.5%) and closing the gap with Gemini 3 Pro (37.5%). On the multimodal MMMU-Pro benchmark, it topped the field with an 81.2% score.
📱 Rolling out to consumers by default: Google is replacing Gemini 2.5 Flash with Gemini 3 Flash as the default model globally in the Gemini app and AI Mode in Search. Users can still manually switch to Gemini 3 Pro for heavier math or coding tasks, but Flash is now the everyday experience.
The company is leaning hard into multimodality here. Gemini 3 Flash is pitched as better at understanding what users actually mean, not just what they type.
That includes things like the examples below (quick code sketch right after the list):
Uploading a short sports video and asking for technique tips
Sketching a drawing and having the model interpret it
Uploading audio to get analysis or generate quizzes
Getting more visual answers with tables and images
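Curious what that looks like in code? Here's a minimal sketch of a multimodal call using Google's google-genai Python SDK. Heads up: the "gemini-3-flash" model id is our guess based on Google's naming pattern, so check the official model list first.

```python
# Minimal multimodal call with Google's google-genai SDK.
# "gemini-3-flash" is a GUESSED model id based on Google's naming
# convention; confirm the real id in the official docs before running.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Mirror the first example above: upload a short sports clip and ask
# for technique tips in the same request.
with open("golf_swing.mp4", "rb") as f:
    video_bytes = f.read()

response = client.models.generate_content(
    model="gemini-3-flash",  # guessed id, see note above
    contents=[
        types.Part.from_bytes(data=video_bytes, mime_type="video/mp4"),
        "What could I improve about this swing?",
    ],
)
print(response.text)
```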
💰 Pricing and performance tradeoffs: Gemini 3 Flash is slightly more expensive than its predecessor:
$0.50 / 1M input tokens
$3.00 / 1M output tokens
That’s up from Gemini 2.5 Flash’s $0.30 / $2.50 pricing. But Google claims Gemini 3 Flash outperforms Gemini 2.5 Pro while running three times faster and using 30% fewer tokens on thinking tasks, which could lower total costs in practice (rough math below).
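To sanity-check that claim, here's a back-of-envelope script using the prices above. The monthly workload numbers are invented purely for illustration:

```python
# Back-of-envelope cost comparison using the per-1M-token prices above.
# The monthly workload numbers below are invented for illustration.
PRICES = {  # dollars per 1M tokens: (input, output)
    "gemini-2.5-flash": (0.30, 2.50),
    "gemini-3-flash": (0.50, 3.00),
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Total monthly spend in dollars for a given token volume."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical thinking-heavy workload: 50M input, 200M output tokens/month.
old = monthly_cost("gemini-2.5-flash", 50e6, 200e6)
# Apply Google's claimed ~30% reduction in thinking tokens to the output side.
new = monthly_cost("gemini-3-flash", 50e6, 200e6 * 0.7)
print(f"2.5 Flash: ${old:,.2f} vs 3 Flash: ${new:,.2f}")  # $515.00 vs $445.00
```

The takeaway: output-heavy (i.e., thinking-heavy) workloads benefit most from the token savings, while input-heavy ones can actually end up pricier at the new rates. Worth checking your own ratios.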

Source: OpenAI
OpenAI is doubling down on visuals. This week, the company rolled out GPT-Image-1.5, a new version of ChatGPT Images that promises stronger instruction-following, more precise edits, and image generation up to 4× faster.
The release is available immediately to all ChatGPT users and via the API — and it’s clearly part of OpenAI’s broader “code red” response to Google’s recent surge with Gemini 3 and Nano Banana Pro.
🧠 What actually improves in GPT-Image-1.5: The biggest upgrade isn’t flashy effects — it’s consistency.
Most image generators struggle with iteration. Ask for a small change, and the entire image gets reinterpreted. GPT-Image-1.5 is designed to avoid that, reliably changing only what you ask for while keeping facial likeness, lighting, composition, and color tone intact across edits.
That makes it far more usable for real workflows like these (rough code sketch right after the list):
Photo retouching and post-production
Brand-safe marketing visuals
Clothing, hairstyle, and style try-ons
Step-by-step creative iteration without visual drift
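Here's a rough sketch of what that kind of targeted edit could look like through OpenAI's Images API in Python. One big caveat: the "gpt-image-1.5" model id is our assumption, following the existing "gpt-image-1" naming, so verify it against OpenAI's model list before relying on it.

```python
# Sketch of a targeted edit via the OpenAI Images API (Python SDK).
# "gpt-image-1.5" is an ASSUMED model id based on the existing
# "gpt-image-1" naming; verify against OpenAI's published model list.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="gpt-image-1.5",  # assumed id, see note above
    image=open("portrait.png", "rb"),
    prompt=(
        "Change the shirt to navy blue. Keep the face, lighting, "
        "composition, and color tone exactly as they are."
    ),
)

# gpt-image models return base64-encoded image data.
with open("portrait_v2.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```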
OpenAI says this is its most capable general-purpose image model yet — not because it’s wild, but because it listens.
🖌️ From prompt box to “creative studio”: Alongside the model upgrade, OpenAI is rethinking how images live inside ChatGPT.
Images now get a dedicated entry point in the sidebar, designed to feel more like a lightweight creative studio than a chat command. Users can browse trending prompts, apply preset styles, and iterate visually — without constantly rewriting instructions.
OpenAI's CEO of Applications, Fidji Simo, describes the goal simply: when visuals communicate better than words, ChatGPT should show them. And when the next step lives in another tool, it should already be there.
🔤 Better text, better structure: GPT-Image-1.5 also makes meaningful progress on text rendering, one of the hardest problems in image generation.
The model handles:
Denser and smaller text
Structured layouts (tables, posters, infographics)
Markdown-like formatting preserved inside images
This makes it more viable for things like ads, slides, posters, and UI-style graphics — areas where previous models regularly broke down.
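If you want to poke at that yourself, a structured-text prompt might look something like this (same caveat as above on the assumed model id):

```python
# Sketch: generating a poster with dense, structured text via the
# OpenAI Images API. The model id is an assumption, as noted earlier.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1.5",  # assumed id
    size="1024x1536",       # portrait, poster-shaped
    prompt=(
        "A minimalist conference poster titled 'Neural Frontier Summit'. "
        "Include a three-column schedule table with session names and "
        "times, and a small-print footer reading 'neuralfrontier.example'."
    ),
)

with open("poster.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```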

Source: Zoom
Zoom is pushing hard to escape the “just meetings” box. This week, the company unveiled AI Companion 3.0, a major upgrade that introduces agentic workflows, a new AI-first web surface, and deeper integration across Zoom’s ecosystem.
🧠 From meetings to “intelligent work orchestration”
AI Companion 3.0 represents a strategic shift. Zoom is framing itself less as a video platform and more as an AI-powered work coordinator that understands context across meetings, chats, docs, and connected tools.
At the core is a new conversational work surface that pulls insights directly from your Zoom activity — without needing you to upload transcripts, hunt for notes, or craft complex prompts. You open it, and it already knows what you’ve been discussing.
⚙️ A federated AI stack (not one model to rule them all): Rather than betting on a single model, Zoom is doubling down on a federated AI approach. AI Companion 3.0 blends Zoom’s own models with third-party LLMs from OpenAI and Anthropic, alongside open-source options like NVIDIA Nemotron.
Zoom says this setup allows it to optimize for different tasks — from transcription and retrieval to reasoning and writing — while maintaining stronger privacy controls. Notably, Zoom reiterated that it does not train models on customer communications.
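To make "federated" a bit more concrete, here's a toy sketch of the routing idea. To be clear, this is purely our illustration, not Zoom's implementation, and every model name in it is a placeholder:

```python
# Toy illustration of federated model routing: send each task type to the
# model family best suited for it. All names below are placeholders, not
# Zoom's actual models.
ROUTES = {
    "transcription": "in-house-asr",          # latency-sensitive, on-platform
    "retrieval": "in-house-embeddings",       # privacy-sensitive lookups
    "reasoning": "third-party-frontier-llm",  # e.g., OpenAI or Anthropic
    "writing": "open-source-llm",             # e.g., NVIDIA Nemotron
}

def route(task_type: str) -> str:
    """Pick a model for a task, falling back to the reasoning tier."""
    return ROUTES.get(task_type, ROUTES["reasoning"])

print(route("transcription"))  # -> in-house-asr
```

The design choice mirrors what Zoom describes: in-house models where privacy and latency dominate, bigger third-party models where reasoning and writing quality matter.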
🧩 What AI Companion 3.0 can now do
Instead of listing everything, here’s the real upgrade: AI Companion now behaves more like a junior operator than a summarizer.
Key capabilities include:
Agentic retrieval, searching across meeting summaries, transcripts, notes, and connected tools like Google Drive and OneDrive (with Gmail and Outlook coming soon)
Post-meeting follow-ups, automatically generating tasks and drafting emails
Daily reflection reports, summarizing meetings, tasks, and updates to help users prioritize their day
Agentic writing, drafting and refining documents in a canvas-style interface, grounded in actual meetings and resources
Exports and collaboration, with support for formats like PDF, Word, MD, and Zoom Docs, plus shared commenting and version history
For enterprise customers, a deep research mode can analyze multiple meetings and documents to generate more comprehensive insights.
🗂️ Notes, workflows, and less manual cleanup
Zoom is also targeting one of work’s biggest pain points: the mess after meetings.
With My Notes (coming soon), AI Companion can transcribe in-person meetings, Zoom calls, or even meetings on other platforms — keeping everything in one place. Meanwhile, personal workflows (currently in beta) can automate repetitive follow-ups, like compiling insights into a daily report or summarizing Team Chat activity each morning.
The idea is simple: fewer tabs, fewer reminders, less mental bookkeeping.

Source: u/Zaicab via Reddit
Excalibur? Sword in the lake? Yeah, this definitely brings back those memories. For all you fantasy buffs out there, you’re gonna feel right at home with this showcase.
🎯 Everything else you missed this week.

⚡ The Neural Frontier’s weekly spotlight: 3 AI tools making the rounds this week.
1. 📈 AirOps is a content engineering platform that combines SEO, AI search analytics, and automated workflows to help brands create high-quality content that earns visibility and citations across traditional and AI-powered search.
2. 🎯 Cluely is an undetectable AI meeting assistant that provides real-time answers and generates meeting notes without appearing as a visible bot participant in video calls.
3. 🧠 Supermemory is a long-term memory API for AI applications that builds adaptive, graph-based memory systems to help AI agents remember user preferences, relationships, and context across interactions.
Wrapping up…
We’ve had a flurry of product updates in these “-ember” months, and it’s all pointing to one thing: most companies want to end the year with a bang.
Does this mean there’s even more in store this year? Maybe. Maybe not.
If it’s the former, you know where to get your updates from.
Hit that subscribe button if you haven’t already, and as always, we’ll catch you next week on the Neural Frontier ✨.