OpenAI unveils GPT-5.5, its smartest model yet!

Also: SpaceX strikes a $60B deal for the right to buy coding startup Cursor, while an unauthorized group reportedly gains access to Anthropic’s Mythos 💻.

In partnership with

Write docs 4x faster. Without hating every second.

Nobody became a developer to write documentation. But the docs still need to get written — PRDs, README updates, architecture decisions, onboarding guides.

Wispr Flow lets you talk through it instead. Speak naturally about what the code does, how it works, and why you built it that way. Flow formats everything into clean, professional text you can paste into Notion, Confluence, or GitHub.

Used by engineering teams at OpenAI, Vercel, and Clay. 89% of messages sent with zero edits. Works system-wide on Mac, Windows, and iPhone.

Last week, Anthropic released “its most powerful model yet.” And this week, OpenAI pretty much said “hold my beer” and dropped GPT-5.5, its smartest model yet. Truly, what a time it is to be alive 😏.

Forward thinkers, hello and welcome to issue #155 of the Neural Frontier.

Eager to jump in? So are we!

In a rush? Here's your quick byte: 

🤖 OpenAI unveils GPT-5.5, its smartest model yet!

💸 SpaceX strikes a $60B deal for the right to buy coding startup Cursor.

💻 An unauthorized group reportedly gains access to Anthropic’s Mythos!

⚡ The Neural Frontier’s weekly spotlight: 3 AI tools making the rounds this week.

Source: OpenAI

OpenAI has introduced GPT-5.5, its most capable model yet — and the positioning is very clear: this is a system built to take on real, multi-step work from start to finish.

🧠 From answering questions → completing tasks

GPT-5.5 is designed to handle messy, real-world workflows with far less hand-holding.

Instead of prompting it step by step, you can give it a complex task, let it plan, use tools, and iterate, then trust it to keep going until the job is done.

This spans everything from coding and research to spreadsheets, documents, and even operating software.
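The plan → use tools → iterate loop described above can be sketched in a few lines. Everything here is a toy illustration — the scripted `toy_model`, the tool names, and the action format are hypothetical stand-ins, not OpenAI’s actual API:

```python
# Minimal sketch of an agentic loop: the model repeatedly picks an action,
# a tool is executed, and the observation feeds back in until it says "done".
# The "model" below is a scripted stub, not a real LLM call.

TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "write_file": lambda text: f"saved {len(text)} chars",
}

def toy_model(task, history):
    """Scripted stand-in for the model: choose the next action from state."""
    if not history:
        return {"tool": "search", "arg": task}             # step 1: gather info
    if len(history) == 1:
        return {"tool": "write_file", "arg": history[-1]}  # step 2: produce output
    return {"done": True, "answer": history[-1]}           # step 3: finish

def run_agent(task, max_steps=10):
    """Loop (plan -> act -> observe) until the model reports it is done."""
    history = []
    for _ in range(max_steps):
        action = toy_model(task, history)
        if action.get("done"):
            return action["answer"]
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(observation)
    raise RuntimeError("step budget exhausted")

print(run_agent("summarize the Q3 report"))
```

The point of the shape, rather than the stub: the caller supplies only the task, and the loop — not the user — decides how many tool calls it takes to finish.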

⚡ Smarter and more efficient

What’s notable isn’t just capability — it’s efficiency.

GPT-5.5 matches GPT-5.4’s latency, uses fewer tokens to complete the same work, and delivers higher-quality outputs with fewer retries.

That combination (better + faster + cheaper per task) is what actually makes it usable at scale.

💻 Where it really shines

The biggest gains show up in areas where AI needs to think across time and context:

  • Agentic coding → handles long, complex engineering tasks end-to-end

  • Knowledge work → research, analysis, reporting, spreadsheets

  • Computer use → navigating tools, files, and workflows

  • Scientific research → multi-step reasoning and experimentation

In practical terms, this means AI is moving closer to being a true collaborator, not just a helper.

📊 Performance jump (quick snapshot)

GPT-5.5 pushes ahead of previous models (and competitors) across key benchmarks:

  • Stronger coding performance (e.g. Terminal-Bench, SWE tasks)

  • Higher knowledge-work accuracy (GDPval ~84.9%)

  • Improved real-world tool usage (OSWorld benchmarks)

Overall, it’s better at actually doing the job, not just sounding smart.

🔐 Built with tighter safeguards

With increased capability come tighter controls: expanded cybersecurity and misuse protections, new evaluation frameworks for high-risk domains, and “trusted access” pathways for sensitive use cases.

OpenAI is clearly anticipating stronger real-world impact — and the risks that come with it.

This release fits into a pattern you’ve probably noticed across recent AI launches:

  • Codex → can use your computer

  • Frontier / Cowork → agents that execute workflows

  • Perplexity Computer → long-running task systems

Now GPT-5.5 ties it all together. The model is more autonomous, more persistent, and more action-oriented.

Source: Mike Blake/Reuters

SpaceX is doubling down on AI — and this time, it’s going straight for one of the hottest categories: AI-powered coding.

🧠 What’s happening

  • SpaceX has secured an option to acquire Cursor for $60 billion

  • Or enter a $10 billion partnership instead

  • The move comes shortly after its merger with xAI

Clearly, Musk is building a serious play in the AI developer tools market.

⚙️ Why Cursor matters

Cursor isn’t just another startup — it’s part of a new wave of tools reshaping software development. It uses AI to automate coding workflows, is already gaining traction with professional developers, and competes in the same space as tools from OpenAI and Anthropic. 

This is one of the few AI segments with clear, immediate revenue, which makes it strategically valuable.

🧩 The real play: compute + distribution

This deal isn’t just about buying a product — it’s about combining strengths:

  • Cursor → product + developer adoption

  • SpaceX/xAI → massive compute infrastructure

At the center of this is Colossus, xAI’s supercomputer cluster claimed to be among the largest AI training systems and powered by “millions of H100-equivalent” compute.

Together, this could create a vertically integrated stack: models + compute + developer interface.

🌕 And yes… the moon is still involved

In true Musk fashion, this isn’t just about coding tools. Some Cursor engineers have already moved to work on orbital data centers and lunar infrastructure projects. It ties back to Musk’s broader vision: AI infrastructure extending beyond Earth (yes, really).

This move lands just ahead of SpaceX’s expected IPO:

  • Target valuation: ~$1.75 trillion

  • Potential raise: $75 billion

And it fits a clear pattern: merge xAI into SpaceX, add AI capabilities (Cursor), and build a unified “AI + space + compute” ecosystem.

Source: Benjamin Girette/Bloomberg / Getty Images

Just days after unveiling its new cybersecurity model, Anthropic is already dealing with a potential leak.

A group of unauthorized users reportedly gained access to Claude Mythos Preview — not by breaching Anthropic directly, but through a third-party vendor environment. The company says it’s investigating and, for now, has found no evidence that its core systems were compromised.

Still, the situation is notable because of what Mythos actually is.

🧠 Not just another model

Mythos was introduced under Project Glasswing as a defensive cybersecurity tool — built to scan codebases, identify vulnerabilities, and strengthen critical systems.

That’s also what makes this incident sensitive.

  • The model can surface zero-day vulnerabilities

  • It’s designed to analyze both proprietary and open-source systems

  • In the wrong hands, it could shift from defense → exploitation

Anthropic had already acknowledged this dual-use risk. This just brings it closer to reality.

🔓 How access happened

What stands out here is how the group got in. According to reports, they didn’t exploit a deep technical flaw. Instead, they:

  • Made an educated guess about the model’s endpoint structure

  • Leveraged access tied to someone within a vendor network

  • Began using the tool, even sharing screenshots and live demos

It’s a familiar pattern: not a dramatic “hack,” but a supply chain gap.

🧭 Why this matters

This isn’t just about one model or one company. It highlights a broader shift in how risk shows up in AI: when models become powerful enough to meaningfully impact systems, access itself becomes the vulnerability.

Anthropic tried to limit exposure by releasing Mythos to a small group of partners and keeping it out of general availability. But as this shows, even controlled rollouts can leak once multiple environments and actors are involved.

⚡ The Neural Frontier’s weekly spotlight: 3 AI tools making the rounds this week.

1. 🎤 Wispr Flow is an AI voice-to-text tool that turns spoken words into polished, perfectly formatted writing across every app — 4x faster than typing.

2. 🎓 X-Pilot is an AI course video generator that transforms PDFs, PPTs, and docs into polished, narrated video courses — with zero editing skills required.

3. 📺 Claras is an AI YouTube companion that instantly transcribes, summarizes, and lets you chat with any video — so you can extract knowledge without watching the whole thing.

Wrapping up…

With so many “most powerful models yet,” we’re pretty much spoilt for choice these days. But that’s the beauty of competition, isn’t it?

As you give these tools a whirl, remember that even more updates are on the roadmap. And as always, we’ll be here with the scoop, same time, same place.

Catch you next week on The Neural Frontier! 👋

PS: If you’re the “friend” this mail was forwarded to, and you enjoyed it, hit the Subscribe button to see more content like this every week 🙂