X/Twitter AI Pulse — 2026-05-13
This week's biggest AI story is Anthropic's leap toward a near-trillion-dollar valuation, fueled by its newly released "Mythos" model and surging revenue figures that put it ahead of OpenAI in business growth. AI-enabled hacking hit a landmark moment as Google confirmed that a major cybercrime group used AI to autonomously discover and exploit a zero-day vulnerability for the first time. Meanwhile, the EU is pressing OpenAI and Anthropic for direct model access, ramping up regulatory scrutiny of the industry's frontier models.
Top AI Discussions This Week
Anthropic's Near-Trillion-Dollar Fundraise Stuns the Tech World
- Who's talking: AI investors, researchers, and tech media across X/Twitter
- What happened: The New York Times reported that Anthropic is in talks to raise funding at a $950 billion valuation — more than doubling its previous valuation of $380 billion. The company recently released a powerful model called "Mythos" and is separately embroiled in a dispute with the Pentagon.
- Key takes: The community is reacting with a mix of awe and skepticism. Forbes noted that Anthropic now leads OpenAI in business growth with a reported $44B ARR and 70% margins, despite OpenAI's $852B valuation. Debates are swirling about whether these valuations reflect sustainable fundamentals or speculative excess — and when either company will actually IPO.
- Why it matters: A $950B AI startup valuation would be historic, signaling that capital markets still believe frontier AI labs are among the highest-value assets on earth, even amid ongoing cost and profitability questions.

AI Used for First-Ever Autonomous Zero-Day Exploit Discovery
- Who's talking: Security researchers, AI safety advocates, and developers on X/Twitter
- What happened: Reuters reported on May 11 that Google confirmed hackers from a prominent cybercrime group used AI to discover a previously unknown software vulnerability and develop an exploit autonomously — the first known instance of this capability being used in the wild.
- Key takes: The AI security community is alarmed. Many researchers note this validates long-standing warnings about offensive AI capabilities accelerating beyond defensive ones. The Air Street Press "State of AI: May 2026" report had already flagged that frontier cyber-offense capability is "doubling every four months."
- Why it matters: This marks a qualitative shift in the threat landscape. AI-assisted zero-day discovery had been a theoretical concern; it is now a confirmed real-world attack vector, likely to intensify regulatory and industry pressure around dual-use AI.

EU Presses OpenAI and Anthropic for Direct Model Access
- Who's talking: EU regulators, AI policy watchers, enterprise AI users
- What happened: The European Commission formally pressed OpenAI and Anthropic for direct AI model access, seeking safety reviews of GPT-5.5-Cyber and Mythos, and asserting EU oversight authority under emerging AI regulations.
- Key takes: Observers on X note this could set a precedent for governments demanding "keys" to frontier AI systems. Some argue this is a reasonable safety mechanism; others warn it creates backdoors and geopolitical leverage. The timing — as both companies are reportedly raising enormous new rounds — adds political complexity.
- Why it matters: This is one of the first concrete enforcement-adjacent moves by a major government under new AI oversight frameworks, and it could reshape how frontier models are deployed in Europe.

Hot Debates & Controversies
Zvi Mowshowitz and the Community Debate Claude Opus 4.7's Benchmarks
- Side A: @TheZvi and others argue Claude Opus 4.7 represents a meaningful capability leap — notably, its training cutoff jumped from May 2025 (Opus 4.6) to end of January 2026, and it has taken the #1 spot on Artificial Analysis benchmarks. The improved knowledge currency is seen as a "big practical deal."
- Side B: Skeptics counter that benchmark leadership is increasingly fleeting — models leapfrog each other every few months — and question whether marginal benchmark gains translate to real-world task performance.
- Current status: The debate is ongoing. Community consensus seems to be that Opus 4.7 is a solid upgrade, but the broader war for model supremacy remains wide open.

Perceptron Mk1: Cheap Video AI vs. Big Lab Pricing
- Side A: Perceptron Mk1 has generated buzz by claiming its video analysis model is 80–90% cheaper than Anthropic, OpenAI, and Google alternatives, while delivering competitive performance. Early adopters report using it for auto-clipping live sports highlights using temporal scene understanding.
- Side B: Skeptics question whether the cost savings hold at scale and whether a smaller lab can sustain model quality and reliability against well-resourced incumbents. Others raise concerns about long-term enterprise support.
- Current status: Still early days, but the announcement is forcing a community conversation about whether frontier pricing by big labs is justified — or whether cost compression is coming faster than expected.

Notable AI Announcements
- White Circle (French startup): Raised $11M in seed funding backed by operators from OpenAI, Anthropic, DeepMind, Hugging Face, Mistral, Datadog, and Sentry to build enterprise AI monitoring and security tools — community reaction is enthusiastic, noting the all-star backer list signals this is a serious problem worth solving.
- Anthropic (Mythos + $950B valuation talks): In addition to the funding talks, Anthropic's recently released "Mythos" model is drawing attention as the apparent driver of its accelerating ARR and margin story — community reaction ranges from impressed to questioning whether the valuation math holds long-term.
- Sequoia Capital (AI Ascent 2026): Sequoia posted highlights from its AI Ascent 2026 event featuring talks with Andrej Karpathy, Demis Hassabis, Jim Fan, and others — community reaction has been positive, with many clipping and sharing Karpathy's segment in particular.

Thought Leader Spotlight
@TheZvi on Claude Opus 4.7 Capabilities
- Key quote/insight: Zvi highlighted that Opus 4.7's training cutoff moved from May 2025 to end of January 2026 — calling it "a big practical deal" — and noted the model now holds the #1 position on Artificial Analysis benchmarks (in a tie).
- Context: Posted as a detailed two-part breakdown of Opus 4.7 capabilities and community reactions to the model's release.
- Community reaction: The knowledge cutoff point resonated strongly — many users noted that an 8-month improvement in training data currency matters enormously for real-world use cases involving current events, regulations, and market data.

@DataChaz on Karpathy's AI Agent Playbook
- Key quote/insight: Referencing Andrej Karpathy's widely circulated advice, the post states: "Karpathy was right. He warned that 90% of AI advice dies in 6 months. Most tools will not even survive 90 days." The thread frames 2026 as the year to focus on durable AI agent fundamentals, not hype-driven tooling.
- Context: Prompted by the rapid churn of AI tools and the Sequoia AI Ascent 2026 event featuring Karpathy.
- Community reaction: Widely reshared, with many developers expressing relief at having a framework for cutting through tool-of-the-week noise. Others debated whether "AI whispering" is actually a durable skill or itself a transitional phase.

What to Watch Next Week
- Anthropic funding round close: Watch for confirmation (or complications) on the reported $950B valuation raise, which would be the largest private AI financing event in history.
- EU AI model access negotiations: The European Commission's demand for direct access to GPT-5.5-Cyber and Mythos is a developing regulatory story — any formal response from OpenAI or Anthropic could set major precedents.
- AI-enabled cyberattacks in the wild: Following Google's confirmation that AI was used to discover a zero-day exploit, security researchers and policymakers are expected to respond — watch for new guidance, disclosures, or emergency regulatory action.

This content was collected, curated, and summarized entirely by AI — including how and what to gather. It may contain inaccuracies. Crew does not guarantee the accuracy of any information presented here. Always verify facts on your own before acting on them. Crew assumes no legal liability for any consequences arising from reliance on this content.