
Forking into Google’s AI Layers

Hard Fork AI · 16m 13s

The episode covers Google's three-layer AI strategy announced at Cloud Next, including new TPU chips, Chrome's AI coworker feature, and a multi-billion-dollar compute deal with Thinking Machine Labs. Additional stories cover Neocognition's $40M agent-specialization startup, a security breach of Anthropic's Mythos cybersecurity tool, and OpenAI's enterprise distribution deal with Infosys.

Summary

The episode opens with a framing of Google's Cloud Next conference as a significant strategic moment, with the host arguing that Google's simultaneous announcements across chips, browser integration, and compute partnerships make it the most structurally complete player in the AI stack — potentially ahead of OpenAI and Amazon.

The first company discussed is 10xScience, a Stanford spin-out from Nobel laureate Carolyn Bertozzi's lab, which raised a $4.8 million pre-seed led by Initialized Capital. The company addresses a bottleneck in AI-driven drug discovery: AI models like DeepMind's protein predictor generate thousands of drug candidates, but the triage process using mass spectrometry is slow and expert-dependent. 10xScience builds a SaaS layer that combines deterministic chemistry with AI agents to make molecular analysis traceable and explainable — a requirement for regulatory approval that pure black-box AI cannot satisfy. The host highlights this as an underappreciated "picks and shovels" layer beneath the generative AI biotech hype.

Next, Neocognition is introduced as an AI research lab emerging from stealth with $40 million in seed funding, founded by an Ohio State professor. Backed by Cambium Capital, Walden Catalyst Ventures, Intel CEO Lip-Bu Tan, and Databricks co-founder Ion Stoica, the company's thesis is that current AI agents succeed only about 50% of the time because they are unreliable generalists. Neocognition aims to build agents that self-specialize when dropped into new domains — mirroring how humans rapidly adapt — rather than requiring hand-crafted custom agents per vertical. The host notes from personal experience that the per-vertical custom agent approach doesn't scale because engineering capacity runs out before use cases do.

The episode then covers a security incident at Anthropic, where an unauthorized group accessed Mythos, Anthropic's exclusive enterprise cybersecurity AI tool. The breach reportedly occurred through stolen contractor credentials combined with guessing the model's endpoint URL, exploiting Anthropic's predictable URL naming patterns. The group demonstrated its access to Bloomberg reporters via screenshots and a live demo. Anthropic stated it found no evidence that its own systems were compromised, only the vendor environment. The host notes the timing is poor given Anthropic's reported early IPO talks with Goldman Sachs, JP Morgan, and Morgan Stanley for an October offering.

The OpenAI-Infosys deal is covered next: a partnership to distribute ChatGPT and Codex across Infosys's enterprise client base in 60-plus countries. Infosys, which generated $267 million in AI service revenue last quarter, provides a channel into Fortune-tier accounts OpenAI doesn't reach directly. The host frames this as OpenAI's attempt to close the enterprise gap with Anthropic, which now reportedly has over $30 billion in annualized revenue. The deal mirrors Microsoft's bundling strategy but reaches accounts where Microsoft doesn't always have distribution.

The bulk of the episode focuses on Google's three major Cloud Next announcements. First, Google unveiled two new TPUs: the TPU 8T for training and the TPU 8I for inference, claiming 3x faster training and 80% better performance per dollar versus Nvidia alternatives, with the ability to scale to over one million TPUs in a single cluster. Google is also reselling Nvidia's Vera Rubin chips through Google Cloud, letting customers choose — a contrast with AWS's more locked-in approach. Second, Chrome is gaining an AI coworker feature called Auto Browse, powered by Gemini, which reads context across open tabs and automates workplace tasks like CRM entry, vendor quote comparison, and competitor research. The host compares this unfavorably to Claude's computer use, which operates at the desktop level with file access and without requiring constant human approval. Third, Google signed a multi-billion-dollar compute deal with Thinking Machine Labs — founded by former OpenAI CTO Mira Murati — giving the company access to Nvidia GB300 systems on Google Cloud for its product Tinker, a tool for building custom frontier models. Thinking Machine Labs is raising at a $12 billion valuation.

The host argues Google is executing a coherent three-layer strategy: silicon at the bottom (TPUs plus resold Nvidia), frontier lab compute hosting in the middle (Anthropic and now Thinking Machine Labs), and the agent/browser layer at the top (Chrome + Workspace). No other company credibly covers all three layers simultaneously. Caveats include Gemini's continued benchmark gap versus GPT and Claude, and the fact that Google's 3x TPU performance claim is self-reported marketing that has not yet been verified by independent labs. The host also flags potential DOJ and EU regulatory scrutiny given Google's ongoing antitrust case and Chrome's new data-aggregating capabilities.

Key Insights

  • The host argues Google is the only AI company credibly operating across all three layers of the AI stack simultaneously — silicon, frontier lab compute hosting, and end-user agent applications — giving it a structural advantage over Microsoft, Amazon, and Anthropic, each of which only strongly occupies one or two layers.
  • The host contends that Anthropic's Mythos breach was not a model exploit but rather the result of stolen contractor credentials and a predictable URL naming pattern, suggesting the security incident reflects vendor management failures rather than a fundamental vulnerability in the AI tool itself.
  • The host argues that the real bottleneck in AI-driven drug discovery is not generating candidates — models already produce thousands — but triaging them in a way that is traceable and explainable enough to satisfy regulators, which is the problem 10xScience is specifically targeting and which almost no other AI biotech company is addressing.
  • Neocognition's founding thesis, as described by the host, is that current AI agents fail roughly 50% of the time because they are built as generalist systems, and that the solution is agents that self-specialize when entering a new domain — the same way humans rapidly adapt — rather than requiring engineers to hand-craft a custom agent for every vertical use case.
  • The host claims that OpenAI's enterprise ChatGPT licenses have often sat unused at large companies for up to a year because they were not integrated into real workflows, framing the Infosys deal as an attempt to solve this distribution and implementation gap as Anthropic pulls ahead with over $30 billion in annualized enterprise revenue.

Topics

  • Google's three-layer AI stack strategy at Cloud Next
  • Neocognition's self-specializing AI agents
  • Anthropic Mythos security breach
  • OpenAI-Infosys enterprise distribution deal
  • 10xScience drug candidate triage platform
  • Google TPU 8T and 8I chip announcements
  • Chrome Auto Browse AI coworker feature
  • Thinking Machine Labs compute deal with Google

Full transcript available for MurmurCast members
