Hey friend. It's Thursday, November 13, 2025. The AI race is splitting into two distinct paths: brute force and new frontiers.
The price of admission to the frontier model club is now measured in tens of billions of dollars.
The original architects of the current AI wave are leaving to build what comes next.
Let's get into it. Don't keep us a secret: forward this email to your best friend.
Must Know
Anthropic is investing $50 billion to build massive AI data centers in Texas and New York. The investment is aimed at creating a sovereign AI infrastructure in the United States.
The plan is one of the largest private infrastructure projects in recent history and will create thousands of American jobs. It signals Anthropic's long-term commitment to developing foundational models at an unprecedented scale.
My Take: This is a declaration that building frontier AI is now a matter of nation-state-level capital expenditure. The $50B figure isn't just a large number; it's a moat designed to be insurmountable for all but a handful of hyperscalers and sovereign wealth funds. Anthropic is forcing the market to accept that the base cost of competing is no longer about research talent alone, but about securing a domestic, vertically integrated compute supply chain. This changes everything for venture capital and startups dreaming of building a foundational model.
Meta's Chief AI Scientist, Yann LeCun, is reportedly planning to leave the company to launch a new AI startup. The new venture will focus on developing "world models."
World models are systems that learn an internal model of their environment and use it to predict how actions will play out, a departure from the current large language model paradigm. The move comes amid reports of internal friction at Meta and signals a desire to pursue a different architectural path.
My Take: LeCun's departure is a high-profile intellectual rebellion against the current LLM scaling doctrine. While companies like Anthropic are spending billions to scale the existing paradigm, one of its chief architects is signaling that the future lies elsewhere. This isn't just a career move; it's a bet that causal, predictive world models—not just pattern-matching text predictors—are the true path to more capable AI. His exit creates a new pole in the AI landscape, attracting talent and capital to an alternative vision for intelligence. The race is no longer on a single track.
Quote of the Day
IBM just laid off 8,000 workers and replaced them with AI. The math: 8,000 workers × $80,000 per worker per year = $640,000,000 saved annually.
🏗️ The Platform & Infrastructure Layer
My take: With the frontier defined by massive capital bets, the race is on to build the physical and digital infrastructure—from autonomous cars to private clouds—that will actually run these models.
SoftBank's $5.83B Nvidia sale isn't a retreat from AI but a strategic reallocation of capital from public holdings to direct investments in the AI infrastructure and robotics those chips power. [Link]
Google's Private AI Compute is a critical enterprise play, directly addressing data privacy concerns to unlock the massive market of companies hesitant to send proprietary data to the cloud. [Link]
Parag Agrawal's Parallel raising $100M shows the next gold rush is in building the infrastructure to make the web legible and useful for AI agents, not just humans. [Link]
Waymo's full Bay Area expansion is a major milestone, proving autonomous ride-hailing can scale beyond limited geofences and tackle complex freeway environments. [Link]
Andrej Karpathy's praise for Tesla's FSD is a significant validation from a top expert, suggesting the vision-only approach is overcoming previous limitations in real-world performance. [Link]
UBTech's demonstration of a self-charging humanoid robot army for a 100M+ factory order signals the transition from R&D to scaled industrial deployment. [Link]
🤖 The Agentic Frontier
My take: As the infrastructure solidifies, the focus shifts to the agent layer, where the true value is unlocked by giving models the tools and autonomy to act in the world.
Grok's upcoming CLI agent is a direct challenge to GitHub Copilot, moving agentic coding from the cloud to the local machine for increased privacy and developer control. [Link]
LangChain's human-in-the-loop middleware is a crucial step towards enterprise adoption of agents, providing the safety and oversight required for high-stakes, autonomous tasks. [Link]
SuperAGI's AI-native project management is a bet that the future of work isn't just using AI tools, but embedding autonomous agents directly into core business workflows. [Link]
Tavus's AI office with human-like agents is a glimpse into the future of the user interface, moving beyond GUIs to conversational, autonomous task execution. [Link]
Google's Gemini File Search API drastically simplifies building RAG systems, lowering the barrier for developers to create powerful, context-aware agents without complex data pipelines. [Link]
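If you've never wired retrieval up by hand, here's a rough sketch of the plumbing that last item is talking about: chunk documents, embed them, rank chunks against the question, and stuff the winners into a prompt. It's a toy, with a bag-of-words counter standing in for a real embedding model and no vector store or Gemini calls at all, just an illustration of the kind of pipeline a managed file-search tool is meant to replace.

```python
# Toy retrieval pipeline: the plumbing a managed file-search tool hides.
# The bag-of-words "embedding" is a stand-in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def chunk(doc: str, size: int = 40) -> list[str]:
    # Naive fixed-size chunking by word count.
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, docs: list[str], k: int = 3) -> list[str]:
    # Rank every chunk against the question and keep the top k.
    chunks = [c for d in docs for c in chunk(d)]
    ranked = sorted(chunks, key=lambda c: cosine(embed(question), embed(c)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    docs = [
        "A managed file-search tool handles chunking, embedding, and retrieval for you.",
        "Hand-rolled RAG pipelines need chunking, an embedding model, a vector store, and retrieval code.",
    ]
    context = retrieve("what does a RAG pipeline need?", docs)
    prompt = "Answer using this context:\n" + "\n".join(context) + "\n\nQ: what does a RAG pipeline need?"
    print(prompt)  # In a real system, this assembled prompt goes to the model.
```

Even in toy form you can see why "no complex data pipelines" is the pitch: every one of those functions becomes real infrastructure at production scale.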
🔬 Research Corner
Fresh off arXiv
Google DeepMind's AlphaProof agent achieves Olympiad-level formal mathematical reasoning, a significant leap in verifiable AI capabilities for scientific discovery and robust system development. [Link]
AlphaResearch presents a framework for LLMs to discover novel algorithms that surpass existing human knowledge, moving beyond simply reusing known solutions. [Link]
Lumine provides the first open recipe for building generalist agents that can complete complex, hours-long missions in 3D open-world environments, demonstrating strong zero-shot generalization. [Link]
The MAKER system solves a task with over one million LLM steps and zero errors by decomposing it into subtasks for micro-agents, suggesting a path to organization-level problem solving; see the sketch after this list. [Link]
Interlat demonstrates that AI agents can communicate entirely in latent space, using the LLM's hidden states as a direct representation of its 'mind' for more efficient collaboration. [Link]
LoopTool creates a fully automated, closed-loop system where a model iteratively probes its own capabilities, verifies the results, and synthesizes new data to improve its tool-use skills. [Link]
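To make the MAKER result a bit more concrete, here's a conceptual sketch of the decompose-and-verify pattern it describes: every micro-step gets its own (simulated) micro-agent, is checked immediately, and is retried on failure so a single slip never compounds across a million steps. The agent and verifier below are hypothetical stand-ins, not the paper's actual system.

```python
# Conceptual sketch of the decompose-and-verify idea: break a long task into
# tiny steps, hand each to a fresh micro-agent, and verify before moving on.
import random

def call_micro_agent(step: int, state: int) -> int:
    # Hypothetical micro-agent: usually advances the state correctly,
    # but occasionally returns a wrong answer (simulated 5% error rate).
    return state + 1 if random.random() > 0.05 else state

def verify(step: int, before: int, after: int) -> bool:
    # Cheap per-step check; this is where errors get caught instead of compounding.
    return after == before + 1

def run_pipeline(num_steps: int) -> int:
    state = 0
    for step in range(num_steps):
        for _attempt in range(10):  # retry a flaky step rather than accept a bad result
            candidate = call_micro_agent(step, state)
            if verify(step, state, candidate):
                state = candidate
                break
        else:
            raise RuntimeError(f"step {step} failed verification after retries")
    return state

if __name__ == "__main__":
    # A million verified micro-steps; takes a few seconds and prints 1000000.
    print(run_pipeline(1_000_000))
```

The design point is that per-step verification turns one long, error-prone chain into a million small, independently recoverable problems.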
Have a tip or a story we should cover? Send it our way.
Cheers, Teng Yan. See you tomorrow.
