Hey friend. It's Wednesday, November 12, 2025.
The era of AI as a software-only game is officially over.
The AI stack is bifurcating along geopolitical lines, with control over silicon as the new frontline.
Frontier model players are now forced to become infrastructure companies, spending billions to secure their own compute.
Let's get into it.
Don't keep us a secret: Forward this email to your best friend
Must Know
China has mandated the exclusive use of domestically produced AI chips in all state-funded data centers, effectively banning foreign suppliers like Nvidia and AMD from a significant portion of the market.
The new directive is part of Beijing's broader strategy to achieve technological sovereignty and reduce reliance on Western technology amid escalating geopolitical tensions. This move forces Chinese tech giants and government entities to accelerate their adoption of homegrown silicon.
My Take: The splintering of the internet is now followed by the fracturing of the AI stack. This isn't just a trade dispute; it's the formal beginning of a technological cold war where parallel, non-interoperable hardware and software ecosystems will develop. The immediate loser is Nvidia, but the long-term consequence is a global AI landscape defined by national borders, not open innovation. The race for technological supremacy just became a matter of national security.
Anthropic has announced a massive $50 billion investment to build and operate its own AI data centers in Texas and New York, signaling a strategic move to control its own compute infrastructure.
The investment aims to create thousands of jobs and reduce Anthropic's reliance on third-party cloud providers like AWS and Google Cloud. This follows a trend of major AI labs vertically integrating to secure the vast computational resources needed for training frontier models.
My Take: The AI arms race is now a capital expenditure war. This move proves that to compete at the frontier, you can no longer just be a model company; you must become an infrastructure company. Anthropic is betting that owning the physical layer provides a durable competitive advantage in a world of compute scarcity and supply chain volatility. The stakes are now clear: secure your own compute or risk being throttled by your rivals.
Quote of the Day
We find that when developers use AI tools, they take 19% longer than without—AI makes them slower.
📈 The New Capital Cycle
My Take: The fight for compute infrastructure is triggering a massive capital reallocation, where owning the rails is now more valuable than just riding them.
SoftBank's $5.83B Nvidia stake sale to fund its own AI infrastructure is the ultimate signal that smart money is moving from selling shovels to building the mines. [Link]
Snapchat's $400M deal to make Perplexity its default AI is a massive distribution play that instantly creates a new challenger to Google's consumer search dominance. [Link]
IBM's layoff of 8,000 workers to save $640M annually is one of the first large-scale, publicly quantified examples of AI-driven labor displacement, setting a precedent for enterprise cost-cutting. [Link]
UBTech's 100M+ factory order for humanoid robots shows that autonomous robotics is moving from R&D to scaled industrial deployment, directly targeting manufacturing and logistics labor. [Link]
Yann LeCun's planned departure from Meta to build a 'world models' startup signals a belief among top researchers that the current LLM paradigm has hit a wall, requiring new foundational approaches. [Link]
🤖 The Agentic Layer Emerges
My Take: As capital solidifies the infrastructure layer, the value is moving up the stack to the agents that can actually execute tasks and create new workflows.
Microsoft's confirmation that Windows is becoming an 'agentic OS' is a fundamental platform shift, aiming to embed proactive AI assistance directly into the operating system's core. [Link]
SuperAGI's AI-native project management platform is a direct challenge to incumbents like Asana and Jira, betting that agent-driven workflows will replace manual task management. [Link]
Google's new Gemini File Search API is a critical piece of plumbing for developers, simplifying the creation of agentic RAG systems that can reason over private knowledge bases. [Link]
ElevenLabs' Scribe v2 Realtime transcription model provides the low-latency audio input necessary for building truly responsive, voice-native AI agents. [Link]
The development of 'The Station,' where AI agents autonomously conduct scientific research, marks a significant leap from AI as a tool to AI as an independent engine of discovery. [Link]
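For readers newer to the space: the "agentic RAG" plumbing that services like Gemini's File Search abstract away boils down to a retrieve-then-prompt loop. Here is a minimal, self-contained sketch of that pattern, using a toy bag-of-words retriever and a hypothetical three-document corpus in place of real embeddings and a real SDK; it illustrates the shape of the workflow, not any vendor's actual API.

```python
# Minimal sketch of the retrieval step in an agentic RAG loop:
# "embed" documents and the query, rank by cosine similarity,
# and splice the best chunk into the model prompt.
# Toy bag-of-words vectors stand in for real embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Hypothetical private knowledge base.
corpus = [
    "Q3 revenue grew 12 percent driven by cloud contracts",
    "The incident postmortem blames a misconfigured load balancer",
    "Hiring plan adds 40 engineers to the agents team",
]

context = retrieve("what caused the outage incident", corpus, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQ: What caused the outage?"
print(context[0])
```

A hosted file-search API performs the chunking, embedding, and ranking server-side, so the developer's code collapses to roughly the last two lines: ask for relevant context, then hand it to the model.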
🔬 Research Corner
Fresh off Arxiv
AlphaResearch introduces an autonomous research agent that can discover new, state-of-the-art algorithms for open-ended problems, demonstrating that LLMs can accelerate novel scientific and mathematical discovery. [Link]
SciAgent is a unified multi-agent system that matches or surpasses human gold-medalist performance in math, physics, and chemistry Olympiads, showcasing a new level of generalist scientific reasoning. [Link]
The Motif-2-12.7B technical report shows that a smaller, open-weight model can rival much larger models through architectural innovation and system-level optimization, proving efficiency can compete with scale. [Link]
MADD (Multi-Agent Drug Discovery Orchestra) uses a team of coordinated agents to build and execute drug discovery pipelines from natural language, making complex AI tools more accessible to lab researchers. [Link]
Spark proposes a shared agentic memory architecture that mimics human developer communities, allowing AI coding agents to learn from a persistent, collective experiential memory to improve code generation. [Link]
Scientists unveiled the first microwave-powered computer chip, a hardware breakthrough that promises dramatically faster processing and lower power consumption, potentially revolutionizing AI hardware efficiency. [Link]
Have a tip or a story we should cover? Send it our way.
Cheers, Teng Yan. See you tomorrow.
