UpRound by Brinc

The Efficiency Revolution: What Chinese AI Means for Infrastructure Investors

For most of the AI era, Chinese models were a step behind. Close enough to be interesting, not close enough to matter for production workloads. That changed in 2025.

Chinese open-source models went from 1.2% of global usage in late 2024 to nearly 30% by the end of 2025. Qwen overtook Llama as the most downloaded model family on Hugging Face. DeepSeek's reasoning models started matching frontier Western benchmarks. Google DeepMind's CEO acknowledged the gap had narrowed to "a matter of months." The data stopped being dismissible.

The New Landscape

China's "Four Open-Source Masters" have each carved out distinct positions at the frontier:

DeepSeek proved that engineering efficiency can substitute for raw compute. Their R1 model matched frontier Western reasoning benchmarks while its base model was trained on roughly 2,000 NVIDIA H800 GPUs, versus the 16,000-GPU clusters behind comparable Western models. They did it under US export restrictions specifically designed to prevent that outcome, and the techniques they developed to work around hardware constraints have since been adopted across the industry.

Alibaba's Qwen has become the most downloaded model family on Hugging Face, overtaking Meta's Llama. Over 40% of all new model derivatives on Hugging Face are now built on Qwen, compared to 15% for Llama. That's platform-level adoption.

Moonshot AI's Kimi K2 is arguably the best open-weight model in the world on benchmark scores, matching Anthropic's Claude Opus on some evaluations at roughly one-seventh the price.

Zhipu AI's GLM rounds out the group with strong coding performance and enterprise tooling, backed by Tsinghua University's research ecosystem.

On the Western side, OpenAI (GPT-4o/5), Anthropic (Claude), Google (Gemini), and Meta (Llama 4) continue to lead on frontier capabilities. But the performance gap between open and closed models is narrowing fast.

Why This Matters for Infrastructure

Here's the insight most people miss: the more models that get released from both sides, the more the infrastructure layer wins.

Together AI now serves over 700,000 developers running 200+ open-source models. The most popular models on the platform tell the story: Meta's Llama, Alibaba's Qwen, DeepSeek R1 and V3, Moonshot's Kimi K2, Zhipu's GLM, and Mistral. American and Chinese models, side by side, running on the same infrastructure. Speed matters here too: independent benchmarks show Together AI delivering up to 2.75x the output speed of competing providers on the same models. Every new model release from either side is incremental demand for the infrastructure companies serving them.

This is the picks-and-shovels thesis in its clearest form. Competition at the model layer is a tailwind for the infrastructure layer.

What We're Watching

The AI efficiency revolution that started with DeepSeek is accelerating. Models are getting cheaper to train, faster to deploy, and more accessible to enterprises everywhere. The companies building the infrastructure beneath all of it are capturing that compounding demand regardless of which country or lab produces the next breakthrough.

Twelve months ago, Chinese open-source models accounted for 1.2% of global usage. Today it's nearly 30%. A 25x increase in a single year, and neither side is slowing down. Western labs are shipping faster than ever. Chinese labs are matching them within months.

The model race has two superpowers. The infrastructure powering both of them is where the durable value compounds.

See our deals at syndicate.upround.xyz