The Infrastructure Bet: Open-Source AI

The most consequential shift in AI isn't happening at the model layer. It's happening beneath it.

Eighteen months ago, the consensus view was that proprietary models from OpenAI and Anthropic would dominate indefinitely. That thesis is breaking down. DeepSeek-R1 matched frontier model performance at a fraction of the training cost. Meta's Llama family has become the default starting point for enterprise AI deployments. Open-source models now account for the majority of inference workloads globally.

This isn't a philosophical debate about openness. It's an economic restructuring.

Why Open-Source Wins at Scale

The math is straightforward. Enterprises running AI at production scale can't afford to pay per-token pricing to closed-model providers indefinitely. Open-source models give them three things proprietary models can't: full control over fine-tuning, data privacy by default, and the ability to optimize inference costs as volume scales.
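
To make that math concrete, here is a back-of-envelope break-even sketch. Every number in it (the blended API price per million tokens, the GPU hourly rate, the tokens served per GPU-hour) is an illustrative assumption rather than a quote from any provider; the point is simply that API spend scales linearly with token volume while self-hosted cost scales with provisioned capacity.

```python
# Back-of-envelope comparison: per-token API pricing vs. self-hosted open-source inference.
# All figures below are illustrative assumptions, not vendor quotes.

API_PRICE_PER_M_TOKENS = 10.00     # assumed blended $ per 1M tokens from a closed-model API
GPU_HOUR_COST = 2.50               # assumed $ per hour for a rented inference GPU
TOKENS_PER_GPU_HOUR = 4_000_000    # assumed throughput of a well-optimized open-source model


def monthly_cost_api(tokens_per_month: float) -> float:
    """Cost of buying every token from a proprietary API."""
    return tokens_per_month / 1_000_000 * API_PRICE_PER_M_TOKENS


def monthly_cost_self_hosted(tokens_per_month: float) -> float:
    """Cost of serving the same volume on rented GPUs running an open model."""
    gpu_hours = tokens_per_month / TOKENS_PER_GPU_HOUR
    return gpu_hours * GPU_HOUR_COST


if __name__ == "__main__":
    # Compare costs at a few monthly token volumes.
    for tokens in (100e6, 1e9, 10e9):
        api = monthly_cost_api(tokens)
        hosted = monthly_cost_self_hosted(tokens)
        print(f"{tokens / 1e9:>5.1f}B tokens/mo: API ${api:>10,.0f}   self-hosted ${hosted:>10,.0f}")
```

Under these assumed numbers, the gap between the two lines widens with every additional token served, which is exactly the dynamic pushing high-volume workloads toward owned or rented open-source infrastructure.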

The result is a massive migration. Over 450,000 developers are now building on open-source model platforms. Enterprise adoption is accelerating as companies move from experimentation to production workloads where cost predictability and model ownership matter.

Who's Building the Infrastructure Layer

When models commoditize, value accrues to the infrastructure that makes them run. A new class of companies is capturing this shift, and they're scaling faster than almost anything we've seen in enterprise software.

Together AI built the AI-native cloud for open-source models. Their platform powers inference, training, and fine-tuning for over 200 models across all modalities. Customers include Cursor, Salesforce, and Zoom. Their Chief Scientist, Tri Dao, invented FlashAttention, the foundational optimization now used across the entire AI industry. Annual recurring revenue surpassed $100M as of early 2025, up from $30M the year before, and the company continues to scale aggressively. NVIDIA, General Catalyst, Kleiner Perkins, and Salesforce Ventures are all on the cap table.

They're not alone. CoreWeave went from crypto mining to becoming the largest independent GPU cloud provider, now valued north of $35B. Lambda raised $480M to scale its GPU cluster business. Hugging Face, valued at $4.5B, has become the distribution backbone for open-source AI, with over 1M models hosted on its platform. In total, open-source AI infrastructure companies have attracted tens of billions of dollars in capital over the last 18 months.

The pattern is clear: every major AI lab and enterprise needs specialized infrastructure to train and deploy open-source models, and the hyperscalers alone can't meet the demand.

Where the Value Compounds

The infrastructure companies that will define this category share three traits: proprietary performance optimizations that create switching costs, owned data center capacity that expands margins over time, and a developer ecosystem that generates compounding usage. The best of them aren't renting compute. They're building technology that makes open-source models run faster and cheaper than anywhere else, and locking in the customers who can't afford to switch once they're in production.

Open-source AI isn't the underdog anymore. It's the default. And the companies powering it are producing the kind of revenue growth and institutional backing that turns infrastructure into generational outcomes.

See our deals at syndicate.upround.xyz