Nvidia and chums inject $160M into Applied Digital to keep GPU sales rolling

Datacenters are the lifeline for its $30B ML-fueled boom


AI has made GPUs one of the hottest commodities on the planet, driving more than $30 billion in revenues for Nvidia in Q2 alone. But without datacenters, the chip powerhouse and its customers have nowhere to put all that tech.

With capacity in short supply, it's no wonder that VCs and chipmakers alike are pumping billions of dollars into datacenters to keep the AI hype train from stalling.

The latest example is a $160 million investment by Nvidia and partners in Dallas, Texas-based bit-barn operator Applied Digital, which offers a variety of datacenter and cloud services built around Nvidia's GPUs. As one financial journal noted on Thursday, the operator will use the cash injection to accelerate development of a datacenter complex in North Dakota and to support additional debt financing to pay for the costly accelerators.

With bleeding-edge GPUs commanding as much as a car these days ($30,000 to $40,000 apiece in the case of Nvidia's upcoming Blackwell chips), many datacenter operators have taken to using them as collateral to secure massive loans.

Applied Digital isn't even the biggest example lately. In July, AI datacenter outfit CyrusOne scored another $7.9 billion in loans to pack its facilities with the latest accelerators. That's on top of the $1.8 billion in capital the firm bagged this spring.

CyrusOne isn't an isolated instance either. CoreWeave, arguably the biggest name in the rent-a-GPU racket, talked its backers into a $1.1 billion series-C funding round back in May. Only a few weeks later, CoreWeave had convinced them to shell out another $7.5 billion of debt financing. 

While multi-billion-dollar loans may grab headlines, most don't rise quite to that level. AI cloud upstart Foundry, for instance, managed to pick up $80 million in series-A and seed funding ahead of its launch in August.

Even some chipmakers have been vying for their share of the funding while it lasts. Groq, whose inference cloud is unusual in running not on off-the-shelf GPUs but on its custom language processing units (LPUs), scored $640 million last month to expand its offering.

Meanwhile, Lambda, one of the original GPU-cloud operators, started the year with a $320 million funding round. Along with another $500 million in loans secured this spring, it now plans to add tens of thousands of Nvidia GPUs to its compute clusters.

Unsurprisingly, there are a number of bit-barn operators looking to replicate this strategy. TensorWave is working to scale out compute clusters based on AMD's MI300X accelerators, while Voltage Park is following Lambda and others' lead and sticking with Nvidia GPUs.

Those are just the ones that spring to mind, but the takeaway here is that it's a good time to be in the datacenter business, especially if your plans include renting out GPUs.

Alongside the usual cast of investment firms, like BlackRock, Magnetar Capital, and Coatue, Nvidia has also gotten behind some of these endeavors, having previously thrown its weight behind CoreWeave.

Nvidia's motivation in financing these projects is obvious: it can only sell as many GPUs as there is datacenter capacity to house them. Once deployed, each of those accelerators also has the potential to generate $1 an hour in subscription revenue if Nvidia can convince customers its AI Enterprise software suite is worthwhile.

A buck an hour might not sound like much, but, as we've previously discussed, it adds up pretty quickly when you're talking about clusters with 20,000 or more GPUs.
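A quick back-of-the-envelope check, using the article's figures (the full-subscription assumption and the helper name are ours, purely for illustration):

```python
# Sketch of the software-subscription math: $1 per GPU per hour across a
# large cluster. Figures come from the article; 100 percent uptake is an
# assumption for illustration.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_software_revenue(gpus: int, rate_per_hour: float = 1.0,
                            uptake: float = 1.0) -> float:
    """Yearly subscription revenue for a cluster at a given $/GPU-hour rate."""
    return gpus * rate_per_hour * HOURS_PER_YEAR * uptake

# A 20,000-GPU cluster at $1/GPU-hour, fully subscribed:
print(f"${annual_software_revenue(20_000):,.0f} per year")  # $175,200,000 per year
```

In other words, a buck an hour across a 20,000-GPU cluster works out to roughly $175 million a year before anyone rents a single cycle of compute.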

It's not a bad deal for the datacenter operators or their financiers either, so long as revenues are enough to cover the loan payments.

That shouldn't be too much of a problem, according to our sibling site The Next Platform, which found that an investment of $1.5 billion to build, deploy, and network a cluster of roughly 16,000 H100s today would generate roughly $5.27 billion in revenues within four years. ®
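The Next Platform's figures imply an average rental rate per GPU. A rough sanity check, assuming round-the-clock utilization over the full four years (that assumption is ours; real clusters run below 100 percent):

```python
# Sanity-check the payback math cited from The Next Platform.
# Full utilization over four years is an illustrative assumption.
INVESTMENT = 1.5e9    # build, deploy, and network ~16,000 H100s
REVENUE_4YR = 5.27e9  # projected revenue over four years
GPUS = 16_000
HOURS = 4 * 365 * 24  # four years of wall-clock hours

implied_rate = REVENUE_4YR / (GPUS * HOURS)  # average $/GPU-hour
gross_return = REVENUE_4YR / INVESTMENT      # revenue multiple on capital

print(f"Implied rate: ${implied_rate:.2f}/GPU-hour")         # ~$9.40/GPU-hour
print(f"Gross return: {gross_return:.2f}x over four years")  # ~3.51x
```

At roughly $9.40 per GPU-hour under those assumptions, the projection leaves plenty of headroom over the loan payments, which is presumably why lenders keep writing the checks.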
