Coreweave: Risks Trump Opportunities
Coreweave may have great potential, but if I owned it now, I wouldn’t be able to sleep soundly at night.
“Unsustainable valuations.”
This was the title of the Dow Theory Letters on March 22, 2000. It was a highly respected investing and finance periodical founded by Richard Russell.
Unsurprisingly, Cisco topped the unsustainable valuations list. It wasn’t wrong. The company had a $452 billion market cap and traded at nearly 150 times earnings, with estimated growth of 30%.
Adjusting that valuation for inflation shows just how stretched it was: $452 billion then is roughly $844 billion in today’s dollars.
Imagine a company valued at $844 billion trading at 150 times earnings.
This is how big a bubble it was.
Yet what perplexes me is that most people don’t even know Cisco had a track record that seemed to justify lofty valuations.
Indeed, for most of the 1990s, Cisco wasn’t a bubble. It was an unstoppable juggernaut.
The computer revolution of the 1970s led to the rapid digitalization of all types of institutions, from corporations to universities. Yet, there was a big problem. The industry wasn’t standardized. There were many manufacturers using different software and protocols. Thus, different computer networks weren’t able to communicate with each other.
Two computer scientists at Stanford, Leonard Bosack and Sandy Lerner, noticed how big a problem that was when they tried to send an email from a computer in one lab on the campus to another. Even though both computers were HP, they were incompatible because their models were different.
They thought this was a universal problem, and it really was.
They founded a company called Cisco and invented what is called the “router,” a specialized computer that enables communication between different networks.
The router would enable the internet revolution and unleash global networking, so demand could be nearly limitless.
And it was.
Cisco’s revenue grew from $70 million in 1990 to $19 billion in 2000, an average annual growth rate of 66%. Its operating income grew at an average annual rate of 63%:
Cisco wasn’t a bubble in the way the term describes the dotcom companies of that era that carried billion-dollar valuations without revenue. No, Cisco was generating real revenue, and it was highly profitable.
So, I think the question that is highly relevant today is simple: what happened then?
Why did such a fast-growing company with a product that was the backbone of the internet revolution fail to live up to the expectations?
Well, two things:
Technological development.
Excess capacity.
One of Cisco’s great advantages was that routers from different companies weren’t compatible out of the box. Thus, customers who started with Cisco stayed with Cisco. However, that began to change as technology developed and software enabled compatibility between different routers. Over time, new technologies like LAN switches emerged and started replacing routers.
The second is that the market ignored the fact that many companies were now building capacity for what looked like infinite demand at the time. When thousands of global actors respond to the same need, excess demand can quickly become excess supply.
The historical lesson is that Cisco wasn’t a bubble at first. In the course of 10 years, changing market conditions made it a bubble.
This was the historical episode that crossed my mind when I dove deep into Coreweave:
An infrastructure product enabling the AI revolution.
Clear technological edge over competitors.
Global demand far exceeding its capacity.
First-mover advantage in the market.
Many small competitors.
Result? A company that is growing extremely fast:
It’s generating real revenue and growing triple digits year-over-year.
So, the Cisco incident gives us the right questions to ask:
Is this growth sustainable?
That requires understanding:
Business model.
Market structure and competition.
Will there be excess demand or excess supply in the medium term?
This is what we are going to discuss today.
So, let’s cut the introduction and dive deep into Coreweave.
What you are going to read:
1. Understanding The Business
2. Competitive Analysis
3. Opportunities
4. Risks
5. Valuation
6. Conclusion
🏭 Understanding the Business
Every big technological leap comes with inherent infrastructure challenges.
This is because most technological leaps are a result of a breakthrough product innovation. Naturally, most breakthrough products are breakthrough because they drastically change something in the old designs. This makes them inherently incompatible with the established infrastructure built for the old technology.
Think about smartphones.
The old cell towers were optimized for voice transfer through 2G/3G, so they struggled with data. Even after the 4G standard was developed, most regions didn’t have access to it because cell towers weren’t refitted.
Today, we are going through the same technological shift for cloud computing.
The internet infrastructure we are currently sitting on was optimized to host and run websites and web applications.
What are websites and web applications? Well, a website is quite simple: it’s a front-end layer that shows you data retrieved from a database. It becomes an application when it lets you change that data through a backend.
So, most of the web as we know it is just a user interface on databases.
This is where CPUs, or logic chips, thrive. They are optimized for fast serial execution and can juggle many independent tasks, letting one thread keep going while another waits. That matches most web operations:
User logs in → grant access if the credentials are right, return an error if not.
Wait for user input → run calculations based on that input.
Wait for the calculations to finish → show the result to the user.
Most of the traditional web runs on sequential tasks like this. This is why traditional data centers were built around data center CPUs, and the traditional cloud providers built their services on this architecture:
However, in the age of AI, requirements change.
AI training and inference, especially training, requires massive parallelization, unlike the sequential execution requirements of the traditional web.
Why? Because machine learning basically comes down to feeding a model data and training the computer to reproduce that data correctly. In large language models, this takes the form of next-token prediction.
This requires thousands of cores working together at scale. Think of it as each core painting a small piece of the same picture at once, whereas a CPU is a single worker recreating the picture from start to finish.
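To make the contrast concrete, here is a toy NumPy sketch of my own (nothing to do with CoreWeave’s actual stack): the same work expressed as many small sequential steps versus one large batched operation that hardware can spread across many cores.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal((4096, 512))   # 4,096 "requests" or training samples
weights = rng.standard_normal((512, 512))   # one layer of a toy model

# Web-style: handle each item one after another, the way a CPU-bound service does.
start = time.perf_counter()
sequential = np.stack([x @ weights for x in inputs])
print(f"sequential loop:   {time.perf_counter() - start:.3f}s")

# AI-style: process the whole batch as one big matrix product, the kind of
# massively parallel work GPUs are built for.
start = time.perf_counter()
batched = inputs @ weights
print(f"single batched op: {time.perf_counter() - start:.3f}s")

assert np.allclose(sequential, batched)     # same result, very different shape of work
```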
Naturally, the traditional cloud infrastructures struggle when it comes to GPU-heavy AI workloads.
The trouble starts at the hardware layer.
GPUs are much more energy-intensive than CPUs. Traditional CPU-based data centers were designed so that one rack could safely draw 25-30 kW, which air cooling can handle. That is not enough for GPUs. GPU-dense data centers therefore generally prefer liquid cooling, which removes far more heat and allows a rack to draw 80-130 kW.
Expanding the existing architecture to accommodate this level of power demand requires permits and retrofits that can take up to 18 months.
On top of that, traditional CPU-oriented data centers usually rely on 25-100 Gb/s Ethernet links: plenty for web traffic, but too slow and too laggy for AI work, where thousands of GPUs must swap data over much wider, lower-latency connections.
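A quick back-of-envelope, using my own assumed numbers rather than any published specs, shows why a dense GPU rack overwhelms an air-cooled hall:

```python
# Back-of-envelope rack power estimate. The per-GPU wattage and overhead
# factor are my assumptions for illustration, not CoreWeave's figures.

gpus_per_rack = 72          # a full GPU pod packed into one cabinet
watts_per_gpu = 1_000       # rough draw of a cutting-edge training GPU
overhead_factor = 1.3       # CPUs, NICs, switches, fans, power conversion

rack_kw = gpus_per_rack * watts_per_gpu * overhead_factor / 1_000
air_cooled_limit_kw = 30    # upper end of the 25-30 kW budget cited above

print(f"Estimated rack draw: ~{rack_kw:.0f} kW")                        # ~94 kW
print(f"Over the air-cooled budget by ~{rack_kw / air_cooled_limit_kw:.1f}x")
```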
The software layer is another hurdle.
Traditional clouds save money by running your container inside a shared virtual machine, so the physical CPU and GPU are time-sliced among many customers. That tiny virtualization overhead is fine for web servers, but it can slow down large, compute-intensive AI jobs that need every bit of the available resource.
This is where Coreweave comes in.
I think of its business model as AI infrastructure as a service.
It has created a cloud business optimized for AI workloads on a hardware infrastructure built specifically to run them.
It leases data center space ready to be fitted out and equips it exclusively with Nvidia GPUs, InfiniBand interconnects, and server racks from Dell and Supermicro. It uses liquid cooling that enables a rack to safely consume up to 130 kW. As a result, its hardware infrastructure is already better suited for AI workloads than that of the traditional players.
Its software stack is also optimized for AI workloads.
Instead of virtualized environments, it uses a bare-metal approach, which means the capacity you rent is dedicated to you. Your code runs directly on the GPU with no sharing.
The managed-software layer runs and automatically updates all the “plumbing” — the scheduler, GPU drivers, and monitoring agents — so you don’t have to pause your work to tinker with infrastructure.
On top of that, the application-software layer gives you one-click add-ons such as experiment-tracking dashboards, built-in checkpoint storage, and ready-made inference endpoints, letting you turn advanced features on with a toggle instead of building them from scratch.
This architecture is optimized for GPUs and improves the performance of the tech stack by 35-45%:
This is an impressive performance improvement that can provide its clients with a critical edge in the high-stakes AI race.
Providing an edge to the AI labs isn’t just a benefit of using its technology; it’s Coreweave’s business model.
What the hell does that mean?
Perhaps the most significant bottleneck in developing cutting-edge AI models is computing power. This is why Nvidia is operating at an annual product cadence, introducing more and more powerful chips every year. As a result, a cutting-edge GPU becomes outdated in a matter of 4-5 years.
To keep their edge, AI companies need access to cutting-edge GPUs as fast as possible.
To enable this, Coreweave operates on a long-term contract model, typically running two or more years:
They contract with AI labs to provide certain capacity.
Clients pay 15%-20% of the contract upfront.
On top of the prepayment, Coreweave borrows money to buy the required hardware, secured by the hardware itself.
They deliver the promised capacity in approximately 15 weeks, and then the customers are billed monthly on a take-or-pay basis.
Once the contract term ends, Coreweave either resells the hardware or keeps it to expand its on-demand consumption capacity.
This model allows them to provide their customers with cutting-edge GPUs all the time while avoiding excess capacity buildout.
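Here is a stylized sketch of those mechanics with made-up numbers; the actual split between prepayment, debt, and monthly billing isn’t disclosed at this level of detail, so treat every figure as an assumption.

```python
# Hypothetical contract economics, illustrating the flow described above.

contract_value = 1_000     # $m, multi-year take-or-pay commitment (made up)
term_months = 48           # assumed 4-year term
prepay_rate = 0.175        # midpoint of the 15-20% upfront payment
hardware_cost = 600        # $m, assumed cost of GPUs and racks for the contract

prepayment = contract_value * prepay_rate
debt_raised = hardware_cost - prepayment          # gap financed with hardware-backed debt
monthly_billing = (contract_value - prepayment) / term_months

print(f"Customer prepays:    ${prepayment:.0f}m")
print(f"Debt raised:         ${debt_raised:.0f}m, collateralized by the GPUs themselves")
print(f"Monthly take-or-pay: ${monthly_billing:.1f}m once capacity goes live (~15 weeks)")
```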
In sum, Coreweave has built a full-fledged AI infrastructure service from the ground up, based on cutting-edge hardware and proprietary software customized for AI-heavy workloads.
This model has obviously worked extraordinarily well so far: revenue grew from $16 million in 2022 to $1.9 billion in 2024.
Thus, the question is not whether it works but how long it can keep working and how profitable it is.
Let’s dig.
🏰 Competitive Analysis
For investors, it usually doesn’t matter how fast the business is currently growing. The market generally doesn’t struggle to reflect the current and near-term growth projections in the prices.
So, there is only one way for investors to generate outsized returns in the long term: the underlying business must have a sustainable competitive advantage.
I think this is now pretty well known to most people.
What I think people still get wrong is that they think the competitive advantage stems from the company.
Well, in 99% of cases, that’s not where it comes from.
Of course, the company owns the competitive advantage once it’s there, but what gives rise to competitive advantages is industry dynamics more than individual companies.
Some industries are much more conducive to competitive advantages, and some are not.
Think about the computers from the 1990s to the 2000s.
If you looked at the industry map, you would see that there were three main groups of players:
In the whole industry, only Intel and Microsoft had above-average operating margins and high return on capital. The competition between the box makers was intense, and their return on capital was minuscule compared to that of Intel and Microsoft.
This is because the chip and operating system market was much more conducive to concentration than the market for assembling computers.
The chip and the operating system were the two main components, and it was in everybody’s best interest to make computers compatible with each other. Thus, manufacturers predominantly picked the Intel-Windows standard.
This way, massive barriers to entry emerged in the chip and operating system business while box-making remained competitive.
So, what were the ingredients of the barriers to entry?
Scale and consumer captivity.
These factors fed each other.
As box makers adopted the Intel-Windows standard, these two companies quickly reached massive scales, enabling them to invest more in R&D and thus develop better products, which supported their position as the top choice for customers, reinforcing the customer captivity.
Due to the massive barriers to entry, these two companies thrived very well for decades while most box makers exhibited average profitability and generated average returns.
Both scale advantages and customer captivity exist in cloud computing, erecting massive barriers to entry.
To start with, cloud computing is an extremely capital-intensive business.
This is why the three largest cloud companies in the world are also three of the most successful firms that history has seen—Amazon, Microsoft, and Google.
It requires massive capital expenditures for hardware, plant, and property. And it’s not a one-time spending. Chips complete their useful lives in approximately 5 years and need to be replaced. So, cloud giants have to replace a portion of the equipment with newer versions every year.
This means billions of dollars in annual capital expenditures. For reference, Amazon, Google, and Microsoft are set to spend $250 billion this year in capital expenditures. Even if we assume just $200 billion of this will go to cloud computing, the scale you need to afford this is massive.
There are only a handful of companies at that scale.
Starting small is not an option, as there is no reason for any customer to pick a newcomer over established, time-tested providers. Anything new a nascent competitor can offer, the giants can match in days.
Thus, either you enter at scale directly, or you don’t stand a chance.
Second, customer captivity is also intense.
Once a company builds its infrastructure using a specific provider, it hardly changes it.
Changing a provider exposes your business to risks you may not be able to foresee. Operations may be disrupted, and the learning curve that comes with a new platform may stall development. So, most companies stick with their cloud providers except in edge cases.
If this is the case, how did Coreweave gain traction?
Well, it was at the right place at the right time.
It benefited from a sudden spike in demand that the existing providers couldn’t meet because it was a different kind of demand, as we explained above. The traditional cloud architecture wasn’t able to meet it.
That opened a window for a new entry in a niche sub-market, and Coreweave took great advantage of it.
They didn’t make a slow entry, as they knew they wouldn’t stand a chance in the market if other providers scaled before them. They entered at scale, relying heavily on financing to meet the demand. Their business model, as explained above, prioritizes capturing demand and scaling.
As a result, Coreweave itself has quickly become an incumbent in the sub-market for cloud computing optimized for AI workloads.
Their market share in overall cloud computing may be negligible; however, when you narrow it down to GPU cloud, and further to cutting-edge GPU cloud for AI, their share becomes much larger.
Now it enjoys the barriers to entry that we mentioned above.
Any new entrant now has to find a way to spend billions of dollars in GPUs every year, and it has to convince customers to use its platform, not AWS, Azure, Google, Oracle, Nebius, or Coreweave. This is an arduous task. Even if it carves out a way, it has to establish a relationship with the suppliers like Nvidia to secure shipments because the chip demand way exceeds the supply currently.
Thus, I think Coreweave’s competitive position in the industry is strong, at least against the nascent competitors. As long as there is excess demand, it’ll be able to capture a fair share of it.
I think the bigger question here is its competitive position against the bigger incumbents like AWS, Azure, and Google Cloud in the longer term.
For now, it looks like it made a successful entry and carved out a market for itself where it’ll be enjoying competitive advantages due to scale and customer captivity. However, it’s always easy to confuse excess demand with a successful entry.
Has Coreweave permanently penetrated the market, or is it thriving purely because of the excess demand?
When the demand stabilizes, will Coreweave keep capturing its fair share, or will it go down?
That’s a risk that needs to be observed carefully.
💰 Opportunities
There are two opportunities ahead of Coreweave. The first one is the upcoming demand, and the second is the existing demand that it can divert from incumbents to itself.
1️⃣ Cloud AI market will grow insanely fast.
Cloud AI, cloud computing optimized for AI workloads, is projected to become a massive market.
Market research firms estimate that the Cloud AI market size is currently around $120 billion. They project this to reach nearly $650 billion in 2030:
This is a massive market opportunity.
Even if we assume that the market will reach half of the projections in 2030, it’ll still triple in size in the next 5 years. This means that Coreweave can triple its revenue even if it grows at the market rate.
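A quick sanity check on that math, using the projection figures above:

```python
# Sanity check on the market math; inputs are the projections cited above.

current_market = 120      # $bn, estimated Cloud AI market today
projected_2030 = 650      # $bn, research-firm projection for 2030
years = 5

haircut_2030 = projected_2030 / 2                   # assume only half materializes
growth_multiple = haircut_2030 / current_market     # ~2.7x, roughly a triple
implied_cagr = growth_multiple ** (1 / years) - 1   # ~22% per year

print(f"Even at half the projection: ~{growth_multiple:.1f}x the market, "
      f"or ~{implied_cagr:.0%} a year")
```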
Yet, this will be just the beginning.
It’s highly likely that AI Cloud buildout will keep growing at a double-digit annual rate for the decade after 2030, as the infrastructure buildout for the Internet has been growing at a double-digit annual rate for three decades now. Given that AI workloads are even more compute-intensive, there is no reason that the Cloud AI market will be saturated in just a few years.
In short, Coreweave is looking at arguably the largest market opportunity in the history of capitalism.
2️⃣ Coreweave can take market share from the incumbents.
Aside from benefiting from the future growth of the market, Coreweave is also well-positioned to steal AI Cloud business from large incumbents like AWS, Azure, Oracle, and Google.
This is mainly thanks to its architectural difference from the existing providers.
As we explained above, the cloud giants of today were created for the Internet, not AI workloads.
This is why they have developed many layers, like virtual machines, to make their business more efficient. Though that approach works well for web-based applications, it adds an unnecessary layer when it comes to AI training.
Their data center architecture is also optimized for CPU-based applications. They use air-cooling most of the time, which provides enough cooling for CPU racks. However, GPUs are more energy hungry, and they need better cooling. It takes time and increases overhead costs to build new halls with liquid cooling.
Further, these companies mostly provide cloud solutions on a pay-as-you-go basis, which forces them to price more cautiously: they price their products so that the capacity in use can subsidize the idle capacity.
Coreweave, on the other hand, is GPU native.
As opposed to traditional clouds, Coreweave adopted the bare metal approach. It avoids the overhead of launching virtual machines for each workload.
It equipped its data centers to maximize GPU performance. Its racks come optimized for liquid cooling, with power feeds sized for 80-130 kW. These racks cram whole 72-GPU pods in one cabinet, resulting in more compute per square foot and lower cooling cost per GPU.
Its contract-based business model also provides better visibility into future revenues, enabling it to work with lower margins than the incumbents.
Result? It’s able to price aggressively compared to the alternatives:
Lower prices enable Coreweave to take the AI Cloud business away from the traditional hyperscalers.
OpenAI is an example. Despite its partnerships with Microsoft and Oracle, it entered into an $11.3 billion cloud contract with Coreweave because it’s much cheaper than any alternative currently available.
If they can sustain their price advantage, over time, they may steal more customers away from the traditional hyperscalers. This will lead to a more fragmented customer base and reinforce their position as a permanent player in the market.
In sum, Coreweave has a massive market opportunity. It’s well-positioned to scale in the future as the demand remains strong. Its superior product offering at a lower price point also enables it to take market share from the incumbents.
However… A massive opportunity comes with massive risks.
⚠️ Risks
Coreweave’s business model has opened up great opportunities and amazingly fast growth in a short period of time; however, it also exposes the company to some lethal risks.
1️⃣ Customer Concentration
As of last year, Coreweave generated 77% of its revenue from two companies.
The largest of them, Microsoft, accounted for a whopping 62% of the revenue last year.
While most investors expect this concentration to decline over time, it only got worse in the first half of the year.
Microsoft is currently accounting for 72% of Coreweave’s revenue, up from 71% in the first quarter.
This level of revenue concentration on one customer, which itself is the second-largest cloud provider in the world, gives me the impression that Microsoft is basically using Coreweave to bridge its capacity gap at the moment.
Satya Nadella previously said that Microsoft doesn’t necessarily look to own and operate the data centers itself, and it may rent the capacity when it’s a better financial decision.
Looks like they are doing just that.
Why does Microsoft have this gap? After all, they have deep pockets and the capability not to have this gap.
Two potential reasons:
They may be worried about overbuilding, so they avoid aggressive scaling.
They are developing their own chips to reduce dependence on Nvidia. Meanwhile, they rent capacity to bridge their gap until they scale on their own chips.
The problem is that, in both cases, it’s almost certain that Microsoft will reduce its contract size with Coreweave over time.
The market currently bets that new customers will fill the place of Microsoft over time:
Goldman Sachs projects that Microsoft will account for only 38% of the revenue in 2026, while others, as a group, will represent 35%.
Microsoft currently makes up 72% of the revenue, so this projection doesn’t look very realistic. Plus, if Microsoft is pulling back, it means its capacity gap is shrinking, and it’ll be able to offer capacity to customers at more competitive prices.
On top of that, if the hyperscalers start running on their own chips in a few years, Coreweave may suddenly fall on the more expensive side of the market. In that case, it’s not clear whether Coreweave’s tech stack will be appealing enough for customers to pay a premium.
For me, the equation is simple. Any short-term scenario where Coreweave keeps growing fast despite shrinking Microsoft revenue isn’t realistic.
This is a big risk for the future of the company.
It may be acting as temporary backup capacity, thriving because of excess demand. It’s highly uncertain whether it’ll hold its position if demand slows down or the hyperscalers build enough capacity to offer competitive prices.
2️⃣ Finances
Coreweave’s business model is highly leveraged:
It signs contracts with customers.
It borrows to build the capacity, collateralizing the hardware itself.
This is the definition of leverage.
This system works primarily through Delayed Draw Term Loan facilities, or DDTLs.
These are different from traditional loans.
In a DDTL, the borrower doesn’t have to draw the whole amount at the outset and start paying interest on it. Instead, the committed amount sits ready at the lender in exchange for a small commitment fee, and the borrower pays interest only on the tranches it has actually drawn.
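A small sketch of how the economics differ from a fully drawn loan; the facility size, draw schedule, and commitment-fee rate here are hypothetical, not CoreWeave’s actual terms:

```python
# Toy DDTL example: interest accrues only on drawn tranches, plus a small
# commitment fee on the undrawn portion. All terms below are hypothetical.

facility_size = 2_000     # $m committed by the lenders
interest_rate = 0.105     # annual rate on drawn balances
commitment_fee = 0.005    # assumed annual fee on the undrawn portion

drawn = 0
for quarter, draw in enumerate([500, 500, 250, 0], start=1):
    drawn += draw         # take only the tranche needed this quarter
    undrawn = facility_size - drawn
    quarterly_cost = drawn * interest_rate / 4 + undrawn * commitment_fee / 4
    print(f"Q{quarter}: drawn ${drawn}m, financing cost ${quarterly_cost:.1f}m")

# A traditional term loan would charge interest on the full amount from day one:
print(f"Fully drawn from day one: ${facility_size * interest_rate / 4:.1f}m per quarter")
```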
Coreweave currently has three DDTL facilities.
DDTL 1
This is a $2.3 billion DDTL facility financed by Blackstone and Magnetar Capital.
The effective interest rate on this loan is 15%, and it has been fully drawn.
The terms require quarterly payments based on the company’s cash flow and, starting in January 2025, on the depreciated value of the GPUs pledged as collateral; CoreWeave has until March 2028 to fully pay it off. As of December 2024, Coreweave had paid $288 million in principal and $255 million in interest since the loan’s inception.
This means it’ll have to pay, on average, $192 million every quarter until March 2028 to service the remaining $2 billion principal balance and interest.
There are two nuances:
The first is that it has committed to increasing principal payments as the underlying collateral, GPUs, depreciate in value.
The covenant says that the final payment will be a balloon payment.
We have no way to estimate the balloon payment or how quickly the principal payments will step up (which would reduce the interest burden). So let’s stick with the rough figure: on average, about $192 million per quarter to service this debt until March 2028.
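As a rough cross-check of that figure, here is what a plain level-payment amortization of the remaining balance would imply; the real schedule (cash-flow sweeps, stepped principal, a balloon at the end) is different, so this is only a ballpark:

```python
# Level-payment amortization as a ballpark check on the ~$192m/quarter figure.
# The quarter count and the simple annuity structure are my assumptions.

remaining_principal = 2_000   # $m left on the $2.3bn facility
annual_rate = 0.15            # effective rate cited above
quarters_left = 13            # roughly end of 2024 through March 2028

r = annual_rate / 4
level_payment = remaining_principal * r / (1 - (1 + r) ** -quarters_left)
print(f"Implied payment: ~${level_payment:.0f}m per quarter")   # ~$197m, same ballpark
```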
DDTL 2 & DDTL 3
DDTL 2 was a massive $7.6 billion facility, again financed by Blackstone and Magnetar.
The actual interest rate charged here is around 10.5%, and each draw must be repaid within 60 months of being used.
As of the last quarter, they have drawn $5 billion of this facility, while $2.6 billion remains available to borrow.
This facility carries a brutal covenant: if the company raises any more debt, it must repay the DDTL 2 loan first.
The problem is that the company signed a new $4 billion contract with OpenAI in May, which required it to spend at least $2.6 billion to meet its obligations. The remaining DDTL 2 capacity wasn’t enough for this, given that it was earmarked for other obligations. At the same time, the company couldn’t simply raise a new DDTL facility, because the proceeds would have to go toward paying back DDTL 2.
What did it do?
It created a shell subsidiary named CoreWeave Compute Acquisition Co. VII, LLC, and raised a new $2.6 billion DDTL facility through it (DDTL 3).
The covenant cites a rate of SOFR+4%. SOFR (the Secured Overnight Financing Rate) is currently around 4.3%, so it’s reasonable to assume an effective interest rate of around 8.5% on this loan.
In sum, the company currently has around $11 billion in debt on the balance sheet:
This is set to increase as it uses the DDTL 3 facility.
Against this mountain of debt, it’s generating $753 million in adjusted EBITDA.
Interest expense takes 35% of the adjusted EBITDA.
This won’t decrease anytime soon, as it has a contracted power of 2.2 GW while operational capacity is only around 470 MW. It’ll have to spend billions of dollars that it doesn’t currently have to reach this capacity, as it only holds $1.1 billion in cash.
This means it’ll have to either aggressively issue new shares or raise more debt.
In both cases, it’s a red flag.
3️⃣ Supplier Concentration
Currently, Coreweave buys 100% of its GPUs from Nvidia.
It’s a member of Nvidia’s partner program, and Nvidia holds a 7% stake in the company. Thus, it has been able to secure preferential access to the latest-generation Nvidia chips so far.
Though people tend to think of this as Coreweave’s advantage, I think it’s a significant long-term risk.
Why? Well, analyst Nick Del Deo puts it best: “CoreWeave exists because Nvidia wants it to exist.”
Why does Nvidia provide Coreweave with preferential access?
It’s a genius strategy on Nvidia’s part.
It gives Coreweave preferential access, so Coreweave builds capacity made up of the latest and most powerful GPUs. This naturally attracts the AI labs and AI enterprises competing to train the most advanced foundation models.
That creates immense competitive pressure on the traditional hyperscalers to buy the latest Nvidia chips as soon as possible too, so as not to lose big customers to new providers like Coreweave.
If it weren’t for Nvidia partners like Nebius and Coreweave, the hyperscalers could delay adoption of new chips for economic reasons.
Who wins from this? Nvidia.
Coreweave, on the other hand, is out in the open.
Once the hyperscalers find a window where demand stabilizes so they can afford to delay upgrades a bit, they’ll likely try to replace the Nvidia-based infrastructure with their custom silicon. If this happens, they will reduce their purchases from Nvidia.
In that case, to whom do you think Nvidia will turn to squeeze? Of course, to Coreweave, because it depends solely on Nvidia.
While Coreweave keeps buying Nvidia, AWS, Azure, and Google will increasingly run on their custom silicon, cutting costs and thus offering more attractive prices to the customers. In that case, Coreweave will suddenly find itself on the more expensive side of the market, losing customers.
In short, there are some massive risks for Coreweave. The opportunity is great, but it needs to act fast and decisively to capture it. However, the very tools that it relies on to act fast are also exposing it to undue financial and operational risks.
When I look at the current picture, the risks scare me more than the opportunities excite me.
📈 Valuation
Coreweave has grown insanely fast since 2023.
It has scaled from a nearly $500 million annualized run rate in 2023 to nearly $5 billion in the last quarter.
The problem is that it’s very hard to forecast its future growth rate for the next 5 years, given the massive financial and operational risks ahead of it.
Microsoft alone makes up over 70% of the revenue. This is a very simple equation: if it loses Microsoft, there’ll be no growth, period.
So, its ability to grow consistently from this level depends on keeping Microsoft as a customer, and consistently expanding its customer base on top of it. We have discussed above why this isn’t guaranteed. Still, we need this assumption as it’ll be impossible to forecast growth without Microsoft.
The management guides for $5.1 billion in revenue for this year.
Let’s be very optimistic and assume that the business will grow 50% annually in the next 5 years.
It’ll generate $38 billion in revenue in 2030.
What’ll be the net income margin?
Very hard to forecast. Google Cloud currently has a 17% operating margin. In a somewhat optimistic scenario, we can assume Coreweave reaches around a 15% net margin.
Meaning, it’ll generate $5.7 billion in net income in 2030.
Slap a 25-times exit multiple on it, and we get a $142 billion business.
It’s currently valued at $45 billion, meaning the current valuation promises a 3x return in the next 5 years.
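Spelled out as a small script, using the assumptions above (the unrounded outputs land slightly higher than the rounded figures in the text):

```python
# The back-of-envelope valuation above, with the article's optimistic assumptions.

revenue_2025 = 5.1        # $bn, management guidance for this year
growth_rate = 0.50        # assumed annual growth for the next 5 years
net_margin = 0.15         # assumed 2030 net margin
exit_multiple = 25        # assumed earnings multiple at exit
market_cap_today = 45     # $bn, current valuation

revenue_2030 = revenue_2025 * (1 + growth_rate) ** 5     # ~$38.7bn
net_income_2030 = revenue_2030 * net_margin              # ~$5.8bn
implied_value_2030 = net_income_2030 * exit_multiple     # ~$145bn

print(f"2030 revenue:    ${revenue_2030:.1f}bn")
print(f"2030 net income: ${net_income_2030:.1f}bn")
print(f"Implied value:   ${implied_value_2030:.0f}bn, "
      f"{implied_value_2030 / market_cap_today:.1f}x today's $45bn")
```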
The problem is that this is all very speculative.
I don’t see how CoreWeave will post positive GAAP earnings with massive depreciation and interest payments. On top of that, it’ll have to resort to further financing to complete the 2.2 GW contracted power it has committed to.
Assuming there is a path, it still requires sustained growth, which depends heavily on external factors like the balance of supply and demand.
Let me be very direct: at Coreweave’s current stage, anything titled valuation is speculation.
It’s not possible to conduct a DCF valuation with fair accuracy.
The lack of a DCF valuation on Alpha Spread proves this point. I don’t like their valuations; I generally find them very superficial, and they usually miss the real potential of businesses by a lot. Still, Coreweave isn’t something that can be valued even by their low standards. Go to their website and search for a DCF valuation of Coreweave. This is what you’ll see:
I have valued thousands of businesses, and when it comes to Coreweave, I am stuck because I have to rely on assumptions out of thin air. This is the definition of speculation.
And the first thing I learnt in investing was that you shouldn’t put your money in any company based on speculation.
🏁 Conclusion
I made most of my money not on stocks that I believed would 10x, but on stocks where I believed I wouldn’t lose much.
I haven’t always been this way.
When I first started, I was looking for the moonshot positions like most beginner analysts. That almost always ends up in disaster.
Moonshot positions are moonshots because the market refuses to assign them much value due to high risk, and high risk generally materializes. This is not the path to long-term compounding in the stock market. If your portfolio loses 50%, it has to appreciate 100% just to break even again.
What you should do instead is limit the downside first, and the upside will take care of itself.
With Coreweave, the downside isn’t limited.
It’s very leveraged, and many things need to go right for its business to deliver the upside it promises:
It should be able to secure financing at attractive rates.
Demand should remain strong for years.
It should diversify its customer base.
It shouldn’t lose any big customers.
Even if one of these things doesn’t materialize, the whole model may collapse. The downside is huge.
Maybe all of these conditions will materialize, and it’ll skyrocket from here; maybe it’ll become the largest AI Cloud company in the future.
It could be. I cannot know any of these.
What I know is that avoiding a position right now makes more sense for individual investors, as the downside currently looks unlimited.
If I see that there is a clear path to sustainable and profitable growth, I’ll be willing to revisit. But for now, I am out.
The leverage is highly speculative, and the loan conditions are leonine, heavily favoring Blackstone and Magnetar. Not for me. I’d rather buy Nebius.