
Nvidia's Moat: How Deep Is It Really?

Revenue grew 8x in three years. Operating margins hit 60%. The question isn't whether Nvidia has a moat; it's whether that moat is durable.

March 28, 2026
10 min read

A Moat Built on Code, Not Just Chips

Nvidia's revenue grew from $27 billion in fiscal year 2023 to $215.9 billion in fiscal year 2026, an 8x increase in three years. Operating margins hit 60.4%. Free cash flow reached $96.7 billion. These are not the numbers of a normal semiconductor cycle.

The question every serious investor asks is whether this performance is durable or whether it reflects a one-time infrastructure build-out that will taper. The answer depends less on chip specifications and more on whether CUDA's nearly two-decade software lead can be dislodged.

At 34x trailing earnings and 21x forward earnings, the market is pricing in continued dominance but not explosive further growth. A PEG of 0.71 puts the trailing multiple well below the earnings growth rate: the market's quiet admission that even at today's prices, that growth is not fully reflected in the multiple.

How an 8x Revenue Increase Actually Happened

In fiscal year 2023 (ending January 2023), Nvidia generated $27.0 billion in revenue, primarily from gaming and professional visualization. The data center segment existed but was not yet dominant. Operating income was $4.2 billion, a 15.7% margin.

The ChatGPT moment in late 2022 triggered a shift in hyperscaler AI spending that Nvidia was uniquely positioned to capture. The H100 GPU, built on the Hopper architecture, became the single most in-demand piece of hardware in technology history. Delivery times stretched to six to twelve months. Revenue in fiscal year 2024 hit $60.9 billion, more than doubling in a single year.

Fiscal year 2025 brought another doubling: $130.5 billion. Fiscal year 2026 brought $215.9 billion, a further 65% increase. In three years, Nvidia went from the size of a mid-tier semiconductor company to generating more revenue than all but a handful of companies in the world.
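
The growth arithmetic is easy to verify from the figures above. A minimal sketch (fiscal-year revenues in USD billions as quoted in this article; the variable names are mine):

```python
# Revenue by fiscal year, USD billions, as quoted in the article.
revenue = {2023: 27.0, 2024: 60.9, 2025: 130.5, 2026: 215.9}

multiple = revenue[2026] / revenue[2023]   # total growth over three years
cagr = multiple ** (1 / 3) - 1             # compound annual growth rate

print(f"{multiple:.1f}x over three years")  # 8.0x
print(f"CAGR: {cagr:.0%}")                  # ~100%: revenue roughly doubled each year
```

An 8x move over three years is the same thing as a doubling every year, which is why each of fiscal 2024 and 2025 looks like a step-change on its own.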

The Blackwell architecture, which began shipping in volume in late fiscal year 2026, drove a large part of the most recent revenue acceleration. Customers moved quickly to upgrade from H100 to B100 and B200 systems, creating a refresh cycle on top of greenfield AI infrastructure spending. Nvidia benefited from both simultaneously.

NVDA Annual Revenue (USD Billions)

Operating Margin Trend (%)

The Margin Profile Is the Moat Signal

Operating margins of 60.4% in fiscal year 2026 are not normal for a hardware company. Intel's operating margins run in the teens. AMD is at 10.7%. Even Apple, renowned for its margin discipline, runs below 35%. Nvidia at 60% is in a category of its own.

Gross profit was $153.5 billion on $215.9 billion in revenue, a gross margin of 71.1%. The decline from fiscal year 2025's 75.0% gross margin reflects the higher cost structure of Blackwell systems, which require more complex packaging and components. Management expects gross margins to recover toward the mid-70s as Blackwell production scales.

Net income was $120.1 billion in fiscal year 2026. Return on equity was 101.5%, meaning Nvidia generated more in net income than it holds in total shareholders' equity. A ratio above 100% usually requires extreme leverage, and Nvidia carries almost none. Its ROE is driven instead by a net margin of roughly 56% on a lean capital base: pure operating efficiency.

R&D of $18.5 billion was only 8.6% of revenue. That ratio will increase as revenue growth moderates, but it underscores the point: Nvidia's operating model generates extraordinary profits because the variable cost of serving incremental demand is very low once the chip architecture is designed and manufacturing is outsourced to TSMC.
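
The margin ratios quoted above fall straight out of the reported figures. A quick check (FY2026 numbers from this article, in USD billions; the helper function is illustrative):

```python
# FY2026 figures as quoted in the article, USD billions.
revenue, gross_profit = 215.9, 153.5
net_income, rnd = 120.1, 18.5

def pct(part: float, whole: float) -> float:
    """Return part/whole as a percentage, rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(gross_profit, revenue))  # gross margin: 71.1
print(pct(net_income, revenue))    # net margin: 55.6
print(pct(rnd, revenue))           # R&D intensity: 8.6
```

The striking line is the last one: a company spending under 9% of revenue on R&D while holding a 71% gross margin is the signature of a design house whose manufacturing costs sit on someone else's balance sheet.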

Revenue by Fiscal Year (FY2022 to FY2026)

CUDA: The Real Source of the Moat

Nvidia's physical moat, the GPU chip itself, is real but limited in durability. AMD, Google, Amazon, and Microsoft all have competitive GPU and custom silicon architectures. The chip moat alone would not support a 34x trailing PE.

The CUDA software ecosystem is the deeper and more defensible moat. CUDA has been the dominant programming model for GPU-accelerated computing since 2006. Over nearly two decades, developers have built more than 30,000 libraries, frameworks, and tools on top of it. PyTorch, TensorFlow, and virtually every other major AI framework are optimized for CUDA. The collective developer investment in CUDA-specific code is estimated in the hundreds of thousands of person-years.

Switching from CUDA to an alternative like AMD's ROCm requires rewriting or porting code, retraining development teams, and accepting performance uncertainty. For the broader population of AI developers and researchers, CUDA inertia is near-total.

Nvidia compounds the software advantage through its NIM microservices, NeMo framework, and the expanding suite of enterprise AI software products. These generate recurring software revenue that is more predictable than hardware sales and commands higher margins. The software business is small relative to hardware today but represents a meaningful diversification of the moat.

The question for long-term holders is whether any of the custom silicon alternatives (Google's TPUs, Amazon's Trainium, Microsoft's Maia) will reach a scale and ecosystem quality that challenges CUDA's dominance. None has done so yet, and switching costs for the developer ecosystem make it a slow process even if the hardware reaches parity.

Who Is Actually Competing, and How

The most credible near-term threat to Nvidia's market share is AMD's MI300X and MI350 series. AMD has genuine traction at Microsoft Azure and Meta, primarily for inference workloads. The MI300X's large unified memory capacity is a specific advantage for certain model architectures. But AMD's share of total AI accelerator revenue remains well below 10%.

The hyperscalers' custom silicon programs are a longer-term structural risk. Google's TPU v5, Amazon's Trainium2, and Microsoft's Maia are all designed to reduce dependence on Nvidia for specific internal workloads. Progress has been real but slow: custom silicon requires years of software ecosystem development to reach production quality.

Intel's Gaudi accelerator has largely failed to gain traction despite significant investment. Intel's manufacturing, software, and go-to-market execution problems have prevented Gaudi from becoming a viable alternative. This has, counterintuitively, kept the non-Nvidia budget more concentrated with AMD.

China presents a complicated picture. US export controls have blocked Nvidia from selling H100-class chips to Chinese customers, and the H20 variant developed for the China market was subsequently restricted. Chinese domestic AI chip companies like Huawei's Ascend series are making progress, but none is close to Nvidia's performance or ecosystem quality for frontier model training.

Sovereign AI, Robotics, and the Longer Runway

Nvidia's growth narrative is broadening beyond hyperscaler AI training. Sovereign AI (governments and national entities building domestic AI infrastructure) has emerged as a meaningful new demand category. Countries including Japan, France, India, and Singapore have announced large-scale AI infrastructure investments using Nvidia hardware. This government-directed spending is less cyclical and more politically durable than enterprise IT budgets.

Autonomous driving and robotics represent the next hardware frontier. Nvidia's DRIVE platform is used by dozens of automotive manufacturers for autonomous driving development. The Jetson platform powers a wide range of edge AI and robotics applications. Neither segment is close to matching the data center business in revenue today, but both benefit from the same CUDA software ecosystem, meaning Nvidia's software moat extends into these markets without requiring additional software investment.

Nvidia's NIM inference microservices and the growing enterprise AI software stack are beginning to generate recurring subscription-like revenue. Enterprises deploying Nvidia hardware increasingly purchase software support, optimization tools, and managed AI services. As the installed base of data center GPUs grows toward millions of units, this software and services revenue layer becomes materially more significant as a percentage of total revenue.

The Rubin architecture, Nvidia's next-generation chip platform after Blackwell, is on the roadmap for fiscal year 2027. If the historical pattern holds, each new architecture generation has driven a new wave of infrastructure refresh spending. Customers who built on H100 are already moving to Blackwell. Those who build on Blackwell will be natural upgrade candidates for Rubin. The perpetual upgrade cycle is itself a moat.

Operating Margin by Fiscal Year (FY2022 to FY2026)

Is 34x Trailing PE Justified?

At 34x trailing earnings and 21x forward earnings, Nvidia is not cheap by historical semiconductor standards. But the comparison to historical semiconductor valuations may be the wrong frame. Nvidia pairs software-like margins with near-monopoly share in its primary segment and $96.7 billion in annual free cash flow.

The forward PE of 21.5x suggests the market is pricing in continued strong earnings but not another step-change in revenue. If revenue grows 20% in fiscal year 2027 to around $260 billion and margins hold at 60%, EPS expands materially. At 21x forward, that outcome is already largely priced.
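
The implied math can be sketched directly (multiples, PEG, and the 20% growth scenario are from this article; everything else is illustrative arithmetic):

```python
# Multiples and PEG as quoted in the article.
trailing_pe, peg = 34.0, 0.71
fy2026_revenue = 215.9  # USD billions

# PEG = PE / earnings growth rate (in percent), so the market's
# embedded growth estimate is growth = PE / PEG.
implied_growth = trailing_pe / peg       # ~48% annual earnings growth

# The article's FY2027 scenario: 20% revenue growth.
fy2027_revenue = fy2026_revenue * 1.20   # ~259, i.e. "around $260 billion"

print(f"implied growth: {implied_growth:.0f}%")      # 48%
print(f"FY2027 revenue scenario: ${fy2027_revenue:.0f}B")  # $259B
```

Note the tension: the PEG embeds roughly 48% earnings growth, while the forward PE scenario assumes only 20% revenue growth. The gap between those two numbers is the range of outcomes the market is currently straddling.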

EV/EBITDA of 28.7x is the most useful comparison point. Microsoft trades around 25x EV/EBITDA. Alphabet around 15x. On that basis, Nvidia carries a meaningful premium to comparable quality, justified by the higher growth rate and superior return on equity of 101.5%.

Analyst sentiment is near-unanimous: 43 strong buy ratings, 12 buys, 7 holds, and 1 strong sell, with a median price target of $268. The lone strong sell is a notable outlier in a consensus that has been consistently too conservative on Nvidia's earnings trajectory over the past two years.

The Bear Case for Nvidia

The primary structural risk is that the current AI infrastructure spending cycle is partially a bubble. Hyperscalers are spending hundreds of billions on AI compute based on expected future revenue that has not yet fully materialized. If AI monetization disappoints, spending growth decelerates sharply, and Nvidia's revenue growth slows or reverses. The 34x trailing PE would be harder to defend in a slower growth environment.

Gross margin pressure from Blackwell is real and worth monitoring. The decline from 75% gross margins in fiscal year 2025 to 71.1% in fiscal year 2026 may not be temporary. Each new architecture generation requires more complex supply chain components, and TSMC has pricing leverage as volumes scale. If gross margins settle in the mid-60s rather than recovering to the mid-70s, the earnings power is structurally lower than the current model implies.

Geopolitical risk is a direct earnings risk. Export controls have already removed China as a viable market for Nvidia's best chips. Further restrictions, or a deterioration in US-Taiwan relations affecting TSMC manufacturing, would create supply disruptions that no amount of demand can offset.

Concentration risk is underappreciated. Microsoft, Google, Amazon, and Meta collectively account for a substantial share of Nvidia's data center revenue. If any of these hyperscalers accelerates its transition to in-house silicon, the revenue impact would be significant and immediate.

Durable Until Proven Otherwise

Nvidia's moat is real and multi-layered. The CUDA software ecosystem, the NIM and NeMo software stack, the Blackwell hardware lead, and the manufacturing partnership with TSMC together create a competitive position that cannot be quickly replicated. Revenue of $215.9 billion and FCF of $96.7 billion are not accidents.

At 34x trailing and 21x forward earnings, the valuation prices in continued strong performance but not another step-change in revenue. The primary risk is not competition; it is whether AI infrastructure spending sustains its current trajectory or plateaus in the near term.

For long-term holders, the durable question is CUDA's staying power. If the software moat holds for another decade, which developer inertia and the compounding library ecosystem make plausible, Nvidia's current premium is defensible. If custom silicon closes the software gap faster than expected, the multiple has room to compress. On current evidence, the moat is deep.

TickerXray Reports

Forensic-grade stock analysis, powered by AI

Every report runs 12 quantitative models and generates an AI investment thesis. From Piotroski scores to manipulation detection -- get the full picture in seconds.

12 forensic models

Piotroski, Altman, Beneish, DuPont & more

AI investment thesis

Synthesized outlook on every stock

Manipulation detection

Spot red flags before they hit the news

150,000+ tickers

Global coverage across 60+ exchanges

Expected return

Forward return projections for every stock

Real-time data

Live prices, insider trades, news sentiment

Free accounts get 1 report per month. Pro gets unlimited.