Nvidia at 36x Earnings: The Multiple Looks Rich Until You Model the Cash
Trailing P/E is the wrong frame for a company generating $96.7 billion in free cash flow.
Gross margins fell from 75% to 71% in FY2026. Free cash flow hit $97 billion. Those two facts belong in the same sentence.
Nvidia's gross margin fell from 75.0% in FY2025 to 71.1% in FY2026. That is the number bears cite when arguing the Blackwell cycle marks peak profitability for the company.
They are pointing to the right data and drawing the wrong conclusion.
FY2026 free cash flow came in at $96.7 billion on $215.9 billion in revenue. That is a 44.8% FCF margin. For context, Apple's FCF margin runs around 26%. The company earning a lower gross margin on each dollar of revenue also generated more absolute free cash flow than almost any business in history.
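The headline margin follows directly from the two figures above; a quick sketch using the article's rounded numbers:

```python
# FY2026 figures cited above, in $ billions (article's rounded values)
fcf = 96.7
revenue = 215.9

fcf_margin = fcf / revenue  # free cash flow per dollar of revenue, ~0.448

print(f"FCF margin: {fcf_margin:.1%}")
```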
The margin compression story matters. It just does not determine whether the stock is priced correctly. The cash flow story does.
Two years ago, Nvidia was understood primarily as a gaming and workstation GPU company with an interesting data center division on the side. That framing no longer fits. The data center segment now generates the overwhelming majority of revenue, and its economics are structurally different from anything Nvidia sold before 2023.
The H100 cycle, which drove the FY2024 and FY2025 margin expansion, had unusually favorable economics. Supply was constrained relative to demand, and hyperscalers were paying meaningful premiums to secure allocation. Gross margins reaching 75% reflected that dynamic: extraordinary pricing power in a seller's market where the buyer had no alternative.
FY2026 saw the transition to the Blackwell architecture, which introduced higher manufacturing complexity and cost per unit; initial Blackwell yields ran below mature H100 production, compressing margins through the early quarters of the ramp. At the same time, revenue grew from $130.5 billion to $215.9 billion, a 65.5% increase. Higher volume at slightly lower margins still produced $153.5 billion in gross profit versus $97.9 billion the prior year.
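A quick consistency check on those figures, using the article's rounded revenue and margin numbers (small rounding differences against the reported figures are expected):

```python
# Revenue ($ billions) and gross margin as reported, rounded
rev_fy25, margin_fy25 = 130.5, 0.750
rev_fy26, margin_fy26 = 215.9, 0.711

gp_fy25 = rev_fy25 * margin_fy25   # implied FY2025 gross profit, ~97.9
gp_fy26 = rev_fy26 * margin_fy26   # implied FY2026 gross profit, ~153.5
growth = rev_fy26 / rev_fy25 - 1   # ~65.4% on rounded inputs (65.5% as reported)

print(round(gp_fy25, 1), round(gp_fy26, 1), f"{growth:.1%}")
```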
The Magnificent 7 as a group had a difficult stretch in early 2026, with several names down 20% or more from late-2025 peaks on tariff and macro growth concerns. Nvidia was not immune to the sentiment shift. The underlying trajectory on the income statement did not reverse.
Revenue compounded from $27.0 billion in FY2023 to $215.9 billion in FY2026, roughly 8x in three fiscal years. Operating income went from $4.2 billion to $130.4 billion over the same period. These are not rounding errors in a spreadsheet model.
The operating margin trajectory tells a more nuanced story. FY2023 margins compressed to 15.7% as Nvidia cycled through post-gaming inventory and invested in data center infrastructure ahead of the H100 ramp. FY2024 saw the inflection: 54.1% operating margins on an H100 demand surge that caught the entire supply chain short. FY2025 peaked at 62.4%. FY2026 came in at 60.4%, a modest step back from the peak, but still a level that most enterprise software companies would not reach in their lifetimes.
Earnings per share has beaten consensus estimates in seven consecutive reported quarters. The beat rate has moderated from the 8% surprise in Q3 FY2025 to a 6.6% beat in Q4 FY2026 ($1.62 actual versus $1.52 estimate), but the pattern has been consistent: management guides conservatively, and the business performs above guidance.
Balance sheet: $10.6 billion in cash against $7.5 billion in total debt. Net cash positive, no leverage risk, and EBITDA of $144.6 billion means the debt load is a rounding error. The capital structure is clean, which matters when the question is how much of the FCF can be deployed toward shareholder returns versus balance sheet defense.
The Blackwell architecture transition introduced two cost realities that directly explain the margin step-down. First, the GB200 NVL72 rack-scale systems are physically larger and more complex to manufacture than H100-based designs. The cost of goods sold per compute unit increased. Second, Nvidia shifted toward custom liquid cooling configurations and larger integrated rack deployments that require more assembly and validation work. Early Blackwell production yields were lower than what TSMC had achieved with mature H100 node production, compressing margins in the initial quarters of the ramp.
The counterargument rests on performance-per-dollar. Blackwell systems deliver roughly three to four times the training throughput of H100 equivalents for large language model workloads. Hyperscalers building data center capacity in 2026 are making a different cost comparison than they made in 2023: the relevant benchmark is Blackwell versus AMD's MI350 or custom accelerators, not H100. The competitive landscape for the highest-performance AI accelerators remains thin enough that Nvidia retains substantial pricing authority.
Gross margins at 71.1% in FY2026 are almost certainly suppressed by ramp dynamics rather than secular pressure. The H100 margin trajectory offers a reasonable historical analogy: H100 margins peaked above 75% roughly 18 months into volume production as TSMC yield rates matured. Whether Blackwell follows the same curve depends on AMD's ability to close the ROCm software gap and whether hyperscaler custom silicon programs (Google TPU, Amazon Trainium) capture a larger share of new cluster builds.
The Marvell-Nvidia partnership announced in late March 2026, targeting custom silicon for AI data center interconnect, extends the Nvidia architecture into networking optimized specifically for Blackwell GPU topologies. This raises switching costs for hyperscalers who might otherwise slot AMD or custom accelerators at the margins of new deployments.
The Rubin architecture, targeted for production ramp in 2027, is already generating analyst projections of 40% to 50% incremental revenue potential above Blackwell peak run rates. That number is speculative, but Nvidia's execution cadence supports taking it seriously. The product cycle has been disciplined: H100 in volume production by 2023, Blackwell ramping 2025 to 2026, Rubin on deck for 2027. Each cycle has delivered performance improvements that maintained the pricing premium over alternatives.
Inference workloads are the underappreciated medium-term driver. Training large foundation models requires concentrated compute from a small number of hyperscalers and AI labs. Inference, running deployed models at scale across billions of daily queries, requires distributed compute from a much broader customer base including enterprises, cloud mid-tier providers, and telecom operators. As the installed base of deployed AI models grows, inference demand broadens the addressable market well beyond the six or seven hyperscalers that dominate training procurement.
Sovereign AI spending is a third contributor that does not show up clearly in public filings. Governments across Europe, the Middle East, and Southeast Asia are funding domestic AI infrastructure as a policy priority. These projects are GPU-intensive and tend to be less price-sensitive than hyperscaler procurement. OpenAI's $122 billion funding round announced in April 2026, which includes significant compute commitments, signals continued concentration of AI capex at the frontier, where Nvidia hardware remains the default.
The consensus revenue estimate for FY2027 sits in the range of $260 billion to $290 billion, implying another 20% to 35% top-line growth. Hitting the low end of that range would put Nvidia's annual revenue above the entire 2023 market cap of many Fortune 100 companies.
Nvidia repurchased $40.1 billion of its own shares in FY2026, following $33.7 billion in FY2025. The company has deployed $83.8 billion in buybacks across the two most recent fiscal years. To frame that number: $83.8 billion exceeds the entire market capitalization of most companies in the S&P 500.
The share count effect is modest in percentage terms because the buybacks are occurring at a multi-trillion dollar market cap. Shares outstanding fell from 25.38 billion in 2021 to 24.48 billion by the end of 2025, a 3.5% reduction over four years. At the current valuation, retiring even 1% of shares annually would require approximately $42 billion in repurchases, and that is before offsetting dilution from stock-based compensation issuance, which ran at $6.4 billion in FY2026.
Stock-based compensation at $6.4 billion is high in absolute terms but represents only 3.0% of FY2026 revenue. More importantly, the net buyback effect after accounting for SBC dilution was still substantially positive for shareholders, as $40.1 billion in repurchases against $6.4 billion in SBC issuance means the company was removing shares at a rate six times faster than it was issuing them.
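The share-count arithmetic in the two paragraphs above can be sketched as follows, using the market cap and share counts cited in this article:

```python
market_cap = 4.24e12      # approximate market cap cited in the article, $
buybacks_fy26 = 40.1e9    # FY2026 repurchases, $
sbc_fy26 = 6.4e9          # FY2026 stock-based compensation, $

# Cost of retiring 1% of shares at the current valuation, ~$42 billion
one_pct_cost = 0.01 * market_cap

# Cumulative share-count reduction, 2021 -> end of 2025, ~3.5%
reduction = (25.38 - 24.48) / 25.38

# Dollar ratio of repurchases to SBC issuance, ~6.3x (a rough proxy,
# since SBC is valued at grant rather than at repurchase prices)
net_ratio = buybacks_fy26 / sbc_fy26

print(round(one_pct_cost / 1e9, 1), f"{reduction:.1%}", round(net_ratio, 1))
```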
Dividends remain symbolic at $1.0 billion annually, a yield of approximately 0.02% on the current market cap. Management is not making a dividend argument. The argument is buybacks: tax-efficient, flexible, and concentrated in periods when management believes the stock is attractively priced relative to future earnings power. Buying back $40 billion at a $4 trillion valuation is itself a statement about where management sees the earnings trajectory.
The trailing P/E of 35.6x sits near the lower end of the range Nvidia has traded at since the AI infrastructure trade began in earnest in 2023. At FY2023 earnings of $4.4 billion, the stock was priced at hundreds of times trailing earnings. At FY2026 earnings of $120.1 billion, the same multiple framework produces a far more grounded valuation conversation.
Consensus estimates for FY2027 (ending January 2027) imply approximately $150 billion to $175 billion in net income if revenue grows 25% to 35% and operating margins hold near 60%. On those forward estimates, the stock trades at roughly 24x to 28x earnings, in line with the broader S&P 500 median multiple, which attaches to businesses growing at a fraction of Nvidia's current rate.
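Under the assumptions in that paragraph (market cap held at roughly $4.24 trillion, as cited later in the article, against the $150 billion to $175 billion net income range), the forward multiple range falls out directly:

```python
market_cap = 4.24e12             # approximate market cap, $
ni_low, ni_high = 150e9, 175e9   # FY2027 consensus net income range, $

pe_high = market_cap / ni_low    # ~28x at the low end of earnings
pe_low = market_cap / ni_high    # ~24x at the high end of earnings

print(f"forward P/E: {pe_low:.1f}x to {pe_high:.1f}x")
```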
The EV/EBITDA ratio of 44.1x looks elevated in isolation. Place it in context: EBITDA was $6.0 billion in FY2023 and $144.6 billion in FY2026. A multiple that appears rich relative to a static snapshot looks different when the denominator has grown 24-fold in three years and the forward trajectory remains positive.
FCF yield at current market cap: $96.7 billion divided by approximately $4.24 trillion equals a 2.3% FCF yield. That sits below the 10-year Treasury yield, which is the standard argument for why AI-adjacent stocks remain structurally overvalued. A 2.3% FCF yield compounding at 25% annually crosses 4% within three years and 6% within five, holding the market cap flat, under assumptions that do not require believing the Rubin cycle plays out exactly as management describes.
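The compounding claim is easy to check; a minimal sketch, assuming a flat market cap and 25% annual FCF growth:

```python
yield_now = 0.023   # current FCF yield
growth = 0.25       # assumed annual FCF growth, market cap held flat

def first_crossing(threshold):
    """Return the first year in which the implied FCF yield crosses threshold."""
    y, year = yield_now, 0
    while y < threshold:
        y *= 1 + growth
        year += 1
    return year

cross_4pct = first_crossing(0.04)  # 2.3% -> 2.9% -> 3.6% -> 4.5%, year 3
cross_6pct = first_crossing(0.06)  # crosses 6% in year 5
print(cross_4pct, cross_6pct)
```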
The standard Nvidia moat argument centers on CUDA: a software ecosystem built over 15 years that gives developers a strong reason to default to Nvidia hardware regardless of competing hardware benchmarks. Every foundation model trained on Nvidia GPUs, every inference stack optimized for CUDA kernels, and every researcher who learned GPU programming on the Nvidia toolchain represents another brick in that moat.
AMD has made genuine progress with its ROCm software stack and MI300X and MI350 accelerators, which have shown competitive performance on specific benchmark workloads. Several hyperscalers have reportedly deployed AMD hardware at scale for certain inference tasks. The question is whether AMD can close the software and ecosystem gap fast enough to break the Nvidia default assumption in new cluster procurement decisions.
Custom silicon from hyperscalers represents the structural risk the valuation must price over a five-year horizon. Google's TPUs, Amazon's Trainium, Microsoft's Maia, and Meta's MTIA chips are purpose-built for specific workload patterns. They do not need to beat Nvidia on general benchmarks; they need to beat Nvidia on total cost of ownership for the specific tasks those companies run at scale. As custom silicon programs mature, they are likely to absorb an increasing share of internal hyperscaler compute, reducing the addressable market for third-party GPU sales.
The Marvell partnership for AI data center interconnect is a meaningful strategic response to that threat: by embedding Nvidia-optimized networking into the data center architecture, the company extends the total solution value and increases the switching cost for any buyer considering a heterogeneous accelerator mix.
China revenue impairment is the most concrete near-term risk. Nvidia is currently prohibited from selling its most advanced AI chips to Chinese customers, a restriction that has been tightened progressively since 2023. China represented a material share of historical data center GPU demand, with credible estimates suggesting 20% or more of AI accelerator revenue in 2022 and 2023 came from Chinese buyers. The coverage circulating in early April 2026 asking whether investors should accumulate Nvidia ahead of a potential China policy reversal reflects genuine uncertainty about whether or when the export restrictions ease.
Custom silicon displacement is the slower-moving but potentially larger structural risk. If Google, Amazon, Microsoft, and Meta collectively capture even 15% of the training and inference workload currently running on Nvidia GPUs through internally developed silicon, the revenue impact at current scale approaches $30 billion annually. These programs take years to mature and may never achieve the performance-per-watt of Nvidia's purpose-built AI accelerators, but they do not need to. They need only to be good enough at a lower total cost for the deployer's proprietary workloads.
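A rough sizing of that displacement scenario, assuming data center compute accounts for roughly $200 billion of the $215.9 billion revenue base (an illustrative assumption, not a disclosed segment figure):

```python
dc_revenue = 200e9       # assumed data-center compute revenue, $ (not disclosed)
displaced_share = 0.15   # hypothetical share captured by hyperscaler custom silicon

revenue_at_risk = displaced_share * dc_revenue  # ~$30 billion annually
print(round(revenue_at_risk / 1e9))
```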
Tariff and supply chain concentration risk rounds out the near-term picture. Nvidia designs but does not manufacture its chips; it relies on TSMC for the advanced nodes that underpin Blackwell and Rubin performance. That capacity has no short-term substitute, so any disruption to TSMC production, whether through geopolitical escalation in the Taiwan Strait or US-Asia trade friction affecting semiconductor equipment imports, would flow straight through to Nvidia's output. These risks are not new and are widely discussed, which means they are partially priced in. Partially.
The margin compression bears have identified a real trend. Gross margins falling from 75% to 71% during the Blackwell ramp is not nothing. If margins continue to compress toward 65% over the next two product cycles, the FCF story changes meaningfully.
But $96.7 billion in annual free cash flow, at a 44.8% FCF margin, from a business that did not exist in its current form three years ago, is not a thesis that collapses under modest margin pressure. The $40.1 billion buyback program confirms management's own read: this is a company confident enough in its future earnings trajectory to retire shares aggressively at a four trillion dollar valuation.
The bear case requires either custom silicon displacement accelerating faster than current adoption curves suggest, China revenue remaining permanently impaired, or gross margins compressing below 65% as Blackwell matures. Any one of those outcomes changes the calculus materially. The bull case is simpler: the AI infrastructure build-out has more runway than a single architecture transition suggests, the FCF engine continues to compound, and 35 times earnings on $120 billion of net income is a multiple the business can grow through.