Epistrophy Week Ahead

The Week Of March 2, 2026

The attack on Iran and the killing of the country’s supreme leader, Ayatollah Ali Khamenei, will surely add a thick layer of volatility to the coming week (and months and years). A less-publicized but serious coordinated wave of cyber activity occurred early Saturday alongside U.S.–Israeli military operations targeting locations in Iran.

In that context, CrowdStrike (CRWD:NASDAQ) earnings and a look at how AI is reshaping the world of hacking will surely be a focus in an otherwise unpredictable week. Broadcom (AVGO:NASDAQ) earnings also take on a different cast after Meta’s big move away from custom silicon and toward AMD, as discussed below.

Hey! Check out our prior notes, some great artwork and the full research archive at 👉https://epistrophy.beehiiv.com.

As always, I’m focused on three things:
1) Technology-driven change;
2) the latest in innovation and startup trends; and
3) stock fraud.

Companies Discussed

| Ticker | Name | Market Cap ($B) | Price |
| --- | --- | --- | --- |
| AMD | Advanced Micro Devices | $326.42 | $200.21 |
| META | Meta Platforms | $1,639.61 | $648.18 |
| NVDA | NVIDIA | $4,305.72 | $177.19 |
| AVGO | Broadcom | $1,515.07 | $319.55 |

In This Note:

The Calling of St. Matthew, 1600 by Caravaggio
Source: San Luigi dei Francesi, Rome, Italy

META Calls AMD

A $72B Deal Secures AMD’s Inference Future

Advanced Micro Devices (AMD: NASDAQ) booked the deal of a lifetime with Meta Platforms (META: NASDAQ), structuring a five-year agreement to deploy up to 6 gigawatts of AMD Instinct GPUs inside Meta’s AI infrastructure. The first gigawatt is binding and begins shipping in 2H 2026 – yeah, in just a few months. The remaining 5 gigawatts vest against milestones. Epistrophy Capital Research estimates the deal could bring AMD at least $72 billion in additional revenue over the next five years. 

As Lisa Su told us on a call just after the deal was announced:  “When you talk about gigawatt scale deployments and six gigawatts over five years, that is transformational in terms of where we see our business.”

This signals a new era that we’ve seen coming for a long time: inference chips challenging the training dominance of NVIDIA (NVDA: NASDAQ). (See our September 1, 2025 note: “NVIDIA’s Ferrari Problem.”)

The technical core of this deal is a custom Instinct GPU derived from the MI450 chiplet architecture and configured around Meta’s generative AI inference workloads. Su described the process as starting “with the workload first,” then working backward through AMD’s chiplet building blocks. “What’s unique about our chiplet architecture is we have all the building block pieces, but you can put them together and configure them in different ways to give you different performance and system characteristics,” she said.

MI450 uses a chiplet-based architecture that separates compute dies, I/O dies and cache or memory resources into distinct silicon blocks within a single package. Each block performs a specific function, with compute dies handling matrix math, I/O dies managing memory and connectivity and cache resources supporting data locality and bandwidth. These chiplets are linked by a high-bandwidth on-package interconnect that moves data between them at extremely high speed and low latency. Configuration latitude allows changes in active compute density, memory stack allocation and interconnect topology within an AMD “silicon family.” This is about inference-dominant clusters, memory bandwidth per token, batching efficiency and latency stability – a different problem from the pure peak training throughput where NVIDIA has dominated. A configured MI450 variant can bias clock policies, voltage curves and power gating toward efficiency at Meta’s dominant production loads.

With no additional tape-out, customization concentrates in configuration, packaging and system integration. “It’s highly leveraging the base capability,” Su said, describing a design that captures workload-specific optimizations while retaining the MI450 ecosystem. That constraint shifts engineering effort to firmware, board layout, validation and rack integration. Shared instruction compatibility preserves ROCm portability. Kernel and library work executed under Meta’s deployment can propagate across other Instinct customers.

ROCm Sock’em Software

Radeon Open Compute (ROCm), AMD’s open software platform, is about to be tested at scale. Long-uptime inference clusters surface memory fragmentation, scheduler contention, tail latency spikes and thermal throttling. Su emphasized that ecosystem optimizations “extend well beyond this engagement,” strengthening the broader Instinct franchise. If 95%-plus of the software work remains portable, Meta’s deployment functions as forced hardening for AMD’s AI stack – which will help AMD sell elsewhere.

The GPUs and 6th Gen EPYC “Venice” CPUs will reside in Meta-built racks aligned with Helios – AMD’s rack-scale AI system architecture of hardware and software. At gigawatt scale, rack geometry and cooling topology determine compute density per data hall. Bus bar capacity, liquid loop routing and service clearances shape sustained operating frequency under load.

Board-level integration aligns EPYC hosts with MI450 accelerators to reduce orchestration overhead. Inference orchestration—request routing, embedding lookup, caching and control-plane tasks—consumes CPU cycles that directly affect GPU utilization. Su said CPUs remain “a strategic foundation of the compute stack,” particularly as inference and agentic AI scale. AMD’s chiplet architecture extends across both EPYC and Instinct, allowing independent scaling of compute cores, cache and I/O dies on the CPU side while tuning accelerator resources on the GPU side. Venice and the next-gen Zen 6-based “Verano” target performance per watt per dollar, aligning CPU throughput with accelerator demand while preserving architectural flexibility at rack scale.

According to Their Number

So how many chips is this? Assume liquid-cooled AI racks at 80–120 kilowatts each and accelerator boards in the 700–1K watt range once host and networking overhead are included. One gigawatt of deployed capacity implies roughly 800K to 1.2 million accelerators at steady state, depending on redundancy and utilization. If shipments in 2H 2026 represent roughly one-third of that first gigawatt build, year-one volumes could plausibly fall in the 250K–400K GPU range.
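The accelerator math above can be sketched directly. This is a back-of-envelope check, not disclosed deal terms: the all-in wattage per accelerator node (board plus allocated host and networking overhead) of roughly 833–1,250 W is our assumption, chosen to reproduce the 800K–1.2M-per-gigawatt range.

```python
# Back-of-envelope: accelerators supported per gigawatt of deployed capacity.
# The per-node wattage figures are assumptions, not contract terms.

GW = 1_000_000_000  # watts in a gigawatt


def accelerators_per_gw(watts_per_node: float) -> int:
    """Nodes a gigawatt of IT power can sustain at steady state."""
    return int(GW / watts_per_node)


low = accelerators_per_gw(1_250)  # heavier nodes -> fewer accelerators
high = accelerators_per_gw(833)   # lighter nodes -> more accelerators

print(f"{low:,} to {high:,} accelerators per GW")
# -> 800,000 to 1,200,480 accelerators per GW

# If 2H 2026 shipments cover roughly one-third of the first gigawatt:
print(f"{low // 3:,} to {high // 3:,} GPUs in year one")
```

The one-third build assumption lands year-one volume near the 250K–400K range cited above.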

Su framed revenue as “something like double-digit billions per gigawatt.” Even at $12–15 billion per gigawatt, average system-level revenue per accelerator node would likely land in the low five figures after allocating value across GPUs, CPUs, networking and rack integration. Over the full 6 gigawatts, total accelerator count could approach 5–7 million units over five years if rack densities and board wattages remain constant. Cumulative contract value would then stretch into the $70–100+ billion range across silicon and systems. Variability hinges less on nominal chip price and more on rack power density and the revenue mix between GPU silicon and the broader Helios stack.
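The revenue framing reduces to two divisions. A minimal sketch, assuming the $12–15 billion per gigawatt range and the 0.8M–1.2M nodes per gigawatt derived above (both our assumptions, not contract figures):

```python
# Revenue per accelerator node under Su's "double-digit billions per gigawatt"
# framing. Dollar-per-GW and node-count inputs are assumptions, not deal terms.

def revenue_per_node(rev_per_gw_billions: float, nodes_per_gw: int) -> float:
    """System-level dollars per accelerator node."""
    return rev_per_gw_billions * 1e9 / nodes_per_gw


# Low five figures per node, consistent with the text:
print(f"${revenue_per_node(12, 1_200_000):,.0f}")  # -> $10,000
print(f"${revenue_per_node(15, 800_000):,.0f}")    # -> $18,750

# Full 6 GW program:
print(f"${12 * 6}B to ${15 * 6}B cumulative")  # -> $72B to $90B cumulative
```

Note the per-node figure is an average across GPUs, CPUs, networking and rack integration, which is why it sits well below standalone GPU list prices.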

| | 2026 (2H) | 2027 | 2028 | 2029 | 2030 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| GW Deployed | 0.33 GW | 1.00 GW | 1.50 GW | 1.50 GW | 1.67 GW | 6.0 GW |
| Chips (Low Case) | 260K | 800K | 1,200K | 1,200K | 1,336K | 4.8M |
| Chips (High Case) | 400K | 1,200K | 1,800K | 1,800K | 2,004K | 7.2M |
| Revenue @ $12B/GW | $4.0B | $12.0B | $18.0B | $18.0B | $20.0B | $72B |
| Revenue @ $15B/GW | $5.0B | $15.0B | $22.5B | $22.5B | $25.0B | $90B |

The agreement embeds a performance-based warrant for up to 160 million AMD shares at $0.01 per share, vesting against shipment milestones and escalating stock price thresholds up to $600. Exercise also depends on technical and commercial milestones. Su characterized the structure as “a very aligned incentive structure,” noting that Meta is “making a big bet on deploying at large scale for AMD.”
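For scale, the warrant’s intrinsic value at a few example stock prices can be computed from the disclosed terms (160M shares, $0.01 strike). The tranche-by-tranche vesting breakdown is not public, so this hedged sketch values only the fully vested grant:

```python
# Intrinsic value of the fully vested warrant at example AMD stock prices.
# Vesting tranches and milestone splits are not public; this illustrates
# only the headline grant size (160M shares, $0.01 strike).

SHARES = 160_000_000
STRIKE = 0.01


def intrinsic_value(stock_price: float) -> float:
    """Dollar value of the warrant if fully vested and exercised."""
    return SHARES * max(stock_price - STRIKE, 0.0)


for px in (200, 400, 600):
    print(f"@${px}: ${intrinsic_value(px) / 1e9:.1f}B")
```

At the top $600 threshold, the grant approaches $96 billion in intrinsic value, which is why Su can call the structure “very aligned”: Meta’s upside vests only alongside AMD shipment and share-price milestones.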

Engineering risk concentrates in validation breadth and configuration control. Variant discipline across firmware, board revisions and rack assumptions must remain tight to preserve ecosystem portability. Thermal margins at high density require conservative design to avoid throttling during peak inference surges. The first gigawatt becomes the proving ground for sustained throughput and cost-per-query efficiency.

The Called and the Chosen

The competitive implications are immediate. For NVIDIA, the agreement establishes a parallel supply line into Meta’s AI infrastructure at meaningful scale. NVIDIA retains software depth and an integrated rack ecosystem, but dual-vendor architectures gain credibility when an alternate stack ships at density and sustains uptime. As Su put it, AMD is positioning itself “at the core of their next-generation AI infrastructure.” Execution at gigawatt scale narrows NVIDIA’s insulation inside hyperscaler procurement.

For Broadcom (AVGO: NASDAQ), the move is toward the big GPU makers and away from custom silicon – and, by extension, the fabric surrounding Broadcom-based compute. Hyperscaler ASIC programs target workload-specific acceleration and high-performance interconnect. A configurable GPU variant tuned within a standardized chiplet architecture addresses part of that specialization demand without a bespoke silicon cycle. At the same time, rack-scale deployments expand the appetite for switching, optics and connectivity silicon. Compute diversification coexists with rising fabric intensity per megawatt.

Meta’s ambition to build tens of gigawatts this decade sets the trajectory. Execution on the first gigawatt determines expansion across the remaining contingent capacity. Su called the partnership “transformational.” That label will be measured in rack density, sustained efficiency and the ability to convert megawatts into tokens at predictable cost. Six gigawatts defines intent. The shipped megawatts will define reality.

Tweet O’ The Week

A week of hits with Yahoo! Finance’s Josh Lipton, Schwab Network’s Jenny Horne and NewsNation’s Connell McShane.

Epistrophy In The News

On Yahoo Finance with Josh Lipton, we talked about Comfort Systems (FIX:NYSE) and why it’s the perfect expression of AI spending right now. On Schwab Network with Jenny Horne, we examined the reported $100 billion commitment between Meta Platforms and Advanced Micro Devices. The discussion separated training economics from inference economics, rack density from power constraints and narrative from cash flow. Capital intensity at that scale reshapes supplier leverage, grid demand and competitive moats across the AI stack.

In two hits on NewsNation with Connell McShane, we dissected President Trump’s State of the Union claims on artificial intelligence and electricity alongside Nvidia earnings, and later in the week took on the breaking news of the Trump Administration’s ban on Anthropic’s AI platform and its embrace of new, unimaginable robot-controlled weapons systems.

Who knew?

📆 of Epistrophy Events

| Ticker | Name | Market Cap | Expected Date | Type |
| --- | --- | --- | --- | --- |
| MDB | MongoDB | $27 B | Mar 2 | Earnings |
| CSP | Construction Spending | - | Mar 2 | Economic Event |
| CRWD | CrowdStrike | $96 B | Mar 3 | Earnings |
| VEEV | Veeva Systems | $30 B | Mar 4 | Earnings |
| OKTA | Okta | $13 B | Mar 4 | Earnings |
| DG_FULL | Factory Orders (M3 Full Report) | - | Mar 5 | Economic Event |
| EMPSIT | Employment Situation | - | Mar 6 | Economic Event |
| SXSW | South By Southwest | - | Mar 6 | Festival |
| HPE | Hewlett Packard Enterprise | $28 B | Mar 9 | Earnings |
| - | Game Developers Conference (GDC) | - | Mar 9 | Conference |
| CPI | Consumer Price Index | - | Mar 11 | Economic Event |
| SNPS | Synopsys Users Group (SNUG) 2026 | $82 B | Mar 11 | Conference |
| ADBE | Adobe | $107 B | Mar 12 | Earnings |
| RBRK | Rubrik | $11 B | Mar 12 | Earnings |
| PPI | Producer Price Index | - | Mar 12 | Economic Event |
| RS | Advance Retail & Food Services Sales | - | Mar 16 | Economic Event |
| IP | Industrial Production & Capacity Utilization | - | Mar 16 | Economic Event |
| NVDA | NVIDIA GTC AI Conference | $4,499 B | Mar 16 | Conference |
| OFC 🔦 | Optical Fiber Communications Conf. | - | Mar 16 | Conference |
| NHC | New Residential Construction | - | Mar 17 | Economic Event |
| FOMC | FOMC two-day meeting | - | Mar 17 | Economic Event |
| NRS | New Residential Sales | - | Mar 24 | Economic Event |
| DG_ADV | Durable Goods Orders (Advance) | - | Mar 25 | Economic Event |
| - | RSA Conference 2026 | - | Mar 26 | Conference |
| GDP | GDP (Third Estimate, Q4 2025) | - | Mar 27 | Economic Event |
| PCE | Personal Income & Outlays (incl. PCE) | - | Mar 27 | Economic Event |

Availability This Week

In San Francisco at the Ferry Building all week and available for interviews, background conversations and client briefings on AI infrastructure, semiconductors, power constraints and cyber risk.

Written reports are available to clients, with video summaries on YouTube, and of course our popular summaries of the summaries on Instagram, TikTok, and YouTube Shorts.

We certify that (1) the views expressed in this report accurately reflect our views about all of the subject companies and securities and (2) no part of our compensation was, is or will be directly related to the specific recommendations or views expressed in this report.

Important disclosures

Important disclosures are available by calling (347) 619-2489 or writing to Epistrophy Capital Research, One Ferry Building, Suite 201, San Francisco, CA 94105.

Epistrophy Capital Research is an independent research provider and does not operate as a financial institution. Epistrophy explicitly does not provide investment advice, stock recommendations, or solicitations to buy or sell any securities.

The research reports provided by Epistrophy Capital Research contain opinions derived from publicly available information, issuer communications, recognized statistical services, and other reputable sources considered reliable. However, Epistrophy does not independently verify the accuracy or completeness of such information and explicitly disclaims responsibility for any errors or omissions.

Opinions and analysis contained within Epistrophy's research reports are current only at the time of publication and are subject to change without notice. Readers must independently verify facts and conduct their own due diligence before making investment decisions.

Epistrophy Capital Research does not consider or evaluate individual investor circumstances, including investment objectives, financial situations, or risk tolerance. Investing in securities, particularly small-cap and micro-cap stocks, involves substantial risks, including significant volatility and potential loss of principal. Readers are strongly advised to consult their financial advisor or another qualified professional before acting on any information provided. Readers should assume Epistrophy Capital Research, its principals or its contributors may have positions, long or short, in any of the companies discussed, and the Epistrophy Capital Research principals or contributors may have had or currently have business interests in the companies discussed.

Past performance referenced in Epistrophy reports is not indicative of future results. Security prices can fluctuate widely, and investors should be aware that investments can result in significant financial losses. Unless explicitly stated otherwise, prices quoted in reports reflect market closing prices from the previous trading day.

Epistrophy Capital Research publications are intended solely for direct recipients and should not be redistributed or shared with third parties without explicit permission from Epistrophy Capital Research LLC.

Epistrophy Capital Research reports are provided strictly for informational purposes and do not constitute a comprehensive analysis of any company, security, or industry. No content within these reports should be considered accounting, tax, legal, or professional advice.

Links or references to third-party websites or external resources are provided solely for informational convenience. Epistrophy Capital Research expressly disclaims endorsement of, and responsibility for, the content, accuracy, or reliability of such external information. Accessing third-party information is done entirely at the user's own risk.

For additional details, clarification, or specific inquiries regarding Epistrophy Capital Research reports, please contact Epistrophy Capital Research LLC directly.
