Epistrophy Week Ahead
The Week Of June 23, 2025
Micron (MU: NASDAQ) reports earnings this week, and with them, a glimpse into the state of AI infrastructure’s physical backbone. Memory is no longer a commodity business; it’s a geopolitical chess piece. The company’s high-bandwidth memory, in particular, has become a bottleneck and a bargaining chip—caught between hyperscaler demand and trade war politics. We’ll be watching not just for margins or shipments, but for clues about whether the U.S. can sustain its AI ambitions without the parts it increasingly cannot make.
As always, I’m focused on three things:
1) technology-driven change;
2) the latest in innovation and startup trends; and
3) stock fraud.
Companies Discussed
| Ticker | Name | Market Cap | Current Price |
|---|---|---|---|
| MU | Micron Technology | $136.43 B | $122.08 |
| NVDA | NVIDIA | $3,517.75 B | $144.17 |
| AMD | Advanced Micro Devices | $210.10 B | $129.58 |
In This Note:
Micron: HBM For Dummies
Of Micron and High Bandwidth Memory
It looks like a stack of pancakes glued to a logic die. It behaves like a firehose compared to the dribble of conventional memory. And in the last quarter it likely made Micron Technology (MU: NASDAQ) more than $1 billion in revenue, about $11 million a day.
So what better time to take a top-down look at High Bandwidth Memory: how it differs from the DRAM of old, and how it’s changing the business of both Micron and the entire semiconductor memory market.

DRAM vs. HBM
DRAM, or Dynamic Random Access Memory, stores each bit of data in a separate capacitor within an integrated circuit. Because capacitors leak charge over time, DRAM must constantly refresh—typically thousands of times per second—giving it the “dynamic” in its name. While slower than static RAM (SRAM), DRAM offers far greater density and lower cost per bit, making it ideal for main system memory. Standard DDR (Double Data Rate) DRAM interfaces—now in their fifth generation—transfer data on both the rising and falling edges of the clock signal, but physical distance from the CPU and limited parallelism constrain bandwidth and increase latency. This architecture made sense when compute cycles were expensive and data movement was incidental. But not anymore.
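To put numbers on that constraint, here’s a back-of-the-envelope sketch in Python. The DDR5-6400 figure is a common part rating used for illustration, not a Micron disclosure:

```python
# Peak bandwidth of a single DDR5 channel, back-of-the-envelope.
# "DDR" means data moves on both clock edges, so a DDR5-6400 part
# performs 6,400 mega-transfers per second over a 64-bit (8-byte) bus.
transfers_per_sec = 6400e6  # DDR5-6400, in transfers/second
bus_width_bytes = 8         # 64-bit channel

peak_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"DDR5-6400 peak per channel: {peak_gb_s:.1f} GB/s")  # ~51.2 GB/s
```

Even a fully populated server with a dozen channels tops out in the hundreds of gigabytes per second, which is exactly the ceiling HBM was built to break.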
Also, traditional DRAM is laid out like suburban housing—rows of identical memory cells spaced widely across DIMMs (Dual In-line Memory Modules) slotted into the motherboard. They’re cheap and easy to build, but the commute is long: electrons must traverse centimeters of circuit board and socket to reach the CPU or GPU.
HBM is urban housing by contrast—memory dies stacked vertically atop a logic die and bonded via Through-Silicon Vias (TSVs), with the entire stack placed just millimeters from the processor.
Getting here has been an evolution: Micron’s DRAM roadmap has advanced from 1x through 1γ, each node squeezing more bits from the same sliver of silicon at lower power.
The HBM configuration reduces latency, minimizes power loss and dramatically increases bandwidth—up to 1.2 TB/s in HBM3E configurations. Each stack can have multiple layers (8-high, 12-high, etc.), with the latest configurations exceeding 24 GB per stack and using significantly less power per bit transferred.

Micron HBM3E 12-high boasts 36GB capacity, a 50% increase over last generation HBM3E 8-high offerings.
Source: Micron
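Where does 1.2 TB/s come from? Mostly width. Here is a rough reconstruction, with the per-pin speed as my assumption rather than a spec-sheet quote:

```python
# HBM gets its bandwidth from an extremely wide interface, not fast pins.
# Pin speed below is an assumed HBM3E-class figure; exact rates vary by part.
interface_bits = 1024   # bit width of one HBM stack's interface
gbps_per_pin = 9.6      # assumed per-pin signaling rate, Gb/s

stack_tb_s = interface_bits * gbps_per_pin / 8 / 1000
print(f"Per-stack bandwidth: ~{stack_tb_s:.1f} TB/s")  # ~1.2 TB/s

# Capacity scales with stack height: twelve 3 GB (24 Gb) dies yields 36 GB.
print(f"12-high capacity: {12 * 3} GB")
```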
The tradeoff is cost, complexity and yield. HBM wafers are harder to manufacture and assemble than planar DRAM and they require expensive interposers and advanced packaging techniques. But when the bottleneck is bandwidth, not transistor count, those tradeoffs become tolerable—even mandatory.
Generative AI broke the memory bank. Unlike traditional compute tasks, inference workloads running on large language models are bottlenecked not by compute cycles but by memory bandwidth. NVIDIA (NVDA: NASDAQ) figured this out years ago and engineered the H100 and now GB200 GPUs around HBM stacks placed within millimeters of the GPU core. That proximity—and vertical stacking—allows for data movement at terabytes per second, versus the hundreds of gigabytes per second offered by even the best DDR5 in distant DIMM slots.
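A crude way to see why bandwidth, not FLOPs, is the wall: generating each token of a dense LLM at batch size 1 means streaming essentially every weight through the chip once, so bandwidth divided by model size caps tokens per second. The model size and bandwidth figures below are illustrative assumptions, not measured results:

```python
# Batch-1 decoding ceiling: tokens/sec <= memory bandwidth / model bytes.
# All numbers here are illustrative assumptions.
params = 70e9          # a hypothetical 70B-parameter dense model
bytes_per_param = 2    # FP16/BF16 weights

model_bytes = params * bytes_per_param  # ~140 GB of weights

systems = {
    "DDR5 server (assumed ~0.4 TB/s)": 0.4e12,
    "HBM3E GPU (assumed ~8 TB/s)": 8e12,
}
for name, bw in systems.items():
    print(f"{name}: ~{bw / model_bytes:.1f} tokens/s ceiling")
```

The ratio, not the absolute numbers, is the point: an order-of-magnitude bandwidth gap translates directly into an order-of-magnitude throughput gap.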
Micron, long known as a trailing competitor in commodity DRAM, now finds itself at the tip of the AI spear. Its HBM3E 12-high product delivers 50% more capacity and ~30% lower power per bit than rival parts, with design wins already in NVIDIA’s GB300 systems. The company is sold out of HBM capacity for calendar 2025, expects HBM4 shipments to ramp in 2026 and has begun building out advanced HBM packaging capabilities in both Singapore and the United States.
How Micron Got Here
This has been a long time coming for Micron, as the company has moved from node to node, now using Greek letters to denote the evolution.
| Node | nm-class | Ship Year | Example |
|---|---|---|---|
| 1x | 19 – 17 nm | 2016 | 8 Gb DDR4-3200 (DRAM) |
| 1y | 16 – 14 nm | 2018 | 12 Gb LPDDR4X-4266 (DRAM) |
| 1z | 13 – 11 nm | 2019 | 16 Gb DDR4-3200 (DRAM) |
| 1α | 14 nm-class | 2021 | LPDDR5-6400 (DRAM) |
| 1β | 13 nm-class | 2023 | HBM3E 24 GB 8-high (HBM) |
| 1γ | 12 – 11 nm | 2025 | LPDDR5X-10.7 Gb/s (DRAM) & HBM4 48 GB 12-high (HBM) |
1β, Micron’s HBM3E ramp, is now in an aggressive phase. Last quarter it moved the majority of its volume to 12-high stacks, tripling data center DRAM sales year-over-year and beating internal forecasts. Unlike peers, Micron also dominates the LP (low power) DRAM market for servers, with products designed in collaboration with NVIDIA to lower memory power use by over 65% compared to standard DDR5.
Most importantly, Micron’s roadmap appears believable. Two weeks ago, Micron began shipping the first 1γ LPDDR5X parts. HBM4 is on track for 2026 with 60% more bandwidth than HBM3E. New fabs in Idaho and Virginia are explicitly designed to bring HBM packaging onshore, backed by more than $6 billion in CHIPS Act funding and further tax credits from the Trump administration. The company projects $200 billion in domestic investment through 2030 and aims to produce 40% of its DRAM in the United States.
In an AI-dominated compute landscape, HBM is no longer an exotic luxury. It’s a gating factor for system performance. That flips the economics of memory on its head. No longer does price per gigabyte rule. Now, it's bandwidth per watt.
Competitive Implications
In the HBM market, Micron competes directly with Samsung and SK Hynix. SK Hynix seized first-mover advantage when it began shipping HBM3 for NVIDIA’s H100 in 2022. Micron has since leap-frogged on the next node: its 1β-based HBM3E delivers 24 GB in an 8-high stack and 36 GB in a 12-high stack, sustaining 1.2 TB/s while drawing about 30 percent less power per bit than rival parts. Samsung—larger than both—is still completing NVIDIA’s full-package qualification, keeping its output confined to lower-end or China-specific GPUs.
Design-win momentum is tilting accordingly. Micron’s 24 GB 8-high is inside NVIDIA’s GB200 NVL72, its 36 GB 12-high is qualified for the GB300 NVL72 and it has begun volume shipments to a third unnamed “large HBM3E customer.” It has also landed AMD’s (AMD: NASDAQ) upcoming Instinct MI350 series and co-developed SOCAMM—a compression-attached LPDDR-class module—with NVIDIA as a bridge to low-power server memory.
HBM performance is converging but yield curves are not. If Micron sustains reported yields of roughly 75 percent on 8-high and 70 percent on 12-high stacks, manufacturing readiness rather than datasheet specs could decide share allocation throughout the Blackwell cycle.
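A toy model shows why stack height punishes yield: each additional die adds another bonding step that must succeed. The per-layer success rate below is my assumption, chosen only to show the compounding:

```python
# Toy yield model: if each die attach/bond succeeds independently with
# probability p, naive stack yield compounds as p ** layers. Real lines
# do better (known-good-die testing, repair), but the pressure is real.
p_per_layer = 0.97  # assumed per-layer success rate, for illustration

for layers in (8, 12, 16):
    print(f"{layers}-high: naive yield ~{p_per_layer ** layers:.0%}")
# -> 8-high ~78%, 12-high ~69%: the same neighborhood as the reported
#    75% / 70% figures, from nothing but compounding.
```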
Micron’s edge is operational, not proprietary. Its 1β HBM3E is running at volume yields competitors have yet to match, but manufacturing advantages in memory vanish once rivals finish qualification. Samsung and SK Hynix will close the gap; until they do, Micron ships the densest, lowest-power stacks on the market.
In short: it may not be Micron’s HBM design alone that wins the market, but its manufacturing readiness.
What to Expect
Micron has raised its 2025 HBM total addressable market estimate to over $35 billion. It’s already sold out for the year and is in early agreements with customers for 2026 supply. If HBM ramps follow a similar trajectory to GPU demand, a $100 billion TAM by 2030 seems within reach. That would be nearly one-third the size of the global DRAM market.
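For reference, that trajectory implies a compound growth rate in the low twenties, steep but not outlandish next to GPU demand:

```python
# Implied CAGR if the HBM TAM grows from ~$35B (2025) to ~$100B (2030).
start_tam, end_tam, years = 35e9, 100e9, 5
cagr = (end_tam / start_tam) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~23.4%
```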
The shift from commodity DRAM to application-specific, high-bandwidth configurations suggests an industry in structural transition. Bit growth remains, but not at the same pace. Value growth—driven by packaging, proximity and integration—is the new axis of competition.
Micron has its share of risk. Advanced packaging is capital-intensive. AI server demand is volatile. And any further GPU node delay or overcapacity cycle could dent margins. But for now, HBM’s economics are as tight as its layout: if you’re building for bandwidth, you’re paying for it.
Is This The HBM Quarter for Micron?
Micron’s second-quarter results reinforced that shift. Overall revenue hit $8.05 billion, up from $5.82 billion a year before. Gross margin rose to nearly 38%, driven by mix shift toward high-value DRAM and HBM. The company turned a GAAP profit of $1.58 billion, with operating cash flow of nearly $4 billion—its strongest showing since the pandemic-era memory boom.
Capital expenditures reached $3.09 billion in the quarter, with much of that targeted at expanding HBM production. For the full year, Micron projects record revenue and expects demand tightness in leading-edge DRAM due to HBM ramp requirements. Management guided to $8.8 billion in Q3 sales, with gross margins holding above 36%; we shall see.
The Future
HBM isn’t just a memory product. It’s an interconnect strategy. It redefines system architecture and forces coordination across CPU, GPU, memory controller and software stack. It collapses the distance between memory and compute—physically, electrically and economically.
Micron didn’t invent it. But it may be the company that turns it from exotic to essential.
Tweet O’ The Week
Epistrophy In The News
On Friday I was on NewsNation with Connell McShane, explaining why Donald Trump Jr.’s claim that a $500 smartphone could be manufactured in the U.S. ignores decades of industrial consolidation and billions spent on component specialization. Our research note breaks down the cost stack and fabrication constraints—why even Apple (AAPL: NASDAQ), with its scale and leverage, hasn’t reshored assembly. It’s not just a wage problem, it’s a supply chain physics problem. (You can read the report here).
📆 of Epistrophy Events
| Ticker | Name | Market Cap | Date | Type |
|---|---|---|---|---|
| HPE | HPE Discover | $24 B | Jun 23, 2025 | Conference |
| AVGO | Broadcom Tech Forum | $1,176 B | Jun 24, 2025 | Conference |
| NXPI | NXP Connects 2025 | $53 B | Jun 24, 2025 | Conference |
| MU | Micron Technology | $138 B | Jun 25, 2025 | Earnings |
| NRS | New Residential Sales | | Jun 25, 2025 | Economic Event |
| NTAP | NetApp | $21 B | Jun 29, 2025 | Earnings |
| CSP | Construction Spending | | Jul 2, 2025 | Economic Event |
| TSLA | Q2 Production & Deliveries | $1,009 B | Jul 2, 2025 | Press Release |
| 🎉 | Early Close* | | Jul 3, 2025 | Market Holiday |
| UNRATE | Unemployment Rate | | Jul 3, 2025 | Economic Event |
| 🎉 | Independence Day | | Jul 4, 2025 | Market Holiday |
Availability This Week
I’ll be at the HPE Discover conference this week, tracking their AI infrastructure push and customer adoption stories. I’m also covering Micron (MU: NASDAQ) earnings closely and available for calls, interviews, and questions.
Written reports are available to clients, with video summaries on YouTube, and of course our popular summaries of the summaries on Instagram, TikTok, and YouTube Shorts.
I hope these notes are helpful to you. I’d love to discuss them further and, as always, comments, questions and ideas are appreciated. If you have a friend or even a frenemy whom you think might benefit from this note, have them reach out and I’ll put them on the list.

The information contained here is provided for informational purposes only and should not be construed as legal, financial, or professional advice. While we strive to ensure the accuracy and reliability of the information presented, we make no warranties or representations as to its completeness or accuracy.
This note and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error, please notify the sender immediately and delete this email from your system. Any unauthorized use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. We do not endorse or guarantee the content herein and have no obligation to update or correct any information that may be found to be inaccurate or incomplete. The full context and additional information may be necessary for a complete understanding of this communication, which may be known only to the intended recipient.
This is not a recommendation or solicitation to buy or sell securities. Any investment decisions should be made in consultation with a qualified financial advisor and based on your own research and judgment.
We may retain and archive copies of written communications, including emails, indefinitely. This may include this note and any replies to it. By reading and acting upon the contents of this email, you acknowledge and agree to the terms outlined in this disclaimer. If you do not agree with these terms, please notify the sender immediately and delete this note.


