Epistrophy Week Ahead
The Week of March 3, 2025
We cover the fifty most important companies in technology. While there’s plenty to debate at the bottom of that list, there’s no debating its top: technology in 2025 is all about Nvidia (NVDA: NASDAQ).
Below is an excerpt of our post-earnings report from last week. Read it! It’ll give you a smart take on Nvidia, sure to make you a star at the next 🍸 party. We also had a bunch of our other companies tell us about last quarter, including compelling reports from Elastic (ESTC: NYSE) and Synopsys (SNPS: NASDAQ).
This week? Lots of news, including consequential reports from chipmaker Broadcom (AVGO: NASDAQ) and Hewlett Packard Enterprise (HPE: NYSE) (see our preview below).
If you subscribe to our YouTube channel (go on, you know you want to), you’ll get quick-hit, graphic-filled video summaries of our research (and if you play it at 2x speed, you can enjoy the Alvin and the Chipmunks take on tech analysis).
As always, we’re focused on three things:
1) Technology-driven change
2) The latest in innovation and startup trends
3) Stock fraud
Companies Discussed
| Ticker | Name | Market Cap | Current Price |
|---|---|---|---|
| NVDA | NVIDIA | $3,059.29 B | $124.81 |
| SNPS | Synopsys | $70.69 B | $457.28 |
| ESTC | Elastic NV | $12.06 B | $116.36 |
| AVGO | Broadcom | $934.90 B | $199.45 |
| HPE | Hewlett Packard Enterprise | $26.06 B | $19.81 |
| TPVG | TriplePoint Venture Growth BDC | $0.32 B | $8.09 |
| AMD | Advanced Micro Devices | $162.05 B | $99.86 |
| GOOG | Alphabet | $2,092.29 B | $172.13 |
| AMZN | Amazon | $2,228.81 B | $211.97 |
| META | Meta Platforms | $1,686.86 B | $668.20 |
| - | Positron (private) | - | - |
| - | Groq (private) | - | - |
| TSLA | Tesla | $918.04 B | $293.05 |
In This Note:

AI Servers Have Driven HPE to New Heights
Source: SEC filings, Epistrophy estimates
HPE: Liquid-Cooled, AI-Fueled, and DOJ-Dueled
Hewlett Packard Enterprise reports fiscal Q1 2025 earnings this week, with Wall Street expecting revenue of $7.81 billion and EPS of $0.48. My revenue estimate is slightly higher ($7.94 billion), but my EPS estimate is lower ($0.44), reflecting increased AI system investments.
But who cares about estimates, really? The real story is that HPE has quickly carved out a dominant position in AI infrastructure, fueling record quarterly revenue of $8.5 billion in Q4 2024, up 15% year-over-year. AI systems revenue hit $1.5 billion last quarter, a 16% sequential increase. The AI business remains resilient despite a puzzling $700 million order "de-book" last quarter (a mystery we wrote about on December 8, 2024). HPE cited unspecified "risk" as the reason, but bookings rebounded post-quarter, bringing AI backlog back above $3.5 billion.
The surge in AI-driven server sales is reshaping enterprise hardware. HPE’s differentiation lies in high-performance computing, particularly its Cray EX and Slingshot interconnect technologies, which optimize low-latency communication for large-scale AI workloads. Its 100% fanless direct liquid cooling system is a critical innovation, reducing power consumption while enabling higher-density deployments—an approach Dell Technologies (DELL: NYSE) and Super Micro Computer (SMCI: NASDAQ) have yet to match. Unlike its rivals, HPE has engineered solutions specifically for sovereign AI and exascale supercomputing, rather than relying solely on standard enterprise rack servers.
AI is now central to HPE’s income statement. Compute server revenue rose 31% in Q4 as enterprises modernized for AI workloads. CEO Antonio Neri noted on the December earnings call that HPE’s AI pipeline is now a “multiple” of the existing backlog, signaling further AI budget reallocations.
I’ve seen this cycle before. AI isn’t the first seismic shift in enterprise IT. The cloud boom of the last decade spurred similar hype, with some vendors flourishing while others faded into irrelevance. This time, HPE is choosing stability over volatility. Rather than prioritizing hyperscalers, it’s doubling down on sovereign AI and private cloud deployments. Enterprise and government contracts bring margin stability, an advantage over hyperscaler-driven cycles. The expansion of HPE Private Cloud AI, bolstered by its partnership with Deloitte, underscores this strategic pivot.
HPE has also ramped up its supply of Nvidia chips, securing a significant volume of H100 Tensor Core GPUs for its Cray EX and ProLiant XD systems. In December 2024, the company received shipments of 15,000 H100 GPUs to fulfill new enterprise AI and supercomputing contracts. In February 2025, HPE announced the shipment of its first NVIDIA Grace Blackwell system, featuring 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs interconnected via high-speed NVIDIA NVLink (that single system likely cost the customer a cool $3.56M).
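For a rough sense of where a figure like that comes from, here’s a back-of-envelope sketch. The unit prices and markup are our assumptions for illustration only, not quoted figures from Nvidia or HPE:

```python
# Back-of-envelope for one Grace Blackwell NVL72 system.
# Unit prices are assumptions for illustration, not quoted figures.
gpus, cpus = 72, 36
gpu_price, cpu_price = 40_000, 15_000          # assumed street prices (USD)
silicon = gpus * gpu_price + cpus * cpu_price  # $3.42M of compute silicon
overhead = 0.04                                # assumed NVLink/chassis/cooling markup
total = silicon * (1 + overhead)
print(f"Estimated system price: ${total / 1e6:.2f}M")  # ~$3.56M
```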
AI infrastructure spending shows no sign of slowing. The company has already deployed seven of the ten fastest supercomputers in the world, including the 1.7-exaflop El Capitan system, leveraging Cray’s deep roots in high-performance computing.
The question is whether HPE can sustain AI-driven growth without sacrificing margins. So far, it has managed well, with non-GAAP operating margins at 11.1% last quarter. The setup is favorable: IT spending is expected to rise in 2025, networking demand is recovering, and the pending acquisition of Juniper Networks (JNPR: NYSE) should enhance its AI and cloud portfolio.
On that: The U.S. Department of Justice filed a lawsuit to block the deal. HPE and Juniper argue that the DOJ’s analysis is flawed, asserting that the acquisition will enhance competition in networking, particularly in the Wireless Local Area Network (WLAN) market. The companies contend that WLAN is a highly competitive space, with at least eight viable alternatives, and that the deal will create a stronger U.S.-based competitor in an industry dominated by a few global incumbents. Regulators in 14 other jurisdictions, including the European Commission and the U.K. CMA, have already approved the transaction without conditions, leaving the U.S. as the primary holdout – so we think the deal goes through.
HPE enters the year with momentum, and this quarter’s report should show plenty of it.

Hydrow, the “Peloton of rowing,” is the kind of second-tier deal backed by TriplePoint
Source: Hydrow
Silicon Valley’s Shadow Banker Plays With Fire
There are few Silicon Valley venture capitalists like TriplePoint Venture Growth BDC (TPVG: NYSE). It’s a publicly traded Business Development Company (BDC), a hybrid of venture capital and high-yield lending designed to fund startups too risky for banks, funneling investor money into cash-burning companies chasing growth but not yet ready for an IPO or acquisition.
Representative borrowers include subscription-box company FabFitFun, European fintech Monzo Bank, connected-fitness brand Hydrow (the “Peloton of rowing”) and a fitness app called FitOn—all classic examples of second-tier, derivative startups.
A look at TPVG’s financials suggests its current dividend is on shaky ground. The firm increasingly relies on non-cash accounting adjustments.
TriplePoint’s reliance on aggressive fair value adjustments and a growing proportion of non-cash income, particularly payment-in-kind (PIK) interest, masks underlying cash flow issues. Net investment income per share has declined, while PIK income has jumped from 6% in March 2023 to 16% last quarter. This widening gap between reported earnings and actual liquidity cannot hold—setting the stage for a significant dividend cut.

Non-Cash Payments Might Mean Dividend Trouble For This VC
Source: SEC filings, Epistrophy
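To make that math concrete, here is a minimal sketch of the coverage test we run. The NII and dividend figures are placeholders rather than TPVG’s reported numbers; only the 6% and 16% PIK proportions come from the note above:

```python
# Illustrative dividend-coverage check for a BDC with rising PIK income.
# NII and dividend figures are placeholders, not TPVG's reported numbers.
def cash_coverage(nii_per_share: float, pik_share: float, dividend: float) -> float:
    """Ratio of cash-backed net investment income to the declared dividend.

    PIK interest is accrued rather than received, so we strip it out
    before asking whether the dividend is funded by actual cash earnings.
    """
    cash_nii = nii_per_share * (1 - pik_share)
    return cash_nii / dividend

# PIK at 6% of income (March 2023) vs. 16% (last quarter):
print(cash_coverage(nii_per_share=0.47, pik_share=0.06, dividend=0.40))  # ~1.10x
print(cash_coverage(nii_per_share=0.47, pik_share=0.16, dividend=0.40))  # ~0.99x
```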
Meanwhile, the economic backdrop worsens TPVG’s challenges—venture-backed exits remain scarce, M&A activity is sluggish and borrowing costs are climbing. With amortization and accretion expenses rising, the firm’s ability to generate liquid earnings is increasingly constrained.
Finally, a sudden CFO change raises concerns. Christopher Mathieu’s departure in January 2025 leaves financial oversight with Mike Wilhelms, whose background in commercial real estate and “outsourced educational diversion programs” suggests a steep learning curve in venture finance. His first challenge? Assessing TPVG’s dependence on non-cash earnings and the precarious dividend structure.
Could improved deal flow and an IPO revival save TPVG’s dividend? Perhaps. March 5 earnings will tell the tale.

“Unsupervised” by Refik Anadol, the first-ever Non-Fungible Token (NFT) in MoMA's collection
Source: Museum of Modern Art
Inference War: Nvidia's Next Battle
Nvidia has dominated AI training, but the next war is in inference—where AI models operate in real-time, generating insights on demand.
It could hardly enter that arena on stronger footing. Fiscal fourth-quarter 2025 results showed a staggering $39.3 billion in revenue, up 78% year-over-year, driven by insatiable demand for its Blackwell GPUs. The Data Center segment alone pulled in $35.6 billion, more than doubling from the previous year. CEO Jensen Huang said on the February 26 earnings call: “Blackwell production is in full gear across multiple configurations, and we are increasing supply quickly to meet growing customer demand.”
Yet even with training locked up, a new fight is emerging in inference. This is the battleground where companies like AMD (AMD: NASDAQ), Google (GOOGL: NASDAQ), Amazon (AMZN: NASDAQ), and Meta (META: NASDAQ) are mounting their challenge, armed with custom AI accelerators designed to reduce costs and optimize efficiency.
The AI war is no longer just about who trains the Large Language Models—it’s about who runs them. And for the first time in years, Nvidia finds itself forced to defend its dominance.
Nvidia’s Blackwell architecture GPUs are manufactured using a custom-built 4NP process by Taiwan Semiconductor Manufacturing Company (TSMC). These GPUs pack 208 billion transistors and feature two reticle-limited dies connected by a high-speed interconnect.
Unlike the training market, where Nvidia is nearly unchallenged, AI inference presents a more competitive and fragmented landscape. Inference workloads require lower power consumption and cost efficiency, making alternatives to Nvidia’s GPUs attractive for deployment at scale.
| Company | Chip Name | Build | Process Node | Inference Focus | Notable Features |
|---|---|---|---|---|---|
| Nvidia | Blackwell | GPU (Tensor Cores, Transformer Engine) | 4NP (TSMC Custom) | General-Purpose AI, LLMs, Multi-GPU Scaling | FP8/FP16 Precision, NVLink, Hopper Transformer Engine |
| AMD | MI300X, MI325, MI350, MI400 | GPU (Chiplet-based, High-Bandwidth Memory) | 5nm (TSMC) | Scalable Inference, High Memory Bandwidth | Infinity Fabric, FP8 Support, ROCm Optimization |
| Amazon | Inferentia 2, Trainium | Custom ASIC (NeuronCore, Low-Latency Design) | 7nm (AWS Custom) | Cloud Cost Optimization, Low-Latency AI | Neuron SDK Integration, INT8/BF16 Support |
| Google | TPU v5 | Custom TPU (Systolic Arrays, Optical Interconnects) | 5nm (Google Custom) | High-Speed Transformer Inference, Hyperscale AI | Systolic Array-Based Matmul, Optical Transport |
| Meta | MTIA | Custom ASIC (Optimized for Ranking & Recommendation) | Unknown (Meta Custom) | Real-Time Ranking & Recommendations | Custom AI Stack for Meta Infrastructure |
| Apple | Neural Engine | Custom Neural Engine (On-Device AI Processing) | 5nm (TSMC/Apple Custom) | On-Device AI, Privacy-Preserving Inference | Optimized for Low-Power AI Tasks |
| Positron | Positron P1 | Sparse Tensor Cores, Event-Driven Processing | Unknown (Startup) | LLMs, Sparse Tensor Computation, Low-Power AI | In-Memory Compute, Sparse Execution |
| Groq | LPU (Language Processing Unit) | Functionally Sliced Microarchitecture, Deterministic Execution | 14nm (First-gen, Groq) | LLMs, High-Throughput Deterministic Execution | High Compute Locality, Cache-Free Deterministic Model Execution |
Source: Epistrophy
Several companies are rolling out custom AI inference chips to reduce dependence on Nvidia, including:
AMD’s MI300X
Designed to challenge Nvidia’s GPUs, it features chiplet architecture and high-bandwidth memory (HBM). At the Goldman Sachs Technology Conference in September, AMD CEO Lisa Su gave investors a hint: “Training is very, very important, but inference is increasing over time,” she said. “I'm a big believer that there's no one-size-fits-all in terms of computing. And so our goal is to be the high-performance computing leader that goes across GPUs and CPUs and FPGAs and also custom silicon as you put all of that together.”
AMD's MI325, MI350, and forthcoming MI400 accelerators aim to challenge NVIDIA’s Blackwell GPUs by leveraging a multi-chiplet architecture that optimizes memory bandwidth and power efficiency for inference workloads, potentially reducing latency bottlenecks in transformer-based models that were designed for training. Additionally, AMD’s focus on FP8 precision and advanced interconnect technologies, such as Infinity Fabric enhancements, may allow their accelerators to scale more efficiently in multi-GPU clusters, directly competing with NVIDIA’s NVLink and Hopper Transformer Engine optimizations.
Transformer-based models, while highly effective for complex AI tasks, suffer from significant inefficiencies during inference due to their quadratic scaling in attention computation and high memory bandwidth requirements. The self-attention mechanism requires computing pairwise interactions between all tokens in an input sequence, making inference latency particularly high for long sequences. Additionally, transformers demand extensive memory for storing large parameter weights and activations, leading to increased memory access overhead, which can be a bottleneck even with high-bandwidth memory (HBM). These factors make transformers less efficient for real-time applications, where lower-latency and power-efficient architectures, such as sparsity-aware or retrieval-augmented models, may offer advantages.
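The quadratic term is easy to see in a few lines of Python. This sketch counts just the FLOPs and score-matrix memory of the QK^T step for a single attention layer; the hidden size of 4,096 is assumed for illustration:

```python
def attention_cost(seq_len: int, d_model: int = 4096, bytes_per_el: int = 2):
    """FLOPs and fp16 memory for one layer's QK^T score matrix.

    Both grow with the square of sequence length; this is the
    inference bottleneck described above.
    """
    flops = 2 * seq_len * seq_len * d_model         # QK^T matmul
    score_bytes = seq_len * seq_len * bytes_per_el  # (seq_len x seq_len) scores
    return flops, score_bytes

for n in (1_024, 8_192, 65_536):
    flops, mem = attention_cost(n)
    print(f"{n:>6} tokens: {flops / 1e12:6.1f} TFLOPs, {mem / 2**30:6.2f} GiB of scores")
```

Going from 8K to 64K tokens multiplies both numbers by 64, not 8, which is why long-context inference punishes memory bandwidth so severely.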
To be sure, NVIDIA and AMD chips both support transformer-based models, but their architectures handle them differently. NVIDIA's Blackwell and Hopper GPUs incorporate specialized hardware, such as the Transformer Engine, which dynamically switches between FP8, FP16, and FP32 precision to optimize both training and inference performance. NVIDIA also integrates Tensor Cores that accelerate matrix multiplications critical for transformer workloads and features like NVLink to improve multi-GPU scaling efficiency.
AMD’s MI300 and upcoming MI325/MI350 series, meanwhile, leverage RDNA/CDNA architectures, with an emphasis on high-bandwidth memory (HBM) and chiplet-based scalability, aiming to compete with NVIDIA in both training and inference. AMD also supports transformer-based AI workloads through optimized ROCm software, though it currently lacks a direct equivalent to NVIDIA’s Transformer Engine—which gives NVIDIA an edge in automatic precision scaling and sparsity optimizations. However, AMD is working to close this gap with improvements in FP8 support and interconnect efficiency, which could help inference workloads become more competitive.
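The Transformer Engine makes those precision decisions in hardware. As a rough software analogy (not the FP8 path, which requires NVIDIA’s separate transformer_engine library), here is mixed-precision inference in stock PyTorch, assuming a CUDA device is available:

```python
import torch
from torch import nn

# A toy model stands in for a transformer block.
model = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512)).eval().cuda()
x = torch.randn(1, 128, 512, device="cuda")

# Run matmuls in bfloat16 while autocast keeps numerically sensitive ops
# (reductions, norms) in float32. The Transformer Engine makes a similar
# per-layer precision choice in hardware, down to FP8, with no code changes.
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = model(x)

print(out.dtype)  # torch.bfloat16: the linear layers ran in reduced precision
```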
Amazon’s Inferentia and Trainium
Amazon Web Services (AWS) has developed Inferentia and Trainium to target cost-efficient AI inference and training, respectively. Inferentia 2, the latest iteration, features a custom-designed NeuronCore architecture that optimizes low-latency inference while reducing power consumption per operation. Its support for bfloat16 (BF16) and INT8 precision allows it to outperform traditional GPUs in cost-sensitive deployments. Trainium, on the other hand, is designed to accelerate training workloads but also provides benefits for inference by supporting FP8, FP16, and mixed-precision optimizations. AWS claims that customers running LLMs on Inferentia 2 experience a 40% cost reduction compared to NVIDIA GPUs, making it an attractive choice for cloud-scale deployments.
In an announcement on November 1, 2024, AWS CEO Adam Selipsky reinforced this advantage, saying: “Customers running generative AI models at scale are seeing substantial performance and cost benefits with Trainium and Inferentia.” The key to this cost efficiency is Amazon’s tight hardware-software integration, which allows optimized inference on AWS infrastructure using the Neuron SDK. Unlike NVIDIA’s Blackwell GPUs, which are general-purpose AI accelerators, Inferentia is specialized for inference, avoiding the overhead of training-optimized designs. This approach makes AWS’s chips highly competitive for deploying LLMs in production where cost per inference is critical.
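In practice, targeting Inferentia means compiling a model ahead of time through the Neuron SDK. Here is a minimal sketch, assuming an Inf2 instance with the torch-neuronx package installed and a toy model standing in for a real LLM:

```python
import torch
import torch_neuronx  # AWS Neuron SDK's PyTorch integration (Inf2/Trn1 instances)

# Any traceable PyTorch model works; a toy two-layer net stands in here.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
).eval()

example = torch.rand(1, 128)

# Compile ahead of time for NeuronCores; the traced artifact saves and loads
# like any TorchScript module for low-latency serving.
neuron_model = torch_neuronx.trace(model, example)
torch.jit.save(neuron_model, "model_neuron.pt")
```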
Google’s TPUs
Google’s Tensor Processing Units (TPUs) are engineered specifically for large-scale AI inference, offering a custom hardware stack tightly integrated with Google Cloud and TensorFlow. The latest TPU v5 introduces “systolic array-based matrix multiplication” (say that ten times fast), a technique that minimizes memory access overhead and boosts efficiency for transformer-based models. Unlike NVIDIA’s general-purpose GPU architecture, which relies on CUDA and Tensor Cores, TPUs use dedicated tensor accelerators optimized for matrix-heavy computations, significantly improving energy efficiency. TPU v5 also enhances interconnect speed via optical interconnects, reducing the communication bottleneck common in multi-node inference workloads.
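Notably, the systolic array is invisible to user code: XLA lowers an ordinary JAX matrix multiply onto the TPU’s matrix units automatically. A minimal sketch (it runs on CPU too; on a TPU VM, jax.devices() would list TPU cores):

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this; on TPU, the matmul maps onto systolic-array units
def attention_scores(q, k):
    return jnp.einsum("bqd,bkd->bqk", q, k) / jnp.sqrt(q.shape[-1])

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (8, 128, 64))
k = jax.random.normal(key, (8, 128, 64))

print(attention_scores(q, k).shape)  # (8, 128, 128)
print(jax.devices())                 # lists TPU cores when run on a TPU VM
```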
During an earnings call on October 20, 2024, Google CEO Sundar Pichai emphasized the role of TPUs in AI scalability, stating: “Our TPU advancements ensure that AI inference runs efficiently at Google scale.” This underscores TPU’s advantage in serving models across Google’s vast infrastructure, from Google Search to YouTube recommendations. Unlike NVIDIA’s Blackwell, which is designed for broad AI workloads, TPUs are purpose-built for Google's AI services, allowing faster execution of transformer-based models with lower power consumption. While TPUs may lack the programmability and market flexibility of NVIDIA’s ecosystem, their deep vertical integration gives them an edge in hyperscale AI inference deployments.
Meta’s AI Chips
Meta’s Meta Training and Inference Accelerator (MTIA) is purpose-built for AI-powered recommendation engines, addressing the unique inference workloads of social media platforms. Unlike general-purpose GPUs, MTIA is custom-optimized for low-batch, high-throughput inference, which is critical for real-time engagement ranking on Facebook and Instagram. MTIA’s architecture prioritizes efficient memory access and tensor computation, reducing the power and latency costs associated with deploying large-scale AI models. While NVIDIA’s Blackwell GPUs offer superior raw compute for diverse AI applications, MTIA’s tight coupling with Meta’s infrastructure allows it to outperform generalist hardware in Meta-specific workloads.
On October 5, 2024, Meta CEO Mark Zuckerberg outlined the strategic importance of MTIA, stating: “MTIA allows us to tailor AI workloads to our infrastructure, reducing dependency on external chips.” This marks a significant shift in Meta’s AI strategy, reducing reliance on NVIDIA’s H100 and Blackwell GPUs. By designing application-specific accelerators, Meta can optimize inference efficiency for ranking and recommendation models, which differ significantly from LLM and generative AI inference workloads. This move mirrors Amazon and Google’s strategies, indicating a broader industry trend toward in-house silicon to reduce dependency on NVIDIA’s AI accelerators.
Apple’s AI Expansion
Apple’s AI ambitions center on on-device inference, with its custom AI accelerators integrated into Apple Silicon. Unlike cloud-centric inference solutions from AWS, Google, and Meta, Apple’s focus is on privacy-preserving AI workloads that execute directly on iPhones, iPads, and Macs. The Neural Engine within Apple’s M-series chips is designed for low-power inference, optimizing tasks such as Siri interactions, photo enhancements, and real-time language processing. Apple is reportedly investing in larger AI accelerators for future devices, incorporating advanced matrix multiplication units similar to those in NVIDIA’s Tensor Cores but optimized for on-device efficiency rather than cloud-scale throughput.
On October 25, 2024, Apple CEO Tim Cook reinforced this strategy, stating: “On-device AI is a key part of our silicon strategy, enhancing user privacy and efficiency.” This signals Apple’s long-term vision for AI inference, leveraging its hardware-software integration to enable low-latency, private AI features without relying on cloud servers. Unlike NVIDIA’s Blackwell GPUs, which are designed for data center inference, Apple’s AI chips prioritize energy-efficient matrix computation, ensuring that AI applications can run seamlessly on battery-powered devices. While Apple’s AI chips may not challenge Blackwell in raw performance, they are setting new standards for personalized, low-latency AI experiences that compete in a different segment of the AI inference market.
Positron: The Dark Horse of AI Inference
Positron, a lesser-known but highly ambitious AI accelerator startup, is positioning itself as a formidable competitor in the inference market (full disclosure: I’m a seed investor in Positron). Unlike traditional GPU-based approaches, Positron’s architecture is built around event-driven processing, which allows AI models to execute inference workloads in a more power-efficient and latency-reducing manner. The company’s flagship Positron P1 accelerator is designed with specialized sparse tensor cores, which can dynamically skip unnecessary computations in transformer-based models—an advantage that significantly reduces inference costs in LLMs. By leveraging in-memory computing and an optimized interconnect fabric, Positron claims its chips can outperform NVIDIA’s Blackwell GPUs in cost per inference, particularly for workloads involving real-time language models and generative AI applications.
Positron’s architecture also stands out for its custom software stack, which integrates directly with open-source frameworks like PyTorch and TensorFlow. Unlike NVIDIA’s proprietary CUDA ecosystem, Positron’s approach allows AI developers to optimize their workloads without vendor lock-in, making it attractive to cloud providers and enterprise AI users looking to diversify their hardware choices. While the company remains a startup, its early benchmarks suggest it could offer a high-efficiency alternative to NVIDIA’s dominance in inference workloads, particularly for cost-sensitive AI applications. If Positron’s technology delivers on its promise, it could pose a disruptive challenge to the entrenched GPU-based AI inference paradigm.
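Positron hasn’t published its architecture in detail, so the following is purely an illustration of the general sparse-execution idea: skip blocks of weights that pruning has zeroed out instead of multiplying by zero. A toy NumPy version of the skip decision that sparse tensor hardware makes at full speed:

```python
import numpy as np

def block_sparse_matmul(x, w, block=64, tol=1e-6):
    """Compute x @ w while skipping all-zero blocks of w.

    Purely illustrative: hardware sparse tensor cores make this per-block
    skip decision on the fly, instead of spending multiply-accumulate
    cycles on zeros.
    """
    out = np.zeros((x.shape[0], w.shape[1]))
    skipped = 0
    for i in range(0, w.shape[0], block):
        for j in range(0, w.shape[1], block):
            wb = w[i:i + block, j:j + block]
            if np.abs(wb).max() < tol:   # dead block: issue no work
                skipped += 1
                continue
            out[:, j:j + block] += x[:, i:i + block] @ wb
    return out, skipped

x = np.random.randn(4, 512)
w = np.random.randn(512, 512)
w[:, :256] = 0                            # pretend pruning zeroed half the weights
out, skipped = block_sparse_matmul(x, w)
print(skipped, np.allclose(out, x @ w))   # 32 blocks skipped, result unchanged
```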
Groq’s LPU
Groq’s Language Processing Unit (LPU) represents a fundamental shift in AI inference architecture, designed to maximize throughput and minimize latency. Unlike NVIDIA’s Blackwell GPUs, which rely on traditional tensor cores and CUDA-based memory hierarchies, Groq’s functionally sliced microarchitecture enables deterministic execution, meaning every computation is scheduled and completed without unpredictable delays. This eliminates the need for branch predictors and caches, which can introduce inefficiencies in real-time inference workloads. The dataflow-based approach of Groq’s LPU allows AI models to execute with predictable timing and lower energy consumption, making it particularly well-suited for high-throughput, low-latency applications like LLMs and real-time conversational AI.
Additionally, Groq’s architecture optimizes compute locality, reducing memory bottlenecks that often plague transformer-based models running on GPUs. Instead of relying on massive external memory bandwidth like Blackwell, the LPU interleaves memory with vector and matrix computation units, keeping data closer to the processing cores. Early benchmarks suggest Groq’s chips can achieve up to 4× the throughput of GPU-based inference at a fraction of the cost, making them an attractive alternative for cloud-scale LLM deployments. While Blackwell remains dominant due to NVIDIA’s entrenched software ecosystem, Groq’s compiler-driven, deterministic model execution could redefine AI inference efficiency—potentially forcing NVIDIA to adapt or risk losing ground in the high-speed, low-cost inference market.
Nvidia’s Expanding Role in AI Inference
Nvidia isn’t giving up the fight. It is expanding its inference footprint with the inference-focused L4 and L40S GPUs, already deployed in Google Cloud’s AI Platform and in enterprise applications.
“"Inference demand is accelerating, driven by test-time scaling and new reasoning models,” Huang said in this week’s conference call. “ong-thinking AI can require 100x more compute per task compared to one-shot inferences. Blackwell was architected for reasoning AI inference."
And importantly, Huang notes that the training phase has many parts, and is far from over. "Blackwell addresses the entire AI market from pretraining, post-training, to inference across cloud, on-premise, and enterprise.” he said. “Our architecture is fungible and easy to use in all these stages."
To address the growing demand, Nvidia has partnered with Foxconn, which is expanding production capabilities in the U.S., Mexico, and Taiwan. Nvidia is also in talks with TSMC to shift some Blackwell production to its Fab 21 facility in Arizona to improve supply chain resilience.
Yes, Nvidia remains the undisputed leader in AI training, but the AI inference market is now a battleground. With AMD, Amazon, Google, and Meta developing custom inference chips, Nvidia must innovate quickly to retain its edge. The real war isn’t over who trained AI—it’s over who controls it in the future.
Tweet O’ The Week

What's better than an hour with Julie Hyman, Josh Lipton and Brian Sozzi? Two!
Source: Yahoo! Finance
Epistrophy In The News

The AI Groove Is Strong With This One
Source: Schwab Network
On Schwab Network, I joined Nicole Petallides for a sharp discussion on Nvidia’s earnings and the broader AI chip race. While Nvidia’s Blackwell GPU cycle is ramping fast, we also highlighted why Synopsys is poised to thrive in this era of custom silicon. Their role in enabling next-gen chip design makes them a critical player as AI reshapes the semiconductor landscape.
And when that on-air connection is tight, there’s nothing like it.

Laura Ingle And I Dig Into Elon Musk’s Domination Of A White House Cabinet Meeting
Source: NewsNation
Check this one out, please: NewsNation had me take a hard look at Elon Musk’s track record—not through hype or outrage, but through business analysis. From Dogecoin to Tesla (TSLA: NASDAQ) to Twitter to his government dealings, the pattern is clear: bold promises, erratic execution and diminished value. In particular, his chaotic Twitter leadership wasn’t an outlier—it foreshadowed the chaos at the Department of Government Efficiency, which Laura Ingle and I discussed.
Even some of Musk’s critics reached out to thank me for being even-handed in breaking down the risks of his impulsive approach.
📆 of Epistrophy Events
| Ticker | Name | Market Cap | Date | Type |
|---|---|---|---|---|
| OKTA | Okta | $16 B | Mar 3, 2025 | Earnings |
| CSP | Construction Spending | | Mar 3, 2025 | Economic Event |
| BOX | Box | $5 B | Mar 4, 2025 | Earnings |
| CRWD | CrowdStrike | $96 B | Mar 4, 2025 | Earnings |
| ZS | Zscaler | $30 B | Mar 5, 2025 | Earnings |
| MDB | MongoDB | $20 B | Mar 5, 2025 | Earnings |
| MRVL | Marvell Technology | $79 B | Mar 5, 2025 | Earnings |
| VEEV | Veeva Systems | $36 B | Mar 5, 2025 | Earnings |
| AVGO | Broadcom | $935 B | Mar 6, 2025 | Earnings |
| HPE | Hewlett Packard Enterprise | $26 B | Mar 6, 2025 | Earnings |
| UNRATE | Unemployment Rate | | Mar 7, 2025 | Economic Event |
| SXSW | South by Southwest | | Mar 7, 2025 | Festival |
| ADBE | Adobe | $191 B | Mar 12, 2025 | Earnings |
| BFS | Business Formation Statistics | | Mar 12, 2025 | Economic Event |
| DOCU | Docusign | $17 B | Mar 13, 2025 | Earnings |
| PPI | Producer Price Index | | Mar 13, 2025 | Economic Event |
| UMCSENT | U. of Mich. Consumer Sentiment | | Mar 14, 2025 | Economic Event |
| LAC | Lithium Americas | $0 B | Mar 17, 2025 | Earnings |
| RS | Advance Monthly Sales | | Mar 17, 2025 | Economic Event |
| IT | Gartner Data & Analytics Summit | $38.4 B | Mar 17, 2025 | Conference |
| - | Game Developers Conference (GDC) | - | Mar 17, 2025 | Conference |
| NHC | New Residential Construction | | Mar 18, 2025 | Economic Event |
| AI | C3 Transform | $3.0 B | Mar 18, 2025 | Conference |
| FOMC | Federal Open Market Committee Meeting | | Mar 19, 2025 | Economic Event |
| HS | Housing Starts | | Mar 19, 2025 | Economic Event |
| ESTC | ElasticON Public Sector Summit '25 | $12.1 B | Mar 19, 2025 | Conference |
| NVDA | NVIDIA GTC | $3,056.5 B | Mar 23, 2025 | Conference |
| NRS | New Residential Sales | | Mar 25, 2025 | Economic Event |
| ADBE | Adobe Summit | $190.8 B | Mar 25, 2025 | Conference |
| ADG | Advance Report on Durable Goods | | Mar 26, 2025 | Economic Event |
| MU | Micron Technology | $104.3 B | Mar 30, 2025 | Earnings |
| SAP | SAP SE | $337.8 B | Mar 30, 2025 | Earnings |
| NOK | Nokia Oyj | $27.4 B | Mar 30, 2025 | Earnings |
| NXPI | NXP Semiconductors NV | $54.8 B | Mar 30, 2025 | Earnings |
Availability This Week
I'm available all week and will be in San Francisco throughout. I welcome your thoughts via email or text. I think the biggest story of the week might indeed be HPE, but we’re back to a nutty news cycle, so who knows.
You can find our analysis in multiple formats:
Full written reports for clients;
Video summaries: YouTube;
Summary summaries: @drilldownpod on Instagram, @drilldownpod on TikTok.
Questions, insights, or a compelling theory about leading economic indicators and technology? I'm eager to hear them. And if you know others who'd value this intersection of economics and tech analysis, please have them reach out.
The information contained here is provided for informational purposes only and should not be construed as legal, financial, or professional advice. While we strive to ensure the accuracy and reliability of the information presented, we make no warranties or representations as to its completeness or accuracy.
This note and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error, please notify the sender immediately and delete this email from your system. Any unauthorized use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. We do not endorse or guarantee the content herein and have no obligation to update or correct any information that may be found to be inaccurate or incomplete. The full context and additional information may be necessary for a complete understanding of this communication, which may be known only to the intended recipient.
This is not a recommendation or solicitation to buy or sell securities. Any investment decisions should be made in consultation with a qualified financial advisor and based on your own research and judgment.
We may retain and archive copies of written communications, including emails, indefinitely. This may include this note and any replies to it. By reading and acting upon the contents of this email, you acknowledge and agree to the terms outlined in this disclaimer. If you do not agree with these terms, please notify the sender immediately and delete this note.