Epistrophy Week Ahead
The Week Of February 16, 2026

Next week, the technology industry moves from earnings theater to a federal courtroom. Facebook’s parent, Meta Platforms (META: NASDAQ), faces trial, with the possibility that founder and CEO Mark Zuckerberg takes the stand on Wednesday.
At the same time, Cadence Design Systems (CDNS: NASDAQ) and Palo Alto Networks (PANW: NASDAQ) report results. One sits at the foundation of chip design. The other guards the perimeter of an increasingly automated enterprise. Together, they offer a read on the next direction for artificial intelligence spending.
You can find prior notes and the full research archive at https://epistrophy.beehiiv.com.
As always, I’m focused on three things:
1) technology-driven change;
2) the latest in innovation and startup trends; and
3) stock fraud.
Companies Discussed
| Ticker | Name | Market Cap ($B) | Price |
|---|---|---|---|
| — | Positron AI | $1.1 (private) | NA |
| NVDA | NVIDIA | $4,441.55 | $182.78 |
| ARM | Arm Holdings | $133.05 | $125.28 |
In This Note:
An NVIDIA Competitor Emerges
A new investment shows how Positron AI predicted the limits of memory and power
Jump Trading had a problem. The multi-billion dollar Chicago-based proprietary trading firm deploys increasingly complex quantitative strategies across equities, futures and derivatives, competing in markets where microseconds influence execution quality and profit.
But over time, the firm's massive AI inference engines, housed in tightly managed on-prem racks typically capped at roughly 7–15 kilowatts, began to struggle. As AI-driven inference workloads expanded — refining signal generation, optimizing execution and managing real-time risk — memory capacity and bandwidth became binding constraints inside those fixed envelopes. Scaling performance meant adding hardware. Adding hardware meant adding power.
Then, in late 2025, Jump tested an experimental inference system from Positron AI. On transformer workloads, Jump reported that Positron’s Atlas system delivered roughly three times lower end-to-end latency than a comparable Nvidia (NVDA: NASDAQ) H100-based configuration while operating in an air-cooled, production-ready footprint. Jump CTO Alex Davies said the bottlenecks that mattered were memory and power, and those were the dimensions in which Positron’s architecture differentiated itself.
No longer content to be merely a customer, Jump quickly became a co-lead investor in Positron’s $230 million financing, a round that valued the Reno-based startup at more than $1 billion. Investors included Arm Holdings (ARM: NASDAQ) and the Qatar Investment Authority.
Just a year earlier, Positron had raised $23.5 million to scale production of U.S.-manufactured inference chips, backed by Atreides Management (a backer of Elon Musk ventures), Scott McNealy’s Flume Ventures, Resilience Reserve and Valor Equity Partners (full disclosure: Epistrophy Capital participated in that round as well). The progression from seed capital to unicorn valuation coincided with commercial shipments of FPGA-based inference systems and a roadmap toward a custom ASIC called Asimov.
We don’t think Jump’s problems are unique – they mirror the broader AI system constraints now emerging: memory and power.

Positron’s Asimov accelerator pairs multi-die compute with high-capacity LPDDR to prioritize sustained bandwidth over peak FLOPS
SOURCE: Positron AI
The Battle Against Memory Cost
Rising DRAM and HBM prices directly increase cost per deployed accelerator. And yet transformer inference demands increasing loads of memory bandwidth and capacity. Large language models routinely require 40–80 GB per accelerator for weights. Key-value cache growth scales with context length and concurrent users. In multi-tenant inference environments, working sets expand rapidly.
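To make the working-set math concrete, here is a rough KV-cache sizing sketch. The model shape (layers, heads, head dimension, FP16) is a hypothetical chosen to resemble a 70B-class model, not a figure from Positron or any vendor:

```python
# Illustrative KV-cache sizing for a decoder-only transformer.
# Model dimensions below are assumptions, not vendor figures.

def kv_cache_bytes(layers, kv_heads, head_dim, context_len, users, bytes_per_elem=2):
    """Bytes of key+value cache: 2 tensors (K and V) per layer, per token, per user."""
    return 2 * layers * kv_heads * head_dim * context_len * users * bytes_per_elem

# 80 layers, 8 KV heads (grouped-query attention), 128-dim heads, FP16 (2 bytes)
per_user = kv_cache_bytes(80, 8, 128, context_len=32_000, users=1)
print(f"KV cache per user at 32k context: {per_user / 1e9:.1f} GB")   # ~10.5 GB

# Multi-tenant serving multiplies this by concurrent users
fleet = kv_cache_bytes(80, 8, 128, context_len=32_000, users=64)
print(f"KV cache for 64 concurrent users: {fleet / 1e9:.0f} GB")      # ~671 GB
```

At these assumed dimensions, 64 concurrent 32k-context users already demand more cache than a single accelerator's weight memory, which is exactly the multi-tenant expansion the paragraph above describes.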
GPU-based systems often fall well short of peak theoretical memory bandwidth during transformer inference, and have been observed operating materially below peak in real deployments. Positron, by contrast, reports sustained utilization near 93% of theoretical bandwidth on its Atlas FPGA-based system.
Atlas uses Altera Agilex-7M FPGAs with 32 GB of HBM and four DDR5 channels. HBM stores model weights. DDR stores context and KV cache. The system is delivered as a 4U appliance ingesting standard model binaries without recompilation. Company-reported metrics cite 70% higher tokens per second versus comparable NVIDIA Hopper-based systems and 3.5× performance per watt and per dollar in certain workloads.
Asimov, Positron’s next-generation ASIC, replaces HBM with LPDDR5x. Each chip is designed for up to 2 TB of LPDDR5x capacity, expandable further via CXL. Peak bandwidth is cited at roughly 3 TB/s. Sustained bandwidth utilization approaching 90% is a central design claim. Nvidia’s Rubin architecture is reported to include 288 GB of HBM4 with peak bandwidth around 22 TB/s. Capacity and utilization define effective throughput under inference workloads.
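Taken at face value, the cited figures imply a capacity-versus-bandwidth tradeoff rather than a simple win for either design. A back-of-envelope sketch makes that visible; note the GPU sustained-utilization figure and the 1.5 TB working set are placeholder assumptions, not measurements:

```python
# Compare the figures cited above. Peak bandwidths and per-chip capacities
# come from the text; the GPU's sustained utilization is an assumption.

def sustained_bw(peak_tbs, utilization):
    """Effective memory bandwidth in TB/s under a given utilization."""
    return peak_tbs * utilization

asimov_bw = sustained_bw(3.0, 0.90)   # Positron's ~90% design claim
rubin_bw  = sustained_bw(22.0, 0.40)  # hypothetical 40% sustained on GPU

working_set_tb = 1.5  # assumed model weights + KV cache
asimov_chips = -(-working_set_tb // 2.0)     # ceil: 2 TB LPDDR5x per chip
rubin_chips  = -(-working_set_tb // 0.288)   # ceil: 288 GB HBM4 per chip

print(f"Sustained bandwidth: Asimov {asimov_bw:.1f} TB/s vs Rubin {rubin_bw:.1f} TB/s")
print(f"Chips to hold {working_set_tb} TB: Asimov {asimov_chips:.0f} vs Rubin {rubin_chips:.0f}")
```

Even under a pessimistic utilization assumption, the HBM part retains a per-chip bandwidth edge; the LPDDR design's edge is needing far fewer chips to hold a large working set, which is where the power and cost arguments in the following sections come in.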
Asimov integrates a 512×512 systolic array operating at 2 GHz and supports FP16, BF16, FP8 and lower-precision formats. Each chip includes 16 Tbps of chip-to-chip bandwidth.

Positron AI’s air-cooled Titan supports unprecedented amounts of memory
SOURCE: Positron AI
Four chips form a Titan platform. Up to 4,096 Titan systems are designed to interconnect in a mesh topology supporting more than 32 petabytes of aggregate memory. Mesh interconnect reduces reliance on large external switches.
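The aggregate-memory figure follows directly from the per-chip capacity and the stated scale, as a quick sanity check shows:

```python
# Sanity check on the aggregate-memory claim: four 2 TB Asimov chips
# per Titan, times 4,096 interconnected Titan systems.
chips_per_titan = 4
tb_per_chip = 2
titans = 4_096

aggregate_pb = chips_per_titan * tb_per_chip * titans / 1_024  # TB -> PB
print(f"Aggregate memory across the mesh: {aggregate_pb:.0f} PB")  # 32 PB
```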
The economic effect is direct. When DRAM prices rise and HBM remains tight, inefficient bandwidth utilization increases cost per generated token. Higher memory capacity per chip – which Positron’s Atlas is delivering – reduces cross-rack communication and decreases the number of accelerators required for long-context inference.
The Grid Is No Longer Elastic
Power density has shifted materially in AI deployments. Traditional hyperscale racks operated around 10–14 kW. Modern GPU racks can exceed 60 kW in certain liquid-cooled configurations. Increased density requires upgraded switchgear, cooling and interconnection capacity.
Interconnection queues across major U.S. markets now stretch multiple years in certain regions. PJM Interconnection has processed more than 170,000 MW of generation requests since 2023 and continues to manage significant transition backlogs. New AI campuses are being proposed at scales measured in hundreds of megawatts.
Higher rack density compounds facility constraints. More accelerators increase cooling loads. Cooling infrastructure increases power draw. Utility upgrades extend project timelines. Capital expenditures rise as density rises.
Positron’s Atlas cards reportedly consume 150–200 W each. Asimov is described as a 400 W chip. The architecture targets deployment within conventional air-cooled environments. Greater memory per chip reduces scale-out requirements. Fewer racks reduce aggregate megawatt demand.
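The rack-level arithmetic using those power figures is simple; in the sketch below, the 12 kW air-cooled rack budget and the 1,100 W all-in GPU draw (device plus cooling and networking overhead) are illustrative assumptions, not vendor numbers:

```python
# Illustrative device counts per fixed rack power budget.
# rack_budget_w and gpu_w are assumptions; Atlas and Asimov
# figures come from the reported ranges above.

rack_budget_w = 12_000      # assumed conventional air-cooled rack

atlas_card_w = 200          # top of the reported 150-200 W range
asimov_chip_w = 400         # described as a 400 W chip
gpu_w = 1_100               # hypothetical all-in draw per high-end GPU

for name, watts in [("Atlas cards", atlas_card_w),
                    ("Asimov chips", asimov_chip_w),
                    ("GPUs", gpu_w)]:
    print(f"{name} per 12 kW rack: {rack_budget_w // watts}")
```

Under these assumptions, a legacy-density rack holds several times more Positron devices than GPUs, which is the "fewer racks, fewer megawatts" argument in miniature.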
Jump’s deployment environment illustrates this dynamic. Trading firms operate within fixed rack envelopes. Power budgets are planned years in advance. Latency and determinism influence revenue directly. A system delivering more tokens per watt drops straight to the firm’s bottom line.
To be sure, Nvidia continues to improve inference efficiency across generations. Blackwell systems are described as materially more efficient per watt than prior architectures. Rubin expands memory bandwidth and capacity further. Competitive pressure remains intense. The market remains open to heterogeneous deployment models, particularly for inference.
Independent benchmarking across sustained multi-tenant workloads will determine whether Positron’s bandwidth utilization and performance-per-watt claims persist outside curated tests. ASIC tape-out execution and yield consistency will determine manufacturing cadence. Software compatibility and operational reliability will influence adoption beyond early constrained buyers.
Memory pricing and grid constraints now sit alongside compute throughput in AI infrastructure decisions. Marginal token cost under fixed megawatt and fixed memory supply conditions will influence capital allocation across hyperscalers, neocloud providers and enterprises.
The first phase of AI rewarded peak training throughput – memory and power consumption be damned. But the next phase of AI will reward architectures that minimize delivered token cost under physical constraints measured in watts and bytes.
We think Positron AI is just the company to deliver that.
Tweet O’ The Week

(Note: MicroStrategy was not convicted of “accounting fraud” but did pay $10 million in disgorgement and $1 million in penalties to settle charges brought by the SEC for overstating revenues and earnings from 1997 through 2000. https://www.sec.gov/enforcement-litigation/litigation-releases/lr-16829)
Epistrophy In The News
A busy week of TV appearances had me on NewsNation, discussing a Reuters investigation alleging that certain AI-enabled surgical tools mishandled procedures. In breaking coverage the same evening, I was asked about bitcoin’s role in ransom payments and the persistent myth of anonymity.
I did a busy midweek media tour of Manhattan. At the New York Stock Exchange with Schwab Network’s Sam Vardas, we walked through how hyperscaler capital spending flows downstream to chipmakers and networking firms, and why follow-the-money analysis beats hand-wringing about valuations. On Yahoo Finance with Josh Lipton (back in Greenwich Village!), we dissected the recurring claim of an “AI bubble” and why the debate often ignores order backlogs and power constraints. And on NewsNation with Connell McShane in Midtown, we examined artificial intelligence’s effect on employment against new U.S. labor data and the political stakes heading into the midterms.
📆 of Epistrophy Events
| Ticker | Name | Market Cap | Expected Date | Type |
|---|---|---|---|---|
| 🎉 | Presidents' Day | — | Feb 16 | Market Holiday |
| CDNS | Cadence Design Systems | $82 B | Feb 17 | Earnings |
| RS | Advance Retail & Food Services Sales | — | Feb 17 | Economic Event |
| ADI | Analog Devices | $165 B | Feb 18 | Earnings |
| IP | Industrial Production & Capacity Utilization | — | Feb 18 | Economic Event |
| NHC | New Residential Construction | — | Feb 18 | Economic Event |
| TEM | Tempus AI | $9 B | Feb 24 | Earnings |
| HPQ | HP | $18 B | Feb 24 | Earnings |
| MELI | MercadoLibre | $101 B | Feb 24 | Earnings |
| LCID | Lucid Group | $3 B | Feb 24 | Earnings |
| TDOC | Teladoc Health | $1 B | Feb 25 | Earnings |
| AI | C3.ai | $2 B | Feb 25 | Earnings |
| SNOW | Snowflake | $62 B | Feb 25 | Earnings |
| ZM | Zoom Communications | $27 B | Feb 25 | Earnings |
| SNPS | Synopsys | $84 B | Feb 25 | Earnings |
| CRM | Salesforce | $178 B | Feb 25 | Earnings |
| NVDA | NVIDIA | $4,442 B | Feb 25 | Earnings |
| NTAP | NetApp | $20 B | Feb 25 | Earnings |
| NRS | New Residential Sales | — | Feb 25 | Economic Event |
| INTU | Intuit | $111 B | Feb 26 | Earnings |
| DELL | Dell Technologies | $78 B | Feb 26 | Earnings |
| ZS | Zscaler | $28 B | Feb 26 | Earnings |
| ADSK | Autodesk | $49 B | Feb 26 | Earnings |
| XYZ | Block | $30 B | Feb 26 | Earnings |
| ESTC | Elastic NV | $6 B | Feb 26 | Earnings |
| SOUN | SoundHound AI | $3 B | Feb 26 | Earnings |
| GDP | GDP, Second Estimate, Q4 2025 | — | Feb 26 | Economic Event |
| DG_ADV | Durable Goods Orders (Advance) | — | Feb 26 | Economic Event |
| PCE | Personal Income & Outlays (incl. PCE) | — | Feb 26 | Economic Event |
Availability This Week
I’ll be in our San Francisco office at the Ferry Building all week. It’s a good stretch for follow-ups, background conversations, and longer, off-the-record discussions that don’t fit neatly into a TV segment or a text.
Written reports are available to clients, with video summaries on YouTube, and of course our popular summaries of the summaries on Instagram, TikTok, and YouTube Shorts.

We certify that (1) the views expressed in this report accurately reflect our views about all of the subject companies and securities and (2) no part of our compensation was, is or will be directly related to the specific recommendations or views expressed in this report.
Important disclosures
Important disclosures are available by calling (347) 619-2489 or writing to Epistrophy Capital Research, One Ferry Building, Suite 201, San Francisco, CA 94105.
Epistrophy Capital Research is an independent research provider and does not operate as a financial institution. Epistrophy explicitly does not provide investment advice, stock recommendations, or solicitations to buy or sell any securities.
The research reports provided by Epistrophy Capital Research contain opinions derived from publicly available information, issuer communications, recognized statistical services, and other reputable sources considered reliable. However, Epistrophy does not independently verify the accuracy or completeness of such information and explicitly disclaims responsibility for any errors or omissions.
Opinions and analysis contained within Epistrophy's research reports are current only at the time of publication and are subject to change without notice. Readers must independently verify facts and conduct their own due diligence before making investment decisions.
Epistrophy Capital Research does not consider or evaluate individual investor circumstances, including investment objectives, financial situations, or risk tolerance. Investing in securities, particularly small-cap and micro-cap stocks, involves substantial risks, including significant volatility and potential loss of principal. Readers are strongly advised to consult their financial advisor or another qualified professional before acting on any information provided. Readers should assume that Epistrophy Capital Research, its principals or its contributors may have positions, long or short, in any of the companies discussed, and that Epistrophy Capital Research principals or contributors may have had, or currently have, business interests in the companies discussed.
Past performance referenced in Epistrophy reports is not indicative of future results. Security prices can fluctuate widely, and investors should be aware that investments can result in significant financial losses. Unless explicitly stated otherwise, prices quoted in reports reflect market closing prices from the previous trading day.
Epistrophy Capital Research publications are intended solely for direct recipients and should not be redistributed or shared with third parties without explicit permission from Epistrophy Capital Research LLC.
Epistrophy Capital Research reports are provided strictly for informational purposes and do not constitute a comprehensive analysis of any company, security, or industry. No content within these reports should be considered accounting, tax, legal, or professional advice.
Links or references to third-party websites or external resources are provided solely for informational convenience. Epistrophy Capital Research expressly disclaims endorsement of, and responsibility for, the content, accuracy, or reliability of such external information. Accessing third-party information is done entirely at the user's own risk.
For additional details, clarification, or specific inquiries regarding Epistrophy Capital Research reports, please contact Epistrophy Capital Research LLC directly.

