Open coordination layer for frontier AI infrastructure
What's blocking 100× cheaper, faster, more abundant AI? A versioned, community-editable list of the real bottlenecks — with known solutions, adoption blockers, and dependency edges. Built so engineers, researchers, and funders can coordinate against the same map.
Public discourse on AI compute oscillates between two errors: marketing-grade "200× speedup" claims that collapse under audit, and nihilistic "we need $10 trillion in fabs" framings that paralyze coordination. The truth is in between, and it is more actionable than either.
The eighteen frontier-inference bottlenecks catalogued here point to a common conclusion: the binding constraint on AI abundance is a shared map, not money or physics. This registry is one attempt at that map.
Bottleneck-Driven Projection of Frontier-Class LLM Inference on Dedicated ASICs · v0.2 (2026-04-22)
| Metric | Projected range |
|---|---|
| Decode throughput | 10-70× over H100 |
| Energy per token | 50-200× |
| Cost per million tokens | 20-100× |
| Agent density per $1M CapEx | 3,000-12,000 streams |
Each range is conditional on model size fit, batch regime, and software maturity. Single-scalar comparisons across these metrics are misleading.
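As a worked illustration of why single-scalar comparisons mislead, the sketch below applies the multiplier ranges above to a hypothetical H100 baseline. The baseline figures are placeholders chosen for the example, not measurements from the paper.

```python
# Minimal sketch: apply the projected multiplier ranges to a *hypothetical*
# H100 baseline. Baseline values below are illustrative placeholders only.

H100_BASELINE = {
    "decode_throughput_tok_s": 1_000,  # hypothetical per-GPU decode rate
    "energy_j_per_token": 1.0,         # hypothetical
    "cost_usd_per_mtok": 2.0,          # hypothetical
}

# Projected improvement ranges (low, high) from the table above.
ASIC_GAIN = {
    "decode_throughput_tok_s": (10, 70),
    "energy_j_per_token": (50, 200),
    "cost_usd_per_mtok": (20, 100),
}

def project(baseline: dict, gains: dict) -> dict:
    """Return (pessimistic, optimistic) projections per metric.

    Throughput scales up with the gain; energy and cost scale down.
    """
    out = {}
    for metric, value in baseline.items():
        lo, hi = gains[metric]
        if metric.startswith("decode_throughput"):
            out[metric] = (value * lo, value * hi)
        else:
            out[metric] = (value / lo, value / hi)
    return out

if __name__ == "__main__":
    for metric, (pess, opt) in project(H100_BASELINE, ASIC_GAIN).items():
        print(f"{metric}: {pess:.4g} .. {opt:.4g}")
    # The low and high ends of each range assume different model sizes,
    # batch regimes, and software maturity, so quoting every metric at its
    # optimistic end simultaneously overstates the combined gain.
```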
Read the paper (PDF) · Markdown source · BibTeX (.bib)
Eighteen open bottlenecks. Filter by type, status, or priority. Click any entry for full detail, known solutions, blockers, and dependencies.
| ID | Name | Type | Status | Priority | Difficulty | Unlock |
|---|---|---|---|---|---|---|
The full registry lives in bottleneck_registry.md. The schema is documented at the top of that file; each entry is one Markdown section.
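Because the registry is plain Markdown, it can be consumed by scripts as well as read directly. Below is a minimal sketch of filtering entries by status and priority; the heading pattern and field names (`**Status**:`, `**Priority**:`) are assumptions for illustration, and the authoritative schema is the one documented at the top of bottleneck_registry.md.

```python
# Minimal sketch of filtering registry entries programmatically.
# The entry heading format and field names below are assumptions.
import re
from pathlib import Path

ENTRY_HEADING = re.compile(r"^##\s+(?P<id>\S+)\s+(?P<name>.+)$")       # assumed: "## BN-01 Name"
FIELD_LINE = re.compile(r"^\*\*(?P<key>[^*]+)\*\*:\s*(?P<value>.+)$")  # assumed: "**Status**: open"

def parse_registry(path: str) -> list[dict]:
    """Split the registry file into one dict per Markdown section."""
    entries, current = [], None
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if m := ENTRY_HEADING.match(line):
            current = {"id": m["id"], "name": m["name"]}
            entries.append(current)
        elif current and (m := FIELD_LINE.match(line)):
            current[m["key"].strip().lower()] = m["value"].strip()
    return entries

def filter_entries(entries, **criteria):
    """Keep entries whose fields match all given criteria, e.g. status='open'."""
    return [e for e in entries
            if all(e.get(k, "").lower() == v.lower() for k, v in criteria.items())]

if __name__ == "__main__":
    entries = parse_registry("bottleneck_registry.md")
    for e in filter_entries(entries, status="open", priority="high"):
        print(e["id"], e["name"])
```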
This project is run by an independent researcher on a $20/month budget. Donations go directly to compute (LoRA training runs, benchmark replication) and registry maintenance. Every dollar is accounted for in a public ledger.
Donation channels (GitHub Sponsors, Ko-fi, etc.) are pending and will be added by the author.