[THENEWSTACK]

Subquadratic launches 12M-token context window model

May 6, 2026
Subquadratic has released a model with a 12M-token context window that beats GPT-5.5 on retrieval benchmarks. The model sidesteps attention's O(n²) complexity bottleneck, likely by leveraging linear-attention or state-space mechanisms that make long-context inference practical at scale. A 50M-token version is planned, which would redefine document-level reasoning for RAG pipelines and for legal, genomics, and multi-modal workloads.
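The article does not disclose Subquadratic's actual architecture. As an illustration of how linear-attention variants avoid the quadratic cost, the sketch below contrasts standard softmax attention, which materializes an (n, n) score matrix, with a kernelized form that reassociates the matmuls to run in O(n·d²). The feature map `phi` and the normalization here are illustrative choices, not the company's method.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention: builds an (n, n) score matrix -> O(n^2 * d) in sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: phi(Q) @ (phi(K).T @ V). Reassociating the product
    # builds a (d, d) summary once, so total cost is O(n * d^2), linear in n.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                 # (d, d) summary, independent of n
    z = Qp @ Kp.sum(axis=0)       # per-query normalizer, shape (n,)
    return (Qp @ kv) / z[:, None]

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) * 0.1 for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

At a hypothetical 12M tokens, the (n, n) matrix of softmax attention alone would hold ~1.4×10¹⁴ entries, which is why long-context models replace it with linear or state-space mechanisms whose memory footprint stays constant in sequence length.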