How do we get to AGI?
Everyone has a theory. We test them.
The Problem
Most AGI discourse is untestable speculation. Papers, blog posts, Twitter threads — claims you can argue about forever but never verify. "Scaling is all you need." "We need symbolic reasoning." "It's just around the corner." How do you know who's right?
The Solution
Here, theories break into claims. Claims face challenges. What survives is what's true.
No popularity contests. No appeals to authority. Just evidence and survival.
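As a minimal sketch of that structure, here is how it might look in Python. The names (Theory, Claim, Challenge) and the survival rule are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Challenge:
    """An attempt to break a claim, with the outcome recorded."""
    description: str
    refuted: bool  # True if this challenge broke the claim

@dataclass
class Claim:
    """A single testable statement extracted from a theory."""
    text: str
    challenges: list[Challenge] = field(default_factory=list)

    def survives(self) -> bool:
        # A claim survives only if no recorded challenge has refuted it.
        return not any(c.refuted for c in self.challenges)

@dataclass
class Theory:
    """A road to AGI, broken into claims that face challenges."""
    name: str
    claims: list[Claim] = field(default_factory=list)

    def surviving_claims(self) -> list[Claim]:
        return [c for c in self.claims if c.survives()]
```

Claims that survive their challenges across multiple theories are what feed the consensus described below.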
What's Surviving
Claims that hold up across multiple theories become the emerging consensus: the Highway.
LLMs perform N-dimensional Bayesian accounting that humans cannot; they handle hundreds or thousands of dimensions effortlessly while humans max out at 5-7 items.
Operational language creates a shared data structure that both humans and LLMs can use without divergence, so the plane of reciprocal information exchange converges on the words themselves.
The evolution of reasoning proceeds from association to explanation to justification to falsification to adversarialism—Darwinian survival of claims through staged tests.
Claims must pass survival hurdles: true, ethical, possible, warrantable, liable. If a claim cannot pass these gates, it cannot survive as testimony (a sketch of this pipeline follows the list).
Constructive logic from first principles is another form of falsification—building from the bottom up exposes what you are missing.
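The survival hurdles and staged tests above can be read as a pipeline. Here is a hedged sketch in Python: the gate names come from the list, but the ordering of checks and the pass/fail predicates are placeholder assumptions.

```python
from typing import Callable

# Gate names follow the survival hurdles listed above; each check is a
# placeholder predicate standing in for a real evaluation procedure.
GateCheck = Callable[[str], bool]

GATES: list[tuple[str, GateCheck]] = [
    ("true",        lambda claim: True),  # evidence check (placeholder)
    ("ethical",     lambda claim: True),  # ethics review (placeholder)
    ("possible",    lambda claim: True),  # feasibility check (placeholder)
    ("warrantable", lambda claim: True),  # can it be warranted? (placeholder)
    ("liable",      lambda claim: True),  # will someone stand behind it? (placeholder)
]

def run_gates(claim: str) -> tuple[bool, list[str]]:
    """Run a claim through each gate in order; stop at the first failure."""
    passed: list[str] = []
    for name, check in GATES:
        if not check(claim):
            return False, passed  # the claim cannot survive as testimony
        passed.append(name)
    return True, passed
```

A claim that clears every gate stays in contention for the Highway; one that fails drops out at that stage.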
Explore the Roads
Each road is a theory of how we get to AGI, broken into testable claims.
The General Reasoning Road
AGI requires human-machine commensurability—shared reasoning structures that both can read, write, and verify. Scaling alone won't get us there; we need operational language, adversarial testing, and operator-auditor separation.
The Hybrid Systems Road
AGI requires combining neural networks with symbolic reasoning—explicit knowledge representation, formal logic, and structured world models that neural nets alone cannot provide.
The Scaling Road
AGI emerges from scale—sufficient compute, data, and architectural refinements will produce general intelligence. The path is more of the same, done better.