
Always five years away

Nobody can agree on what AGI means, and this has been going on long enough that the disagreement itself is worth examining.

The rough consensus, when there is one, goes something like this: AGI is a system capable of doing anything a human can do cognitively, not just one specific thing. It can write, reason, plan, learn, adapt. It is not a chess engine. It is not a very good autocomplete. It is, more or less, a replacement for the thinking parts of a person.

That definition sounds clean until you try to pin it down. Replace which person? At what tasks? Under what conditions? The word "general" is doing enormous work and nobody is checking its credentials.

This matters because AGI is the thing that justifies the current pace of AI development. The argument is roughly: we are building toward something transformative enough that normal caution doesn't apply. Slowing down is dangerous because someone else might get there first. The stakes are too high for standard timelines or standard governance.

If AGI is the premise of all that reasoning, you would expect a reasonably stable definition of it. You would be wrong.

The target moves. In the early 2000s, passing the Turing test was a plausible proxy. Then it wasn't, once we could build systems that passed it without doing anything impressive. "Masters a new domain without additional training" was floated for a while. Then came benchmarks: score above human average on a standardised set of tasks. The benchmarks get saturated, new ones appear, and the goalposts shift again without announcement.

What is interesting is not that the definition is imprecise. Lots of useful concepts are imprecise. What is interesting is the direction of the imprecision. AGI never seems to arrive. Each time a system does something that would have qualified as general intelligence five years ago, the definition expands to exclude it. The thing we built is impressive, but it is not the thing we meant.

This is convenient. A goal that is always just out of reach requires indefinite investment to pursue. Telling investors and governments that you are five years from AGI is a better pitch than telling them you have built a very good text predictor that is mostly useful for drafting emails. The vagueness is not a failure of definition. It is load-bearing.

The researchers who work on this are often more careful. "AGI" in an academic context is frequently treated as useful shorthand for a research direction, not a claim about what is going to happen or when. The problem is that the word travels out of those conversations and into earnings calls and Senate testimony, where it is stripped of its hedges and used to justify specific claims about urgency and stakes.

The European regulatory instinct, to ask not "when does it arrive" but "who is accountable when it causes harm," is the better framing. You do not need a settled definition of AGI to write a liability regime for automated decision systems. The accountability question does not wait for the philosophers to finish.

We have been five years from AGI for most of the time the concept has existed. That is either evidence that the problem is very hard, or evidence that the definition is doing exactly what it needs to do.