Levels of AGI

We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.

Defining AGI: Six Principles

Morris et al. (2024)

We analyze existing definitions of AGI and distill six principles that a useful ontology for AGI should satisfy.

The six principles are:

1. Focus on Capabilities, not Processes. AGI systems need not think or understand in a human-like way, and it is not a necessary precursor for AGI that systems possess qualities such as consciousness or sentience.

2. Focus on both Generality and Performance. Both dimensions are key components of AGI.

3. Focus on Cognitive and Metacognitive, but not Physical, Tasks. The ability to perform physical tasks increases a system's generality but should not be considered a necessary prerequisite to achieving AGI. That said, embodiment in the physical world may be necessary for building the world knowledge required to succeed on some cognitive tasks.

4. Focus on Potential, not Deployment. Requiring deployment as a condition of measuring AGI introduces non-technical hurdles such as legal and social considerations, as well as ethical and safety concerns.

5. Focus on Ecological Validity. This may mean eschewing traditional AI metrics that are easy to automate or quantify but may not capture the skills that people would value in an AGI.

6. Focus on the Path to AGI, not a Single Endpoint. We posit there is value in defining "Levels of AGI."
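The "Levels of AGI" framework crosses a performance axis with a generality axis. As a minimal sketch, the ontology can be encoded as a small matrix of enums; the tier names follow Morris et al. (2024), while the class names, docstrings, and `describe` helper here are illustrative assumptions, not the paper's notation:

```python
from dataclasses import dataclass
from enum import Enum


class Performance(Enum):
    """Performance tiers, ordered by depth of capability."""
    NO_AI = 0
    EMERGING = 1    # comparable to or somewhat better than an unskilled human
    COMPETENT = 2   # at least 50th percentile of skilled adults
    EXPERT = 3      # at least 90th percentile
    VIRTUOSO = 4    # at least 99th percentile
    SUPERHUMAN = 5  # outperforms all humans


class Generality(Enum):
    """Breadth of tasks the performance level applies to."""
    NARROW = "narrow"    # a clearly scoped task or set of tasks
    GENERAL = "general"  # a wide range of non-physical tasks


@dataclass(frozen=True)
class AGILevel:
    """One cell in the performance x generality matrix."""
    performance: Performance
    generality: Generality

    def describe(self) -> str:
        return f"{self.performance.name.title()} {self.generality.value} AI"


# Example cell: a system that is at least median-skilled-human across
# a wide range of cognitive tasks.
level = AGILevel(Performance.COMPETENT, Generality.GENERAL)
print(level.describe())  # -> Competent general AI
```

Encoding the levels as ordered enum members makes the "path, not endpoint" principle concrete: systems can be compared by `Performance.value` along the way rather than judged against a single AGI threshold.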

Argument map: the Levels of AGI argument.