AGI (Artificial General Intelligence)
Hypothetical AI with the ability to understand or learn any intellectual task that a human being can perform.
Definition
Artificial General Intelligence refers to AI systems with the broad, flexible cognitive abilities of humans: learning new tasks from minimal examples, transferring knowledge across domains, reasoning about novel problems, and setting and pursuing complex goals. All AI deployed today is "narrow" — it excels in specific domains but often fails badly outside them. AGI has neither been achieved nor been given a single, agreed formal definition.
Definitions and timelines vary enormously among researchers. Some (Demis Hassabis, Sam Altman) suggest AGI may arrive within years; others (Gary Marcus, Yann LeCun) argue current architectures are fundamentally insufficient and the path is much longer.
AGI's potential arrival is a central concern of AI safety research. Transformatively capable AI systems would likely be enormously economically valuable but could also concentrate power, cause labour market disruption, or — if misaligned — pose existential risks.
Examples
- No verified examples exist
- OpenAI's stated mission is to ensure that AGI benefits all of humanity
- Google DeepMind's Gemini has been described by some commentators as approaching AGI-level performance on certain benchmarks, a claim that remains widely disputed
Related Terms
AI Safety
The field focused on ensuring AI systems remain beneficial, controllable, and aligned with human values as they become more capable.
AI Alignment
The challenge of ensuring AI systems reliably pursue goals that align with human intentions and values.
Large Language Model (LLM)
A transformer-based AI system trained on vast corpora of text, capable of generating, reasoning about, and transforming language.
Foundation Model
A large model trained on broad data that can be adapted to many downstream tasks via fine-tuning or prompting.