Beyond the Point of No Return: AGI is Coming

An overview of the current state of generality in AI and expert insights into its future.

DRL Team
AI R&D Center
06 Jun 2024
7 min read

What is AGI?

AGI stands for Artificial General Intelligence, a theoretical form of AI with intelligence equivalent to a human's. There is no scientific consensus on how to determine whether a system qualifies as AGI, as the necessary conditions for intelligence are still under discussion.

OpenAI defines AGI as a highly autonomous system that outperforms humans at most economically valuable work. Most AI researchers support this view and agree that an AGI must be able to solve most of the intellectual tasks humans can, at least at a human level.

Recently, in the essay "AGI is already here", Blaise Agüera y Arcas, VP of Engineering at Google, and Peter Norvig, Director of Research at Google, argued that SOTA LLMs like GPT-4, Claude 2, Llama 2, and Bard are already AGIs because they can handle a wide variety of tasks, work with multimodal inputs and outputs, operate in multiple languages, and "learn" from few-shot examples provided directly in the prompt.
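To make that last point concrete, "learning" from few-shot examples (in-context learning) simply means the task is specified by examples embedded in the prompt, with no change to the model's weights. A minimal, purely illustrative Python sketch (the reviews and labels below are made up for this example):

```python
# A minimal few-shot prompt: the task (sentiment labeling) is "taught" entirely
# through the examples inside the prompt; no fine-tuning or weight updates occur.
# Illustrative only: the reviews and labels are invented for this sketch.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after a week." -> Negative
Review: "Absolutely love the screen." ->"""

# Sent as-is to any chat or completion model, the model infers the pattern
# from the two labeled examples and should continue with " Positive".
print(few_shot_prompt)
```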

Levels of AGI: Where are we now?

In November 2023, researchers from Google DeepMind published a paper titled "Levels of AGI: Operationalizing Progress on the Path to AGI", revisiting the "AGI is already here" idea. They distinguish between "Narrow AI" and "General AI". "Narrow AI" is a system with a clearly scoped task or set of tasks. "General AI", in contrast, is a system capable of performing a wide range of non-physical tasks, including metacognitive abilities like learning new skills.

The authors grouped the competence of "Narrow" and "General" systems into six "Levels", from "Level 0: No AI" to "Level 5: Superhuman". From this perspective, the "Superhuman" level has already been achieved on various "Narrow" tasks, where AI outperforms humans in highly specialized, intellectually demanding jobs.

The current state of "General" AI, now represented by LLM-based systems, is rated only as "Level 1: Emerging", or "Weak AGI", which is equivalent to the capabilities of an unskilled human. Despite impressive conversational skills and in-depth multi-domain knowledge, modern AI typically cannot solve everyday general tasks, like making a phone call, ordering a pizza, or filling in a form on a website, without being explicitly programmed to do so. Thus, it lacks generality.

A system at "Level 2: Competent", outperforming at least 50% of skilled adults, can already be considered what is commonly called "AGI". However, no existing AI system has reached this level yet.

| Level | Narrow AI | General AI |
| --- | --- | --- |
| Level 0: No AI (non- or semi-automated systems) | Narrow Non-AI (already achieved): calculator software; compiler | General Non-AI (already achieved): human-in-the-loop computing, e.g., Amazon Mechanical Turk |
| Level 1: Emerging (equal to or somewhat better than an unskilled human) | Emerging Narrow AI (already achieved): GOFAI (Boden, 2014); simple rule-based systems, e.g., SHRDLU (Winograd, 1971) | Emerging General AI (already achieved): ChatGPT (OpenAI, 2023), Bard (Anil et al., 2023), Llama 2 (Touvron et al., 2023), Gemini (Pichai and Hassabis, 2023) |
| Level 2: Competent (at least 50th percentile of skilled adults) | Competent Narrow AI (already achieved): toxicity detectors such as Jigsaw (Das et al., 2022); smart speakers such as Siri (Apple), Alexa (Amazon), or Google Assistant (Google); VQA systems such as PaLI (Chen et al., 2023); Watson (IBM); SOTA LLMs for a subset of tasks (e.g., short essay writing, simple coding) | Competent General AI (not yet achieved): this level is typically referred to as "AGI" |
| Level 3: Expert (at least 90th percentile of skilled adults) | Expert Narrow AI (already achieved): spelling & grammar checkers such as Grammarly (Grammarly, 2023); generative image models such as Imagen (Saharia et al., 2022) or Dall-E 2 (Ramesh et al., 2022) | Expert General AI (not yet achieved) |
| Level 4: Virtuoso (at least 99th percentile of skilled adults) | Virtuoso Narrow AI (already achieved): Deep Blue (Campbell et al., 2002), AlphaGo (Silver et al., 2016, 2017) | Virtuoso General AI (not yet achieved) |
| Level 5: Superhuman (outperforms 100% of humans) | Superhuman Narrow AI (already achieved): AlphaFold (Jumper et al., 2021; Varadi et al., 2021), AlphaZero (Silver et al., 2018), Stockfish (Stockfish, 2023) | Artificial Superintelligence (ASI) (not yet achieved) |
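To make the taxonomy above concrete, here is a toy Python sketch of the rating scheme: a level is determined by the share of skilled adults the system outperforms and by whether its task suite is narrow or general. The percentile thresholds mirror the table; the rating function itself is our own simplification for illustration, not code from the DeepMind paper.

```python
# Toy sketch of the DeepMind "Levels of AGI" rating scheme. The thresholds
# mirror the table above; the function is our own simplification.

LEVELS = [
    ("Emerging", 0),      # equal to or somewhat better than an unskilled human
    ("Competent", 50),    # outperforms at least 50% of skilled adults
    ("Expert", 90),       # outperforms at least 90% of skilled adults
    ("Virtuoso", 99),     # outperforms at least 99% of skilled adults
    ("Superhuman", 100),  # outperforms 100% of humans
]

def rate_system(percentile_outperformed: float, is_general: bool) -> str:
    """Map the percentile of skilled adults a system outperforms (on its
    narrow or general task suite) to a level label from the table above."""
    level = "No AI"
    for name, threshold in LEVELS:
        if percentile_outperformed >= threshold:
            level = name
    if level == "Superhuman" and is_general:
        return "Artificial Superintelligence (ASI)"
    return f"{level} {'General' if is_general else 'Narrow'} AI"

# Hypothetical numbers: an LLM-based assistant that beats roughly 10% of
# skilled adults across a broad task suite lands at Level 1.
print(rate_system(10, is_general=True))    # -> Emerging General AI
print(rate_system(100, is_general=False))  # -> Superhuman Narrow AI
```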

AGI is the Path, not the Goal

"Levels of AGI" co-author Shane Legg, Chief AGI Scientist at Google DeepMind, who initially coined the AGI as a term in 2002, said that one of the motivations for writing was to emphasize the evolution of the AGI concept and simplify progress measurement along the way.

The authors point out that, given the current state of AI research, "Competent" General AI is more likely to be reached through small improvements accumulating over time than through a sudden breakthrough. They also distilled a few general principles, called "Focuses", that should be taken into account when evaluating progress on the path to AGI:

  • Focus on capabilities, not processes: AGI definitions often focus on complex and abstract mechanisms like consciousness. Instead, we should focus on a system's ability to perform tasks, regardless of how it works internally.
  • Focus on both generality and performance: a system must perform well on a diverse set of tasks before it can be called "General".
  • Focus on cognitive and metacognitive tasks: AGI benchmarks should include non-physical cognitive tasks and tasks that measure learning capabilities. Robotic embodiment makes a system more general but should not be required for AGI.
  • Focus on potential, not deployment: we should focus not on AGI's real-world deployment but on its potential to achieve goals. At the same time, the selected "goals" should be meaningful and useful to humanity: they shouldn't be easy to automate and should have real-world value.

Can AGI Arrive This Year?

Over the past two years, expectations for major advances in AI have soared. Some anticipated new versions of ChatGPT or Gemini with enhanced "General" skills, while others looked forward to groundbreaking developments surpassing human capabilities. Recent statements from OpenAI and others only fuel this overheated atmosphere of expectation. But what can we realistically expect in the near future?

On November 6th, 2023, at OpenAI DevDay, Sam Altman said that GPTs are a first step toward multi-agent systems that can do more planning and make more decisions on their own, without any user input. At the end of the talk, he said that OpenAI is getting the world ready to use more agents to represent collective intelligence.

Then, on November 17th, 2023, at the APEC summit, when asked by an interviewer, "What is the most remarkable surprise you expect to happen in 2024?", Altman said that the capabilities of AI models in 2024 would be remarkably more advanced than anyone expected, adding:

"I've gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime."

After this interview, some experts interpreted Altman's words as a prediction that AGI would be achieved in 2024 thanks to a new concept: a swarm of autonomous agents, in which each agent interacts and collaborates with the others, just like bees in a hive. Indeed, the swarm-of-agents approach has already shown much better performance than a single LLM and has significantly reduced hallucinations on software development tasks. The paper "More Agents Is All You Need" (Li et al., 2024) also finds that LLM performance scales with the number of agents instantiated, using a simple sampling-and-voting scheme (see the sketch below).
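As a rough illustration of that sampling-and-voting idea, the sketch below simulates several independent, noisy agents and takes a majority vote over their answers. The `query_agent` function is a made-up stand-in for a real LLM call; the paper's actual prompts and pipeline differ.

```python
import random
from collections import Counter

def query_agent(task, rng):
    """Toy stand-in for one LLM 'agent' call: each simulated agent answers the
    arithmetic task correctly only ~60% of the time, mimicking a noisy model
    sampled at non-zero temperature. (Hypothetical, for illustration only.)"""
    return "4" if rng.random() < 0.6 else str(rng.randint(0, 9))

def vote(answers):
    """Return the most frequent answer: the sampling-and-voting step."""
    return Counter(answers).most_common(1)[0][0]

def accuracy(n_agents, n_trials=2000):
    """Fraction of trials where the majority vote of n_agents is correct."""
    rng = random.Random(0)
    correct = 0
    for _ in range(n_trials):
        answers = [query_agent("2 + 2 = ?", rng) for _ in range(n_agents)]
        correct += vote(answers) == "4"
    return correct / n_trials

print(accuracy(n_agents=1))    # a single noisy agent: roughly 0.6
print(accuracy(n_agents=15))   # voting across 15 agents: close to 1.0
```

Even with this crude simulation, the accuracy of the majority vote climbs quickly as more agents are added, which is the intuition behind the paper's scaling result.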

Others believe Altman was referring to OpenAI's secret project Q*. The first version of Q* was reportedly demonstrated only internally and was presumably able to solve mathematical problems at the level of grade-school students. Experts have suggested that Q* combines Q-learning (a type of reinforcement learning in which an algorithm learns to solve problems through reward feedback) with the A* graph search algorithm. On a recent Lex Fridman podcast, Sam Altman declined to comment on Q* but said he expects AGI, in the sense of a system with human-level performance, by the end of this decade, and possibly somewhat sooner.
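For readers unfamiliar with the first of those ingredients, the sketch below shows a textbook tabular Q-learning update, in which action-value estimates improve purely from reward feedback. It illustrates only the classic algorithm; how (or whether) the rumored Q* combines this with A*-style search is pure speculation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent improves its action-value
# estimates Q(s, a) from reward feedback alone, with no model of the task.
# This shows only the textbook update rule, not how OpenAI's rumored Q* works.

alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate
Q = defaultdict(float)                  # Q[(state, action)] -> value estimate

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """One Q-learning step: nudge Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Example usage on a made-up two-state problem (purely illustrative):
q_update(state="s0", action="right", reward=1.0,
         next_state="s1", actions=["left", "right"])
```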

Notably, experts' forecasts for when AGI will be achieved keep moving closer. If this trend of shrinking timelines continues, ARK Invest predicts AGI by 2027.

Source: [“Platforms Of Innovation”](https://assets.arkinvest.com/media-8e522a83-1b23-4d58-a202-792712f8d2d3/ff76349d-7983-4384-899f-a105178f886c/Convergence%20White%20Paper%20PUBLICATION%2020240321.pdf) report by ARK Invest


Most experts consider the current and the next decade the likely time range for the AGI milestone. People love milestones. But looking back at the path we have already taken, we should accept that there won't be a single moment when humanity suddenly finds itself in a "New AGI World". Only those who overslept a decade or two will :)

As an R&D center, DataRoot Labs keeps tracking AGI like a fast-approaching train, so as not to miss the point when it surpasses the level of human understanding. Let's take this journey together. Follow us to stay tuned.
