  • Mal McCallion

Human level AI in 2 years, says Anthropic CEO

Updated: Dec 11, 2023

I've summarised this one-hour video (well, I got Dario's generative AI bot, Claude 2, to do it, in fact) in bullets at the bottom of this article, but if you want the ultimate TL;DR, here you go:


We all know that AI stands for Artificial Intelligence. Well - rather unhelpfully - what sounds like a minimiser, a subset of AI, 'AGI', is actually a massive expansion of it, into human-level thinking by machines. (We'll get onto ASI - Artificial Superintelligence - another day, one where you really don't have to sleep.)

AGI is Artificial General Intelligence, the thing that Turing started talking about all those decades ago - machines that are as intelligent as humans. In this video, Dario Amodei, CEO of Anthropic, talks about how it's really not very far away. And that is a quite amazing thing to say out loud.

From a purely pedantic point of view, one might argue that it's human-level inference rather than intelligence, but I have no doubt that the hype-cycle is going to be every bit as drama-infused as this until we get there.

To gain insights into the counter-punch to these grand visions of AI taking over everything, I really recommend that you read (or listen to, much more efficient!) 'The Myth of Artificial Intelligence' by Erik J. Larson. It breaks down the challenge of achieving real, honest AGI - we don't know how our intelligence works, so how can we program it? - and rebuts some of the more creative (and vested-interest-driven) claims about what is happening.

(Plus it was written before everyone got sweaty about ChatGPT, back in the Stone Age of 2021, so it's fascinating to see many of his predictions about PR coming true right now.)

Anyway, here's Dario's AI bot with its summary of the video - enjoy!

  • Dario Amodei has seen the potential for AI scaling and general intelligence for many years, going back to his early work in deep learning at Baidu. He sees the smooth scaling of AI capabilities with more data and compute as an empirical fact, even if we don't fully understand why it occurs.

  • Language models like GPT have been a major breakthrough, showing AI can learn a lot about the world just from predicting the next word. Scaling up language models seems like a promising path to general intelligence.

  • Amodei believes we could have models that seem generally intelligent in certain ways in 2-3 years, but they may still lack common sense, general knowledge, and capabilities in many domains. There are also safety issues to address.

  • Mechanistic interpretability research aims to look inside models to understand what they have learned and whether they are aligned with human values. This could be key for verifying if alignment techniques are working as intended.

  • On misuse, Amodei is concerned models could enable large-scale biological attacks in 2-3 years. On misalignment, the path is less clear but there are reasons to be cautious about controlling superhuman AI systems.

  • For beneficial outcomes, Amodei believes AI progress will need democratic oversight and decentralized control, not centralized power in small groups or "godlike" systems. Safety is crucial but so is addressing misuse potential.

  • Talent density, efficient training architectures, and interpretability may give Anthropic an edge, but staying at the frontier of AI capabilities is also important for safety research in Amodei's view. This has tradeoffs.

  • Amodei remains puzzled by how efficient and general human intelligence is compared to AI models. He believes intelligence is a very broad continuum of skills, not a single dimension.



