The idea of artificial intelligence being as smart and versatile as a human being sounds like science fiction. But with the rapid pace of AI development, more and more experts are asking: how long will it take before we reach Artificial General Intelligence (AGI), the moment an AI can reason independently, invent new concepts, and adapt as flexibly as we do? In this article, I dive into that question.

Recently, I had a discussion about AI with my father (69, amateur chess player), and before long we were talking about Garry Kasparov. In the 1990s, Kasparov was a world-renowned chess grandmaster, famous for his playing style and numerous victories. But in 1997, something happened that shocked the world: he lost to IBM’s supercomputer Deep Blue. For the first time, a world chess champion was defeated by a computer—something many had thought impossible. Kasparov was stunned and later said: “I felt a new kind of intelligence, a spirit in the machine.”

Yet Deep Blue wasn’t what we’d now call AGI. The supercomputer could only play chess. But the moment marked a turning point: technology began outperforming humans at specific tasks. It sparked a debate about the limits of artificial intelligence. Today, I see the same discussion flaring up again, but now it’s not just about winning a game; it’s about disrupting entire industries.

When Will We Have AGI?

Predictions on when we’ll achieve AGI vary widely. Some researchers, like Dario Amodei of Anthropic, believe we’ll see systems with early AGI traits as soon as 2026. Others, like AI pioneer Geoffrey Hinton, think it might take five to twenty years. Quite a margin…

But not everyone is convinced we’ll ever get there. Yann LeCun, the highly respected AI researcher at Meta, argues that AGI is still decades away. He even suggests it may never be possible in the way people imagine.

Demis Hassabis, CEO of DeepMind, is more cautious in his forecasts: “I think human-like reasoning in AI is possible within a decade, but it’s far from certain. We still need to make fundamental breakthroughs in our understanding of intelligence.”

Zooming out, though, it’s clear to me that AI is already having a huge impact—on individuals, organizations, and entire industries. Both positive and negative. Whether AGI arrives soon or not, the boundaries of what AI can do are already expanding rapidly.

From Narrow AI to General Intelligence

Today’s AI systems, like GPT-4 and Gemini, are impressively versatile. I’m continually amazed by how these LLMs support and amplify my daily work. But these models are still specialized. They can generate text, write code, create images, yet all within clearly defined boundaries. A general-purpose language model like GPT can’t perform complex financial analysis the way Bloomberg’s specialized AI can, and McKinsey’s AI can’t compose music or analyze medical scans.

AGI would need to combine all of these skills into one system. An AI as flexible as a human would learn from experience, adapt to new problems, and perform tasks it was never explicitly trained for.

That’s a massive leap beyond today’s AI. Sam Altman, CEO of OpenAI, calls AGI “the ultimate technological leap” and says: “Once we reach AGI, it will become the most powerful tool humanity has ever created.”

So Where Are We in the AGI Race?

While AGI is still a thing of the future in my view, we’re already seeing AI systems perform tasks that were recently thought impossible.

AI models like GPT-4 and Gemini outperform most humans on complex exams. OpenAI’s GPT-4 scored in the top 10% on the Uniform Bar Exam for U.S. lawyers, and DeepMind’s Med-PaLM can answer medical questions at the level of an experienced doctor. These systems not only provide correct answers but also reason through complex problems, spot patterns in data, and even generate hypotheses.

AI’s ability to independently solve problems and make connections grows with each version. AlphaFold, a breakthrough from DeepMind, predicted the 3D structure of almost all known proteins, a problem that researchers (my younger brother among them) had struggled with for decades. To me, this proves that AI already functions as an intelligent system, going beyond simple pattern recognition.

Geoffrey Hinton, one of the founding fathers of deep learning, says: “We’re reaching a point where AI is starting to learn like humans do. That’s both exciting and worrying.”

But despite this progress, AI models are still limited. They lack motivation, can’t develop abstract concepts like humans do, and rely heavily on vast amounts of training data. This makes the leap to AGI complex.

What’s the Next Step in AGI Development?

Looking at current challenges and developments, AGI remains, in my view, an ambitious goal for now. Think of the technologies we’ve developed over the past century. Progress came in gradual steps—from the lightbulb to the internet, from the first computer to smartphones. But AGI is a different story. It’s not a matter of incremental improvements; it’s a bold leap into a fundamentally new reality.

Sam Altman recently said: “We’re now confident we know how to build AGI.” Not decades from now, but possibly within Trump’s next presidential term, meaning in just 3.5 years. His prediction no longer feels like science fiction. Advances in computing power, model capability, and scalability suggest that the final barriers are falling faster than expected.

AGI won’t arrive overnight, but the first systems that resemble it are already on the horizon. If the predictions are right, it won’t be long before we have to ask ourselves: how do we collaborate with an intelligence that can outperform us in every domain?