AGI or Skynet Isn't Coming Anytime Soon

June 18, 2025

If you haven’t watched it, I recommend the episode I took part in last Friday, 06/13/2025, on Flow Podcast:

I made a point of raising something that’s worth expanding on now. Every time you believe a CEO of an A.I. company saying “A.I. is going to replace people”, “AGI is knocking at our door”, or “AGI is going to be catastrophic”, you’re believing a drug dealer who tells you cocaine is good for your health.

Let’s get to the new information. Sundar Pichai is Google’s CEO, and one of the people who said “30% of Google’s code” is already written by A.I. (without explaining, of course, “which 30%” or showing anything).

Check out the same Sundar Pichai today, 06/18/2025, in “Business Insider”:

Sundar Pichai

Interesting how the tone got more “down to earth” now. And he wasn’t the first. Back in February 2025, Microsoft’s CEO Satya Nadella said the following in articles like this one:

Satya Nadella

And why this shift in tone, given that at the end of 2024 everyone was pumped up shouting “AGI”, “AGI”?

There’s a clause in Microsoft’s investment deal with OpenAI (an investment of roughly USD 14 billion) stating that OpenAI would stop sharing technology with Microsoft once “AGI” was reached, defined as “highly autonomous systems that outperform humans at most economically valuable work”. That’s the whole definition.

The problem is that this also cuts Microsoft out as an investor. On top of that, Microsoft itself recently started declaring itself a competitor to OpenAI instead of a partner, positioning itself alongside the likes of Google or Anthropic’s Claude. Read the details in this analysis:

Get this straight: every “it’s AGI, it’s not AGI” discussion has more to do with under-the-table negotiations between the various Big Tech players involved and their investors. It has nothing to do with real technology. You’re just another audience monkey in a show you can’t even watch. So you’re cheering blindly, as usual.

Geoffrey Hinton and the Expert Problem

Yet another interview with this Geoffrey Hinton guy came out - more fatalism, more “it’s over”, and more people screaming around every corner thinking Armageddon is showing up tomorrow.

The media considers Geoffrey Hinton the “Godfather of A.I.”

Because of this, many people believe everything he says - “Of course the guy who invented A.I. knows what’s going to happen”. It turned into a cult.

I myself said on Flow that he invented Backpropagation - the backbone of A.I. training - but that’s a simplification. Hinton himself has clarified several times that he did not invent it. The bulk of the work was published earlier by Paul Werbos and Shun-Ichi Amari. But it’s true that Hinton was one of the people who popularized it, in the 1986 paper with David Rumelhart and Ronald Williams.

Hinton, as a professor, was lucky enough to supervise students who would go on to become brilliant. On AlexNet - I told the story on Flow - it was Ilya Sutskever who came up with the idea of using GPUs to train deep neural networks. Ilya would later co-found OpenAI alongside Andrej Karpathy - whom I despise these days, because he’s the one who coined that “Vibe Coding” thing. Karpathy has been following in Hinton’s footsteps, spreading bullshit.

Still on the computer vision revolution: besides the use of GPUs, the big architectural step was Convolutional Neural Networks (CNNs), from Yann LeCun - who did his postdoc under Hinton.

The problem: the media gives more credit to the supervisor than to the students. It lumps everything together, and it ends up looking as if Hinton - alone - invented Backpropagation, CNNs, and the entire AlexNet. Even the Turing Award he received was shared with LeCun and Bengio - again, the merit belonged to a team - but the name “Hinton” is stronger, so it came out as if “he” was the one most responsible. Obviously, that way anyone looks like a demigod.

I’m not saying he’s dumb or anything like that. But he’s not all that, he’s not infallible, and he’s certainly a normal human being who makes mistakes. There’s no correlation between his previous work and his speculations about the future.

At the other end of the spectrum, there’s a Nobel laureate, Roger Penrose, who has been postulating since the late 80s that “consciousness is NOT computable” - for example, in his 1989 book “The Emperor’s New Mind”. A Nobel laureate of the caliber of someone who proved that black holes are a robust prediction of Einstein’s theory and who worked with Hawking.

On top of that, Yann LeCun himself belongs to the camp that doesn’t believe current A.I.s will get anywhere near AGI either. As I ALWAYS repeat: some new discovery - that we don’t yet know about - needs to happen for that to occur. LeCun has his theories and is at Meta, working on a project called V-JEPA to see if he can make that discovery.

Instead of listening to Hinton, listen to a bit of LeCun:

Well then: if “authority” is your only yardstick, you should blindly believe Penrose and LeCun as well. And now, how does that work out?

This game is more complicated than your naive blind-follower mind can see. Don’t be just an audience monkey, and understand how the pieces fit together.

“You Can’t Prove AGI Won’t Exist”

It’s true, nobody can. Nobody can prove God doesn’t exist. Nobody can prove unicorns don’t exist. This means absolutely nothing; it’s a logical fallacy (the classic argument from ignorance).

People outside of tech who never studied computer science can only see what the newspapers publish or the nonsense some influencer says on a podcast.

Let’s clear this up: yes, all of us - programmers trained in computer science - know that with the current LLM architecture, there’s no way to reach AGI. That is a certainty.

The problem is that you folks think not even we know everything about A.I. - that there’s some mysterious layer inside the model that works without anyone knowing how.

Or worse, you folks think that only a few names considered important, like Geoffrey Hinton or Ilya Sutskever, are capable of understanding it. Understand an important truth: yes, arriving at the General Theory of Relativity took giant geniuses like Albert Einstein and David Hilbert. One of them would get there; not every physicist could.

But, once the Theory is described and understood, it’s easily taught to first-year college students. It’s no mystical secret that needs decades of training to understand.

Same thing here: perceptrons, neural networks, backpropagation, gradient descent, RNNs, CNNs, Transformers, etc. Each of these stages involved people considered geniuses, like Geoffrey Hinton (whom many will say just took the credit while the actual work was done by the Sutskevers or LeCuns). But once understood, all of this is undergraduate material in computer science - you don’t even need a PhD to understand most of it.
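
As a concrete example, here’s a minimal sketch of exactly that undergraduate material: a single perceptron learning the logical AND function by gradient descent, in plain NumPy. The hyperparameters are arbitrary choices for illustration, nothing more:

```python
# A single sigmoid "perceptron" trained with gradient descent on AND.
# Plain NumPy, no framework - the entire mechanism fits in a dozen lines.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])  # AND truth table

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0
lr = 0.5  # learning rate, chosen arbitrarily

for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = p - y                      # gradient of cross-entropy loss w.r.t. pre-activation
    w -= lr * (X.T @ grad) / len(y)   # gradient descent step on the weights
    b -= lr * grad.mean()             # ... and on the bias

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b)))))  # -> [0. 0. 0. 1.]
```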

Trained models work because we know how to tune them to work. An important part of the training phase is feeding the neural networks millions of PRE-MADE questions and answers. That’s the INSTRUCTION phase (supervised fine-tuning), and EVERY LLM goes through it. It answers exactly the way each company behind it taught it to answer, nothing more than that.
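
Mechanically, that instruction phase looks something like this minimal sketch - assuming the Hugging Face `transformers` library and GPT-2 as a stand-in for any open model, with a made-up data record (real datasets have millions of them):

```python
# One supervised step of instruction tuning: the model is pushed to
# reproduce a PRE-MADE answer to a PRE-MADE question, verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A single (hypothetical) record of an instruction dataset.
example = {
    "prompt": "Explain what gradient descent is.",
    "response": "Gradient descent is an optimization method that ...",
}

text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
batch = tokenizer(text, return_tensors="pt")

# Causal-LM objective: labels are the input ids; the loss pushes the model
# to emit exactly this curated answer. Repeat over the whole dataset.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # one optimizer step would follow (optimizer.step())
```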

Ah, but there are so many parameters that you can’t possibly know everything about it. Yes, we can: any good programmer can fire up TensorFlow or PyTorch, load the model, and explore every dimension. We can start from the answer, walk the steps backward, and see why it formed the way it did. We can re-tune, we can tweak as much as we want. I showed part of this in previous posts.
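
For instance, here’s a minimal sketch of “starting from the answer and looking backward”, again assuming `transformers` and GPT-2: backpropagate from the single logit of the predicted token and measure how much each input token influenced that answer:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Embed the tokens ourselves so we can ask for gradients on the embeddings.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds).logits  # (1, seq_len, vocab)
answer_id = int(logits[0, -1].argmax())
print("Predicted next token:", tokenizer.decode(answer_id))

# "Start from the answer and look backward": the gradient of that single
# logit w.r.t. each input embedding gives a crude per-token saliency score.
logits[0, -1, answer_id].backward()
saliency = embeds.grad[0].norm(dim=-1)
for tok, score in zip(inputs["input_ids"][0], saliency):
    print(f"{tokenizer.decode(int(tok)):>12}  {float(score):.4f}")
```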

Every stage - training, inference, optimization, alignment methodologies, deployment strategies - is documented, and all of us in the field know exactly how everything works.

That’s why it’s scary that someone like a Hinton or a Sutskever keeps going around fearmongering without demonstrating anything. It shows how out of the loop they are about what’s actually happening. Yann LeCun is right: he gets his hands dirty, and he knows the current architecture has already hit its ceiling and there isn’t much further to go from here. That’s why he started other projects to try to figure out what a NEW architecture that leads to the next stage might look like. That’s Meta’s V-JEPA 2 project.

Just because YOU don’t know how things work, don’t think there’s room for science-fiction stuff. It’s the same as watching a movie like “The Ring”, where an evil spirit lives inside a VHS tape and comes out of the TV, then treating that TV-spirit as if it could be real and telling me “you can’t prove it’s not possible”. It’s a level of childishness and lack of knowledge that I don’t even know how to begin explaining.

The BURDEN OF PROOF is always on whoever is SELLING the claim. Every A.I. company CEO and every renowned academic making this claim - without publishing anything - is the one who has to prove it. I’m not the one who has to prove anything: they are. No one should have to prove a negative; it makes no sense.

As a bonus, going back to Yann LeCun: I didn’t follow him before; I only recently went to see what he’s saying today. I found it interesting how he argues exactly the same things I’ve been saying for over a year now. I don’t want to claim for a second that I’m even a fraction of his intelligence. It’s just that, starting from the same principles, you inevitably arrive at the same conclusions. It’s pure inference from first principles, not speculation. If you write “2 + 2”, both he and I will arrive at “4”. Everyone keeps saying “but if you have enough computing power, one day it’ll get to 10”. It won’t. Watch this other interview of his; it’s very enlightening:

What LeCun and others - like DeepMind - have already said is that LLMs don’t have the capacity to reach AGI. That’s known, and DeepMind has already proven it mathematically; this other video has more details. In summary, an LLM is good as an “information retriever” - what I call a “glorified text completer”, meaning it’s only capable of completing with information it has already seen in training (plus random combinations that sometimes make it look like new information, but it isn’t).

The only way for it to solve ANY problem would be to have a CAUSAL MODEL of ALL PROBLEMS. In other words, it would already need to know the answer to every question beforehand. Obviously we don’t have that; therefore it’s impossible for an LLM - a machine for retrieving information - to give us solutions that don’t exist, like “what’s the new discovery that will enable AGI?”. It’s an undecidable, non-computable problem - mathematically proven to be unfeasible.
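
And to see what “glorified text completer” means mechanically, one more minimal sketch under the same assumptions as above (`transformers`, GPT-2): generation is nothing more than repeatedly picking the next token from a probability distribution learned over the training data:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Water boils at", return_tensors="pt")["input_ids"]
with torch.no_grad():
    for _ in range(8):
        # Greedy decoding: take the single most likely continuation -
        # i.e. pure retrieval of statistics seen during training.
        next_id = model(ids).logits[0, -1].argmax()
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```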

AGI, under the current architecture, is not going to happen. Some new discovery needs to happen first. That could be 2 years from now, or it could be 20. But there is no possible path from what we know today.