Paul Kedrosky notes that the AI labs have implicitly given up on their current technology leading to AGI. But they are still hoping a breakthrough they don’t yet have will do it: continuous learning. Kedrosky quotes a Bloomberg piece [Paywall] that quotes a YouTube interview with Sam Altman:
I think we’ll start to get to a place where we put these systems together in new ways and we have kind of continuous learning where the systems just run forever and get smarter and smarter.
I don’t need to point out that knowledge and intelligence are different things. Current AI models already far surpass humans in the amount of knowledge they have, but humans are still more intelligent. Even among humans, knowledge and intelligence are loosely correlated, at best. Many celebrated geniuses do their best work at a young age but do not become even more genius-y as they get older and gain more knowledge.
Because this is obvious, I have to assume Altman isn’t thinking of AGI appearing through the accumulation of knowledge. I assume he is talking about the kind of bootstrap AGI that science fiction writers have been warning us about for decades: the AI comes up with an idea that makes it more intelligent, then uses that extra intelligence to make itself smarter still, in a loop that grows its intelligence exponentially until it is smarter than humans. (And potentially keeps going.)
I wrote a couple of years ago about why this is wrong. The tl;dr is that different types of reasoning are structurally different, and you can’t reason your way from a stricter type of reasoning to a more general one. You can’t use deduction to deduce induction; you can’t use induction to induce abduction; and, assuming we humans are using something more general than abduction, you presumably couldn’t use abduction to abduce whatever that more general form of reasoning is. More general types of reasoning are ‘smarter’ than stricter ones.
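To make the distinction concrete, here is a toy sketch of the three modes of reasoning (my own illustration, not the original post’s): deduction applies a rule and is certain; induction generalizes a rule from cases and is merely probable; abduction guesses at the explanation behind an observation and is weaker still.

```python
# Toy illustration (not from the original post) of the three modes of reasoning.

# Deduction: apply a general rule to a specific case; the conclusion is guaranteed.
def is_even(n: int) -> bool:
    return n % 2 == 0              # rule: a number divisible by 2 is even

assert is_even(4)                  # 4 is divisible by 2, therefore 4 is even

# Induction: generalize a rule from observed cases; the conclusion is only probable.
observations = [2, 4, 6, 8]
conjecture_all_even = all(is_even(n) for n in observations)
# conjecture: "every number in this stream is even" -- the next observation could falsify it

# Abduction: from an observation, guess which hypothesis would best explain it.
observation = "the grass is wet"
plausibility = {"it rained": 0.6, "the sprinkler ran": 0.4}   # made-up weights
best_explanation = max(plausibility, key=plausibility.get)    # plausible, not certain

print(conjecture_all_even, best_explanation)
```

Nothing in the deductive step produces the inductive generalization, and nothing in the inductive generalization produces the abductive guess; each is a structurally different move, which is the point the argument turns on.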
I think the current crop of AI models is using induction. You might be able to argue they are using abduction, but I doubt it. Regardless, these models structurally cannot teach themselves whatever higher order of reasoning would make them competitive with human intelligence. They cannot improve themselves, except incrementally.
Philosophical arguments won’t convince you, I know, because philosophical arguments have probably never convinced anyone of anything. So here’s a question to think about. I posit that humans are not more intelligent than we were 3000 years ago or so. We have more knowledge, of course, and we have better institutions both to pass on knowledge and to coordinate thinking between people, but we are not better at the process of thinking. If you read Thucydides or Sun Tzu (writing more than 2400 years ago), for instance, or Aesop’s Fables (more than 2500 years old), you have to conclude that their authors thought the way we think. We do not have more intellectual horsepower than they did, so to speak.

The question to ask, then, is why, given the more than 100 billion people who have ever lived, we have not been able to make our own brains more intelligent. And if we have not been able to, why would we think an AI that isn’t even as smart as we are could make itself more intelligent? (And, even more, why would we think continuous learning would do the trick…have the AI labs not already tried to use the AIs they have to improve their own structures incrementally? This doesn’t need to be a continuous process; it could just as well be a discrete one.)
It seems unlikely that we have just been unlucky. More likely, we are not smart enough to improve our own intelligence. If this is true, then the current crop of AIs, useful as they are, can’t bootstrap their way to AGI.