AGI Literacy
A decade ago, AGI and Strong AI were terms mostly associated with crackpots: people working on the unattainable, bizarre goal of building an artificial general intelligence. That has changed. Chief AI officers, researchers and companies/institutes now openly talk about AGI and their pursuit of it. OpenAI announced DALL-E 2(1), the second coming of its natural-language image-generating AI. Stability.ai released Stable Diffusion(2), an open-source text-to-image generator. The results are stunning and mark the incredible progress AI has made in recent years. They are also a cautionary tale about just how quickly things can move in AI once there is a breakthrough.
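To give a sense of how accessible that progress already is, here is a minimal sketch of generating an image with Stable Diffusion through the Hugging Face diffusers library. The checkpoint name and the float16/GPU settings are my own assumptions for illustration, not part of any official release:

```python
# Minimal sketch: text-to-image with Stable Diffusion via Hugging Face diffusers.
# The checkpoint id and float16/GPU settings are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any Stable Diffusion weights work
    torch_dtype=torch.float16,         # half precision to fit on a consumer GPU
)
pipe = pipe.to("cuda")                 # use .to("cpu") and float32 if no GPU is available

# One line of natural language in, one image out.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```

That a state-of-the-art generative model can be run from a dozen lines of code is exactly the kind of speed-of-progress signal I mean.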
There are still some who think the pursuit of AGI is folly, simply impossible, and others who make more dubious claims about AGI. In this post I outline the only scientifically literate ways of claiming AGI is impossible - or, put differently, on what basis one might attempt to claim we can never build AGI. I'll also mention some claims I think are dubious and should be ignored unless they are backed by stunning theoretical work.
Before we dive into that, though, let's establish what AGI really is, or could be, and what it isn't. Meta's chief AI scientist Yann LeCun recently posted(3) that AGI can't exist: "AGI can't exist. Because all intelligence is necessarily specialized. It's merely a question of degree. Even humans do not possess general intelligence."
While I have had this thought myself in the past, I don't quite agree. I would say that general intelligence is the collection of abilities of a cognitive system that allow it to learn across domains, to evaluate itself and that learning, and to expand those domains. In this sense, there is a vast difference between a human and any other animal on earth, be it a dog, a pig, a dolphin or an elephant. There is a distinct qualitative threshold where the combination of awareness, self-awareness, evaluation, self-evaluation and all the necessary enabling modules such as working memory and executive function comes together to produce a system that does not compare to the narrow, niche-focused brains of other species. Whether this intelligence is "truly" general we may quibble about, and we may end up realizing there is no free lunch(4) for pure generality, just as there isn't for Solomonoff induction(5). For the sake of this post, however, let us assume AGI means human-level intelligence and beyond.
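For readers who want the no-free-lunch remark made concrete: roughly in the Wolpert-Macready formulation (a sketch from memory, not a derivation), when performance is summed over all possible objective functions, any two search algorithms come out identical - generality over everything buys you nothing:

```latex
% No free lunch (sketch): summed over all objective functions f, any two
% algorithms a_1 and a_2 induce the same distribution of observed values d_m^y
% after m evaluations.
\sum_{f} P\big(d_m^{y} \mid f, m, a_1\big) \;=\; \sum_{f} P\big(d_m^{y} \mid f, m, a_2\big)
```

Generality in practice therefore always means generality relative to some structured class of environments, which is all the "human-level and beyond" framing here requires.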
So where does the idea of AGI come from anyway? NGI, of course: Natural General Intelligence. Humans. We are the primary blueprint, and proof that nothing in the laws of physics prevents a general intelligence from existing. So what had to happen for that to occur? That's a heavy, loaded, hairy question. In a nutshell: the right combination of evolutionary pressures, in the right sequence - the right modalities, (evolving) environmental complexity, and so on. So "all" we need to do to create AGI is simulate whatever was necessary and sufficient in that process. Or is it? Just what is necessary and sufficient is a very hairy matter in itself. Is consciousness needed? What is consciousness, even? What is intelligence? These are open and contested questions; top authors and thinkers each have their own ideas on them(5). So what part of that evolutionary process needs to be simulated? Are there completely different ways of getting there?
Quite the complicated problem. In fact, the idea that a computer science PhD (or other CompSci degree) automatically makes one's opinions on AGI valid is laughable. Computer science curricula are diverse, and even narrow AI is often only a small part of them. A student may focus on work with the OpenGL pipeline and know next to nothing about AI (playing with neural networks a few times hardly counts!), never mind AGI, AI ethics or existential risks. Grinding through years of C++ doesn't magically grant insight into AGI either (how could it?). Building AGI is an interdisciplinary endeavor, and to investigate future civilizational developments and the associated risks one certainly cannot get by with a generic computer science degree. Obviously, skills in computer science are important. But so are skills in math, systems science/cybernetics, physics, philosophy, cognitive science and more. Yes, that list is very long. That is because everything from computational complexity, models of computation, hardware, machine learning, the brain, learning and evolution to consciousness is relevant to AGI, as are concepts like agency, decision making, autonomy and intelligence - some of the biggest open questions in science. Not for the faint-hearted.
But hey, at least we know it's possible, since it exists. No law of physics prevents a cognitive system from exhibiting intelligence as general as ours. NGI is living proof that AGI is possible.
So now that we've established that, what is there to protest against and where can we blunder when contemplating AGI and our future?
1. AGI is impossible
Evolution is a tough act to follow. Can we ever get enough data and computing power to generate an artificial mind of the sort that possesses general intelligence as we know it? That's an open question, so here one can try to make an argument based on tractability and somehow reach the strong conclusion that we can never get there. Extremely doubtful, but semi-literate. Next up is hardware. It's entirely possible that what's necessary for intelligence requires a specific hardware implementation. Here too an argument can be made - carbon chauvinism or not, at least it's a concrete one. Claiming it is somehow impossible in principle, though, is where it gets illiterate. Natural general intelligence exists, and none of its features suggest they cannot be emulated, even if they require strict constraints. We are ourselves the prime example that general intelligence can exist, and so AGI is very much possible.
2. AGI cannot rapidly self-improve
Claiming AGI cannot rapidly self-improve - that it cannot have a so-called hard takeoff. Whether, how and when it will happen is one discussion. But whether it can happen? Of course it can. The one gigantic issue we have with our own brain is interfacing. Biological machinery is wondrous, but interfacing with it is very difficult. It was not designed; it evolved. It has modularity and layers of organization... but ultimately it's messy. We can't just double our working memory by sticking an extra module in our ear - otherwise humans would have had their own hard takeoff. AGI will have no such issues. Some protest that this is naïve and that AGI will need resources like anything else. Of course. But your 86 billion neurons only need about 20W of power. Do you really think a non-biological intelligence can't figure out how to run a supremely designed version of that and optimize it down to 1W? And then make it 100 times more powerful? This topic warrants a whole blog post, or a tome, but as far as I'm concerned, the idea that AGI cannot rapidly self-improve is a profound failure of the imagination.
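To make the energy point concrete using nothing but the round numbers in this paragraph - illustrative figures, not engineering estimates - here is a toy back-of-the-envelope calculation:

```python
# Toy back-of-the-envelope using only the round numbers quoted above.
# These are illustrative figures from the text, not engineering estimates.

brain_power_w = 20.0            # rough power draw of the human brain, in watts
brain_neurons = 86e9            # ~86 billion neurons

hypothetical_power_w = 1.0      # the "optimize it down to 1W" scenario
hypothetical_capability = 100.0 # "...and then make it 100 times more powerful"

# Capability per watt, with the brain's capability normalized to 1.0.
brain_capability_per_watt = 1.0 / brain_power_w
hypothetical_capability_per_watt = hypothetical_capability / hypothetical_power_w

gain = hypothetical_capability_per_watt / brain_capability_per_watt
print(f"Brain: ~{brain_neurons:.1e} neurons on ~{brain_power_w:.0f} W")
print(f"Hypothetical system: {hypothetical_capability:.0f}x capability on {hypothetical_power_w:.0f} W")
print(f"Implied gain in capability per watt: {gain:.0f}x")  # -> 2000x
```

The arithmetic only shows how much headroom the scenario assumes; whether it is reachable is the tractability question from point 1. Hence: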
3. AGI is to humans what humans are to dogs
Using arguments that invoke the differences in intelligence between animals like dogs or chimps and humans. There is a qualitative threshold for intelligence that renders these comparisons moot. It's a seductive move, because it makes it easy to imagine an absolute, impossible-to-overcome difference between two entities. You can have a dog try to design a rocket all its life and get nowhere. But the trap here is failing to appreciate that humans nevertheless have at least enough cognitive tools for some form of generality, as I described in the introduction of this post. There can surely be an absurd, unimaginable difference in cognitive ability between an AGI and a human, but qualitatively it is a difference of another sort than that between humans and other animals. This provides at least a glimmer of hope that AGI wouldn't necessarily automatically rule us and stomp on us like ants. Just a glimmer, though.
4. AGI poses no risk
Claiming there is no risk or issue at all with vastly different artilects being injected into the house of cards that is our civilization. Another profound failure of the imagination, and wishful thinking. The onus is on the claimant here to show how AGI would be constrained or contained forever, and it's doubtful any such plausible argument exists. This claim is pretty much scientifically illiterate unless there is a stunning thesis behind it. AI safety and alignment are catastrophically difficult problems: they involve everything I mentioned as necessary to deal with AGI theoretically, plus meta-ethics and ethics we haven't even sorted out for ourselves, let alone for alien entities that will be far, far less resource- and interface-constrained than we are. To make this case one has to demonstrate many fiendishly difficult things, such as why the orthogonality thesis(6) doesn't hold.
5. Super-intelligence implies benevolence
Claiming that all AIs will converge on benevolence toward humans. This is one of the daftest ideas I’ve seen tossed around. Human values are a patchwork, inconsistent, and but a dot in the space of possible ethical systems, just as our mind is a dot in mind design space. Forget about human ideas of good acting as attractors for other minds and AIs. I’m afraid I don’t have much more to say about this claim, other than that it is an extraordinary claim requiring extraordinary evidence.
In the near future I will go into some of these points in depth, as well as release my own original work on our future, AGI, and how to think about future-proofing civilization in the coming years and decades.