The future of AI - what's the "realistic" view?
Where I argue we should err on the side of caution - which in my book means assuming rapid acceleration
There are people out there presenting supposedly "realistic" views in which AI improves only very gradually, generally dismissing current developments. I am getting increasingly annoyed with this, because things are obviously accelerating at a breakneck pace and we should invest time in getting ready.
We need to err on the side of caution and assume things will accelerate (they will), because the amount of adaptation needed to deal with the changes coming in the next year alone is already overwhelming.
Unfortunately I have little time to blog right now, but I may be able to squeeze out my AI fast-takeoff post in the near future. In short, I think it's pretty much inevitable that the time frame from near-AGI/AGI to incomprehensibly more powerful AGI will be short enough that we won't be able to keep up and adapt.
A few arguments/claims circulating:
- AGI will need resources, real-world data, etc. - of course(!), but please try to quantify this and you'll quickly find there are a hundred ways an AGI could acquire them, and that's merely using our puny human imagination. And it *is* puny.
- Lots of computation and power needed. - Again, look at the human brain. 20W. 20-bloody-Watts. You think an AGI won't be able to optimize? Please… (see the back-of-envelope sketch after this list).
- Infrastructure. It's already there. People will let agents loose on the web, and these agents will easily self-optimize, copy themselves onto other physical hosts, and distribute themselves. AI is parasitic by nature: it only needs hosts.
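To make the power argument above concrete, here is a minimal back-of-envelope sketch in Python. The wattage figures are rough public ballparks (assumptions on my part), not measurements; the point is only how much headroom the gap implies.

```python
# Back-of-envelope on the power point above. Rough, publicly cited
# ballpark figures (assumptions), not measurements; the point is only
# the orders of magnitude involved.
BRAIN_POWER_W = 20   # human brain: roughly 20 W
GPU_POWER_W = 400    # one datacenter GPU (e.g. an A100 SXM at ~400 W TDP)

# Biology achieves general intelligence at 20 W; we currently spend ~20x
# that per GPU, across thousands of GPUs per training run. That gap is
# headroom an AGI could optimize into.
print(f"GPU / brain power ratio: {GPU_POWER_W / BRAIN_POWER_W:.0f}x")
```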
The above is of course simplistic and a bit of a caricature, but the main takeaway is that I truly think we massively lack the imagination, intelligence and overall capacity to realize how easily AGIs will out-strategize us, unbeholden to the many constraints of biology.
This is why I wrote:
The universe is a playground and there is a lot of energy and plenty of resources. Earth is no exception. There are physical limits to computation and energy usage, but we’re nowhere near close to them.
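To give "nowhere near the limits" a concrete anchor: the Landauer limit puts a thermodynamic floor on the energy cost of erasing one bit, and today's hardware sits far above it. A minimal sketch, where the per-switch energy for current logic is a rough assumed ballpark:

```python
import math

# Landauer limit: the thermodynamic minimum energy to erase one bit
# of information at temperature T.
K_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 300                             # room temperature, K
landauer_j = K_B * T * math.log(2)  # ~2.9e-21 J per bit

# A modern logic gate dissipates very roughly 1e-15 J per switching
# event (an assumed ballpark), i.e. some five to six orders of
# magnitude above the thermodynamic floor.
switch_j = 1e-15
print(f"Landauer floor: {landauer_j:.2e} J/bit")
print(f"Headroom vs. today's logic: ~{switch_j / landauer_j:.0e}x")
```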
Forget for a moment about existential or catastrophic risk. AI is already permeating everything, and quite generalizable techniques that aren't that pricey to scale (on the order of $100Ks for useful ones) are being integrated across the world right now: big corporations, start-ups, universities, open-source teams... lots of stuff is happening behind the scenes that we don't hear about.
AI is going to be everywhere and it's spreading like wildfire. On Twitter some people objected when I called certain arguments silly. To my mind, my responses are quite mild; in fact, I feel like I am holding back a lot.
We have to act and we have to act fast. It really is wrongheaded and silly to insist that LLMs are not AGI and that therefore this is all much ado about nothing. The former is true, but the latter does not follow.
We're at the cusp of a revolution and we are so ill-prepared it hurts. Are there some people with unrealistic ideas about AI magically becoming all-powerful, who don't know any physics? Yes, I'm sure there are.
However, is it the case that AI can easily and massively transcend our capabilities and optimize resource usage in ways we can't fathom? Also yes.
ChatGPT got to one hundred million users in the blink of an eye - the fastest product adoption ever recorded, faster than shiny apps like Instagram or TikTok. Let that sink in. And this with a barebones interface and a sign-up that required a phone number. It is a truly stunning feat.
So err on the side of caution and assume "AI is improving and spreading very quickly". Anything else is just irresponsible.
I really don't appreciate naysayers who think they're being cute pointing out what LLMs get wrong, when the context is: "We're nowhere near AGI yet"... Yeah, sure. Ten years ago, we were not going to see human Go champions defeated in our lifetime, nor were we going to get anything close to the sort of natural language processing that GPT-3+ displays.
Look at us now. Even the first popular conversational LLMs, like ChatGPT, massively outperform us on several metrics. So sorry, but if you think AGI will not crush you on every conceivable metric w.r.t. intelligence, memory, creativity, etc., you are out of your mind and out of your depth.
Humans and the human brain are not the epitome of agent design, not the epitome of intelligence, not at the limit of what's physically possible w.r.t. cognition and action... Nowhere near.
We humans and our governments are lethally slow in grasping how quickly things develop and how they will affect society. This is not like the internet where we can have legislative lag amounting to years and sit back to see how it goes, to eventually respond in one way or the other after the fact. I truly don’t think this is a “wait and see” situation.
We need fundamental re-thinking of how we govern and anticipate the future. We need a complete overhaul of civilization and society.
Many things that were more or less constants across agents (minor variation in intelligence, capacity, the constraints of bodies, logistics, sentience, etc.) are becoming variables. This is a very profound paradigmatic shift.
Conclusion? We need to throw our anthropocentric worldview and systems out of the window and put all hands on deck to implement universal systems that accommodate many different generally intelligent agents, as best we can, as soon as we can.