There will be no AI winter. Unless you mean total, utter civilizational disarray and societal turbulence due to seismic shifts and transfers of skills between AI and humans with economy-crippling asymmetries. Then, yes - AI winter is coming.
ChatGPT is OpenAI's test run of a sophisticated chatbot that can hold a proper common-sense conversation, detect whether you are asking a bullshit question, answer university-level questions in virtually any discipline, and write code for you. I have been playing with it: conversing with it, running some adversarial attacks, and letting it spit out chunks of the code I would otherwise write myself, and the results ranged from very sensible to pretty excellent.
This is like Google on steroids. Search with a brain. A virtual exocortex finally starting to be worthy of that moniker. It is not AGI yet, but it is nearing pseudo-AGI and evolving fast. And 2023 will be rife with pseudo-AGIs. We have entered the AGI precursor era. Multi-modality, scaling, and orchestration will all boost abilities, and it will keep going. This is obvious from the data, the benchmarks, and the underlying tech and assumptions.
The great AI disruption is coming, and it's coming even a tad faster than I anticipated mere months ago. That is why most of my upcoming blog posts here will cover the issues we will face as a civilization over the coming decade.
And I say this while also stressing the following: with respect to meaning and understanding, large language models are like the shadows in Plato's cave. The sheer amount of data, plus the transformer's precursor to salience (attention is all you need), gets you very far, and that is why we'll see pseudo-AGIs popping up like shrooms in a forest after a rainy day. However, ultimately, human and possibly consciousness-aided understanding is an AI-complete problem: solving it entails solving AGI. For symbol grounding you need to ground symbols in something, and that something is not endless vectors. You can't transformer or Markov-chain your way into understanding. But grounding will be done too, and it can be done in more ways than the human way.
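To make the "endless vectors" point concrete, here is a minimal NumPy sketch of the scaled dot-product attention introduced in the "attention is all you need" paper (the toy shapes and random inputs are my own illustration): every output is just a softmax-weighted mix of value vectors - vectors in, vectors out, nothing grounded anywhere.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017):
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of value vectors

# Three toy "tokens" as 4-dimensional vectors (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X
print(out.shape)  # (3, 4) -- still just vectors, start to finish
```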
Language and meaning cannot be "reverse-engineered" to get symbol grounding. But yes, understanding is certainly possible. NGI (natural general intelligence) exists, so AGI is possible as well. But understanding piggybacks on multi-modality and on interfacing with itself and the world in various ways, so multi-modality is necessary; and grounding in human-like feelings requires specific hardware and I/O systems - a deep topic for another time. The point is that language is not meaning. It's a symptom of meaning.
However, because we can emulate and mimic plenty of the human features of intelligence and understanding, as well as develop them from first principles, these latest developments will fuel a lot of AGI precursors in the coming months. Why “months” and not “years”? I expect paradigm shifts to start hitting us more frequently in the coming years, so I truly think that a year from now we’d be shocked by the transformation and disruption the world has undergone and would massively update our views on what’s next. So let’s stick to months for now.
Thus, the conclusion for now is that we're already facing many of the aforementioned challenges in the coming year. And we could not be less ready. AI will start eating everything. Education, search, art, programming. Let’s face it, a lot of human activity and labor amounts to busywork. I stress again: this is not just hype and hot air. This is the beginning of the biggest self-induced disruptive event any civilization faces once it reaches this technological point of no return. And here we are.
The genie...is out of the bottle.
> We have entered the AGI precursor era.
I'm concerned about this not just because it will replace a lot of jobs (and it will), but also because people bypassed ChatGPT's content safeguards within hours, demonstrating that controls are nowhere near ready for stronger AI.
I wrote about a control framework for AI a few weeks ago, and my strongest belief is that now is the time to kick the tires and strengthen controls, when AI is still in the "toy" phase. By the time it gets to the mission-critical, advanced phase, it will be too late for controls to catch up and keep pace. They have to catch up now and then try to keep pace (a major challenge!).
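To illustrate what "controls" can even mean at the toy phase, here is a deliberately simple sketch of a control layer. Everything in it is hypothetical: `generate` and `violates_policy` are stand-ins for a model call and a policy classifier, not any real API, and this is nowhere near a full control framework. The point is the shape: checks on both the prompt and the output, with an audit trail, before anything reaches the user.

```python
def guarded_generate(prompt: str,
                     generate,          # hypothetical model call: str -> str
                     violates_policy,   # hypothetical classifier: str -> bool
                     audit_log: list) -> str:
    """Wrap a model call with input-side and output-side policy checks."""
    if violates_policy(prompt):               # input-side control
        audit_log.append(("blocked_prompt", prompt))
        return "[request refused]"
    output = generate(prompt)
    if violates_policy(output):               # output-side control
        audit_log.append(("blocked_output", prompt))
        return "[response withheld]"
    audit_log.append(("ok", prompt))
    return output

# Toy usage with trivial stand-ins:
log = []
print(guarded_generate("hello", generate=str.upper,
                       violates_policy=lambda t: "bomb" in t.lower(),
                       audit_log=log))  # -> "HELLO"
```

Exercising even a gate this crude today surfaces the hard questions (who writes the policy, how it gets bypassed, what gets logged) while the stakes are still low.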