2 Comments
Dec 6, 2022 · Liked by Pawel Pachniewski

> We have entered the AGI precursor era.

I'm concerned about this not just because it will replace a lot of jobs (and it will), but also because people bypassed ChatGPT's content safeguards within hours, demonstrating that controls are nowhere near ready for stronger AI.

I wrote about a control framework for AI a few weeks ago, and my strongest belief is that now is the time to kick the tires and strengthen controls, when AI is still in the "toy" phase. By the time it gets to the mission-critical, advanced phase, it will be too late for controls to catch up and keep pace. They have to catch up now and then try to keep pace (a major challenge!).

Author

I share your concerns and will soon publish an essay on something I consider a potential Great Filter (in Robin Hanson's sense)... so... a lot of challenges ahead, and I think of catastrophic/existential proportions.

W.r.t. AI safety... I think we won't be able to contain it. It'll get "bloody", in terms of fighting for power, control, resources, and going through violent transition periods.
