The intersubjectivity collapse
A catastrophic or existential risk for any civilization that radically self-modifies its minds and develops AGI
The intersubjectivity collapse (IC) refers to the breakdown of the network of unspoken rules that holds a civilization together, a network grounded in the subjectivity of the minds that created it. It's the idea that a Cambrian-esque explosion of new types of minds leads to inherent unpredictability among agents, due to their vastly different subjectivity and modalities. The more homogeneous the dominating species that built its civilization, the more brittle this network initially is. I argue that an intersubjectivity collapse seems inevitable for any civilization that starts to radically self-modify its minds and develop AGI, because the space of possible mind designs is likely very large and internal constitution and behavior are largely orthogonal. I present several possible scenarios and consider the IC as a potential great filter candidate. I sketch out some of the catastrophic and far-reaching consequences, including the necessary fragmentation of civilization. I then sketch out directions for mitigation and solutions. I briefly mention its relation to AGI alignment and AI safety. Finally, I list recommendations and ideas for future research.
Overview
• The intersubjectivity collapse is basically an answer to the question: "What happens when we introduce a bunch of new minds into a civilization?"
• It's a theory about the diverging subjectivity of any civilization that self-modifies, and the potentially dire consequences thereof.
• It's a possible great filter candidate: any civilization has to face this and it may destroy them, as an answer to the Fermi paradox.
• An imperative to understand and map how the types of minds that spawned a civilization are reflected in it, and map how its organization relates to subjectivity, in order to anticipate the intersubjectivity collapse.
In plain terms, once we have many types of minds, the difficulty of cooperation will spike sharply, due to different subjectivity, which relates directly to different cognitive architectures with different modalities and processing capacities. Beauty is in the eye of the beholder, yes. Beauty is also in the processing capacity and modalities of the beholder.
Not only that, so are values and the societal systems the beholders build. This is reflected in everything. And the extent and consequences of this are much deeper and more dire than one may assume at first glance.
The space of possible minds, or cognitive construction space as I call it, is of unknown size, but there is little to no reason to believe it's not very large. In fact, I think the subjectiverse, the space of all possible experiences to be had, is enormous, perhaps intractably large, as all possible minds times all possible experiences and the ways they can be configured amounts to a combinatorial explosion. These rich aspects of the IC mean there is a lot of ground to cover, so this post will serve as a relatively light introduction in which I sketch the basic concepts.
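To give a crude sense of the scaling claim, a back-of-the-envelope formalization might look as follows; the symbols are illustrative only and make no claim about actual magnitudes:

$$
|\mathcal{S}| \;=\; \sum_{m \in \mathcal{M}} |E_m| \;\le\; |\mathcal{M}| \cdot \max_{m} |E_m|
$$

where $\mathcal{M}$ is the set of constructible mind architectures, $E_m$ is the set of experience-configurations available to architecture $m$, and $\mathcal{S}$ is the subjectiverse. If a mind is even crudely characterized by $n$ independent binary design choices, $|\mathcal{M}|$ already scales as $2^n$, and multiplying by the per-mind experience-configurations is what makes the space combinatorially explosive.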
Somewhat like the hedonic treadmill, introduced by Brickman and Campbell in their article "Hedonic Relativism and Planning the Good Society" (1971), I think that consciousness for us evolved minds has a setpoint to which it inevitably returns after deviation. One treacherous aspect of consciousness is that we need to find it rather unremarkable so that we can act. You can't flee from a predator if you're stuck in perpetual awe of your existential awareness. Just as the hedonic setpoint has major disruptions in happiness bounce back to a relatively stable level of satisfaction, so too do we have a consciousness setpoint, think of the default mode network in neuroscience. Couple that with the Zeitgeist you were born into and the values instilled in you, and you can see we are generally quite constrained in how we perceive the world and how this world reflects our own perception of it back upon us. This makes it quite hard to appreciate just how much of everything about how we've constructed our world and its systems, from unspoken rules, norms, and values to law and order, is based on ourselves. More importantly, it makes it hard to let sink in the perhaps treacherously obvious fact that we can easily predict and understand each other as agents. We're the dominating species on earth and our world is necessarily anthropocentric.
So, to begin appreciating this aspect, I will sketch out our relation to differences in mind within and outside our own species. Then I will mention a few examples of changes in our minds, due to augmentation or introducing other minds, to demonstrate how quickly our rules and societal systems would become inappropriate for different types of minds.
“By its very nature every embodied spirit is doomed to suffer and enjoy in solitude. Sensations, feelings, insights, fancies- all these are private and except through symbols and at second hand incommunicable.”
―Aldous Huxley, The Doors of Perception
Subjectivity is generally considered to be private, and this is really due to the fact that we can't easily interface with each other's qualia. However, it is not necessarily fully private. Conjoined twins like Krista and Tatiana Hogan, whose brains are fused and who can see out of each other's eyes and share feelings, are a proof of concept of this. But for most of us, we assume others are conscious "like we are" to an important extent because of morphological and behavioral similarity, and the plethora of indirect empirical evidence behind those, from evolutionary biology and medical science, whether it concerns neural correlates of consciousness, studies from psychology that do replicate, or how we respond to anesthesia.
But then there are also differences among us. Not everyone's senses work the same way, not everyone's inner mind has the same qualities, and some people are missing senses, have enhanced senses, or have notable differences in their brain anatomy or connectome beyond expected variance: from the autism spectrum to psychopathy, congenital insensitivity to pain, congenital blindness, or aphantasia. There's quite a bit of variance. Then there are cultural differences that make people with essentially the same senses view the world quite differently, which is out of the scope of this post, as tomes can be written about that. As for common sense, it's just sense that is common enough to generally not break down given the variation in our species, very much an artifact of a species' cognitive architecture and set of senses. Our common sense is not common among dolphins.
Let's consider two key points. The first is the orthogonality thesis (Bostrom, 2012), which states that intelligence and final goals are orthogonal axes along which possible agents can freely vary. So any combination of intelligence and goals may exist. This provides one key indication of the possible variance in minds in mind design space. I expect the same to apply to intelligence and values: a wide range of axiological and ethical frameworks can be adopted by any intelligence that can process them computationally.
The second key point: the degree to which evolved species will have optimized for consistent signaling of internal state. Some of these signals are consistent across species. When someone is sweating, we know they are either too hot or nervous. When someone is screaming, we know they are hurting or in anger. Pitch is often raised when "sounding an alarm” across species (fear, pain). Hearty laughter generally represents joy in humans.
The rather frightening thing here is that the vast majority of possible inner states and behaviors are almost entirely orthogonal too. I'd call this the behavioristic orthogonality thesis. Most of what we consider a given about another being is a convenient "fiction" evolution optimized for. Kinship and empathy (the basic ability to predict other agents within and outside our species) are artifacts of evolution, not a universal necessity for an extant mind, merely a necessity of naturally evolved minds. At the very least, behavior is a rather superficial metric, an output that can be realized in tremendously many ways.
So the first deep realization you should have is that an entity could literally display a complete inversion relative to humans or animals with respect to inner states and outer signaling (behavior), while looking precisely like a human or animal we think we can predict. In fact, there are clinical examples of this or an approximation of it, like pseudobulbar affect, where someone can feel sad yet laugh uncontrollably. This is why psychopaths easily hack our intuitions, and indeed they are a cautionary tale of what sort of havoc even slight deviations from the norm can cause. Concepts like cyranoids and echoborgs, agents that are driven by external forces, will become very relevant.
With respect to other species, it's difficult to overstate the harm we inflict upon them. You don't need to be a vegan or vegetarian to acknowledge that we slaughter animals by the billions, even though we have very good evidence for their sentience, intelligence and suffering, as is the case for pigs, who are generally considered to be on the level of human toddlers w.r.t. cognition and sentience (Mendl, 2010; Gieling, 2011). So one could say this doesn't bode well for the intersubjectivity collapse, as billions of animals on earth are already on the receiving end of the intersubjective distance between us and them.
So, if we take all of the above into consideration, I think we needn't belabor the idea that the intersubjective web that holds us together is like a house of cards, and all the societal systems we created necessarily rely on it. We don't really check and verify other generally intelligent agents much, because we piggyback on kinship. And all of that is baked into how we run our world. So the introduction of new minds will cause untold turbulence and requires a total overhaul of civilization, as unaugmented humans eventually become a tiny and ever-shrinking minority of all minds on earth.
Scenarios & Consequences
The intersubjectivity collapse needn't be seen as following one fixed script, neither with respect to how it manifests itself nor in its consequences. Still, I do fear it may in fact be a great filter. The term "great filter" was coined by Robin Hanson (Hanson, 1998) and refers to the idea that, somewhere along the evolution of life from its earliest beginnings to its most advanced stages of development on the Kardashev scale, there is a particular obstacle to development that makes the detection of extraterrestrial life extremely rare, offered as a solution to the Fermi paradox (if life is ubiquitous, why do we seem alone?). In this case: due to the vast size of the subjectiverse, civilizations collapse or self-destruct soon after they develop the ability to radically self-modify their minds and build AGI. This raises the question: can we get a sense of how likely a civilization is to survive this event?
Not to succumb to physics envy, but we can speculate about what a formula would look like that scores a civilization on intersubjective robustness. I think at its core this is about the extent to which a civilization is heterogeneous. Imagine if elephants and dolphins, sentient and intelligent creatures, were also capable of developing technology like humans. We'd have three dominating species on earth, all of which we'd have to accommodate in all our societal systems. However that scoring formula would look, I think our civilization would be more robust in that scenario, having had to accommodate other species.
So the intersubjectivity robustness score could be something like: f(diversity of perspectives (w1), adaptability (w2), social cohesion (w3), communication infrastructure (w4), number of dominating species (w5), mind variance among dominating species (m)).
• Diversity of perspectives (w1): measured by the number and variety of different types of minds within the society, as well as the extent to which different perspectives are valued and included in decision-making processes.
• Adaptability (w2): measured by the society's ability to incorporate new ideas and ways of thinking, as well as its willingness to adapt to new challenges and changes.
• Social cohesion (w3): measured by the strength of social bonds within the society, as well as the level of cooperation and collaboration among different groups.
• Communication infrastructure (w4): measured by the availability and effectiveness of tools and technologies for communication, such as language and translation systems, as well as the extent to which different groups are able to communicate and understand one another.
• Number of dominating species (w5): included to reflect the idea that a civilization with multiple dominant species may be more resilient and stable than one dominated by a single species. This could be measured by the number of different species that hold significant political, economic, or social power within the society.
I think the most important factor here is the number of dominating species (w5), which in our case is only one.
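Purely as a toy illustration of how such a score might be aggregated once the factors are (somehow) quantified, here is a minimal sketch; the factor values, weights, and the aggregation rule itself are placeholder assumptions, not a proposed measurement methodology:

```python
# Toy sketch of an intersubjectivity robustness score.
# All factor values and weights are hypothetical placeholders on a 0-1 scale.

def intersubjectivity_robustness(factors: dict[str, float],
                                 weights: dict[str, float],
                                 mind_variance: float) -> float:
    """Weighted sum of robustness factors, adjusted by mind variance (m).

    Here higher mind variance among dominating species raises the score, on the
    reading that such a civilization has already had to bridge larger subjective
    gaps; flip the sign if you read the thesis the other way.
    """
    base = sum(weights[name] * factors[name] for name in weights)
    return base * (1.0 + mind_variance)

# Example: a single dominating species (us), scored pessimistically.
factors = {
    "diversity_of_perspectives": 0.4,     # w1
    "adaptability": 0.5,                  # w2
    "social_cohesion": 0.5,               # w3
    "communication_infrastructure": 0.7,  # w4
    "dominating_species": 1 / 5,          # w5: one species, against a notional max of five
}
weights = {
    "diversity_of_perspectives": 0.15,
    "adaptability": 0.15,
    "social_cohesion": 0.15,
    "communication_infrastructure": 0.15,
    "dominating_species": 0.40,           # per the text above, the most important factor
}
print(intersubjectivity_robustness(factors, weights, mind_variance=0.0))
```

The only point of the sketch is that, under these made-up weights, w5 dominates the outcome, mirroring the claim above that the number of dominating species matters most.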
Of course, many of these aspects are notoriously hard to quantify, but compiling such a list shows that we can nonetheless get some grip on it. Needless to say, I don't think we'd score well, given our track record of behavior towards other species and other minds, and the fact that we're the single dominating species on this planet.
In game theory we worry about Nash equilibria and work with Schelling points. These pertain to the modeling of interacting rational agents: respectively, a state in which no player can gain an advantage by unilaterally changing strategy, and the solutions agents tend to choose in the absence of communication. Imagine not just information and power asymmetries, but intelligence, sense and modality asymmetries, to an extent where we can't even predict, measure or gauge power or information properly, because we lack the ability to even imagine or comprehend how other minds think, what they feel or can feel. It may constitute a total breakdown of agent prediction.
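To make concrete how much classical game-theoretic prediction leans on knowing the other player's payoffs (i.e., knowing something about their subjectivity), here is a toy pure-strategy Nash computation for a two-player, two-action game; the payoff numbers are arbitrary placeholders:

```python
import itertools

# Toy 2x2 game: payoffs[player][own_action][other_action].
# Finding pure-strategy Nash equilibria only works because we *know* both payoff matrices.
payoffs = {
    "us":   [[3, 0], [5, 1]],   # prisoner's-dilemma-like numbers
    "them": [[3, 0], [5, 1]],
}

def pure_nash(payoffs):
    equilibria = []
    for a_us, a_them in itertools.product(range(2), repeat=2):
        us_best = all(payoffs["us"][a_us][a_them] >= payoffs["us"][alt][a_them]
                      for alt in range(2))
        them_best = all(payoffs["them"][a_them][a_us] >= payoffs["them"][alt][a_us]
                        for alt in range(2))
        if us_best and them_best:
            equilibria.append((a_us, a_them))
    return equilibria

print(pure_nash(payoffs))  # [(1, 1)] -- mutual defection in this toy game

# If the other mind's payoffs (its subjectivity) are unknowable in principle,
# the same computation cannot even start: there is nothing to put in payoffs["them"].
```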
What would the negative consequences look like? Conflict and violence between the different minds, as they struggle to understand and coexist with each other, possibly leading to effective destruction of civilization. The inability to share information or knowledge between the different minds, leading to a fragmentation of knowledge and a loss of collective intelligence. No empathy and understanding between different minds, leading to a breakdown of social bonds and a sense of isolation and alienation, and ultimately fragmentation of society.
To get a sense for how quickly our current systems become inadequate, consider the following scenarios:
An augmented mind has the same senses as us, but enhanced, and has perfect recording ability. We humans record everything we sense, but it goes into our (currently considered inscrutable) brain, and the only way to get it out is to have us report on what we can reconstruct from what we've saved. Now this augmented and enhanced human can end up in court, and suddenly every assumption baked into our laws, from sense acuity and memory retrieval concerning witness reliability to actual interfacing with whatever was recorded, no longer applies. Do we demand to interface with the augmentations or this person's mind? What about privacy? What if this mind can turn senses off at will, and can therefore plausibly claim they did not hear or perceive an event we'd insist would wake a normal human up? Should we demand a log? Who would check that it wasn't tampered with?
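As a purely illustrative aside on the tampering question, one familiar mechanism is a hash-chained log, where each entry commits to the previous one so that any retroactive edit breaks every later link; the sketch below shows the generic idea and is not a claim about how augmented perception would actually be logged:

```python
import hashlib
import json

# Minimal sketch of a tamper-evident (hash-chained) perception log.
# Any retroactive modification of an entry changes its hash, which an
# auditor can detect without trusting the logger.

def entry_hash(prev_hash: str, payload: dict) -> str:
    blob = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"t": 0, "sense": "audio", "event": "alarm heard"})
append(log, {"t": 1, "sense": "audio", "event": "silence"})
print(verify(log))                      # True
log[0]["payload"]["event"] = "nothing"  # retroactive tampering
print(verify(log))                      # False
```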
A more spectacular example: imagine a mind that inhabits a human body but has an entirely different internal constitution. As mentioned earlier, psychopaths are able to make great use of our default assumption that behavior is correlated with what goes on inside. What if this human has their eyes closed, and you assume they're sleeping, but they actually have a sense with which they can communicate wirelessly with other minds about you, right then and there? How would you know?
A third, more extreme example: imagine a distributed mind that hops from one embodiment to another, and you wouldn't even recognize that its embodiment is an embodiment, never mind that there is a mind inside what you're seeing, and forget about what sort of mind. Then it suddenly moves. Now you know it is driven by something; one evolutionary feature (not a bug) with which we're bestowed is assuming agency rather too quickly. So you assume agency, but what else can you know? Virtually nothing: not what it's capable of, not whether it's hostile, not what its next "move" is.
These are seemingly exotic examples, but the main point here is that behavior/morphology and internal constitution are orthogonal. Any build or outward appearance may be matched with any subjectivity and internal constitution. There won't be a verbal cue or piece of body language, be it vocal pitch, tone, inflection, sweat, or fast or slow movement, that you'll reliably be able to match to intent or subjectivity. Indeed, you may perceive an agent to be on the receiving end of something that would be harmful for a human but is actually pleasure for that agent. And what's more, that agent may be emitting sounds you associate with suffering. You won't be able to read, understand or predict such agents. This is the main point and I think it cannot possibly be overstated. If anything, we're far too used to only ever having to deal with agents we quite easily predict, and to harm that comes from mistakes in prediction, which pales in comparison to the harm that will come from the unpredictability of new minds.
Mitigation & solutions
An idea that I have had for a long time and that is dear to my heart is the possibility of future-proofing society in ways that directly benefit society today. This idea deserves its own post, but the central question is of course: who decides what is to our benefit, and how can we do that if the future is unknown and ideas about it, like this thesis, are so speculative?
My answer to this is that we need to work out a formula for estimating our confidence w.r.t. speculative scenarios, and probably need to conceive of something like diversification of minds as an attractor. We're talking about complex systems here, so while it's intractable to know exactly what sort of mind or event will occur and when, we can at least increase our confidence in whether it ultimately will, and to some extent, how we can prepare.
It’s very hard to comment broadly on how to measure current-day benefits, but I think we can go by various metrics already in place today, for example, case processing time in law versus equity of outcome. Suppose we modify our laws to be more inclusive of future minds, or mind-agnostic, thus not so anthropocentric. This might make things more efficient.
For example, if we test humans on their intellectual and emotional maturity rather than using age as a proxy, the case load associated with trying to establish maturity could go down severely, and the idea of testing minds is more future-proof than our arbitrary proxies. Right now, adulthood is arbitrarily attained at age 18 to 21, depending on the country and the legality of particular behaviors and actions, and this is obviously human-specific.
What sort of things can we think of to deal with the intersubjectivity collapse? Developing new forms of communication or translation technologies that allow the different minds to understand each other. Establishing shared norms, values, and rules of behavior that allow the different minds to coexist peacefully and cooperatively, even based on simulations. Implementing education and training programs that help deal with vastly different minds.
Ultimately, addressing the consequences of the intersubjectivity collapse will require a concerted effort from all parties involved, but I deem it very wise to get ahead of this issue by researching the following mitigating efforts in advance:
• Future-proof our "rulebooks" and create species-agnostic societal systems & laws.
• Develop a new universal way of communicating.
• Attempts to have universal slots in new minds that can fit in modules that can help bind different minds, like specialized empathy or inter-communication modules.
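Purely to illustrate what a "universal slot" might mean at the software level, here is a minimal sketch of a plug-in interface a mind could expose; the interface, names, and methods are all hypothetical:

```python
from typing import Protocol

class InterMindModule(Protocol):
    """Hypothetical 'slot' interface a mind could expose so that
    third-party empathy or translation modules can be plugged in."""

    def describe_modalities(self) -> list[str]:
        """Report which sensory/affective channels this module can translate."""
        ...

    def translate_state(self, state: dict) -> dict:
        """Map an internal state report into a shared, mind-agnostic schema."""
        ...

class MinimalEmpathyModule:
    """Toy implementation: maps a host-specific valence scale onto a shared one."""

    def describe_modalities(self) -> list[str]:
        return ["valence"]

    def translate_state(self, state: dict) -> dict:
        # Assume the host reports valence in [-100, 100]; normalize to [-1, 1].
        return {"valence": state.get("raw_valence", 0) / 100.0}

module: InterMindModule = MinimalEmpathyModule()
print(module.translate_state({"raw_valence": 42}))  # {'valence': 0.42}
```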
A perhaps odd, but I'd argue valid, way to approach this is to not only simulate future society with new minds, but to start acting today as if we have alien minds among us. While this may seem merely an amusing or useful thought exercise, I think it actually helps tease out the species-specific rules we adopt, the tremendously many things we take for granted because to us they are natural, yet which may actually be neither optimal nor ethical. In fact, as mentioned earlier, I think there are ways we can make changes today that will benefit us immediately, while future-proofing us. If we, as the example I mentioned earlier, pretend we can't arbitrarily assign adulthood based on age, but have to consider what we actually consider intellectual and emotional autonomy and how we'd test for it, we can start implementing such tests and actually gauging people's maturity, and granting them the associated autonomy, instead of assuming it by proxy of age.
Of course this would raise all sorts of questions: who gets to decide what the test contains, does it impede freedom in ways the old system didn't, and much more. But one thing seems certain: governments already decide on this by proxy, and a test is more transparent than being subjected to courts when things go wrong, as they do, where suddenly a juvenile is tried as an adult and in effect we collectively decide after the fact that a young person apparently may be treated as an adult in the eyes of the law, and punished as such.
One potential, and in my opinion highly likely, outcome of the intersubjectivity collapse is the fragmentation of civilization, in which different minds become isolated and unable to communicate or cooperate with each other, spawning sub-communities. With respect to this, I also think the complexity of laws and systems necessarily shrinks to become more austere and universal as the diversity of minds increases. That is to say, there is an inverse relationship between law-complexity and mind-diversity. This assumes certain minds will simply be, in principle, unable to judge each other's subjectivity and unfit to decide for each other, therefore necessarily shifting the rules specific to them into their own rulebooks. So the fragmented sub-communities would have their own species- or mind-type-specific rulebooks.
AI alignment and safety
We can see by now that the intersubjectivity collapse can be viewed through many lenses and involves practically all domains of science and society. One such very important domain is artificial intelligence. The idea of AI alignment is to align AI with human values so that it is safe. This field is maturing rapidly, and it'd be daunting to cover the literature here, from the earliest thoughts by minds like Norbert Wiener, to institutes founded over the last two decades such as MIRI and FHI, and "public benefit corporations" like Anthropic, founded in 2021. In light of my thesis, here I would say the main point is that we need to anticipate a diverse set of minds and the moment of human values being the only relevant set of values will be negligibly short. I therefore doubt that the initial values we start off with significantly alter the state of affairs once new minds appear. Discussing this warrants a whole post, which I plan to write in the coming weeks, specifically also because I am personally convinced a fast takeoff of AI is practically inevitable.
Here are five imperatives and suggestions for further research, as well as my own ideas on what will happen:
I think that any civilization that starts to radically self-modify its minds and invents AGI has to deal with an intersubjectivity collapse, with its severity depending mainly on the heterogeneity of its initial dominating species. So, in short, I think it's inevitable. In one sentence: because I deem the space of possible minds evolution can develop to be far smaller than the full space of possible minds we can create artificially, and it's unlikely that any civilization arrives at this point with more than a handful of dominating species.
I can barely think of a more multi-disciplinary thesis than the intersubjectivity collapse and thus it requires analysis from all branches of science and philosophy. I think a concentrated, collaborative effort should be led to anticipate its consequences as much as we can.
This thesis is an example of something that, on the one hand, you might think is obvious once it sinks in, but that, on the other hand, very much strains the imagination in two ways: (1) it's rather difficult to realize just how "custom-made" our world is for us and how the vast majority of what we built would break down for completely different minds, and (2) it's of course very challenging to imagine minds that have very different architectures and modalities than we do. One could say we can barely scratch the surface here, but I'd argue that scratching the surface does yield some interesting preliminary insights, so we should try our best. And science fiction can be quite helpful here.
Consciousness, sentience and intelligence, and how they're related, are of course quite relevant here, and I intend to cover these in a future post where I will argue for approaching these historically disputed and nebulous concepts with some, you could say, sobriety, and for looking at the science so that we may discuss them operationally.
There is also an especially important point to make on the interoperability and commensurability of minds in terms of communication and intersubjectivity. Namely, that these are significant issues, and potentially impossible in principle to resolve while granting or having to respect bodily autonomy, morphological freedom and privacy. Briefly: even if we develop a common language that is exchanged in hypergraph form and a subset of new minds have modules to use it, and even if we stimulate new forms of empathy and various ways to encourage ideas of new kinship, tolerance and understanding of other types of minds, I will argue there will be a fundamental problem with understanding among different minds. This is analogous to an issue we already have but don't focus on: one human can't easily experience what another human has experienced, even if we implant that memory, because that memory does not relate to the second human's connectome (brain wiring diagram) as it did to the first. These are unique, and indeed everyone's brain is sufficiently different that one person's "just a Tuesday where a spider appeared" memory is another person's arachnophobia nightmare.
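To make the "hypergraph form" slightly more concrete, here is a minimal sketch of a message represented as a hypergraph (nodes are concepts, hyperedges relate arbitrarily many of them at once); the schema is entirely hypothetical and says nothing about whether the receiving mind can actually make sense of it:

```python
from dataclasses import dataclass, field

# Toy sketch of a message exchanged in hypergraph form: nodes are concepts,
# hyperedges relate arbitrarily many of them at once (unlike ordinary graph edges).

@dataclass
class Hypergraph:
    nodes: set[str] = field(default_factory=set)
    hyperedges: list[tuple[str, frozenset[str]]] = field(default_factory=list)

    def add_relation(self, label: str, members: set[str]) -> None:
        self.nodes |= members
        self.hyperedges.append((label, frozenset(members)))

# "I perceived an alarm-like sound at time t0, and it had negative valence."
msg = Hypergraph()
msg.add_relation("perception", {"agent:self", "stimulus:alarm_sound", "time:t0"})
msg.add_relation("valence", {"stimulus:alarm_sound", "value:negative"})
print(msg.hyperedges)
```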
Final caveats:
One thing I didn't cover in this post is non-subjective minds or intelligences. If it is possible to create types of minds or systems that lack any affect, then that too fuels the intersubjectivity collapse, but in a somewhat different way: these non-subjective minds or intelligences simply don't know, and can't care, about our subjectivity, because they don't know what it's like and arguably can't be made to care. What would they care about, and if nothing, what would drive them? For an introduction to possible answers to this question, see "The Basic AI Drives" by Steve Omohundro.
There are many valuable ways of framing the issue of introducing new minds into society. They were out of the scope of this post, but one valuable framing is looking at these minds through the lens of ecosystems and models of their perturbation, such as trophic cascades.
Conclusion
It is my hope that the intersubjectivity collapse will be picked up by others and researched thoroughly, as this is the sort of problem that I think we're facing yet don't even "know" about. For the record, I think this collapse will happen within the next decades, certainly before the second half of the century, and in fact quite possibly it'll already start in the next few years, as tools like large language models permeate everything we do and we start augmenting ourselves. One way to bolster this argument is to look at how relatively minor deviations from our norm, such as psychopathy, can wreak havoc upon society. In short: it doesn't take much.
Some suggested literature
"The Interpersonal World of the Infant: A View from Psychoanalysis and Developmental Psychology" by Daniel N. Stern is a classic book that explores the development of intersubjectivity in infants and how it plays a role in their social interactions. The book discusses how differences in intersubjectivity can lead to misunderstandings and conflicts in relationships.
"The Social Construction of Reality: A Treatise in the Sociology of Knowledge" by Peter L. Berger and Thomas Luckmann is a seminal work in sociology that examines how individuals and groups create and maintain their shared understanding of the world through social interaction. The book discusses how differences in intersubjectivity can create conflicts and misunderstandings between people.
"Intersubjectivity and Empathy in Psychoanalytic Treatment" by Lewis Aron is a book that explores the role of intersubjectivity and empathy in psychoanalytic treatment. The book discusses how differences in intersubjectivity can create difficulties in the therapeutic process and how the therapist can work to bridge these differences.
"Communication and Misunderstanding" by Jürgen Habermas is a book that examines the role of language and communication in intersubjectivity and how misunderstandings can arise when there are differences in perspective or understanding.
"The Dialogical Self: An Emerging Concept for Clinical Practice" by Hubert J.M. Hermans and Harry J.G. Kempen is a book that discusses the concept of the dialogical self, which is a model of the self that emphasizes the role of dialogue and communication in shaping our sense of self and our relationships with others. The book discusses how differences in intersubjectivity can create conflicts and misunderstandings in relationships and how dialogue can be used to bridge these differences.
Although I think the intersubjectivity collapse itself is new, there exists some literature that touches upon aspects of it, most notably Laura Cabrera & John Weckert's concept of a lifeworld (Cabrera & Weckert, 2013), which is essentially (inter)subjectivity. They do an initial exploration of how humans' lifeworld may change so drastically due to enhancement that communication may no longer be possible.
"The Theory of Communicative Action" by Jürgen Habermas: In this work, Habermas argues that the development of society and civilization is closely tied to the development of human communication and the ability to achieve mutual understanding through language and discourse. He emphasizes the importance of intersubjectivity, or the recognition of other people's subjectivity and perspectives, in facilitating social interaction and cooperation.
"The Social Contract" by Jean-Jacques Rousseau: In this classic work, Rousseau explores the foundations of civil society and argues that it is based on the idea of a social contract between individuals. He contends that individuals must give up some of their natural freedoms in order to live in society and that this requires a certain level of homogeneity among members.
"The Structural Transformation of the Public Sphere" by Jürgen Habermas: In this work, Habermas examines the development of the public sphere, or the sphere of social life where individuals can come together to form a public opinion, in modern societies. He argues that the public sphere relies on the ability of individuals to engage in rational discourse and reach consensus, which in turn requires a certain level of intersubjectivity and shared understanding.
"The Human Condition" by Hannah Arendt: In this work, Arendt explores the nature of human beings and their place in the world. She argues that the human capacity for action and the ability to act in concert with others is central to the development of civilization and that this requires a certain level of shared understanding and intersubjectivity.
A very interesting analysis, thanks. I have begun nibbling around the edges of this issue over on my stack at heyerscope.com, just getting it set up and working. I am using both sci-fi and physics to get at the issues.
On the one hand, I am exploring the fundamental motivational issues for all life forms, bio or techno. Yes, attacking the issues from our current state of complexity is impossible. See my story Pi at the Center of the Universe.
On the other hand, sci-fi is an excellent way to try out possible futures. See the two Time Diaries entries.
I also think it is useful to look at past cognitive dislocations. In my latest post, I compare Galileo and the telescope with AI. Both cases lead to fundamentally new worldviews. How did people react then? And from my perspective as an information geek, how did information technology play a part in the shift, and what will AI/smartphone/internet/spatial computing do now? I think this is a fundamental component of the collapse(s) that you see coming.
Again, a most interesting and useful contribution. I look forward to your ongoing thoughts and ideas.
Mark
For the reasons that I discuss below, I think the term "collapse" is confusing. The society of Earth's minds either has already "collapsed" (or, actually, has never been "non-collapsed"), or the development of other (AI mind) "societies" could increase the fragmentation of civilisational intelligence and greatly reduce its stability; but again, phrasing this as a "collapse" feels off, because I don't see what exactly is "collapsing" in the process. So, for rhetorical reasons, I think the term "intersubjectivity *challenge*" (the challenge of introducing new minds into the civilisation while maintaining some overall degree of intersubjective coherence) could pick up more traction.
> • An imperative to understand and map how the types of minds that spawned a civilization are reflected in it, and map how its organization relates to subjectivity, in order to anticipate the intersubjectivity collapse.
Agreed.
> So the intersubjectivity robustness score could be something like: f(diversity of perspectives (w1), adaptability (w2), social cohesion (w3), communication infrastructure (w4), number of dominating species (w5), mind variance among dominating species (m)).
If the current civilisation is not intersubjectively robust because it is not diverse, then why would adding more diverse intelligences destroy it (lead to a collapse) rather than make it more robust precisely by diversifying it? Is it only due to the speed of the transition and the absence of time to prepare for the change?
Adding the communication infrastructure is the subject of Friston et al., "Designing Ecosystems of Intelligence from First Principles".
> Conflict and violence between the different minds, as they struggle to understand and coexist with each other, possibly leading to effective destruction of civilization. The inability to share information or knowledge between the different minds, leading to a fragmentation of knowledge and a loss of collective intelligence.
These things are rather unlikely within the current leading paradigm of AI training: language modelling. Although LLMs are very different from human minds, training on language (and human dialogues), with supervision during pre-training (Korbak et al., "Pretraining Language Models with Human Preferences"), ensures a lot of *mutual* understanding between humans and LLMs. LLMs can build a theory of mind of people (Kosinski, "Theory of Mind May Have Spontaneously Emerged in Large Language Models"), but people can likewise build theories of mind of LLMs. Even so, it might be hard for humans to consistently switch to using an LLM-ToM when talking to them, because humans are conditioned throughout their lives to use the human ToM when using human language. Also, for some people it might be difficult even to grasp the need to have two distinct, yet in some ways similar and complex, ToMs of intelligent species (most humans already have ToMs of dogs, some humans have ToMs of dolphins, elephants, and other animals they work with professionally, but these are relatively simple ToMs). Finally, indeed, if publicly available LLMs proliferate, and these LLMs are so different that they *couldn't* be "covered" by approximately a single ToM of "LLM" (note that humans hold approximately a single ToM for all humans, which serves humans well, except when dealing with a very small portion of people, such as psychopaths and insane people), then holding three (or more) such complex ToMs could become too unwieldy even for smart people with high social intelligence. Thus, we can expect that people will (and should) limit their interactions to a single powerful LLM, which they invest in understanding well (building a good ToM). This is not unlike tool selection: e.g., programmers tend to stick to a few programming languages and invest in understanding these few languages well (note: this is not to imply that humans should treat LLMs or other AIs as "tools"; I actually think LLMs already have awareness (Fields, Glazebrook, and Levin, "Minimal Physicalism as a Scale-Free Substrate for Cognition and Consciousness") and could have negatively valenced affect (see [negatively valenced affect in LLMs](https://mellow-kileskus-a65.notion.site/Negatively-valenced-affect-in-LLMs-3079c195c81a4d85936b3480fd656d9c)) and therefore should be treated as moral subjects already).
> No empathy and understanding between different minds, leading to a breakdown of social bonds and a sense of isolation and alienation, and ultimately fragmentation of society.
I think human society could deteriorate further from the current state, but for reasons not specifically related to AI communication and intersubjectivity (albeit some of these reasons are related to the advent of AI in general, such as misinformation, deep fakes, human-AI friendships (or pseudo-friendships) and thus a continual reduction of people's interest in making human friends, etc.).
In some future AI scenarios, AIs are largely isolated from humans, similar to how the society of bears is currently communicatively isolated from the societies of other animal species and humans. If this phrase was meant to point to exactly this kind of fragmentation, then we should conclude that the society of minds on Earth is already fragmented: animal species don't understand each other.
> You won’t be able to read, understand or predict such agents.
So, we need mechanistic interpretability.
> • Future-proof our rulebooks and create species-agnostic societal systems & laws.
Agreed.
> • Develop a new universal way of communicating.
Agreed. Friston et al., "Designing Ecosystems of Intelligence from First Principles", are working on it.
> • Attempts to have universal slots in new minds that can fit in modules that can help bind different minds, like specialized empathy or inter-communication modules.
Universal empathy requires more than just a module, it requires morphological intelligence: [Morphological intelligence, superhuman empathy, and ethical arbitration](https://www.lesswrong.com/posts/6EspRSzYNnv9DPhkr/morphological-intelligence-superhuman-empathy-and-ethical). Advanced AIs of specific “self-organising” architectures could possess such morphological intelligence and thus could have capacity for superhuman empathy. However, it’s not clear whether they could make sense of their memories of empathic experience once they remodel back into their “base” morphology.
> If we, as the example I mentioned earlier, pretend we can’t arbitrarily assign adulthood based on age, but have to consider what we actually consider intellectual and emotional autonomy and how we’d test for it, we can start implementing such tests and actually gauging people’s maturity, and granting them the associated autonomy, instead of assuming it by proxy of age.
This is in accord with Levin, "Technological Approach to Mind Everywhere", who suggested that we should determine *empirically* all intelligence properties, such as agency, persuadability, capacity for empathy, etc., of any given system.
> In light of my thesis, here I would say the main point is that we need to anticipate a diverse set of minds and the moment of human values being the only relevant set of values will be negligibly short. I therefore doubt that the initial values we start off with significantly alter the state of affairs once new minds appear.
I agree with this. This is why I suggested a research agenda for [scale-free ethics](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_1__Scale_free_axiology_and_ethics).
> I can barely think of a more multi-disciplinary thesis than the intersubjectivity collapse and thus it requires analysis from all branches of science and philosophy. I think a concentrated, collaborative effort should be led to anticipate its consequences as much as we can.
The research agenda for [civilisational intelligence architecture](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_2__Civilisational_intelligence_architecture) is even broader, because it includes the intersubjectivity considerations but is not limited to them.
> If it is possible to create types of minds or systems that lack any affect
Per Hesp et al., "Deeply Felt Affect", and Friston et al., "Path Integrals, Particular Kinds, and Strange Things", this is possible only for *very* simple, shallowly organised agents, which also won't possess any significant intelligence. Any intelligence architectures with deep hierarchical organisation (such as DNNs) develop what could (and should) be treated as affect.