These kinds of arguments are never clear about exactly what features of the physical implementation are necessary for consciousness. It's true that the appropriate physical architecture is required for certain functions, but this is true even when just considering information processing. For example, a feed-forward computation has different information dynamics than a computation with feedback loops, and such dynamics require support from the physical implementation. But given some abstract properties that support feedback dynamics, the specifics of the physical implementation are irrelevant.
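To make the distinction concrete, here is a minimal toy sketch (my own illustration; the update rule and constants are arbitrary):

```python
# A toy contrast between feed-forward and feedback information dynamics.

def feed_forward(x, layers):
    # Each stage depends only on the previous stage's output; no state persists.
    for f in layers:
        x = f(x)
    return x

def with_feedback(x, f, steps):
    # The output is fed back in, so every state depends on the system's history.
    trajectory = [x]
    for _ in range(steps):
        x = f(x) + 0.5 * trajectory[-1]  # recurrent dependence on the past state
        trajectory.append(x)
    return trajectory

damp = lambda v: 0.9 * v
print(feed_forward(1.0, [damp, damp]))  # ~0.81
print(with_feedback(1.0, damp, 3))      # a history-dependent trajectory
```

Any physical system that supports the recurrent dependence realizes the same abstract dynamics, whether it is silicon, neurons, or something else.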
If your argument is just that physical implementation matters in virtue of its functional/information dynamics, then I agree. But then the emphasis on the physical realizer, in opposition to abstract properties, is misplaced. On the other hand, if the argument is that certain metaphysical properties of the substrate are necessary, then the onus is on you to identify exactly the right property. But trading on a supposed distinction between abstract properties of computation and physical properties of the implementation is a mistake. A physical system computes some function in virtue of its abstract properties. These abstract properties include the temporal relationships between said properties, i.e. the causal structure intrinsic to the abstract formal system being implemented. But highly abstracted computational systems implement this causal relationship just the same as a physical implementation with no layers of abstraction in between.
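The multiple-realizability point can also be put in toy form (again my own sketch, nothing more):

```python
# Two very different realizations of the same abstract function (XOR).
# What makes each a computation of XOR is the shared abstract
# input-output structure, not the details of the mechanism.

def xor_lookup(a: bool, b: bool) -> bool:
    # Realization 1: a stored truth table.
    table = {(False, False): False, (False, True): True,
             (True, False): True, (True, True): False}
    return table[(a, b)]

def xor_arithmetic(a: bool, b: bool) -> bool:
    # Realization 2: modular arithmetic.
    return bool((int(a) + int(b)) % 2)

# Mechanically distinct, computationally identical.
assert all(xor_lookup(a, b) == xor_arithmetic(a, b)
           for a in (False, True) for b in (False, True))
```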
Neuroscience provides important evidence for how the form of brains relates to the function of minds. But the issue is which features of the brain explain its ability to generate the functions of mind. Explanations in terms of computational dynamics have borne significant fruit. But computational dynamics is not dependent on substrate, except insofar as the substrate admits certain information dynamics within the space of supported behaviors. If one is to suppose that the physical substrate is essential, one must explain exactly how.
Thanks for the thoughtful comment. I am not sure what "kind" of arguments we're talking about exactly here. First off, this post is meant to set aside for a moment a lot of hairy philosophical debates and insist that we take a look at the brain as engineers: the brain as a physical piece of hardware that implements consciousness, along with the CNS/body. I didn't mean to undertake the quest of exhaustively describing the physical aspects of consciousness.
What I did describe is at what level consciousness operates in the brain, for which we have ample evidence. See the neural correlate paper I linked and several other links. It's simply a basic truth that when you ingest LSD it affects the brain on a molecular level, and there are countless other examples of this. This is why I provide the example of the YottaFLOPS laptop. If you can answer whether a perfect simulation of your mind would or would not be equal to it, and why, we can start teasing apart the issue you're seeing. The issue I am seeing is that literally none of that simulation can causally interact with the world as your actual mind does, and we currently have no evidence that computed abstractions are anything like the physical instantiation, except for descriptively functioning "the same way". But we can't feed your digital self any real LSD.
"If your argument is just that physical implementation matters in virtue of its functional/information dynamics, then I agree."
No, because as the post details, as far as we know the substrate (hardware) is actually employed in the phenomenal part of our consciousness - e.g. LSD, alcohol, and all sorts of states we go through. This is very well established in the literature. So again, it's certainly not a given that we can just translate these to abstracted mechanisms and assume the effect will be there.
We can obviously simulate many important processes digitally and get a lot of bang for our buck with enough compute - but looking at physics, the universe, causality and what the brain does, I simply think we need to do things like generate an electromagnetic field as the brain does, rather than implement abstractions thereof and expect they will produce any phenomenal aspect.
I highly recommend reading Piccinini's work - if you do, let me know if it changed your mind at all.
"But highly abstracted computational systems implement this causal relationship just the same as a physical implementation with no layers of abstraction in between."
If it's abstracted, then I must insist that "this causal relationship just the same" is doing a specious amount of heavy lifting.
I think this book does a good job of explaining how: https://www.amazon.com/Neurocognitive-Mechanisms-Explaining-Biological-Cognition/dp/0198866283?asin=0198866283&revisionId=&format=4&depth=1 I haven't read it yet, but I am very familiar with the author's papers and it looks to be a good overview.
By "kind of argument" I mean the Searle-style argument that treats the biology or physical mechanism as necessary over and above its structural dynamics, but doesn't explain which features of the biology/mechanism are necessary. Of course physical mechanisms are needed to realize formal or structural relationships. But the question we need to answer is: what is doing the explanatory work in producing the target phenomena? It is not clear that it is the physical matter as such, thus extended and engaged in these specific dynamics, rather than the matter considered as an instantiation of some specific kind of abstract information dynamics. The difference is one of multiple realizability.
Regarding LSD and other chemical interventions, what we do know is that these substances change the functional connectivity within and among various brain regions, which has implications for the information dynamics being implemented in any given instant. I don't know of an example of a physical intervention that changes one's conscious experience without a corresponding change in functional connectivity. So it is plausible that these interventions operate by modulating the information dynamics and that this is necessary for changes in phenomenal perception.
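As a toy version of that claim (a sketch of my own, with arbitrary numbers): an intervention that merely re-weights the coupling between units changes the dynamics while leaving the substrate untouched.

```python
# A chemical "intervention" modeled as a change in coupling strength.
# The machinery is identical in both runs; only the implemented
# information dynamics differ.

def run(coupling, steps=5):
    a, b = 1.0, 0.0
    history = []
    for _ in range(steps):
        a, b = 0.5 * a + coupling * b, 0.5 * b + coupling * a
        history.append(round(a, 3))
    return history

print(run(coupling=0.1))  # baseline "connectivity"
print(run(coupling=0.9))  # same substrate, qualitatively different dynamics
```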
I'm not thoroughly read on Piccinini, but I have been reading a lot of adjacent literature lately. I'll add the Neurocognitive Mechanisms book to my list, thanks!
Structural dynamics is fine; I don't understand that as representational content. Multiple realizability I take to be true, but in a sort of loose sense, and definitely not where functionality is understood as representationally "equivalent" - so the same or very similar mental "properties" can be realized by different physical processes, substrates, etc., but they can't be expected to be realized in the abstract.
"I don't know of an example of a physical intervention that changes one's conscious experience without a corresponding change in functional connectivity."
I understand your point here, but let's again look at actual causation. On a laptop you can change programs all you like. You can program a game with hyper-realistic NPCs - all the while you have changed absolutely nothing about the architecture's signaling or underlying physical processes. You are just flipping bits to change states so that the representational content changes. But there is no actual causal connection to the real world for that NPC, nor for anything you simulate.
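To put that in toy form (just my illustration): the "content" of a bit pattern is fixed by an interpreter outside the machine, not by anything in the machine's causal workings.

```python
# One fixed physical memory state, two different "representational contents"
# depending solely on how an outside interpreter reads the bits.

import struct

raw = struct.pack(">I", 0x3F800000)     # write one fixed bit pattern
as_int = struct.unpack(">I", raw)[0]    # read as an integer: 1065353216
as_float = struct.unpack(">f", raw)[0]  # read the same bits as a float: 1.0

print(list(raw), as_int, as_float)
```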
Again, your digital version on the laptop will have a whole host of issues. Consider, first of all, that for the simulation to even remotely work you need to simulate the laws of physics and a whole chunk of a universe to embed it in - otherwise it already wouldn't make sense. Next, you can't just simulate the mind on its own as some fantasize, as that mind would have the phantom limb of the century - the whole body. A total collapse of the simulated somatosensory system. And again - no way to interact causally with the real world like your physical self. If you can list the differences between this digital version and your physical version, you're already building my point for me. If you want to add, say, sensors and actuators - fine, but these are a tiny part of the physical mechanisms that make up brain and body. Even if you consider the brain to create partly a fiction, or some version of a fiction, as it bootstraps itself into the world through embodiment and its senses, you still have to create a taxonomy of fictions to distinguish between the gaping differences between the digital and the physical.
This issue crops up no matter what - digital physics, pancomputationalism, the simulation hypothesis. This rabbit hole is quite deep of course - but the perhaps silly and banal message here is that we live in a differentiated universe. The table of elements is very real. I think the idea that we can just claim equivalence between simulations and actual physics is like alchemy.
The argument here challenges the tendency to assume that AI models like large language models (LLMs) have consciousness simply due to their advanced processing capabilities. This perspective highlights that consciousness in humans arises from intricate physical processes, deeply rooted in neural structures and complex feedback loops that are absent in digital hardware. The assumption that LLMs might be conscious ignores the physical substrates required for consciousness—like a brain’s dynamic, interconnected architecture capable of generating awareness, feelings, and experiences.
The author argues that digital computation, designed for abstraction and functionality, lacks the foundational structure for consciousness. Just as simulating gravity doesn’t create real gravitational force, simulating consciousness doesn’t equate to real awareness. This viewpoint emphasizes the need for engineering approaches that consider the physical foundations of consciousness, rather than abstract software simulations. It’s a call for cautious optimism grounded in scientific rigor, rather than philosophical speculation or anthropomorphic projections onto technology.