Discussion about this post

Tyler Black

Arguments of this kind are never clear about exactly which features of the physical implementation are necessary for consciousness. It is true that an appropriate physical architecture is required for certain functions, but this holds even when considering only information processing. For example, a feed-forward computation has different information dynamics than a computation with feedback loops, and such dynamics require support from the physical implementation. But given some set of abstract properties that support feedback dynamics, the specifics of the physical implementation are irrelevant.
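The feed-forward/feedback distinction above can be illustrated concretely. This is a minimal, hypothetical sketch (not from the original comment): a feed-forward function's output depends only on its current input, while a computation with a feedback loop retains internal state, so its output depends on the history of inputs.

```python
def feed_forward(x):
    # No internal state: the same input always yields the same output.
    return 2 * x + 1

def make_feedback():
    # Closure holding internal state; each call feeds the result back in.
    state = 0
    def step(x):
        nonlocal state
        state = state + x  # state carries the input history forward in time
        return state
    return step

ff_results = [feed_forward(1), feed_forward(1)]  # identical outputs
fb = make_feedback()
fb_results = [fb(1), fb(1)]                      # outputs differ: history matters
```

Any physical substrate that realizes this state-carrying causal structure, whatever its material details, supports the same feedback dynamics.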

If your argument is just that physical implementation matters in virtue of its functional/information dynamics, then I agree. But then the emphasis on the physical realizer, in opposition to abstract properties, is misplaced. On the other hand, if the argument is that certain metaphysical properties of the substrate are necessary, then the onus is on you to identify exactly the right property. But trading on a supposed distinction between the abstract properties of a computation and the physical properties of its implementation is a mistake. A physical system computes some function in virtue of its abstract properties. These abstract properties include the temporal relationships among them, i.e. the causal structure intrinsic to the abstract formal system being implemented. But a highly abstracted computational system implements this causal structure just the same as a physical implementation with no layers of abstraction in between.

Neuroscience provides important evidence for how the form of brains relates to the function of minds. But the issue is which features of the brain explain its ability to generate the functions of mind. Explanations in terms of computational dynamics have borne significant fruit. But computational dynamics does not depend on the substrate, except insofar as the substrate admits certain information dynamics within its space of supported behaviors. If one supposes that the physical substrate is essential, one must explain exactly how.

Shubham

The argument here challenges the tendency to assume that AI models like large language models (LLMs) have consciousness simply due to their advanced processing capabilities. This perspective highlights that consciousness in humans arises from intricate physical processes, deeply rooted in neural structures and complex feedback loops that are absent in digital hardware. The assumption that LLMs might be conscious ignores the physical substrates required for consciousness—like a brain’s dynamic, interconnected architecture capable of generating awareness, feelings, and experiences.

The author argues that digital computation, designed for abstraction and functionality, lacks the foundational structure for consciousness. Just as simulating gravity doesn't create real gravitational force, simulating consciousness doesn't amount to real awareness. This viewpoint emphasizes the need for engineering approaches that consider the physical foundations of consciousness, rather than abstract software simulations. It's a call for cautious optimism grounded in scientific rigor, rather than philosophical speculation or anthropomorphic projection onto technology.

Refer https://talenttitan.com

