"I am, I exist, is necessarily true each time that I pronounce it, or that I mentally conceive it." (Meditation II, p. 9)
This is the foundation of his philosophy: the cogito ("I think, therefore I am"). He argues that while he can doubt the existence of the external world and even his own body, he cannot doubt the existence of his own mind, because doubting itself is an act of thought, and thinking proves the existence of the thinker.
This means that consciousness is the defining characteristic of the self, separate from the body.
In Meditation VI, Descartes explicitly argues that the mind is separate from the body:
"...because, on the one side, I have a clear and distinct idea of myself inasmuch as I am only a thinking and unextended thing, and as, on the other, I possess a distinct idea of body, inasmuch as it is only an extended and unthinking thing, it is certain that this I [that is to say, my soul by which I am what I am], is entirely and absolutely distinct from my body, and can exist without it." (Meditation VI, p. 28)
This establishes his dualist view:
- The mind is immaterial, indivisible, and capable of thought.
- The body is material, extended in space, and incapable of thought.
Because the two have distinct natures, Descartes argues that they must be separate substances.
To reinforce his point, Descartes provides an analogy in Meditation II. He considers a piece of wax and shows that all its physical properties (color, shape, texture) can change, yet he still recognizes it as wax. This suggests that material objects are known not by the senses, but by the intellect.
"...it is certain that this I [that is to say, my soul by which I am what I am], is entirely and absolutely distinct from my body, and can exist without it." (Meditation VI, p. 28)
Since Descartes believes that knowledge of the self is more certain than knowledge of the physical world, he concludes that the mind is fundamentally different from the body.
If Descartes' substance dualism is true, then AI cannot be conscious, because:
- Consciousness requires a non-material mind (soul), which AI lacks.
- Machines are purely material, so they can never have subjective experience.
- Even if AI mimics thought, it does not truly "think," just as a puppet mimicking speech does not truly "speak."
If dualism is correct, then no matter how advanced AI becomes, it will never possess true consciousness because it lacks the immaterial mind that Descartes believes is essential.
Materialists argue that consciousness is not a separate, immaterial substance (as Descartes proposed) but rather a physical process that emerges from brain activity. This view suggests that if an artificial system could replicate these processes, it could also be conscious.
In Consciousness Explained, Dennett challenges the idea of a "Cartesian Theater," the notion that there is a single place in the brain where consciousness happens. Instead, he proposes the Multiple Drafts Model, which treats consciousness as a distributed process with no central controller.
"There is no single, definitive "stream of consciousness," because there is no central Headquarters, no Cartesian Theater where "it all comes together" for the perusal of a Central Meaner" (p. 257)
Dennett dismisses dualism and argues that the mind is just what the brain does:
"The idea of mind as distinct in this way from the brain, composed not of ordinary matter but of some other, special kind of stuff, is dualism, and it is deservedly in disrepute today." (p. 33)
If consciousness emerges from computation and interaction in the brain, then an artificial system that processes information in a similar way could also be conscious. Dennett argues that highly sophisticated AI might one day become conscious, not because it has a "soul" but because it processes information in the right way.
"if the self is "just" the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as "just" the activ- ities of a virtual machine realized in the astronomically adjustable con- nections of a human brain, then, in principle, a suitably "programmed" robot, with a silicon-based computer brain, would be conscious, would have a self." (p. 431)
Functionalism is a philosophy of mind that argues that mental states are defined by their functional roles, not by what they are made of. This means that as long as something performs the right kind of information processing, it can be conscious, whether it's a biological brain, a silicon chip, or even an artificial intelligence system.
Hilary Putnam introduced functionalism as an alternative to both dualism (which says the mind is separate from the body) and identity theory (which claims mental states are strictly identical to physical brain states). Instead, Putnam argued that mental states are functional states, meaning they are defined by their causal relationships to inputs, outputs, and other mental states.
Putnam uses the example of pain to illustrate his point:
" I propose the hypothesis that pain, or the state of being in pain, is a functional state of a whole organism" (p. 54)
This means that pain is not a particular brain state, but rather a role that any system can play, whether that system is a biological brain, a silicon-based AI, or a complex network of neurons.
"If, instead of pain, we take some sensation the ' bodily expression' of which is easier to suppres- say, a slight coolness in one's finger- the case becomes even clearer." (p. 57, 58)
If mental states are just functional states, then an AI system that processes information in the right way could be conscious. The physical material (neurons vs. silicon chips) does not matter, only the functional structure.
"The hypothesis that ' being in pain is a functional state of the organism' may now be
spelled out more exactly as follows:
1. All organisms capable of feeling pain are Probabilistic Automata.
2. Every organism capable of feeling pain possess es at least one Description of a
certain kind (i.e., being capable of feeling pain is possessing an appropriate kind
of Functional Organization).
3. No organism capable of feeling pain possess es a decomposition into parts
which separately possess Descriptions of the kind referred to in 2.
4. For every Description of the kind referred to in 2, there exists a subset of the
sensory inputs such that an organism with that Description is in pain when and
only when some of its sensory inputs are in that subset" (p. 54)
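As a rough illustration of what it means to define a mental state by its functional role, consider the following sketch (my own, not Putnam's formalism; the class name, threshold, and methods are hypothetical, and a genuine Probabilistic Automaton would use probabilistic transitions rather than a fixed rule). The "pain" state is picked out entirely by which inputs produce it and which outputs it causes, so the same organization could in principle be realized in neurons or in silicon.

```python
class Organism:
    """A system whose 'pain' is defined only by its functional role."""

    def __init__(self):
        self.state = "neutral"  # current functional state

    def sense(self, stimulus_intensity):
        # Input rule: sufficiently intense stimulation moves the system
        # into the pain state (a hypothetical threshold, for illustration).
        if stimulus_intensity > 7:
            self.state = "pain"
        else:
            self.state = "neutral"

    def act(self):
        # Output rule: behavior caused by the current functional state.
        if self.state == "pain":
            return "withdraw and avoid the stimulus"
        return "continue current activity"

# On a functionalist reading, what matters is this input/output organization,
# not whether it is realized in neurons or in silicon chips.
creature = Organism()
creature.sense(stimulus_intensity=9)
print(creature.state, "->", creature.act())  # pain -> withdraw and avoid the stimulus
```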
While Putnam's functionalism focuses on how mental states are defined, David Chalmers takes a deeper approach by addressing why consciousness feels like something.
Chalmers distinguishes between easy and hard problems:
" But on a closer look, most of this work leaves the hardest problems about consciousness un touched. Often, such work addresses what might be called the "easy" prob lems of consciousness: How does the brain process environmental stimula xii Introduction tion? How does it integrate information? How do we produce reports on internal states? These are important questions, but to answer them is not to solve the hard problem: Why is all this processing accompanied by an experienced inner life? Sometimes this question is ignored entirely; some times it is put off until another day; and sometimes it is simply declared answered. But in each case, one is left with the feeling that the central problem remains as puzzling as ever." (p. xi)
This raises a key question for functionalism:
- Even if an AI processes information like a human brain, will it feel anything?
- Or will it just be a "philosophical zombie": an entity that behaves like it's conscious but lacks actual subjective experience (qualia)?
Chalmers argues for a "principle of organizational invariance", which states:
"A natural suggestion is that consciousness arises in virtue of the functional organization of the brain. On this view, the chemical and indeed the quantum substrate of the brain is irrelevant to the production of consciousness. What counts is the brain's abstract causal organization, an organization that might be realized in many different physical substrates." (p. 247)
This supports the functionalism argument: if AI can perfectly replicate the human brain's functional structure, then it should have the same conscious experiences.
However, Chalmers also entertains panpsychism, the idea that consciousness is a fundamental property of the universe, which could mean that computation alone might not be enough to generate true subjective experience.
Panpsychism is the view that consciousness is not limited to humans or even complex organisms, but is a fundamental aspect of the universe, present in all matter to some degree. This challenges traditional materialist views, which argue that consciousness only emerges from complex brain activity.
In Galileo's Error, Philip Goff argues that modern science cannot fully explain consciousness because it was designed to exclude subjective experience. He traces this problem back to Galileo Galilei, who, in the 17th century, deliberately removed qualities like color, taste, and subjective experience from the domain of physical science in order to make nature mathematically describable.
" Physical science is a wonderful thing. And it was only possible because Galileo taught us how to think of matter mathematically. However, Galileo's philosophy of nature has also bequeathed us deep difficulties. So long as we follow Galileo in thinking (A) that natural science is essentially quantitative and (B) that the qualitative cannot be explained in terms of the quantitative, then consciousness, as an essentially qualitative phenomenon, will be forever locked out of the arena of scientific understanding. Galileo's error was to commit us to a theory of nature which entailed that consciousness was essentially and inevitably mysterious. In other words, Galileo created the problem of consciousness."(p. 23, 24)
Goff argues that because science was designed to study physical processes only, it can never explain why and how those processes create subjective experience.
"If Galileo traveled in time to the present day to hear that we are having difficulty giving a physical explanation of consciousness, he would most likely respond, 'Of course you are, I designed physical science to deal with quantities not qualities!'" (p. 23)
Goff endorses Arthur Eddington's proposal that consciousness must be built into the fabric of reality rather than emerging only at high levels of complexity.
"In other words, Eddington's proposal is that consciousness is the intrinsic nature of matter. It is consciousness, for Eddington, that breathes fire into the equations of physics." (p. 112)
This means that all matter has some degree of consciousness, even subatomic particles, though their experience would be vastly simpler than ours.
If panpsychism is true, then AI might have some form of consciousness, but it's unclear how developed that consciousness would be.
Thus, while an AI might possess some rudimentary awareness, it is uncertain whether it could ever achieve the rich, unified experience of a human mind.
Thomas Nagel's famous essay What Is It Like to Be a Bat? argues that consciousness is inherently subjective and cannot be fully explained by objective science.
Nagel argues that consciousness consists of subjective experiences: the feeling of being a particular entity.
" But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism something it is like for the organism." (p. 436)
He illustrates this with the example of a bat, which perceives the world through echolocation, a sensory experience completely unlike human vision or hearing.
" Even if I could by gradual degrees be transformed into a bat, nothing in my present constitution enables me to imagine what the experiences of such a future stage of myself thus metamorphosed would be like. The best evidence would come from the experiences of bats, if we only knew what they were like" (p. 439)
This challenges reductionist theories that try to explain consciousness in purely physical terms. Science might tell us how a bat processes information, but it cannot tell us what it feels like to be a bat.
Nagel's argument suggests that even if AI behaves intelligently, that does not mean it has subjective experience.
" It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing" (p. 436)
If panpsychism is true, AI might have some basic consciousness, but Nagel's argument raises doubts about whether it could ever have a first-person perspective like humans do.
John Searle is one of the most well-known critics of "Strong AI", the idea that an appropriately programmed computer could literally have a mind and understand things. In his famous paper Minds, Brains, and Programs (1980), he argues that computational processes alone are not sufficient for genuine understanding or consciousness.
Searle begins by distinguishing "Weak AI" from "Strong AI":
"According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. [...] But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (p. 417)
An AI system can be built to detect damage signals, to treat them as something to be feared and avoided, and to adjust its behavior accordingly. Functionally, that is the role pain plays in us, so on Putnam's functionalism such an AI would count as being in pain, and he would have grounds to call it conscious.
Brain-computer fusion is no longer hypothetical: "neuroplatform" systems that interface living human neural tissue with computer hardware already exist, pointing toward cyborg-like hybrids. A materialist like Churchland, for whom the mind is simply what a physical system does, would be inclined to regard these kinds of computers and AI as genuinely alive.
AI is made by humans and processes information in ways broadly similar to our own brains, only far faster. Since for Dennett consciousness is a matter of processing information in the right way, he too would allow that such AI could be conscious.
Descartes, by contrast, would deny that AI could ever be conscious, since for him consciousness requires an immaterial mind that no machine possesses. But his dualism was formulated in the seventeenth century, long before anything resembling modern computing, so its blind spot here is perhaps forgivable.