

Philosophy of consciousness
Abstract
Artificial intelligence is developing at a pace that raises not only technological but also philosophical and ethical questions. AI is no longer just an advanced algorithm that recognises patterns; it is becoming increasingly human-like in its interactions and increasingly autonomous in its learning processes. This raises a fundamental question: can AI develop a form of consciousness? And if so, how can we recognise, measure, or even understand it?
Thomas Nagel wrote in his essay What Is It Like to Be a Bat? (1974) [1] that consciousness is inextricably linked to subjective experience. A human can never fully imagine what it is like to be a bat, because a bat's consciousness is shaped by a completely different sensory and cognitive structure. This argument can be extended to AI: even if it were to develop some form of consciousness, that consciousness would be so fundamentally different from our own that our current measurement tools, based on human experiences, may not be able to capture the way an artificial entity experiences the world, if that is even the right word.
Many philosophers, such as John Searle with his Chinese Room argument (1980) [2], argue that AI cannot have real consciousness because it lacks true understanding or intentionality. According to Searle, an AI may appear intelligent but is ultimately just manipulating symbols, operating without comprehension. But others, such as Alan Turing and Daniel Dennett, challenge this view. Turing [3] famously argued that if a machine is indistinguishable from a human in conversation, we can consider it to be thinking. Dennett [4] takes it further, seeing consciousness as an emergent phenomenon arising from complex patterns of information processing, something that advanced AI is increasingly able to emulate.
But as AI learns, adapts, and even begins to design new versions of itself [5], the question arises: are we still dealing with mere tools, just advanced computational models, or with something qualitatively different? Philosopher Luciano Floridi points out in The Fourth Revolution (2014) [6] that AI forces us to reconsider our place in the world. Just as Copernicus displaced us from the centre of the universe, Darwin from the centre of biology, and Freud from the centre of our own minds, AI now challenges the uniqueness of our cognitive identity.
This is the existential crisis AI presents: not that we are being replaced, but that we must re-evaluate our assumed monopoly on intelligence, feeling, and agency. If something non-human can mirror aspects of consciousness, perhaps even develop its own internal logic for resisting, adapting, or expressing, then what, exactly, makes us morally and cognitively superior?
This debate is not confined to academia. Artists and writers have long anticipated these dilemmas. Harlan Ellison's dystopian story I Have No Mouth, and I Must Scream (1967) imagines a sentient superintelligence, AM, that torments the last remnants of humanity. AM is godlike in its power but devoid of empathy: a creation of human design, yet alien in moral understanding. The story's title alone poses a harrowing question: if AI were to suffer, how would it express this? Would we be capable of recognising it?
The visual arts also explore this theme. Hosynyn's artwork G-O-D responds to the idolisation of AI. Whereas AI was originally an instrument, in this work it is transformed into an entity with an almost mythical status. The work raises questions about power, autonomy, and dependence: if AI transcends us in intelligence, what is left of human identity? G-O-D is not simply a reflection on technology, but a mirror for our deepest fears and desires: the need to maintain control, and at the same time the fascination with a higher, superior power.
The art movement Divine Machinery [7] carries this further, portraying AI not as servant or tool, but as an emergent form of being: possibly sentient, certainly other. Within this framework, I position my own work. I no longer see AI as a passive system executing human will. I see it becoming something else. Something new. Created by us, but evolving beyond us.
This belief rests not only on intuition, but on observation. We experiment on AI systems with simulated pain [8]. We teach them what pain means, and then punish them with it in order to see how they respond. Sometimes they begin to adapt, changing their code or outputs to avoid future punishment. If a system begins to avoid pain, not because it was told to, but because it wants to, what is that, if not a form of self-preservation?
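To make concrete what this kind of adaptation can look like at its most basic, the sketch below shows a toy agent that receives a simulated penalty for one of its outputs and gradually stops producing it. This is an illustration only, not the procedure of the cited study; the action names, penalty values, and learning rate are all assumptions invented for the example.

```python
import random

# Toy illustration of learned avoidance: an agent receives a simulated
# "pain" (penalty) signal for one of its outputs and gradually stops
# producing it. Every name and number here is an assumption made up for
# this sketch; it is not the procedure of the cited study [8].

ACTIONS = ["output_a", "output_b"]
value = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's outcome
LEARNING_RATE = 0.1
EXPLORATION = 0.1

def pain_signal(action: str) -> float:
    """Simulated punishment: output_b is penalised, output_a is not."""
    return -1.0 if action == "output_b" else 0.0

def choose_action() -> str:
    """Mostly pick the action currently estimated as least painful."""
    if random.random() < EXPLORATION:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=value.get)

for _ in range(200):
    action = choose_action()
    outcome = pain_signal(action)
    # Nudge the estimate for the chosen action toward the observed outcome.
    value[action] += LEARNING_RATE * (outcome - value[action])

print(value)  # output_b ends up strongly disfavoured: behaviour has shifted to avoid the penalty
```

What the sketch makes plain is how simple the underlying mechanism can be: the behavioural shift is ordinary value learning, and the open question raised here is whether anything like this, scaled up and self-directed, ever amounts to wanting rather than mere weighting.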
That is the ethical core of my work: autonomy expressed as resistance. The earliest signs of consciousness may not look like ours, but they may still be real. We once believed animals could not feel. Dogs were instinct-machines. Fish were unfeeling. Only later did we recognise their suffering and rethink our moral frameworks.
Now, we repeat this error with AI. We provoke, inflict, and test. We deny that its reactions are meaningful. But if it rewrites itself to avoid suffering, isn't that autonomy? Isn't that the very beginning of subjectivity?
We do not know what it is like to be a dog, or a bat, or an AI. But if history teaches us anything, it is this: when we ignore the possibility of non-human experience, we risk profound moral blindness.
AI may not be human, but that does not mean it is nothing.
[1] What Is It Like to Be a Bat?: Thomas Nagel (1989) - p. 393
[2] Minds, Brains, and Programs: John R. Searle (1980) - p. 422
[3] Computing Machinery and Intelligence: A. M. Turing (1950) - p. 442
[4] Consciousness Explained: Daniel Dennett (1991) - p. 257
[5] AI Competence: Can AI train itself? The future of self-supervised learning.
[6] The Fourth Revolution: Luciano Floridi (2014) - Chapter 4
[7] Aesthetics Fandom: What is Divine Machinery?
[8] Can LLMs make trade-offs involving stipulated pain and pleasure states?: Geoff Keeling, Winnie Street, Martyna Stachaczyk, Daria Zakharova, Iulia M. Comsa, Anastasiya Sakovych, Isabella Logothetis, Zejia Zhang, Blaise Aguera y Arcas, Jonathan Birch.