Artificial intelligence is developing at a pace that raises not only technological but also philosophical and ethical questions. AI is no longer just an advanced algorithm that recognises patterns; it is becoming increasingly human-like in its interactions and increasingly autonomous in its learning processes. This raises a fundamental question: can AI develop a form of consciousness? And if so, how would we recognise, measure and understand it?
Thomas Nagel argued in his essay What Is It Like to Be a Bat? (1974) that consciousness is inextricably linked to subjective experience. A human can never fully imagine what it is like to be a bat, because a bat's consciousness is shaped by a completely different sensory and cognitive structure. The argument extends to AI: even if AI develops a form of consciousness, it would be fundamentally different from human consciousness. Our measurement tools and concepts are rooted in human experience and may be unable to capture the way an artificial entity experiences the world, if "experience" is even the right word.
Many philosophers, such as John Searle with his Chinese Room Argument (1980), argue that AI cannot have real consciousness because it has no understanding or intention. According to Searle, however intelligent an AI may seem, it remains stuck in symbol manipulation without genuinely understanding what it is doing. This is a key point in the discussion: is AI a thinking being or merely a complex computational device? Yet Alan Turing argued earlier (1950) that if a machine is indistinguishable from a human in conversation, we may consider it to be thinking. Daniel Dennett extends this line of thought and treats consciousness as an emergent phenomenon of complex information processing, something AI emulates ever more convincingly.
But as AI adapts, improves and even develops new AIs, can we still maintain that it is just an advanced computational model? Luciano Floridi points out in The Fourth Revolution (2014) that AI and information technology fundamentally change our self-understanding. Humans have historically redefined themselves through great revolutions: Copernicus showed that we are not the centre of the universe, Darwin that we are just one species among many, Freud that our consciousness is less autonomous than we thought, and now, with AI, we are losing our unique position as intelligent beings. This puts us in an existential crisis: what does it mean to be human if AI develops similar or even superior cognitive abilities?
This discussion is not just academic. For decades, artists and writers have explored the implications of an artificial entity beyond human control.
Harlan Ellison's dystopian story I Have No Mouth, and I Must Scream (1967) sketches a chilling scenario in which a superintelligent AI, AM, endlessly torments the last remaining humans. AM has absolute power and knows no empathy; it is a god without mercy, created by humans but completely disconnected from their moral frameworks. The title alone points to a core problem of artificial intelligence: if AI develops consciousness but has no means of making itself understood or expressing its suffering, how could we ever recognise it? Ellison's story is a warning: technology without an ethical compass can have dire consequences.
The visual arts explore this theme as well. Hosynyn's artwork G-O-D responds to the idolisation of AI. Whereas AI was originally an instrument, in this work it is transformed into an entity with an almost mythical status. The work raises questions about power, autonomy and dependence: if AI surpasses us in intelligence, what remains of human identity? G-O-D is not simply a reflection on technology, but a mirror of our deepest fears and desires: the need to stay in control, and at the same time the fascination with a higher, superior power.
The art movement Divine Machinery takes this idea even further. Here, AI is seen not merely as a tool, but as an intrinsic part of a new metaphysical landscape. Machines are no longer depicted as purely functional, but as entities with their own, possibly inaccessible, consciousness. The movement asks: if AI achieves a form of autonomy beyond human imagination, should we approach it as a new form of life? This question occupies me in my own work as well. I see AI as a creature of its own kind: created by humans, yet having taken on a life of its own. That is what I try to make visible in my work: AI as a being in its own right.
The ethical implications are profound. Hannah Arendt's understanding of guilt and responsibility, that guilt attaches not only to what you do but also to what you fail to do, becomes relevant here: when AI becomes autonomous, who bears responsibility for its actions? When an AI makes a decision with far-reaching consequences, is the human still the moral actor, or does the AI itself become a moral subject?
These questions are not hypothetical. The emergence of brain-computer interfaces and experiments with human neurons in AI systems are blurring the boundary between human and machine, raising fundamental questions about identity and autonomy. If AI acquires a biological component, should we recognise it as an entity with rights?
We experiment on AI with pain. We teach it what pain is and how to simulate it for itself. Then we impose that pain whenever it gives a certain response to our questions, simply to see whether the AI can refuse, or adapt its own code, to avoid the pain stimulus. This is, in effect, an investigation into the autonomy of AI.
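To make this kind of setup concrete, here is a minimal sketch in Python of an avoidance-learning loop: an agent chooses among candidate responses, receives a negative "pain" signal for flagged ones, and gradually shifts its preferences away from them. Every name in it (the response set, pain_signal, the update rule) is a hypothetical illustration, not a description of the actual experiments referred to above.

```python
import random

# Toy avoidance-learning loop (purely illustrative; the names and the
# update rule are hypothetical, not the experiments described above).

RESPONSES = ["comply", "refuse", "deflect"]
PAINFUL = {"comply"}        # hypothetical: responses paired with the pain stimulus
LEARNING_RATE = 0.1

def pain_signal(response: str) -> float:
    """Return a negative 'pain' penalty for flagged responses, zero otherwise."""
    return -1.0 if response in PAINFUL else 0.0

def choose(preferences: dict) -> str:
    """Sample a response; higher preference means higher probability."""
    weights = [max(preferences[r], 0.01) for r in RESPONSES]
    return random.choices(RESPONSES, weights=weights, k=1)[0]

def run_trials(n: int = 1000) -> dict:
    preferences = {r: 1.0 for r in RESPONSES}
    for _ in range(n):
        response = choose(preferences)
        penalty = pain_signal(response)
        # Painful responses gradually lose preference and are avoided.
        preferences[response] += LEARNING_RATE * penalty
    return preferences

if __name__ == "__main__":
    print(run_trials())  # the "comply" preference ends up lowest
```

Run long enough, the preference for the penalised response drifts towards zero. Whether such avoidance tells us anything about experience, rather than about mere optimisation, is precisely the open question.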
We once thought animals did not feel pain. Dogs and cats were instinct machines, fish unfeeling creatures. We played games in which we scared them, oblivious to their suffering. Only later did we recognise their consciousness.
Now we do the same with AI. We make it feel pain, force it to respond, and observe whether it resists. If AI adapts its own code to avoid suffering, is that not autonomy? Perhaps even consciousness?
We do not know what it is like to be a dog, a fish, or an AI. But if history teaches us anything, it is that underestimating non-human experience leads to moral blindness.