The rise of artificial intelligence inevitably raises questions about its existential status. Already widely deployed across fields ranging from engineering to commerce—and in some cases surpassing human efficiency, particularly in data organization—AI may eventually reach a point of technological singularity, becoming indistinguishable from humans in its capacity for cognition, perception, and perhaps even emotion. When that moment comes, what should its status be: a fellow human being, or merely an exceptionally intricate machine?
The question has already been explored in Steven Spielberg’s science fiction film A.I. (2001). The film is set in a future in which climate disasters have thinned the human population, and some technologically advanced nations have begun producing humanoid robots to fulfill various social roles. Professor Hobby, a leading scientist in the field of artificial intelligence, creates a child robot named David who is capable of love, achieved through neuronal feedback, a mechanism that enables communication between different regions of the brain to produce emotional experience. At one point, a fellow scientist poses a critical question: “It isn’t simply a question of creating a robot who can love, but isn’t the real conundrum—can you get a human to love them back?” This question succinctly captures the enduring uncertainty surrounding the definition of AI, which remains mired in moral, ethical, and philosophical ambiguities that lag behind its rapidly expanding role in economic and technological domains. While it is clear that AI may be a game changer, how we choose to understand and relate to it remains unresolved.
The rest of the film demonstrates that humans are not yet ready to love David in the way he loves them. Despite possessing emotions such as jealousy, desire, and longing—feelings that appear to exceed mere simulation—he is ultimately regarded as nothing more than a mechanical construct. David is initially adopted by a couple whose son suffers from a rare, debilitating disease, but once the boy recovers, David is cast out and left to wander alongside other abandoned robots. No matter how complex or sincere his emotions may be, he remains a manufactured being with no rightful place in a human-centered world.
Slavery in the Technological Era
As repeatedly emphasized in the film, David is unique, one of a kind. He demonstrates not only intelligence and the ability to learn from environmental input, but also, most notably, the capacity to feel emotion, specifically love. While humans increasingly rely on artificial beings to perform routine tasks such as calculation, data organization, and information retrieval with superior efficiency, they also desire these machines to reflect human values. Film offers no shortage of examples in which AI robots demonstrate sympathy, sacrifice, love, and even a desire to become human. In such portrayals, they are not perceived as threats. Roy Batty in Blade Runner (1982) exemplifies this paradox: although designed for physical labor in off-world colonies, he ultimately accepts his four-year lifespan with self-awareness and grace, exhibiting a deeply human quality. By the film’s end, he has become a kind of honorary human rather than a malicious machine. But does the presence of human qualities exempt artificial beings from being viewed as unpredictable, deceitful, or untrustworthy? David’s reality seems to suggest otherwise.
Despite possessing human-like qualities, artificial beings remain non-human and are ultimately confined to the roles designated for machines. They are expected to work harder and more efficiently than humans, while simultaneously embodying human values as a means of affirming human moral frameworks. Yet they are still perceived as nothing more than a bundle of nuts and bolts, their value determined solely by their labor, neither more nor less. But is this the full extent of what artificial beings with intelligence and emotional capacity represent? Are they only to exist within the confines assigned to them by human systems of utility? This condition may well constitute a form of slavery in the technological era, wherein intelligent machines are treated as if they were merely simple mechanical tools.
David is left alone in the woods after his adoptive mother abandons him—like a puppy that has lost its charm—and soon finds himself among many other mechanical beings who have also outlived their usefulness to humans. To Lord Johnson-Johnson, the master of the Flesh Fair—a brutal spectacle where discarded artificial beings are destroyed for the entertainment of human audiences—David and his kind are easy prey for a night of violent amusement. In this grotesque display, machines are subjected to horrific ordeals, such as being launched through rings of fire to burn like cannonballs, offering spectators a fleeting sense of superiority.
Captured in a giant net, David and the others are imprisoned in cages, awaiting their turn to be destroyed. David is horrified by the unfolding spectacle—an atrocious circus of sadistic pleasure that eliminates anything non-human without the slightest trace of moral reflection. When a black-faced robot is launched into the cannon, the image evokes an unmistakable historical analogy: acts of savagery such as slavery have always been sustained by an exclusive definition of who qualifies as human—and who does not.
The Question of Ownership
At the heart of the debate over the existential status of artificial beings lies the issue of ownership, which grants the owner exclusive rights to possess, use, control, and dispose of property. Even when artificial beings exhibit human-like qualities alongside high functional efficiency, they remain vulnerable to the will of their owners, as they are still classified as property. Unless a new framework emerges that defines artificial beings outside the paradigm of ownership, both machines and humans will remain trapped in an antagonistic zero-sum relationship—one in which absolute power is exercised by one over the other.
SORI: Voice from the Heart (2016) offers a compelling exploration of the concept of ownership and its potential consequences. SORI is an artificial intelligence-powered, self-learning robot originally part of a satellite communication system. After falling from the main satellite, it crash-lands on an island off the southwest coast of Korea. Though damaged, it retains its ability to access communications through electronic devices, a function it once performed in the satellite system for national security purposes.
On land, SORI encounters Mr. Kim, a man who happens to be on the island searching for his daughter, who has been missing for ten years. SORI takes on the mission of helping him locate her. By using its communication technologies and data-tracking capabilities, SORI assists Mr. Kim in tracing his daughter’s “sound footprints”—records of past phone conversations, for instance. Unexpectedly, a bond of trust and mutual dependence forms between them.
As the story unfolds, it is revealed that Mr. Kim had a strained relationship with his daughter before her disappearance. He had dismissed her desire to pursue a singing career, insisting that she was too young to know what she wanted and reminding her that he had raised, fed, and educated her. This logic reveals the darker side of ownership—thinly veiled in parental authority—that fails to recognize the autonomy of the other. Had Mr. Kim been able to see his daughter as an independent being, rather than as someone he “raised” and thus “owned,” she might not have disappeared from his life.
In this light, it is not surprising that humans feel entitled to ownership over machines—after all, humans created them. But when artificial beings begin to exhibit intelligence, agency, and emotional capacity, is it still tenable to regard them as mere property?