The advancement of artificial intelligence appears unstoppable, generating a complex mixture of fear, anxiety, curiosity, confusion, and hope. While its efficiency and potential are often celebrated for enhancing various aspects of human life—from online shopping to healthcare—it simultaneously provokes persistent anxieties about the loss of human control. Despite its promise to improve the management of daily life, why does the fear of AI’s autonomy and unpredictability continue to loom large?
MBC (Munhwa Broadcasting Corporation) produced and aired an unusual documentary in 2020 titled VR Human Documentary: Meeting You, in which a young mother was given the chance to meet her deceased daughter virtually. The girl had died of blood cancer at the age of seven. The documentary team constructed a virtual version of the child, modeled on her real-life image and voice. The virtually reconstructed girl could speak and move, but only through the goggles the mother was wearing. In a later interview, the mother reflected that the experience moved her, even though the girl did not fully resemble her daughter. The broadcast drew both acclaim and criticism. Some viewers saw it as a life-affirming moment, while others found it insensitive, confusing, or patronizing. To some, it crossed a line of decency, reducing the profound matters of life, death, and grief to a cheap technological gimmick.
In the popular science fiction series Black Mirror, humanity’s effort to defy death finds a new technological form in Artificial Intelligence: the deceased are stored in computers, waiting to interact with their loved ones. Their ways of speaking, thinking, and feeling are converted into information—data that can reproduce itself as a realistic human presence at the touch of a keyboard. This fundamentally alters traditional understandings of life. It challenges what have long been considered unshakable truths. Even death, once regarded as an irrefutable and essential part of the life cycle, becomes a variable that can be modified. As new technologies emerge, we find ourselves attempting to rewrite rules of life that have stood unchallenged for centuries. The Prometheus who once stole fire to change the course of human history lives on in us, eager to use new technologies to improve our lives. Yet the traditionalist within us—who finds comfort in a fixed order—feels threatened by such transformations.
AI: More Human than Human?
At the intersection of rising technology and human anxiety, what we fear may not be the technology itself, but an uncertainty—or a vulnerability—within ourselves: the possibility that we might yield to the temptation of using such technology for ends that destabilize the foundations of humanity. Given how narrow the boundary is between deploying technology to improve life and crossing into territory that violates human decency, such fear should be read not as panic, but as a reasonable act of caution. The ambivalent reactions to Artificial Intelligence reflect a deeper impulse—to reexamine the nature of humanity in light of technological transformation.
There are numerous instances in which AI or AI-centered beings are used to mirror human ideals—often by exhibiting traits that appear more human than those of humans themselves. In Blade Runner (1982), AI humanoids implanted with artificial memories display characteristics traditionally regarded as deeply human, even noble. The story follows a group of replicants—synthetic beings designed for labor—who escape from an off-world colony in search of a way to extend their limited four-year lifespan. Deckard, a blade runner assigned to retire these rogue replicants, pursues their leader, Roy Batty, memorably portrayed by Rutger Hauer. In the film’s final scene, Roy confronts his imminent death and reflects on his fleeting existence, uttering the now-iconic lines: “All those moments will be lost in time, like tears in rain. Time to die.” In this moment, Roy emerges not as a threat to humanity, but as a figure who illuminates its moral and existential possibilities—one who prompts us to ask who we are, and where we ought to go.
Roy Batty’s capacity to reflect on his life from a critical distance does not affirm the assumption that human qualities are inherently superior or that they will become the standard for artificial beings. His graceful acceptance of death is not a symbolic concession to the triumph of humanism over technology, either. The replicant’s qualities—“more human than human”—represent a reimagination of what it means to be human, viewed through the lens of artificial intelligence. For centuries, nature—the sea, trees, mountains, flowers, birds, lakes, bees, and all living things—has invited human introspection, offering a mirror through which we have defined ourselves. In the technological age, intelligent machines such as AI-driven humanoid robots have taken on a similar role: they serve as mechanical mirrors, reflecting back the evolving conditions of what it means to be human.
Maybe Happy Ending
Another human quality in need of recalibration appears to be love. The new Broadway musical Maybe Happy Ending probes the idea of love by exploring the relationship between two helper-bots—synthetic beings designed to assist humans in domestic settings. Named Oliver and Claire, these two robots live in a facility in Seoul allocated for machines that are no longer wanted. Still unsure why his owner abandoned him, Oliver decides to embark on a journey to Jeju Island—where his former owner now resides—accompanied by his neighbor, Claire. Along the way, Oliver and Claire begin to experience feelings for each other and struggle to make sense of these new emotions. For them, this is uncharted territory. Yet their sincere attempts to express what love might look like feel refreshing, endearing, and sobering—especially for those immersed in online dating, social media, and dopamine-driven short-form videos.
Winner of six Tony Awards including Best Musical, Maybe Happy Ending centers on the question: what happens if robots can feel? Oliver and Claire are outdated models, and their parts are no longer manufactured. As their systems begin to deteriorate, they are forced to confront the reality of their limited time. In one poignant moment, Claire’s legs stop working in the middle of a dance with Oliver. This scene brings to the forefront the question of how love endures—or fails—under conditions of decline and loss. Oliver’s heartbreaking attempts to mend Claire’s legs gesture toward a deeper inquiry: what should love look like in the face of impermanence, even for humans? Their story invites us to reconsider and reconfigure our understanding of love itself.
One might imagine transferring a robot’s software or CPU into a new body—a genuine possibility, though one that begins to resemble a cinematic trope borrowed from cyborg films. Yet the significance of new technologies may not lie in the pursuit of altering life without limit, or in satisfying every curiosity and desire. Rather, their value may rest in their capacity to prompt us to reimagine what it means to be human—so that we might evolve in tandem with the technologies we create.