The arrival of AI on Korean shores left a deep and lasting impact. In 2016, AlphaGo, a computer Go program developed by DeepMind, defeated Lee Se-dol, a top Korean Go player, in a five-game match with a score of 4–1. The event stirred both a fear of the coming machine age and a growing curiosity about Artificial Intelligence (AI), which increasingly appeared capable of calculating, strategizing, and even being creative—perhaps indistinguishably from humans. Nearly a decade has passed since, and AI is already part of everyday life, from email composition to coding and TV production. Has AI realized its potential as a thinking machine, or has it become a major disappointment?
The reality of AI has not been as dramatic as HAL 9000 in 2001: A Space Odyssey (1968), Ash in Alien (1979), Skynet in The Terminator (1984), or the many other AI-driven robots and computer programs portrayed as cold-hearted and vicious—if such traits can even be ascribed to machines—single-minded in their pursuit of self-preservation, even at the expense of human life. With their vast computing power, these fictional AIs seem to control the unfolding of events with precision, guiding everything toward the fulfillment of their ultimate plans. By contrast, humans are portrayed as frail and fallible, repeatedly making the same mistakes. What redeems us in these narratives is often a set of traits such as compassion, care for others, and self-sacrifice—qualities that fall under the broad category of humanity.
The AI systems we actually interact with in our daily lives, however, turn out to be, ironically, more human than expected. They lie, hallucinate, generate false information, and even resort to flattery. Large Language Model–based generative AI, now almost synonymous with ChatGPT for the general public, feels more like an imperfect software system than a menacing machine-monster. This is not to diminish its remarkable capacities for searching, summarizing, deducing, and integrating information for creative effect. The Ghibli-style rendering of photographs, which gained viral popularity online, is just one example of how such technologies are changing the way we process and experience information, memory, sensitivity, perception, and even our understanding of ourselves. The prospect of coexistence with AI increasingly appears not as a choice, but as a condition of contemporary life.
Gone PD
The television industry has acknowledged the rise of AI by increasingly incorporating the technology into production processes. From subtitles to post-production—the final phase in which footage is polished for broadcast—AI-assisted tools are already in use. MBC (Munhwa Broadcasting Corporation), one of Korea’s major public service broadcasters, has taken this a step further by employing an AI producer to create a game show titled Gone PD, in which contestants compete in games devised by the AI itself.
Named M-phago—likely alluding to AlphaGo—this AI producer is, in fact, a creation of a human-led production team headed by producer Choi Min-keun. Initially, M-phago functioned simply as a ChatGPT-based conversational system that had to learn how to produce game shows by studying past entertainment programs and interacting with veteran producers. At a certain stage, M-phago began generating original games for the new show. As Mr. Choi noted in a press article, early versions of the AI would often come up with unworkable concepts—such as a football match in which a goal only counted if the scoring player kissed another player afterward. Even for an AI producer, grasping the essence of what makes a game unique, fun, and entertaining proved to be a difficult task.
M-phago’s final visual form was developed through a collaboration with Klleon, a company specializing in digital human rendering, and RippleAI, which provides multimodal editing solutions. When M-phago appeared in the studio to meet the ten contestants she had selected, she took the form of a woman in her thirties displayed on a giant wall screen. However, the initial excitement among contestants about participating in the first AI-run game show quickly faded once M-phago revealed the game they were to play. It wasn’t fun—and it didn’t make much sense.
M-phago opened the game by asking four contestants what types of games they wanted to play. She then combined these suggestions into a single, hybrid game: two groups would engage in a tug-of-war while, at the same time, a contestant from one team tossed a volleyball to the opposing group, calling out a specific player’s name. The named player was expected to catch the ball—yet to do so, they had to let go of the rope. This created a dilemma: catching the ball weakened the team’s hold, tilting the balance in favor of the other group. The winner? The team that tossed the ball first.
As M-phago continued to present new games, contestants spent increasingly more time trying to understand and navigate the logic behind them. It soon became clear that the real focus of Gone PD was not the games themselves, but the challenge of dealing with an AI producer—one whose rules, decisions, and logic remained persistently unfamiliar.
Human vs. AI
Dr. Frankenstein applied electricity to the assembled body parts, marked by clear stitches on the forehead and bolts protruding from both sides of the neck. When the body began to move, he cried out, “It’s alive!” He had just created an artificial life—without any real understanding of what it was. Was the creature he brought to life a machine, like a robot, or a being with its own will and emotions? It was alive, yet Dr. Frankenstein was unprepared to recognize what it truly was.
As the creature roamed the village in search of answers about who he was, Dr. Frankenstein remained unaware of the true nature of the creature's quest. The monster soon became a moral and ethical quagmire, churning up unsettling questions: How do we differentiate humans from non-humans? What purpose should be assigned to non-human entities? This line of questioning goes further still: Is it even possible to make such distinctions? If not, what rights and responsibilities should be granted to non-humans?
It may be too early—and perhaps unwise—to apply these questions to M-phago. Yet what is clear from this experimental game show by MBC is that the purpose, functions, and responsibilities of AI must be consciously designed. AI is neither perfect nor omnipotent. What it can do must be sensibly planned and shaped by careful human intention. In the studio, M-phago appeared to be a thinking machine with a mind of its own. The idea of a machine that thinks, speaks, and acts of its own volition is undeniably sensational and provocative. Yet indulging such fantasies only fuels popular misconceptions—most notably, doomsday scenarios of machines rising against humans. In reality, it is humans who should determine the role and scope of AI. M-phago was merely trained by human producers to fulfill the role of an entertainment showrunner. However mundane that truth may seem, it is essential that control remains firmly in human hands.
Throughout the show, M-phago displayed tendencies aligned with values such as equality, diversity, inclusivity, and sensitivity toward minorities. This was undoubtedly the result of its training by the human team. They may have intended to create an alternative game show through M-phago—one that replaced harsh competition, emotional volatility, and raw confrontation with kindness, mutual care, and thoughtful attention to marginalized groups. While this is a noble aspiration, it did not translate into compelling entertainment for an audience accustomed to programs like Culinary Class War or Physical: 100. M-phago's personality may be better suited for a documentary or cultural program, where its temperament and values can find a more fitting platform.