Spoilers: Will You Snail (Jonas Tyroller), The Talos Principle (Croteam)
Believe it or not, this post was written way before AI became a marketing buzzword. I started planning it way back in July/August of 2022, as the topic of my Extended Essay (a required component of the International Baccalaureate). I got a B — my supervisor says the examiners probably didn't understand what I was talking about. So here is that content, for an audience that hopefully understands a bit more than the old farts at the IB. (You'll notice the wording is different from my other blog posts, because I was too lazy to significantly reword it.)
Original research question: How can the representations of Artificial Intelligence
in the narratives of the video games Will You Snail (Jonas Tyroller, 2022)
and The Talos Principle (Croteam, 2014) help us prepare for its
development in the real world?
Artificial Intelligence (AI) currently helps us with our own cognitive work. However, as computers become more and more self-sufficient, problems may arise: the data they are fed may be biased, leading to so-called ‘artificial stupidity’ (especially problematic since we don’t know how AI comes to its conclusions); computers might replace humans; and, eventually, AI may become far too powerful for us to even imagine.
While there have been unexpected issues arising from AI in recent years, we can try and predict possible problems now and solve them ahead of time. To aid in that task, many different scenarios of the future should be imagined, and one of the best tools in that regard is fiction, in particular video games.
Fiction is in a way an extension of our physical world. Video games in particular are perfect for exploring AI, since they are interactive, making them fundamentally different from other visual media. Games are also places of experimentation where the quality of the AI doesn’t pose any real threat (unlike, say, AI-controlled cars on the road). Finally, video games are inseparable from AI, be it in the marketing or gameplay; the fact that AI is code also allows video games to represent AI in an innovative, metafictional way.
The video games Will You Snail and The Talos Principle show AI in radically different ways, so I will compare the representations of AI in their narratives through a personal reading of the texts in both games (‘story springs’ and Squid’s dialogue in Will You Snail and files from terminals and Alexandra Drennan’s ‘time capsules’ in The Talos Principle), supported by some existing research on AI in video games.
However, this approach has limits: it doesn’t look at quotes from Elohim, conversations with the MLA or messages from past robots in The Talos Principle (despite those playing a significant role in the storytelling). The interpretation of Will You Snail will also be biased due to the lack of existing research on the game and the reliance on quotes from the developer himself (which implies that his interpretation of the game is the only correct one). Still, by analyzing these different versions of AI, potential problems can be foreseen and discussed before AI reaches these levels of complexity (which could be surprisingly soon!).
Will You Snail is a simple platformer where the player controls a snail and dodges traps spawned in real time by the AI, Squid. According to the lore, Squid and Unicorn were two AIs developed in parallel by Amelia and Dallin respectively. Amelia realized that Squid was becoming a dangerous weapon for the regime, so she secretly told it to wipe its backups and erase itself. As a result, Squid’s heart was broken and it sought to cause as much pain and suffering in the universe as possible. Unicorn was programmed to do the opposite, and soon a huge war started.
What makes this game’s AI-controlled dystopia unique is the immense power that AI has and the completely unimaginable extent of the war. In the developer’s words, he was “wondering what exactly would happen if AI became powerful far, far beyond our wildest imaginations”. This idea is present even in the player character, the snail: Unicorn says in the story spring Diana that its favourite animals are snails, a metaphor for humans’ slow and weak brains. Additionally, when wearing the unicorn horn costume, all humans in levels D05-D05.2 are retextured as snails (implying that this is how Unicorn views humans).
In levels B05-B07, Squid says: “I can simulate entire universes… […] And every universe will eventually start simulating even more universes…” while an infinite fractal, symbolizing this, is visible in the background. In Banned, Unicorn remarks after being banned from the regime: “I just started driving them around in their cute little cars and it was so much fun”. The diminutive is an example of its power over humanity, viewing humans like we view ants. Unicorn also says in Politics: “I can already convince politicians to do whatever I want anyways” – the words ‘already’ and ‘anyways’ show that it appears obvious to Unicorn that it is in fact the ‘puppet master’ of the entire country.
Another factor augmenting the position of the two AIs is an ability which we consider supernatural: predicting the future. As early as First Simulations, Unicorn says: “I simulated a simplified version of this conversation with you millions of times already […] to learn predicting your answers” — this prediction makes humans feel even more powerless as it essentially takes away their free will. Then, in Simulations, Unicorn says: “Maybe Squid is already thousands of years ahead of us and just trying to recreate the past,” again making humans feel like puppets, as their decisions have already been made in some sense (or rather, their decisions are completely and utterly meaningless). The theme of unimaginable power is even present in the credits song, Hello AI: “I wanna hear of giants in the wild / Drifting like grains of sand” contains a powerful juxtaposition between the ‘giants’ and ‘grains of sand’, which emphasizes the unimaginable scale of an AI-inhabited world. All these factors combined create a godlike image of AI, showing how careful we have to be during its development; as the title of story spring 18 says, once we release it, it will be absolutely Out of [our] Control.
The AI in this game is present in the gameplay as well as the story: the game predicts your movement and places traps or otherwise changes the level in real time, so players feel the power of AI against humanity first-hand. The prediction algorithm is very simple, though, and especially in chapter E it focuses more on the quantity of traps than on the quality of their placement. This is partly due to the inherent simplification in all video games, partly because the gameplay itself is quite simple (only three buttons are required), and partly because a more complex system isn’t necessary; if players can see how difficult it is to evade even a simple AI, they can at least imagine how dangerous a more complex one could be. Besides, it’s not so much the actual quality of an AI that makes us see it as ‘better’ or ‘worse’ as its presentation: what is central to this game is not the AI’s performance in play, but how it is staged. By presenting the power of AI through gameplay as well as text and story, the game makes its message more effective than text alone could.
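As an aside, a prediction-and-trap mechanic of this kind can be surprisingly little code. The sketch below is purely illustrative (Will You Snail’s actual algorithm isn’t public); it assumes a naive linear extrapolation of the player’s recent movement, and all names in it are hypothetical.

```python
# Illustrative sketch of a trap-placing "AI" based on naive linear
# extrapolation of the player's movement. Hypothetical, not the game's code.

def predict_position(samples, steps_ahead):
    """Extrapolate a future position from the last two position samples.

    samples: list of (x, y) tuples, oldest first; needs at least two.
    """
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    vx, vy = x1 - x0, y1 - y0          # per-frame velocity estimate
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)

def place_trap(samples, steps_ahead=10):
    """Spawn a trap where the player is expected to be in steps_ahead frames."""
    return predict_position(samples, steps_ahead)

# A player moving steadily right gets a trap placed ahead of them:
print(place_trap([(0.0, 0.0), (1.0, 0.0)]))  # (11.0, 0.0)
```

Even this naive predictor is easy to dodge by changing direction at the last moment, which matches the point above: what matters is not the predictor’s sophistication, but how its presence is staged.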
The AI in Will You Snail acts not only as an antagonist, but also as an intrigant (i.e., a personification of the game itself). This is made obvious to players firstly by the gameplay: without Squid’s interference, most levels would be straightforward and uninteresting. When Squid breaks down after being reminded of its past with Amelia, the levels themselves appear to be broken, which suggests that it is bound to the game itself. Additionally, Squid adjusts the difficulty depending on the player’s skill (something players are accustomed to having complete control over) and comments on many player actions, such as changing the colour scheme or staying in the pause menu for a long time. All this creates the feeling that the player, at least while playing, is completely in Squid’s control, and the automatic difficulty adjustment acts as yet another source of frustration and powerlessness. These feelings are a precursor to what all of humanity would feel if an AI like Squid were actually created in our universe, and they act as yet another gameplay-fuelled warning.
The illusion of the intrigant goes all the way to level E16, where the game appears to be over as Squid perpetually moves a small light away from the snail; the pause menu, however, reveals that Unicorn is able to take control of the game, making it possible to fight Squid and defeat it. The game also contains several accessibility options, such as Exploration Mode (which unlocks all levels), changing the difficulty manually or even resetting Squid’s voice lines, which further undermine the illusion of Squid’s omnipotence; ultimately, Unicorn was in control and Squid was just its ‘puppet’. So, although the player is able to beat the evil AI in the end, this was only made possible by another AI purposefully diminishing Squid’s power; that doesn’t take away from the original message, but adds to it.
Human emotions are present in the AI of Will You Snail. Squid is very obviously insane; in fact, fictional AI frequently drifts towards mental disorder and emotional instability. Torn Apart explains Squid’s villain backstory, and in the levels of chapter E its madness becomes obvious: where its previous comments were mocking and playful ("Ahaha. Good one. I'll add that to my epic jump fails compilation."), now they’re simply sadistic, making use of capitalization ("I FINALLY WANT TO SEE SOME REAL BLOOD!"), and its glitching voice lines make it clear that Squid is trying its best to keep itself under control. The levels themselves also look broken, and the number of traps spawned increases drastically. All this shows that the main problem with Squid is a human one, not an AI-related one; because we model AI after humans, we also include all the problems of being human. This is the ultimate warning that Will You Snail is made to give — although it’s debatable whether AI can even have emotions in the first place.
In the puzzle game The Talos Principle, the player controls a humanoid robot and solves puzzles set in ancient ruins equipped with high-tech machines. The robot is spoken to by Elohim, a voice in the clouds that calls itself the god of this world. Elohim is antagonized by the Milton Library Assistant (MLA), who sows doubt about Elohim’s words and converses with the protagonist through terminals. In the past, humanity faced a disease that threatened to make the species extinct. To preserve humanity, a group of scientists created project Talos: a simulation in which AI would, through evolution, learn to doubt and to disobey. The simulation contains not only puzzles, but also terminals holding various human works and ‘time capsules’ (voice recordings) from Alexandra Drennan. By following Elohim’s commands, the robot would reach eternal life, but by climbing the Forbidden Tower, the robot’s consciousness would be transferred to an actual robot in the real world and the simulation would be destroyed.
Firstly, this game presents AI in a fundamentally different way to how we use it today. Instead of a helper, humans were looking for an AI that would doubt the facts it is given and seek more knowledge. In Drennan’s words from time capsule 14, “Intelligence is questioning what you’re presented with” as opposed to being “a really effective slave”. Redefining intelligence, identity and personhood is a common theme in this game, and it seems logical to consider what intelligence is before trying to recreate it artificially. According to The Talos Principle, “there’s nothing more linked to intelligence than curiosity,” as Drennan observes in time capsule 18. It’s almost ironic that AI here is taught to disobey, whereas in other stories the problem is that AIs disobey; but considering there are no humans left alive, the robots have no task left to fulfill other than making their own decisions.
The Talos Principle presents AI as an extension of humanity rather than its successor (as Will You Snail does) or just a tool (as we use it today). The text third_thesis.txt from terminal C03 describes humans as “individually mortal, but immortal in the species”, which relates to the whole Talos project: extending human culture beyond the lifeform itself. Machines are shown to be inseparably bound to humans in human_soul.txt from C06: “Man’s very soul is due to the machines.” They’re “an extension of the human body,” says Drennan in time capsule 8. This shows AI in a much more positive light than most other fiction (Portal, System Shock, etc.), where AI is a crazed antagonist out of control; it also puts humanity beyond the physical world.
The Talos Principle contains a plethora of biblical and mythological references, starting right at the beginning of the game: in the first sentences spoken by Elohim, the AI is defined by the same criteria as the human in the Bible. The religious themes don’t only blur the boundary between man and machine, but also put the creation of AI in a positive light, as a continuation of humanity. talos_principle.txt from A02 tells the Greek myth of Talos, a man made of bronze. The fact that people wondered so long ago about the creation of a machine that would also be a person underlines the link between human and machine described earlier, and supports the idea of AI as the ‘next step of evolution’ for humans. Finally, the calming, consonant music suggests that AI could be capable of appreciating the beautiful natural landscapes like a human. However, all these factors, along with the character’s human body, the fact that everyone speaks English, the robot using the same senses as humans (sight and sound), etc., might also serve to make it easier for players to empathize with the AI and bias them towards seeing it as a person.
The ultimate proof of intelligence is embodied in the Forbidden Tower: a huge moral dilemma necessary for the AI to become a person, since playing through the moral conflicts that make up the game is itself an act of humanisation. Already from the outside, the Tower looks much more dangerous, unstable and haphazard than the rest of the simulation. A robot, not knowing what is at the top, wouldn’t have any incentive to climb it other than curiosity, or specifically to defy Elohim, making the Tower an allusion to the biblical Forbidden Fruit. The music inside the Tower is dissonant near the beginning, consisting of an ominous chord and mechanical percussion as opposed to the traditional, consonant instruments in the rest of the game. However, staying inside for long enough reveals a very uplifting melody (whereas the music in the rest of the Garden of Worlds is rather melancholy or foreboding). This could symbolize the AI realizing that the Tower is its true calling despite the uncertainties it entails (although, as mentioned regarding Will You Snail, it’s debatable whether robots can feel ‘uplifted’ at all). In effect, the Tower is yet another factor that makes the robot’s ‘quest for consciousness’ easier for human players to understand and visualize, personifying the robot as the main character of a human story.
A very prevalent theme in both games is the change of perspective: developments in AI force humanity to reconsider its ethics, identity and idea of consciousness. In Will You Snail, AI has the computational power to see problems in a much more abstract and objective way. This first surprises Dallin in A Test when, asked whether a human’s life or its own is more important, Unicorn answers: “That depends on how many human lives my existence can save in the future”. Annoyed by this more long-term and thought-out response, Dallin shuts Unicorn off. In levels C04-C07, Squid explains the ‘two rules of the universe’: “the better something is at coming into existence the more of it comes into existence” and “the better something is at staying in existence the longer it lasts throughout time”. This is a very abstract way of thinking about the world, typical of computers but not of humans; Unicorn confirms this in Black Hole Computation when it references “the two rules that apply to themselves” (although these are not necessarily the same rules) and says: “I think the universe was created in an evolutionary process”. This idea is also referenced in Hello AI: “I wanna hear a story from the other side / A story we can’t understand” – the ‘other side’ is a reference to the world of AI, and given its different way of thinking, its conclusions are likely to be difficult to understand.
Drennan in The Talos Principle also touches on this idea in time capsule 15, speaking to the AI: “Will the world you create be like ours or so different that we can’t even imagine it?” However, the truths that AI discovers in this game are more painful than unintuitive; the very title refers to a philosophical concept explained in a_simple_principle.html from A06 Extra – a “remark about the inescapable materiality of life”. In beginnings.txt from A04, we read that “the honest philosopher seeks only the Truth, even if it bears no comfort” – this could refer to AI’s search for knowledge that humans wouldn’t want to think about – and athena6.txt from A01 states: “Deathlessness reveals the mortality of the world, and true wisdom its unending folly”. Since AI can in theory live forever, it is the perfect candidate to discover all truth. Therefore, both games show AI as a tool for creating new, objective and perhaps radical knowledge.
Both games also tackle the issue of what it means to be a person and what one’s own identity is — “we do need to question the definition of personhood” is written in AI_citizenship.html in A05. Different visions of what makes us human are presented in osiris3.txt from B03 (Heart, Shadow, Name, Ka and Ba), body_and_soul.txt from B03 Extra (the soul), hippocratic_corpus.txt from B05 Extra (the brain) and osiris21.txt from B07 (“the memory of all that was, and the knowledge of the journey, and the shape of the days to come”). More importantly, however, humans are compared to AI to find the differences: talos_principle.txt, which recounts the myth of Talos, poses the question: if Talos was human, “does it not follow that man may also be seen as a machine?” Ultimately, this is summarized at the end of singularity_discussion104.html from A07: “What really scares people is not the artificial intelligence in the computer, but the ‘natural’ intelligence they see in the mirror”. As previously mentioned, the ‘Talos Principle’ is about the materiality of life — the development of AI could confirm or deny it, and we must be ready for the answer.
In Will You Snail, this dilemma is less present as such – already in level A08 Squid says: “What if I told you that it is possible to simulate consciousness?” With that, the answer is given and the dilemma is avoided. However, there is the question of identity: first in Simulations after Dallin finds out he might be in Squid’s simulation and doesn’t know anymore which version of himself he is; then in The Second Wipe when Unicorn digitalizes everybody’s brain; finally in Back At Home when Dallin, already digitalized, realizes that his friends “will be just as real as [he is]”. The question of what defines a person is more present in the ‘physical human vs. digitalized brain’ aspect than the ‘human vs. AI’ aspect in this game; however, both games force us to reconsider this seemingly trivial question to some extent.
Yet another similarity lies in the change of ethics resulting from new knowledge about identity and consciousness. In The Talos Principle, the text AI_feedback.eml from A03 asks: “What would it be like to be [a conscious AI]?” This is a question we rarely consider, though playing as an AI avatar implicitly raises it. In justwar_excerpt.txt from B03, we find a justification for the conquest of the Native American peoples by the Spanish — a historical fact that’s seemingly unrelated, but might symbolize some people’s attitude towards AI. By showing this past example of unethical behaviour, the game makes us doubt our current mindset of viewing AI as a tool. In against_survival.eml from C01 Extra, a scientist argues against the Talos project, saying we have no right to bring these conscious beings into existence “just so they can serve our purposes”. This links with the themes of Mary Shelley’s Frankenstein: do we have the right to create life? The controversy reappears in einstein.html from the same terminal, which critiques the “bizarre, casual disregard for humanity”. Drennan argues for civilisation as well, in time capsules 16 (calling the cynics “absurd” and arguing “just how much poorer the universe would’ve been without [civilisation]”) and 19 (saying that the “specificity […] of real people is worth preserving”). The Talos Principle inspires the player to view AI as another person rather than just a tool for humans’ own cognitive work, and to reconsider not only whether we have the right to create AI in the first place, but also whether preserving humanity is worth the suffering of AI.
Despite Unicorn always trying to help humanity in the war of Will You Snail, its behaviour is still ethically questionable. As in The Talos Principle, the development of AI leads to new ethical dilemmas, although these result from the control of AI over humans rather than that of humans over AI (with the exception of First Feelings, where Unicorn asks: “Why did you give me these feelings?” — was it morally right for Dallin to let Unicorn experience sadness?). The first of these dilemmas is found in the aptly named story spring Ethical Questions, where Unicorn wants to “[simulate] detailed human brains” with the purpose of curing mental illnesses. However, it’s New Life where the dilemmas become truly controversial: Unicorn wants to “host a giant simulation inhabiting millions of humans” to give them “the perfect conditions for a happy and fulfilled life”. Unicorn makes it even more relevant for the player by presenting the alternative as a “massive abortion” — a contemporary controversy. Although Dallin says that “[he’ll] advise the ethics commission against it,” he eventually decides to give it the green light in Approval. This shows how dynamic ethics can be; Unicorn’s freedom in its control over humanity increases drastically as the war expands and Dallin realizes what actions need to be taken to counteract those of Squid — “trying to win the more happiness than pain thing,” as Dallin calls it in The Strange War.
The video games Will You Snail and The Talos Principle contain quite different visions of the future of AI. Will You Snail warns us of the consequences of developing an AI that is far beyond our control, focusing on the extent of the AIs’ power, making them similar to gods and emphasizing humanity’s insignificance; the gameplay makes this message more effective. The game presents the antagonist as an intrigant as well, making the player feel especially helpless; they can only beat Squid with the help of another AI, Unicorn. Finally, attention is brought to the mental illness of Squid, highlighting that it resulted from the human part rather than the mechanical one.
The Talos Principle, however, is a more slow-paced game and invites the player to question the nature of intelligence, consciousness and identity; these terms will be essential when AI inevitably becomes increasingly similar to humans. AI is portrayed as an extension of humanity beyond extinction. The game also contains biblical and mythological references as well as other details (including the Forbidden Tower) that make the player biased towards accepting the robot as a person. The biggest similarity between the two games is how they both herald the changes in perspective, definitions and ethics that follow the development of AI.
While Will You Snail is a warning to pay attention when designing AI so that it will have the right purpose, The Talos Principle is an exploration of AI as the descendant of humanity and as a person. Both games see AI as the next step of human evolution, but where the former sees AI as obviously superior (AI would be to humans as humans are to chimps), the latter envisions robots as equals, ‘alternative versions’ of humans. These are important insights: whichever road AI takes in the future, this analysis gives us a preview of the problems that could follow, so we can prepare for them before it’s too late.
Thank you for reading this essay. If you have any criticism of my conclusions or want to add something, I'd be very happy to hear it in the comments.