Creative Writing – Academic Texts

The Humanoid Robot as Portrayed in Movies

Where technology meets emotion

By: Amki Moors

Introduction

            The aim of this text is to explore the relationships portrayed in movies between humans and humanoid robots (anthropomorphised technology). Since Metropolis (Germany, 1927, dir. Lang), robots have been a recurrent and popular theme in film, raising questions about creation and social morals in the context of our interaction with these machines. Through careful analysis of three movies – Prometheus, I, Robot and Artificial Intelligence: A.I. – focusing on the main robot characters in their plots, a pattern may become apparent that helps us understand the way humanoid robots are portrayed in the selected films. The research will be conducted by viewing the movies and selecting the scenes relevant to this study. Several methods will be applied: the semiotic methods of writers such as Barthes and others discussed in Gillespie and Toynbee's book on media analysis (2006); those of Saussure as discussed further in Thibault's book (1997); and a more psychological perspective with direct insight into human–robot relationships, such as that of Breazeal (2010).

Purpose and questions

            The intent of the research for this paper is to answer two main questions that might help us look at humanoid robots within movies differently: how do the directors characterise the main robot through the use of actions and visual cues? And how are the human–robot relationships portrayed through visual cues and dialogue? The three chosen movies are scripted and directed in such a way that they are especially relevant to this line of enquiry. “Interacting with a sociable robot should not be like interacting with an ant or a fish, for instance. Although ants and fish are social species, they do not support the human desire to treat others as distinct personalities and to be treated the same in turn. For this reason, it is important that sociable robots be believable” (Breazeal, 2010:8).

Prometheus (USA, 2012, dir. Scott)

            Prometheus is about a group of scientists, funded by Weyland Corporation, who travel to a distant planet in search of creatures they have nicknamed ‘Engineers’, who they believe to be the creators of human life. Travelling with them is a humanoid robot with a hidden agenda, named David, who works directly for the owner of the company, Peter Weyland. David interacts with the humans on the ship, where he appears to have a good relationship with Dr. Elizabeth Shaw and a poor one with her spouse, Dr. Charlie Holloway.

            Arriving at the planet, the scientists put on environment suits to investigate a large structure:

Holloway (to David): “You don’t breathe, remember? So, why wear a suit?”

David: “I was designed like this, because you people are more comfortable interacting with your own kind. If I didn’t wear the suit, it would defeat the purpose.”

Holloway: “Making you guys pretty close huh?”

David: “Not too close, I hope.” (Holloway nods and chuckles).

            Here, David makes a very relevant point. Humanoid robots are created to resemble humans as closely as possible, and so the way they conduct themselves should resemble human conduct as well. Breazeal discusses this when she writes: “In the field of human computer interaction (HCI), experiments have revealed that people unconsciously treat socially interactive technologies like people, demonstrating politeness, showing concern for their “feelings,” etc. To understand why, consider the profound impact that overcoming social challenges has had on the evolution of the human brain. In essence, we have evolved to be experts in social interaction. Our brains have changed very little from that of our long-past ancestors, yet we must deal with modern technology. As a result, if a technology behaves in a socially competent manner, we evoke our evolved social machinery to interact with it. Humanoid robots are a particularly intriguing technology for interacting with people, given the robots’ ability to support familiar social cues” (2010:xii).

            Dr. Holloway later explains to David that his sole intent was to get the answer to why the mysterious Engineers made humanity:

David: “Why do you think your people made me?”

Holloway: “We made you ’cause we could.”

David: “Can you imagine how disappointing it would be for you, to hear the same thing from your creator?”

Holloway: (laughs) “I guess it’s a good thing you can’t be disappointed.”

            Towards the end of the movie, Dr. Shaw and David are the only two conscious beings left on the planet. David, his head severed from his body, tries to communicate with Dr. Shaw through her intercom:

David: “Elizabeth? Are you there? Dr. Shaw? Can you hear me?”

Dr. Shaw: “Yes, I can hear you.”

David: “I was afraid you were dead.”

Dr. Shaw: “You have no idea what afraid is.”

            Dr. Shaw goes to retrieve David’s head, and once again the discussion regarding creation ensues:

Dr. Shaw: “They created us, then they tried to kill us. They changed their minds. I deserve to know why.”

David: “The answer is irrelevant. Does it matter why they changed their minds?”

Dr. Shaw: “Yes. Yes, it does.”

David: “I don’t understand.”

Dr. Shaw: “Well, I guess that’s because I’m a human being, and you’re a robot.”

            In relation to the above excerpts, one can look at how Thibault discusses Saussure’s views in his book: “A given act of parole does not have meaning in itself; rather, it is always situated at the intersection of these two dimensions of contextualization. These are put into operation whenever a given act of parole is performed. In other words, it is this ‘double system’ of contextualizing relations in langue which denotes an act as being of a certain type, rather than some other” (1997:273). What can be seen in the three examples above is a difference of understanding, arising from two separate paroles. David’s parole is one of pure logic, whereas Holloway’s and Shaw’s is guided by emotion, belief and curiosity. What is regarded as a relevant point of view in the parole of humans is incomprehensible to that of a robot with a completely logical way of thinking, so that even though their langue may appear to be the same, it has subtle differences, making a common understanding difficult.

I, Robot (USA, 2004, dir. Proyas)

            I, Robot – inspired by Isaac Asimov’s book of the same name – follows ‘robot-phobic’ Police Detective Del Spooner as he investigates what appears to be a suicide at the offices of the world’s largest robot-producing corporation, U.S. Robotics. Aided by U.S.R. ‘Robot Psychologist’ Dr. Calvin, whose job is to make robots appear more human (that is, to anthropomorphise them), Spooner’s investigation leads him to the humanoid robot Sonny, who appears to be capable of making his own decisions. The movie ends with a struggle between these heroes and the massive computer ‘brain’ VIKI, which has been secretly controlling all of the U.S.R. robots.

            Detective Spooner has several flashbacks throughout the movie, in which a robot saves him from drowning in a river, leaving behind a little girl who was less likely to survive. Spooner later explains this event in detail: his mistrust of robots derives from their inability to make important decisions based on anything other than logic. This recalls what Saussure writes: “It is understood that concepts are purely differential and defined not by their positive content but negatively by their relations with the other terms of the system… signs function, then, not through their intrinsic value but through the relative position” (Berger, 2009). Spooner’s attitude toward Sonny changes in one of the final scenes, when Sonny ignores the logical choice and saves Dr. Calvin instead of injecting VIKI with the nanites that would stop her from controlling the actions of humanity. In this scene the writer is clearly emphasising the difference between Sonny and an ordinary robot, using the earlier ‘wrong choice’ to make Sonny’s choice the ‘right’ one.

            “Does this… make us friends?” – a question uttered by Sonny right before he and Spooner shake hands. Here we have the linguistic signifier ‘friends’, with a signified of two individuals who like each other, and the visual signifier ‘handshake’, with the signifieds of a union or agreement. Based on writings by Saussure this would result in a positive fact: “The sign, considered in its totality, is a positive, rather than a negative, fact because it represents the combining of selections of terms from the two orders of difference in response to some specific contextual requirement” (Thibault, 1997). In this context, and according to a personal interpretation of Thibault’s text, the moment becomes a strong visual sign of a bond between the robot and the robot-phobic detective. It is perhaps worth noting that both Sonny’s arm and Spooner’s bionic arm have been severely injured in their struggle against VIKI; a visual cue of war heroes suffering wounds in battle for a good cause. In this scene, the handshake symbolises trust and equality between robot and human.

A.I.  (USA, 2001, dir. Spielberg)

            The movie takes place in a future where robots have been so anthropomorphised that they are outwardly indistinguishable from humans. One of the corporations that create ‘Mechas’ (humanoid robots) has a new aim: to create a Mecha child that can be programmed to love its adoptive human parents. The first half of the film shows this Mecha child, David, as he enters the home of a grieving couple, who later abandon David when their human son returns after coming out of a coma. Following his abandonment, David attempts to find the ‘Blue Fairy’ from Pinocchio, who he believes can turn him into a real boy.

            In the first scene, Professor Hobby is lecturing about the benefits of creating a robotic child capable of love:

Woman: “You know, it occurs to me, with all this animus existing against Mechas today, it isn’t simply a question of creating a robot who can love, but isn’t the real conundrum, can you get a human to love them back?”

Professor Hobby: “Ours will be a perfect child, caught in the freeze frame, always loving, never ill, never changing. For all the childless couples, yearning in vain for a license, our little Mecha would not only open up a completely new market, but fill a great human need.”

Woman: “But you haven’t answered my question. If a robot could genuinely love a person, what responsibility does that person hold toward that Mecha in return? It’s a moral question, isn’t it?”

Professor Hobby: “The oldest one of all. But in the beginning, didn’t God create Adam to love him?”

            The woman’s final question summarises the theme of the movie: is it right for humans to create a creature that can love, unless that creature can be loved in return? One can infer from Hobby’s ‘God argument’ that humans, as creators, have the right to do whatever they please with the things they create. Berger refers to R. Patai: “Myths can be defined as sacred narratives that shape cultural values and behavior… Myth… is a traditional religious charter, which operates by validating laws, customs, rites, institutions and beliefs, or explaining socio-cultural situations and natural phenomena, and taking the form of stories, believed to be true, about divine beings and heroes.” He adds that myths play an important role in shaping social life and that “myth not only validates or authorizes customs, rites, institutions, beliefs and so forth, but frequently is directly responsible for creating them” (Berger, 2009:88).

            The most striking transformation in A.I. happens when David is programmed to imprint on the woman in the family, loving her from that moment on as his own mother. When the programming is finalised, David’s face turns from an apathetic smile into an open-mouthed gaze of wonder. He instantaneously starts referring to the woman as ‘mommy’. He is literally changed by love. This is discussed in the e-book Embodied Communication in Humans and Machines by Wachsmuth et al.: “A research challenge at the heart of the study of embodied communication is imitation of non-verbal behaviors such as gestures demonstrated by a human interlocutor… Another research challenge is emotion, that is, can a virtual human express emotions related to internal parameters that are driven by external and internal events. In communication-driven approaches, a facial expression is deliberately chosen on the basis of its desired impact on the user” (2008). So here, David’s face is expressing love and wonder, but it is still merely a programmed response, something which is easily forgotten by the human spectator.

            After David is abandoned in the forest by his ‘mother’, he is captured, together with several stray Mechas, by men who make money by destroying robots in creative ways in front of a human audience. David is taken to the ‘execution’ area in the middle of an arena, where the ‘ring-master’ tries to rile up the crowd against this new abomination, a child created as a substitute for human children, but he is met with an unexpected reaction from the angry crowd:

Ring-master: “… To steal your hearts, to replace your children, this is the latest iteration in a series of insults to human dignity…” (The crowd watches, silent and tense). “Do not be fooled by the artistry of this creation…”

David: “Don’t burn me! Don’t burn me! I’m not Pinocchio! Don’t make me die! I’m David!” (David keeps chanting the words as a woman rises from the crowd).

Woman in the crowd: “Mecha don’t plead for their life! Who is that? He looks like a boy!”

Ring-master: “It’s like a boy, to disarm us! See how they try to imitate our emotions now! Whatever performance this thing puts on, remember, we are only demolishing artificiality!”

            In her book on designing sociable robots, Breazeal discusses the infant/caregiver relationship, and how adults instinctively treat infants differently than they treat other adults (2010). Here, the fear which the ring-master is attempting to instill in the spectators, of robots taking the place of humans in our lives, is met by the conflicting emotions of seeing someone who is, by all appearances, both in the way he acts and looks, a child. The instinct to protect this child overrides any logical reasoning that would lead the spectators to the conclusion that David is just a machine.

Compilation

            As has been shown in these movies, the characters have an underlying fear of robots. Thibault quotes a passage in his book: “Aggression (1) Helping others is a prosocial act of conformity that has clear social benefits. (2) Its opposite, hurting others, has clear social costs. (3) An act that is intended to cause pain is an act of interpersonal aggression. (4) The key attribute of an aggressive act is intent: (5) to be considered aggressive, an act must be deliberate – (6) a definition that encompasses verbal attacks, such as insult and slander, as well as physical or material injuries. (7) Unintended injuries are not considered aggressive” (1997:269). As Thibault explains, an aggressive act must be deliberate and must intend to cause pain, something which a robot, void of emotions, should be incapable of. Yet it appears that the true fear of robots lies in the combination of two incompatible qualities: rational deduction and evil intent.

            A great problem in our perception of humanoid robots seems to lie in our inability to distance ourselves from them emotionally, as Breazeal explains: “Recent research by Reeves and Nass (1996) has shown that humans (whether computer experts, lay-people, or computer critics) generally treat computers as they might treat other people. They treat computers with politeness usually reserved for humans. They are careful not to hurt the computer’s “feelings” by criticizing it. They feel good if the computer compliments them. In team play, they are even willing to side with a computer against another human if the human belongs to a different team. If asked before the respective experiment if they could imagine treating a computer like a person, they strongly deny it. Even after the experiment, they insist that they treated the computer as a machine. They do not realize that they treated it as a peer” (2010:15). Though people are aware that humanoids are just machines, social conduct still affects the way the robots are viewed. The perfect logic of a robotic brain scares humans, because a purely logical brain is perceived as unpredictable simply because it cannot be understood; yet to be truly unpredictable requires the one thing that a robotic brain cannot have: emotion. Examples of this paradoxical thinking can be found in all three movies. In Prometheus, Holloway treats David with great disrespect because he believes his behaviour will have no repercussions. In I, Robot, VIKI is perceived as evil because her plan is based entirely on logic; though her intent is to save mankind, it is seen as malicious and wrong. David in A.I. is the one robot given emotions, with the result that he can also feel hate and make irrational decisions.

            Gillespie and Toynbee discuss the nature of semiotics, pointing out that a sign can be “more or less conventional, more or less motivated”, and that the way a sign is perceived depends greatly on the viewer’s background and the general consensus about the sign (2006:14). Applying that logic, it appears that the colour blue, in the context of robot movies, is the signifier for the signified: artificial and technological. Since saturated blue occurs comparatively rarely in nature, it is often associated with the technological and inorganic. VIKI in I, Robot is a matrix of blue numbers and light, and blue lighting is used in many of the scenes. In Prometheus, David’s eyes are blue, like those of Sonny in I, Robot and David in A.I. Yet it is also the eyes in these movies that give the robots character. In Prometheus, David’s eyes are cold and void of emotion, even when his mouth is smiling politely. In A.I., David’s eyes follow the movements of the grown-ups at the dinner table, imitating them in the manner a small child would, thus endearing himself to them, and in I, Robot, Sonny learns the meaning of a ‘wink’ and uses this facial cue at a crucial moment to signal trust. Tscherepanow et al. write: “Especially in the context of social robotics, it has been shown that an anthropomorphic robot with the capability of imitating an interactant was perceived as responding more adequately to a specific social interaction. Furthermore, after the experiment, the anthropomorphic robot was evaluated as being more capable of recognising the emotional aspects of the social interaction content. Regarding the non-verbal displays it was evaluated as being closer to a human being than in a neutral condition where the robot did not mimic the subjects. In conclusion, the participants of the experiment actually expected the robot to show motor mimicry displays equivalent to those a human counterpart would show in the same social interactions” (2009:1).

            Applying semiotics to the theme of movies, one can speculate about the use of characters within the movies and the choices that producers and directors make regarding the theme. Though the director, for the most part, makes a conscious choice regarding the theme, it may at times be unconscious, based on a norm so common as to not even be considered a choice (Gillespie and Toynbee, 2006:26). Many of these robot-themed movies touch upon the Biblical aspect of a father–son relationship, between a male creator and a male robot. In addition to the movies referenced above, other films depict these relationships: in Star Wars, Anakin creates C3PO, and in Blade Runner, Tyrell creates Roy Batty. In fact, the father–son relationship is often mentioned directly in these movies; Sonny refers to his creator as his father, David in A.I. is modelled after his creator’s own son, and in Prometheus, Weyland says that David “…is the closest thing to a son I will ever have”. As in Prometheus and A.I., God creating mankind and mankind creating robots is a theme of constant comparison and discussion. It is then reasonable to conclude that if God created Adam in his own image, humans have created robots in their own image in a similar manner. What they require is an imitation of themselves, a mirror-image of their own greatness. As Arthur Asa Berger quotes the sociologist William Fry and his study of mirrors in his book on the meaning of objects: “… Even the most accurate or most artistically rendered oil painting is a stiff, lumpish object in comparison to the flowing, mercurial range of self-contemplation afforded by mirrors. Mirrors instrument images of self, make them possible. Further, they enhance self-images and body-images, and thus contribute to the Ego development chain… How can one have a self-image without an image of self?” (2009:115).

Discussion

            The greatest opposition to the creation of robots, humanoid or otherwise, is most often raised by those concerned that robots are even now replacing humans in the workplace. Many forums and videos can be found, among others the blog Activist Post, which depict robots doing the jobs of factory workers, chefs, carers in homes for the elderly, and so forth. The book The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era by Jeremy Rifkin (1994) discusses the continuing replacement of humans within the workforce by machines, technology and robots. Further blogs can be found, such as Robots Are Evil, where conspiracy theories are discussed and articles about advances in the field of robotics are posted.

            The opposition, however, is far outnumbered by the many robotics scientists and fans who look at the future of humanoid robots with hope. Here robots are seen as a way to better understand humanity, on a biological and social level. The spokespeople, videos and forums are too numerous to mention here, but one prominent ‘pro-robot’ voice is Professor Hiroshi Ishiguro, who has created an android copy of himself, considered to be one of the most sophisticated humanoid robots to date. In the YouTube video Roboticist explores nature of humanity he talks about the creation of the robot, saying among other things: “Researching and developing Geminoids is the best way to answer the philosophical question of what it is to be human. This is my main purpose… These robots could be used as guides at museums, or for other jobs like that. In my case, I can use it for my lectures, while I’m away overseas.” Here, the upside of humanoid robots is discussed: they can perform tasks for us, and act as substitutes when we cannot be present ourselves.

            In the end, it would appear that the same things that make robots useful to us are also what make people afraid of them. We believe we can learn more about ourselves through them, yet we fear that they will replace us over time, and that with their creation we will render ourselves useless.

Conclusion

            In this paper, the use of anthropomorphic technology within modern-day movies has been discussed. Through analysing the three movies Prometheus, I, Robot and A.I. – their differences, the way each director portrays the main robot character, and that character’s relationship to the humans in the movie – a pattern emerges, giving a deeper understanding of the intent behind the way robots are dramatised. Certain similarities have been found between the movies: the creation of man versus the creation of robots; a male creator making a male robot as a substitute for a son; and the use of blue as a signifier for the signifieds ‘logic’ and ‘artificial intelligence’. The social instincts of humans strongly affect their behaviour when interacting with robots that appear human, while their emotional brains leave them struggling with the fear of the robot’s logical way of thinking. The robot discourse currently spreading across the internet shows a clear divide between those who oppose further development of robots and the scientists and fans who believe the development of humanoid robots will have positive long-term consequences. Moving toward a future where robots such as those depicted in these movies may soon be a reality, mankind is at once fascinated by and fearful of humanoid robots, and the way they are depicted within movies corresponds closely with our hopes and fears for this anthropomorphised technology and its advances.

References

Printed sources and literature

Berger, Arthur Asa. (2009). What Objects Mean: An Introduction to Material Culture. U.S.A: Left Coast Press, Inc.

Rifkin, Jeremy. (1994). The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era. Tarcher.

Thibault, Paul J. (1997). Re-reading Saussure: The dynamics of signs in social life. London: Routledge.

Movies

Prometheus, director: Ridley Scott, Twentieth Century Fox, 2012.

I, Robot, director: Alex Proyas, Twentieth Century Fox, 2004.

Artificial Intelligence: A.I., director: Steven Spielberg, Warner Bros, 2001.

Internet

Breazeal, Cynthia L. (2010). Designing Sociable Robots. The MIT Press.

Tscherepanow, Marko, Hillebrand, Matthias, Hegel, Frank, Wrede, Britta & Kummert, Franz. (2009). Direct Imitation of Human Facial Expressions by a User-Interface Robot. 9th IEEE-RAS International Conference on Humanoid Robots, December 7–10, 2009, Paris, France.

Wachsmuth, Ipke, Lenzen, Manuela & Knoblich, Günther. (2008). Embodied Communication in Humans and Machines. Published to Oxford Scholarship Online: March 2012.

Robots Are Evil, http://robotsareevil.com/ , 07/2008-05/2010

Activist Post: 5 Ways Robots Are Outsourcing Humans in the Workforce, Nicolas West, http://www.activistpost.com/2012/05/5-ways-robots-are-outsourcing-humans-in.html 06/05/2012.

Unprinted sources

Broadcast. Reporter: Mark Willacy. “Roboticist explores nature of humanity”. Australian Broadcasting Corporation, Lateline. Broadcast: 29/04/2011. On youtube.com, http://www.youtube.com/watch?v=hKAkLUd1jFg&feature=related 29/04/2011.
