When the humans in a universe (or the writers who created the universe) don't consider this trope's question or believe the answer is "no", then any AIs will end up being second-class citizens or sidekicks at best, and disposable slaves at worst. A world where the answer is "yes", on the other hand, may include Ridiculously Human Robots or Mechanical Lifeforms. If the humans and the AIs disagree about the answer to the question, a rebellion or Robot War may be in the cards. And if a human who does disagree in such a world changes their mind upon seeing the "death" of a Robot Buddy or other sentient robotic or artificial form, then Death Means Humanity is at play. Then, of course, there are those who think it's Just a Machine (or, in the case of an ambiguously Animate Inanimate Object, that it's just a Companion Cube). While watching such a show you may end up wondering What Measure Is a Non-Human?

However, this trope is about when the intent is to make the viewers ponder these questions. In order to create tension, such an attempt is usually set in a world where AIs have just been newly created or have already been relegated to sub-human status. One or more AIs will display human-like attributes, and frequently one or more humans may be portrayed as amoral and overly obedient in order to further blur the line between "human" and "non-human".

In the vast majority of cases where the question is asked, the viewer will either be told outright at the end that the answer is "yes," or it will at least be strongly implied that that is the case, perhaps because getting the viewer to sympathize enough with the AIs to consider the question, only to then tell them that the AIs are just soulless machines after all, would be considered a Downer Ending. Of course, this doesn't prevent quite a few works from doing just that, seemingly for the sake of a downer ending.

Whether the answer to the trope's question is yes or no will depend largely on how much the viewer is expected to sympathize with the robot. If it's a Mecha-Mook or Mechanical Monster - whose only real purpose in the story is to give the heroes something literally mindless to fight and destroy without having to feel guilty about it - or even a full-blown villain in its own right (the question of the Terminators' sentience never came up until we met a friendly one), the assumption is usually that they are Just Machines and need to be stopped, just as you'd need to stop or fix a runaway car or a sparking electrical cable. Such robots will usually be portrayed as so obviously lacking self-awareness or personality that there may be no perceived need to even ask the question.

If the robot is malevolent but is also judged to have true self-awareness, often the next question is whether it can be fixed to become good - which then raises further ethical questions about whether it's right to go mucking about with the basic essence of someone's mind, even if that someone is a machine. After all, if you wanted to give a human villain a chance to redeem himself, you'd do it by talking to him, not by subjecting him to brain surgery. Attempts to talk down a mad computer are likely to lead to a literal blue screen of death (which may save the heroes from having to worry about the ethics of "fixing" him, because now he needs to be fixed anyway).

Another reason for a narrative's answer to be "yes" is that it can be difficult to write a character as an above-mentioned "philosophical zombie": a hypothetical thing that can act very convincingly like a conscious being but is not actually conscious. The parser program ELIZA surprised people by doing a decent job of subverting the Turing Test all the way back in the '60s, despite being even less complex than a mid-'80s Infocom game, and modern chatbots are getting very good at cheating the test while still apparently being completely unaware of what they're talking about.