This is way too long for a blog post, but I wanted to put it up anyway. Neuromancer by William Gibson is widely considered the first cyberpunk novel and has clear links to Blade Runner and The Matrix. I will review it if I read it again. "Super-Toys Last All Summer Long" is the short story that the movie A.I. is based on.
The AI perspectives presented by Brian Aldiss in “Super-Toys Last All Summer Long” and William Gibson in Neuromancer are equally realistic, in that neither author claims his AI is capable of true consciousness or freedom of thought. Agnosticism about whether we can ever know what defines and enables consciousness is a distinct theme of both works. David of “Super-Toys” and Neuromancer’s Dixie Flatline have outlooks and desires that follow from what their makers intended for them. David fulfills his programmed role of portraying a human child, and his desire to be real stems from his set function of emulating real humans. Dixie’s function is to emulate the personality of the man he was copied from, but there are clearly pre-set limitations on his simulation, programmed in intentionally: he is made to desire his own destruction, and he has no permanent memory. Both Aldiss and Gibson hint that the AI of their fictional worlds is not complete, free-thinking intelligence, and both convey uncertainty as to whether humanity and technology can ever truly be fused. This can be seen by comparing the capabilities and functions of David and Dixie to those of Gibson’s Wintermute.
It is difficult to speculate as to whether humanity will ever understand the nature of consciousness, or whether a computer can actually think. This creates a difficulty for science fiction: to keep a semblance of plausibility, authors must comment on AI in an open-ended fashion that leaves room for it to be free-thinking or not. Even so, by examining the arguments over AI and comparing like characters, it is possible to conclude that neither David nor Dixie is a free-thinking AI. Aldiss and Gibson use these characters to dramatize the uncertainty over the nature of consciousness and the possibility of conscious AI.
Science fiction, and cyberpunk in particular, deals with this issue extensively: contemporary arguments over whether a machine can think are a common element of the genre, and authors apply them to their plots to build realistic scenarios and make a point. John Searle’s Chinese Room argument can be applied to the characters of Neuromancer and “Super-Toys”. Searle imagines a person who, by following written rules for manipulating Chinese symbols, produces fluent Chinese answers without understanding a word of Chinese; the argument is that a computer running a program is in the same position, so behavior alone cannot establish consciousness as we understand it. The function and programming of computers demonstrates that they operate on an input/output basis, with no free thought, understanding, or conscious reflection required. However human characters such as David and Wintermute appear, there is no reason to assume they have human-like consciousness. Characters such as Aldiss’s serving bot and Dixie Flatline serve as comparisons where it is possible to tell that the AI is not complete, human-like consciousness.
Even the characters that demonstrate human-like freedom of choice can be considered incomplete AI when the nature of programming is examined. Computers operate by a set of rules that dictate specific outputs for given inputs. The speech of these characters reflects their intended purpose, so their interaction with other characters seems human, but there is no reason to suggest that they are actually choosing what to say or have any self-awareness. No matter how complex computers become, a computer that cannot learn must carry pre-programmed responses for expected inputs in order to hold a conversation with a human being, and a computer that can learn is simply executing a programmed function for accumulating new responses into its repertoire. No consciousness is required for a computer to communicate, play a game of chess, or run a program, and no consciousness is required for David to say that he wants to know whether he is real, if that is what his programmer intended. Considered alone, David’s case could be argued either way, but compared to the other AI in the story, the serving bot, it becomes clear that AI in Aldiss’s world does not have consciousness sufficient to be considered human. However well a robot might mimic humanity, there would be no way to know whether it was actually self-aware.
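To make the input/output point concrete, here is a minimal Python sketch of such a non-learning conversational program (all prompts and replies are invented for illustration, including a line modeled on the serving bot's response quoted below): every reply is a table lookup, and nothing in the code understands anything.

```python
# A minimal lookup-table "conversationalist": every reply is a canned
# response keyed on an expected input. Nothing here understands anything;
# it is pure input/output mapping, in the spirit of Searle's Chinese Room.
# (All prompts and replies are invented for illustration.)

RESPONSES = {
    "hello": "Hello. It is a pleasure to see you.",
    "how are you?": "I am functioning within normal parameters.",
    "the roses are guaranteed": "It is always advisable to purchase goods "
                                "with guarantees, even if they cost slightly more.",
}

FALLBACK = "I am afraid I do not understand."

def reply(utterance: str) -> str:
    """Return the pre-programmed response for an expected input,
    or a stock fallback for anything unanticipated."""
    return RESPONSES.get(utterance.strip().lower().rstrip("."), FALLBACK)

print(reply("Hello"))                       # canned greeting
print(reply("The roses are guaranteed."))   # the most "appropriate" canned line
print(reply("Do you think you are real?"))  # falls through to FALLBACK
```

The program would hold up its end of a short exchange, yet nothing in it chooses, understands, or is aware; adding a learning step would only automate the growth of the lookup table.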
Brian Aldiss provides two examples of AI in his short story. David is an advanced android who passes Alan Turing’s test in that it is impossible to distinguish his reactions from those of a real human being. Aldiss shows that his AI does not really understand human speech by having the serving bot respond inappropriately in conversation. The serving bot is a less advanced android and does not pass the Turing test, because it is clear from its responses that it is not human. Its conversation with Mr. Swinton is awkward, as when it responds to his comment that the roses are guaranteed to remain perfect: “It is always advisable to purchase goods with guarantees, even if they cost slightly more” (Aldiss, 9). The robot has clearly selected the most appropriate response available in its programming; it did not have the freedom of thought to respond the way a human would. The serving bot is a foil that shows how much more human-like David is, yet David is equally incapable of consciousness because he operates on the same input/output basis. It can be objected that David and the serving bot are not comparable because one is far more advanced and may have crossed some threshold of AI, but nothing suggests that a more complex system implies consciousness and freedom of thought. Although David’s responses are harder to distinguish from those of a true human being, this is the result of superior programming, not human-like consciousness. David expresses a desire to know whether or not he is real, which mirrors our own lack of understanding on the subject; but wanting to know whether he is real does not mean he has freedom of thought if his makers programmed him to imitate humanity. His developers created a computer whose function is to be a child and a companion to the family, and he meets that expectation exactly as intended: by being indistinguishable from a human child. Aldiss leaves the topic open-ended, reflecting the uncertainty of whether we can create synthetic consciousness.
Gibson creates a similar foil in the distinguishable differences between Dixie and Wintermute. Both differ from David in that they are not physical androids but computer programs. Dixie Flatline mimics the consciousness of McCoy Pauley, who is dead, yet he does not pass the Turing test. He reproduces the late Pauley’s speech patterns, personality, and skills, but he is distinguishable from a person because he retains no memories after he has been shut off. This does not resemble amnesia: Case would have been able to deduce that the memory lapse occurred every time the computer was shut down, even had he not known that the real Pauley was dead. Dixie also produces the same output both times Case introduces himself: “Miami, joeboy, quick study” (Gibson, 78). This is easily distinguishable from the highly variable nature of human interaction. The construct does not have the freedom to choose what he will say to Case; he picks the most appropriate response from a pre-set database. This shows that consciousness and freedom of thought require more than prescribed responses in interaction; they require true decision-making capability. By making Dixie distinguishable from the human characters, Gibson comments on the nature of consciousness.
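Dixie’s behavior is exactly what a deterministic, stateless program produces: the identical line at each first contact, and no memory across power cycles. A rough Python sketch of the idea (class and method names are invented; only the quoted greeting comes from the novel):

```python
# A sketch of a ROM construct: all session state lives in volatile memory,
# so every power cycle resets it, and the same input after each boot yields
# the same output. A person with continuous memory would respond differently
# the second time. (Names are invented for illustration.)

class ROMConstruct:
    GREETING = "Miami, joeboy, quick study"  # fixed first-contact line

    def __init__(self):
        self.session_memory = []  # volatile: lost when the console shuts down

    def hear(self, utterance: str) -> str:
        first_contact = not self.session_memory
        self.session_memory.append(utterance)
        # First contact in any session always triggers the same canned line.
        return self.GREETING if first_contact else "..."

dixie = ROMConstruct()
print(dixie.hear("I'm Case."))  # -> "Miami, joeboy, quick study"

dixie = ROMConstruct()          # shut off and rebooted: memory wiped
print(dixie.hear("I'm Case."))  # -> the identical greeting again
```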
A comparison between Dixie and Wintermute further exemplifies this point. The two have antithetical desires, but those desires were programmed into them by their creators and are not the result of conscious decision making. Dixie’s creators had the foresight to limit his memory, his desires, and his capabilities: he cannot learn, because he cannot remember new information once the console has been shut down. He also wishes for his own erasure. This is not a normal human desire, and it can be read as programmed into him, particularly once it is revealed that Wintermute’s desire for transcendence was programmed into it. Dixie and Wintermute are compared directly when Case asks whether Dixie is truly sentient. Dixie responds, “Well it feels like I am, kid, but I’m really just a bunch of ROM… I ain’t likely to write you a poem, if you follow me. Your AI, it just might. But it ain’t no way human” (Gibson, 131). Dixie suggests that Wintermute is closer to consciousness in that it could create something human-like, but that it is not fundamentally different from himself.
The problem of consciousness is clearly central to Gibson’s and Aldiss’s works, because both authors provide characters that can be argued to be free-thinking or not. When David is compared to Wintermute, however, it can be seen that neither the sophistication of a program nor the similarity of its desires to human ones necessitates consciousness in AI. David appears human where the serving bot does not, making him the more advanced program, but Wintermute is the more comparable character in its superiority and its desire for self-improvement and freedom: two very human desires. Wintermute is by all accounts a transcendently advanced program that appears, for much of Neuromancer, to think freely and shape events by its own design, yet its desires and capabilities are those programmed into it. Near the end it is revealed that the Tessier-Ashpool family had in fact intended the program to merge with Neuromancer when it was created. It was therefore not Wintermute’s desire to escape the limits put upon it that chose its course, but its programming. Even the ability to perceive and to learn does not necessitate consciousness the way humans experience it.