
1. Aliens are very sensitive about human feelings. They even understand maternal affection and your girlfriend's love.
2. Contrary to international research results (that they are greenish in color), aliens are actually blue.
3. They are not sticky as shown in Hollywood. They are very cute, fresh-looking, neat and clean. You can even kiss them without worrying about any chemical on their skin.
4. Even though they are believed to be very powerful and advanced in science (see their spaceships), they are still scared of dogs and elephants.
5. They do not eat anything; they live only on solar energy. They get charged from sunlight or dhoop (perhaps like our calculators).
6. Their powers are rivaled by no less than God himself. They can cure a mentally handicapped child (whom even US doctors have given up on) with just a tap on his head.
7. Forget contact lenses, contact aliens. They can correct your vision.
8. They can help humans fly (provided it is not cloudy outside and they have enough dhoop). The flying helps in many ways, including winning basketball games.
9. Given a chance, they could earn well with their magic shows. After all, they are very good at working magic with clouds, your shadows, and so on.
10. They can understand and speak Hindi and English.
11. Did you know ISRO (the Indian Space Research Organisation) actually works on alien research and not on space research?
12. If you have a green monochrome monitor and four obsolete 5-1/4" floppy drives, you can build a device with which you can communicate with aliens.
13. Aliens will never come to your city until they first ensure a total power blackout.
14. They are generous enough to restore power to your city soon after they take off.
15. If you produce a particular sound (by whistle, instrument, or whatever), you can call them to your town as many times as you want. They are free and are actually looking for such invitations to visit Earth.

_________________________________________________________________

In the far distant past, around 240 million years ago, an intelligent technological species of theropod evolved on Earth. This species developed space travel and established colonies in nearby star systems. The asteroid impact that apparently occurred 65 million years ago devastated this civilization as well as nearly destroying the entire ecosystem. With their home civilization on Earth devastated, the colonies were left to evolve on their own. It is likely that the Earth would have been monitored, and that as the ecosystem recovered these reptoids would have resumed operations on Earth. Given the distances and effort involved in moving resources back to Earth, there would likely have been a need to augment the local work force with a native labor force that could accomplish the more menial and laborious tasks. A simple means of acquiring this labor would be to utilize local terrestrial stock and genetically alter it as necessary to accomplish these goals. This idea is not without precedent, since mankind itself has done this with horses and other domesticated livestock. It is likely that this is how modern man was developed.

You may have noticed (assuming that you have read my robot stories and novels) that I have not had occasion to discuss the interaction of robots and aliens. In fact, at no point anywhere in my writing has any robot met any alien. In very few of my writings have human beings met aliens, in fact. You may wonder why that is so, and you might suspect that the answer would be, "I don't know. That's just the way I write stories, I guess." But if that is what you suspect, you are wrong. I will be glad to explain just why things are as they are.

The time is 1940. In those days, it was common to describe Galactic Federations in which there were many, many planets, each with its own form of intelligent life. E. E. (Doc) Smith had started the fashion, and John W. Campbell had carried it on. There was, however, a catch. Smith and Campbell, though wonderful people, were of northwest European extraction, and they took it for granted that northwest Europeans and their descendants were the evolutionary crown and peak. Neither one was a racist in any evil sense, you understand. Both were as kind and as good as gold to everyone, but they knew they belonged to the racial aristocracy.

Well, then, when they wrote of Galactic Federations, Earthmen were the northwest Europeans of the Galaxy. There were lots of different intelligences in Smith's Galaxy, but the leader was Kimball Kinnison, an Earthman (of northwest European extraction, I'm sure). There were lots of different intelligences in Campbell's Galaxy, but the leaders were Arcot, Wade, and Morey, who were Earthmen (of northwest European extraction, I'm sure).

Well, in 1940, I wrote a story called "Homo Sol," which appeared in the September 1940 issue of Astounding Science Fiction. I, too, had a Galactic Federation composed of innumerable different intelligences, but I had no brief for northwest Europeans. I was of East European extraction myself, and my kind was being trampled into oblivion by a bunch of northwest Europeans. I was therefore not intent on making Earthmen superior. The hero of the story was from Rigel, and Earthmen were definitely a bunch of second-raters.

Well, Campbell wouldn't allow it. Earthmen had to be superior to all others, no matter what. He forced me to make some changes and then made some himself, and I was frustrated. On the one hand, I wanted to write my stories without interference; on the other hand, I wanted to sell to Campbell. What to do?

I wrote a sequel to "Homo Sol," a story called "The Imaginary," in which only the aliens appeared. No Earthmen. Campbell rejected it; it appeared in the November 1942 issue of Super Science Stories. Then inspiration struck. If I wrote human/alien stories, Campbell would not let me be. If I wrote alien-only stories, Campbell would reject them. So why not write human-only stories? I did. When I got around to making another serious attempt at dealing with a Galactic society, I made it an all-human Galaxy, and Campbell had no objections at all. Mine was the first such Galaxy in science fiction history, as far as I know, and it proved phenomenally successful, for I wrote my Foundation (and related) novels on that basis. The first such story was "Foundation" itself, which appeared in the May 1942 Astounding Science Fiction.

Meanwhile, it had also occurred to me that I could write robot stories for Campbell. I didn't mind having Earthmen superior to robots, at least just at first. The first robot story that Campbell took was "Reason," which appeared in the April 1941 Astounding Science Fiction.
Those stories, too, proved very popular, and presuming upon their popularity, I gradually made my robots better and wiser and more decent than human beings, and Campbell continued to take them. This continued even after Campbell's death, and now I can't think of a recent robot story in which my robot isn't far better than the human beings he must deal with. I think of "Bicentennial Man," "Robot Dreams," "Too Bad" and, most of all, I think of R. Daneel and R. Giskard in my robot novels.

But the decision I made in the heat of World War II and in my resentment of Campbell's assumptions has stayed with me. My Galaxy is still all-human, and my robots still meet only humans. This doesn't mean that (always assuming I live long enough) it's not possible I may violate this habit of mine in the future. The ending of my novel Foundation and Earth makes it conceivable that in the sequel I may introduce aliens and that R. Daneel will have to deal with them. That's not a promise, because actually I haven't the faintest idea of what's going to happen in the sequel, but it is at least conceivable that aliens may intrude on my close-knit human societies. (Naturally, I repel, with contempt, any suggestion that I don't introduce aliens into my stories because I can't handle them. In fact, my chief reason for writing my novel The Gods Themselves was to prove, to anyone who felt he needed the proof, that I could, too, handle aliens. No one can doubt that I proved it, but I must admit that even in The Gods Themselves, the aliens and the human beings didn't actually meet face-to-face.)

But let's move on. Suppose that one of my robots did encounter an alien intelligence. What would happen? Problems of this sort have occurred to me now and then, but I never felt moved to make one the basis of a story. Consider: how would a robot define a human being in the light of the Three Laws?

The First Law, it seems to me, offers no difficulty: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." Fine; there need be no caviling about the kind of human being. It wouldn't matter whether they were male or female, short or tall, old or young, wise or foolish. Anything that can define a human being biologically will suffice.

The Second Law is a different matter altogether: "A robot must obey orders given it by a human being except where that would conflict with the First Law." That has always made me uneasy. Suppose a robot on board ship is given an order by someone who knows nothing about ships, and that order would put the ship and everyone on board into danger. Is the robot obliged to obey? Of course not. Obedience would conflict with the First Law, since human beings would be put into danger.

That assumes, however, that the robot knows everything about ships and can tell that the order is a dangerous one. Suppose, however, that the robot is not an expert on ships, but is experienced only in, let us say, automobile manufacture. He happens to be on board ship and is given an order by some landlubber, and he doesn't know whether the order is safe or not. It seems to me that he ought to respond: "Sir, since you have no knowledge as to the proper handling of ships, it would not be safe for me to obey any order you may give me involving such handling." Because of that, I have often wondered if the Second Law ought to read, "A robot must obey orders given it by qualified human beings..."

But then I would have to imagine that robots are equipped with definitions of what would make humans qualified under different situations and with different orders. In fact, what if a landlubber robot on board ship is given orders by someone concerning whose qualifications the robot is totally ignorant? Must he answer, "Sir, I do not know whether you are a qualified human being with respect to this order. If you can satisfy me that you are qualified to give me an order of this sort, I will obey it"? Then, too, what if the robot is faced by a child of ten, indisputably human as far as the First Law is concerned? Must the robot obey without question the orders of such a child, or the orders of a moron, or the orders of a man lost in the quagmire of emotion and beside himself? The problem of when to obey and when not to obey is so complicated and devilishly uncertain that I have rarely subjected my robots to these equivocal situations.

And that brings me to the matter of aliens. The physiological difference between aliens and ourselves matters to us, but then tiny physiological or even cultural differences between one human being and another also matter. To Smith and Campbell, ancestry obviously mattered; to others skin color matters, or gender, or eye shape, or religion, or language or, for goodness' sake, even hairstyle.

It seems to me that to decent human beings, none of these superficialities ought to matter. The Declaration of Independence states that "all men are created equal." Campbell, of course, argued with me many times that all men are manifestly not equal, and I steadily argued that they were all equal before the law. If a law was passed that stealing was illegal, then no man could steal. One couldn't say, "Well, if you went to Harvard and were a seventh-generation American you can steal up to one hundred thousand dollars; if you're an immigrant from the British Isles, you can steal up to one hundred dollars; but if you're of Polish birth, you can't steal at all." Even Campbell would admit that much (except that his technique was to change the subject). And, of course, when we say that all men are created equal, we are using "men" in the generic sense, including both sexes and all ages, subject to the qualification that a person must be mentally equipped to understand the difference between right and wrong.

In any case, it seems to me that if we broaden our perspective to consider non-human intelligent beings, then we must dismiss, as irrelevant, physiological and biochemical differences and ask only what the status of intelligence might be. In short, a robot must apply the Laws of Robotics to any intelligent biological being, whether human or not.

Naturally, this is bound to create difficulties. It is one thing to design robots to deal with a specific non-human intelligence, and specialize in it, so to speak. It is quite another to have a robot encounter an intelligent species it has never met before. After all, different species of living things may be intelligent to different extents, or in different directions, or subject to different modifications. We can easily imagine two intelligences with two utterly different systems of morals or two utterly different systems of senses. Must a robot who is faced with a strange intelligence evaluate it only in terms of the intelligence for which he is programmed? (To put it in simpler terms, what if a robot, carefully trained to understand and speak French, encounters someone who can only understand and speak Farsi?) Or suppose a robot must deal with individuals of two widely different species, each manifestly intelligent. Even if he understands both sets of languages, must he be forced to decide which of the two is the more intelligent before he can decide what to do in the face of conflicting orders, or which set of moral imperatives is the worthier? Someday this may be something I will have to take up in a story but, if so, it will give me a lot of trouble. Meanwhile, the whole point of the Robot City volumes is that young writers have the opportunity to take up the problems I have so far ducked. I'm delighted when they do. It gives them excellent practice and may teach me a few things, too.
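The worry about "qualified human beings" is, at bottom, a decision procedure, so a small sketch may make the problem concrete. It is purely illustrative: the Robot and Order classes, the field names, and the refusal messages are invented for this sketch and come from no published formulation of the Three Laws.

```python
# Hypothetical sketch of the order-evaluation problem described above: a robot
# weighing the Second Law against the First, and against the order-giver's
# qualifications. All names and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Order:
    text: str
    domain: str            # e.g. "ship handling", "automobile manufacture"
    giver_is_human: bool
    giver_qualified: bool  # is the giver known to be qualified in this domain?

@dataclass
class Robot:
    expertise: set  # domains the robot itself understands

    def evaluate(self, order, endangers_humans=None):
        # First Law dominates: a known-dangerous order is always refused.
        if endangers_humans is True:
            return "refuse: obeying would allow humans to come to harm"
        # Outside its own expertise the robot cannot judge safety itself, so
        # (per the amended Second Law floated above) it falls back on the
        # giver's qualifications.
        if order.domain not in self.expertise and not order.giver_qualified:
            return ("defer: cannot verify the order is safe, and the giver's "
                    "qualifications are unknown")
        if not order.giver_is_human:
            return "refuse: the Second Law, as written, binds only to human orders"
        return "obey"

# Example: a landlubber gives a ship-handling order to an automobile-plant robot.
robot = Robot(expertise={"automobile manufacture"})
order = Order("Full speed toward the reef", "ship handling",
              giver_is_human=True, giver_qualified=False)
print(robot.evaluate(order))
```

The sketch only restates the difficulty in code: once the robot's own expertise runs out, the Second Law forces it to reason about the order-giver's qualifications, which is exactly the complication the essay says it prefers to avoid.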

Aliens Report
By: Lily Wong, 6M4

Some people believe that aliens live on Mars. There is no life on Mars, so how can they live there? If aliens were real and did live on Mars and proof was shown, maybe (not sure) scientists could figure out how to live on Mars just like the aliens do. There are some girls in the world that might be creeped out about aliens and think it's really stupid and that robots are better. Boys love aliens (well, some boys) and think they're cool, but girls don't agree. Robots can be useful, but you have to buy batteries, and one of the three common kinds is very cheap but useless. Aliens are cool too; they're not harmful or anything like that. Alkaline batteries are the most common, easiest to get, and cheapest too. However, they are useless; don't buy them. They have low power capacities, are heavy, have trouble supplying large amounts of current in short time periods, and get expensive to constantly replace. Robots are made from metal and rust when wet; aliens won't be bothered by that.

Ahhh, aboard the Gato Verde we seem to have plenty of free time to follow our follies, like experimenting with watercolors and debating/detesting the role of robots in our lives, now a daily occurrence and the most recent source of my passionate confusion. If you're wondering what I mean by robots, look no further, because if you are reading this, then one is literally staring you in the face right now. It has always seemed weird to me that we try to learn more about the natural world by bottling it into numbers, spreadsheets, graphs, and whatever else robots like to eat. The results are in robot-speak, trying to describe the spherical realm with squared-off logic. It's easy to be an advocate for robots, and it is this propensity for ease that has led us to welcome them into our homes and lives. Despite this, I invite you to think critically about our ever-increasing robot dependency and the balance that could exist between our experiences in the natural world and our obsession with quantification.

Is it enough to thoroughly experience an environment and form a relationship with it in order to foster an understanding of it? Could controlling and testing aspects of an environment reveal, in numbers, more valuable insights than our senses and intuition as part of the natural world provide? I feel that natural history splits the difference here between sheer experience and logical interpretation of the natural world. Unfortunately, the way we have engineered our institutions of learning has brought us to a point that places little or no value on natural historians, in favor of research. Science is losing the humanity with which it began as it veers from physical experience in the natural world to number crunching and technological interpretations. It is this dominance of fact over truth that I have found incredibly demoralizing as I continue exploring on my educational path. Could there still be a way to do science that bridges the gap between knowledge and understanding? Regardless, have no fear. I'm sure when J-Pod returns there will be little to no free time aboard the Gato Verde for painting, debates, and blog-writing, so expect the next ones to be short!

Bob Park will be remembered as a persistent human spaceflight critic, a leader of the anti-human-spaceflight movement. But he could also help solve one of the great space mysteries of all time: do intelligent aliens exist, and if so, where are they?

First, some background information. The Search for Extraterrestrial Intelligence (SETI) is challenged by an idea called the Fermi Paradox. Enrico Fermi thought that if intelligent alien civilizations existed, they would inevitably colonize the galaxy. Even travelling slower than the speed of light, they would still colonize the galaxy in a relatively short time. And if our galaxy had been colonized, we would know all about it. Fermi concluded that intelligent aliens do not exist.

The Fermi Paradox drew a wide range of speculative hypotheses. Maybe intelligent aliens did exist, but they became extinct before they reached us. Maybe they have isolated us for observation and study (the zoo hypothesis). Maybe they are waiting until our civilization reaches a certain developmental stage (the sentinel hypothesis). Or could there be another explanation?

Since the only intelligent civilization that we know of is our own, our experiences may provide insights into how intelligent aliens behave. We regularly have a debate over what to send to space. It is known as the humans versus robots debate, or the manned versus unmanned debate, or, with a more accurate description, it could be called the both-humans-and-robots versus robots-only debate. Now, if an intelligent civilization

such as ours is having this debate, then it is possible that intelligent alien civilizations are doing the same thing. Of course, they wouldn't call it a humans versus robots debate. From their perspective it would be an our-species versus robots debate, and from our perspective it would be an aliens versus robots debate. And like our intellectuals, their intellectuals may conclude that alien spaceflight is obsolete, and that a robots-only space policy would be sensible, logical, and right. They would dismiss colonization as a hopeless fantasy.

If this is true, then aliens would be very difficult to detect. A robot probe is much harder to detect than worlds colonized by aliens. The communication between a robot probe and its homeworld would be a much weaker signal than the communication between a homeworld and its colonies. Aliens that resolved to stay on their home planet would leave a very small footprint on the galaxy, one easily overlooked.

The Park hypothesis states that intelligent alien civilizations do exist, but they have not colonized the galaxy because they don't want to. It neatly reconciles the Drake Equation, which indicates that intelligent aliens are likely to exist, with the Fermi Paradox: no colonization means we don't see them.

A criticism of the Park hypothesis is that while some alien civilizations might act in accordance with Bob Park's principles, it would be unrealistic to expect all of them to comply. The analogy is that some people support Bob Park, but others disagree with him, sometimes quite strongly. This is a valid criticism, but the counter-argument is that unanimity is not required. If a dominant civilization, or group of civilizations, bans colonization throughout the galaxy, then it will not take place, no matter how much other civilizations protest. Since colonization is difficult to accomplish anyway, the addition of legal and political impediments will make it near impossible.

If there are Parkist (or Park-like) civilizations in the galaxy, what should we expect of them? One characteristic of one-planet civilizations is their elevated rate of extinction. Every civilized planet in the galaxy is susceptible to planetary disasters, such as collisions with other bodies and large-scale nuclear or biological war. Two-planet civilizations have a reduced rate of extinction, and three-planet civilizations have further reductions. But since Parkist civilizations have decided to put all their eggs in one basket, their rate of extinction is significantly higher. This translates to a low value for L (civilization lifetime) in the Drake Equation. If we detect signs of a Parkist civilization, unfortunately they may not be around to see us.

A galaxy that is predominantly inhabited by Parkist civilizations could be called a Parkist galaxy. Such a galaxy could be identified by its unique appearance: a few scattered single

planets of intelligent life, amid vast areas either devoid of life or occupied by simple organisms only. In contrast, a galaxy with Fermi's alien civilizations would be filled with intelligent life.

I have previously mentioned that we have the humans versus robots debate, and that aliens may have its equivalent. Now that we have the Park hypothesis, it is also possible that aliens have an equivalent of that too, perhaps named after a famous anti-alien-spaceflight activist of their world. At this very moment, aliens could be thinking about a civilization like ours, wondering whether we exist, where we are in space, and why Parkism is so popular here.
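Since the argument leans on the Drake Equation and on the lifetime term L, a quick back-of-the-envelope calculation may help show how strongly a short-lived, single-planet ("Parkist") value of L suppresses the expected number of detectable civilizations. This is only a sketch: every parameter value in it is an assumed placeholder, not a figure from the text above or from any survey.

```python
# Illustrative Drake Equation arithmetic. Every parameter value below is an
# assumption chosen only to show the effect of L; none comes from the text.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L, the expected number of communicating civilizations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

common = dict(R_star=1.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.1, f_c=0.1)

# A long-lived, colonizing civilization vs. a short-lived, single-planet
# "Parkist" one that is more exposed to planetary disasters.
print(drake(**common, L=1_000_000))  # -> 5000.0
print(drake(**common, L=1_000))      # -> 5.0
```

With the same placeholder values for everything else, cutting L by a factor of a thousand cuts the expected count by the same factor, which is the sense in which a Parkist galaxy would look almost empty.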

What is a person? The English term "person" is ambiguous. We often use it as a synonym for "human being." But surely that is not what we intend here. It is possible that there are aliens living on other planets that have the same cognitive abilities that we do (e.g., E.T.: The Extraterrestrial or the famous "bar scene" from Star Wars). Imagine aliens that speak a language, make moral judgments, create literature and works of art, etc. Surely aliens with these properties would be "persons"--which is to say that it would be morally wrong to buy or sell them as property the way we do with dogs and cats, or to otherwise use them for our own interests without taking into account the fact that they are moral agents with interests that deserve the same respect and protection that ours do.

Thus, one of our primary interests is to distinguish persons from pets and from property. A person is the kind of entity that has the moral right to make its own life-choices, to live its life without (unprovoked) interference from others. Property is the kind of thing that can be bought and sold, something I can "use" for my own interests. Of course, when it comes to animals there are serious moral constraints on how we may treat them. But we do not, in fact, give animals the same kind of autonomy that we accord persons. We buy and sell dogs and cats. And if we live in the city, we keep our pets "locked up" in the house, something that we would have no right to do to a person. How, then, should we define "person" as a moral category? [Note: In the long run, we may decide that there is a non-normative concept of "person" that is equally important, and even conceptually prior to any moral concept. At the outset, however, the moral concept will be our focus.] Initially, we shall define a person as follows:

PERSON = "any entity that has the moral right of self-determination."

Many of us would be prepared to say, I think, that any entity judged to be a person would be the kind of thing that would deserve protection under the constitution of a just society. It might reasonably be argued that any such being would have the right to "life, liberty and the pursuit of happiness." This raises the philosophical question: what properties must an entity possess to be a "person"?

At the Mind Project, we are convinced that one of the best ways to learn about minds and persons is to attempt to build an artificial person, to build a machine that has a mind and that deserves the moral status of personhood. This is not to say that we believe that it will be possible anytime soon for undergraduates (or even experts in the field) to build a person. In fact, there is great disagreement among Mind Project researchers about whether it is possible, even in principle, to build a person -- or even a mind -- out of machine parts and computer programs. But that doesn't matter. Everyone at the Mind Project is convinced that it is a valuable educational enterprise to do our best to simulate minds and persons. In the very attempt, we learn more about the nature of the mind and about ourselves. At the very least, it forces us to probe our own concept of personhood.

What are the properties necessary for being a person? Many properties have been suggested: intelligence, the capacity to speak a language, creativity, the ability to make moral judgments, consciousness, free will, a soul, self-awareness . . . and the list could go on almost indefinitely. Which properties do you think are individually necessary and jointly sufficient for being a person?

What is intelligence? Before turning to the specific arguments raised in the Star Trek episode, it will prove helpful to pause for a moment to consider the first property on the list, "intelligence." Could a computer be intelligent? Why or why not? A careful consideration of these questions requires a very close look both at computers and at intelligence. And so we suggest that you first examine a few fascinating computer programs and think seriously about the questions: What is intelligence? And is it possible for a machine to be intelligent? To help you reflect on these questions, we recommend that you visit one of our modules on artificial intelligence.

Artificial intelligence: Can a machine think? [When you've finished with that section, return here]

Now that you have thought about "intelligence" and pondered the possibility of "machine intelligence," let us turn to the Star Trek episode. You may find it interesting to note that while some people deny that machine intelligence is even a possibility, Commander Maddox (the one who denies that the android Data is a person) does not deny that Data is intelligent. He simply insists that Data lacks the other two properties necessary for being a person: self-awareness and consciousness. Here is what the character Maddox says regarding Data's intelligence.

PICARD:

Is Commander Data intelligent?

MADDOX: Yes, it has the ability to learn and understand and to cope with new situations.

PICARD: Like this hearing.

MADDOX: Yes.

Commander Maddox, though admitting that Data is "intelligent," nonetheless denies that Data is a person because he lacks two other necessary conditions for being a person: self-awareness and consciousness. Before examining Maddox's reasons for thinking that Data is not self-aware, let us explore the concept of "self-awareness." It is important to get clear about what we mean by self-awareness and why it might be a requirement for being a person.

What is Self-Awareness? Let us turn to the second property that the Star Trek episode assumes is necessary for sentience and personhood: "self-awareness." This has been the topic of considerable discussion among philosophers and scientists. What exactly do we mean by "self-awareness"? One might believe that there is something like a "self" deep inside of us and that to be self-aware is simply to be aware of the presence of that self. Who exactly is it, then, that is aware of the self? Another self? Do we now have two selves? Well, no, that's not what most people would have in mind. The standard idea is probably that the self, though capable of being aware of things external to it, is also capable of being aware of its own states. Some have described this as a kind of experience. I might be said to have an "inner experience" of my own mental activity, being directly aware, say, of the thoughts that I am presently thinking and the attitudes ("I hope the White Sox win") that I presently hold.

But even if we grant that we have such "inner experiences," they do not, by themselves, supply everything that we intend to capture by the term "self-awareness." When I say that I am aware of my own mental activity (my thoughts, dreams, hopes, etc.) I do not mean merely that I have some inner clue to the content of that mental activity; I also mean that the character of that awareness is such that it gives me certain abilities to critically reflect upon my mental states and to make judgments about those states. If I am aware of my own behavior and mental activity in the right way, then it may be possible for me to decide that my behavior should be changed, that an attitude is morally objectionable, or that I made a mistake in my reasoning and that a belief that I hold is unjustified and should be abandoned.

Consider the mental life of a dog, for example. Presumably, dogs have a rich array of experiences (they feel pain and pleasure, the tree has a particular "look" to it) and they may even have beliefs about the world (Fido believes that his supper dish is empty). Who knows, they may even have special "inner experiences" that accompany those beliefs. However, if we assume that dogs are not self-aware in the stronger sense, then they will lack the ability to critically reflect upon their beliefs and experiences and thus will be

unable to have other beliefs about their pleasure or their supper-dish belief (what philosophers call "second-order beliefs" or "meta-beliefs"). That is to say, they may lack the ability to judge that pleasure may be an unworthy objective in a certain situation, or to judge that their belief that the supper dish is empty is unjustified.

But if that is getting any closer to the truth about the nature of self-awareness (and I'm not necessarily convinced that it is), then it becomes an open question whether being "self-aware" need be a kind of experience at all. It might be that a machine (a robot, for example) could be "self-aware" in this sense even if we admit that it has no subjective experiences whatsoever. It might be self-aware even if we deny that "there is something that it is like to be that machine" (to modify slightly Thomas Nagel's famous dictum). Douglas Hofstadter offers a suggestion that will help us to consider this possibility. Now, let us turn to the Star Trek dialogue and see what they have to say about self-awareness.

PICARD: What about self-awareness? What does that mean? Why am I self-aware?

MADDOX: Because you are conscious of your existence and actions. You are aware of your self and your own ego.

PICARD: Commander Data. What are you doing now?

DATA: I am taking part in a legal hearing to determine my rights and status. Am I a person or am I property?

PICARD: And what is at stake?

DATA: My right to choose. Perhaps my very life.

PICARD: "My rights" . . . "my status" . . . "my right to choose" . . . "my life." Seems reasonably self-aware to me. . . . Commander . . . I'm waiting.

MADDOX: This is exceedingly difficult.

We might well imagine that Commander Maddox is thinking about subjective experiences when he speaks of being "conscious" of one's existence and actions. However, Picard's response is ambiguous. The only evidence that Picard gives of Data being self-aware is that he is capable of using particular words in a language (words like "my rights" and "my life"). Is it only necessary that Data have information about his own beliefs to be self-aware, or must that information be accompanied by an inner feeling or experience of some kind? Douglas Hofstadter has some interesting thoughts on the matter.

Douglas Hofstadter on "Anti-sphexishness"

In one of his columns for Scientific American ("On the Seeming Paradox of Mechanizing Creativity"), Douglas Hofstadter wrote a thought-provoking piece about the nature of creativity and the possibility that it might be "mechanized" -- that is, that the right kind of machine might actually be creative. While creativity is his primary focus here, much of what he says could be applied to the property of self-awareness.

The kernel of his idea is that to be uncreative is to be caught in an unproductive cycle ("a rut") which one mechanically repeats over and over in spite of its futility. On this account, then, creativity comes in degrees and consists in the ability to monitor one's lower-level activities so that when a behavior becomes unproductive, one does not continually repeat it, but "recognizes" its futility and tries something new, something "creative." Hofstadter makes up a name for this repetitive, uncreative kind of behavior -- he calls it sphexishness, drawing inspiration for the name from the behavior of a certain kind of wasp named Sphex. In his discussion, Hofstadter quotes from Dean Wooldridge, who describes the Sphex as follows:

When the time comes for egg laying, the wasp Sphex builds a burrow for the purpose and seeks out a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into the burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deepfreeze. To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness -- until more details are examined. For example, the wasp's routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If the cricket is moved a few inches away while the wasp is inside making her preliminary inspection, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again she will move the cricket up to the threshold and reenter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion this procedure was repeated forty times, with the same result. [from Dean Wooldridge's Mechanical Man: The Physical Basis of Intelligent Life]

Initially, the sphex's behavior seemed intelligent, purposeful. It wisely entered the burrow to search for predators. But if it really "understood" what it was doing, then it wouldn't repeat the activity 40 times in a row!! That is stupid!! It is reasonable to assume, therefore, that it doesn't really understand what it is doing at all. It is simply performing a rote, mechanical behavior -- and it seems blissfully ignorant of its situation. We might say that it is "unaware" of the redundancy of its activity.

To be "creative," Hofstadter then says, is to be antisphexish -- to behave, that is, unlike the sphex. If you want to create a machine that is antisphexish, then you must give it the ability to monitor its own behavior so that it will not get stuck in ruts similar to the Sphex's. Consider a robot that has a primary set of computer programs that govern its behavior (call these first-order programs). One way to make the robot more antisphexish would be to write special second-order (or meta-level) programs whose primary job is not to produce robot behavior but rather to keep track of the first-order programs that do produce the robot's behavior, to make sure that those programs do not get stuck in any "stupid" ruts.
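The "second-order program" idea lends itself to a small sketch: a meta-level monitor that watches a first-order behavior routine and flags a sphexish rut when the same action keeps being taken in the same unchanged situation. All of the names here (RutMonitor, wasp_routine, the repeat threshold) are invented for illustration; Hofstadter describes the idea only in prose.

```python
# A toy version of a meta-level "watching" program: it counts how often the
# first-order routine repeats the same action in the same situation and calls
# a halt once the repetition looks sphexish. Names and threshold are illustrative.

from collections import Counter

class RutMonitor:
    """Second-order watcher: tracks repeats of (situation, action) pairs."""
    def __init__(self, max_repeats=3):
        self.seen = Counter()
        self.max_repeats = max_repeats

    def sphexish(self, situation, action):
        self.seen[(situation, action)] += 1
        return self.seen[(situation, action)] > self.max_repeats

def wasp_routine(cricket_at_threshold):
    """First-order behavior: re-inspect the burrow whenever the cricket has moved."""
    return "inspect burrow" if not cricket_at_threshold else "drag cricket in"

monitor = RutMonitor()
for _ in range(40):
    # The experimenter keeps moving the cricket, so the situation never changes.
    action = wasp_routine(cricket_at_threshold=False)
    if monitor.sphexish("cricket moved", action):
        print("rut detected: abandoning", repr(action), "and trying something new")
        break
```

Of course, the monitor is itself just another program that could fall into a rut of its own, which is exactly the regress taken up below.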
(A familiar example of a machine caught in a rut is the scene from several old science fiction films in which a robot misses the door and bangs into the wall over and over again, incapable of resolving its dilemma -- "unaware" of its predicament.)

A problem arises, however, even if it were possible to create these programs that "watch" other programs. Can you think what it is? What if the second-order program, the "watching" program, gets stuck in a rut? Then you need another program (a third-order program) whose job is to watch the "watching" program. But now we have a dilemma (what philosophers call an "infinite regress"). We can have programs watching programs watching programs -- generating far more programs than we would want to mess with -- and yet still leave the fundamental problem unresolved: there would always remain one program that was un-monitored. What you would want for efficiency's sake, if it were possible, is what Hofstadter calls a "self-watching" program, a program that watches other programs but also keeps a critical eye peeled for its own potentially sphexish behavior. Yet Hofstadter insists that no matter what you do, you could never create a machine that was perfectly antisphexish. But then he also gives reasons why human beings are not perfectly antisphexish -- and why we shouldn't even want to be.

Can you see now why this kind of self-watching computer program might give us something like "self-awareness"? I am not saying that it is self-awareness -- that is ultimately for you to decide. But there are people who believe that human beings are basically "machines" and that our ability to be "self-aware" is ultimately the result of a complex set of computer programs running on the human brain. If one had reason to think that this was a plausible theory, then one might well think that Hofstadter's "self-watching" programs would play a key role in giving us the ability for self-awareness.

Let us return now to the discussion of Star Trek and Commander Data and the question: is Data, the android, self-aware? Whether you think that Captain Picard has scored any points against Commander Maddox or not, Maddox seems less confident about his claim that Data lacks self-awareness than he was initially. But that doesn't mean that Maddox is giving up. There is one more property that Maddox insists is necessary for being a person: "consciousness."

What is Consciousness? Must an entity be "conscious" to be a person? If so, why? What exactly is consciousness? Is it ever possible to know, for certain, whether or not a given entity is conscious? While there are many different ways of understanding the term 'consciousness,' one way is to identify it with what we might call the subjective character of experience. On this account, if one assumes that nothing could be a person unless it were conscious, and if one assumes that consciousness requires subjective experiences, then one would hold that no matter how sophisticated the external behavior of an entity, that entity will not be conscious, and thus will not be a person, unless it has subjective experiences, unless it possesses an inner, mental life.

Thomas Nagel discusses the significance of the "subjective character of experience" in his article, "What Is It Like to Be a Bat?" Note that Nagel is not concerned here with the issue of personhood. (He most definitely is not suggesting that bats are persons.) Rather, he wants to deny the claim that a purely physical account of an organism (of its brain states, etc.) could, even in principle, be capable of capturing the subjective character of that organism's experiences. Nagel's main concern is to challenge the claim, made by many contemporary scientists, that the objective, physical or functional properties of an organism tell us everything there is to know about that organism. Nagel says, "No." Any objective description of a person's brain states will inevitably leave out facts about that person's subjective experience -- and thus will be unable to provide us with certain facts about that person that are genuine facts about the world.

One of Nagel's primary targets is the theory called functionalism, the most popular theory of the mind of the past twenty-five years. It continues to be the dominant account of the nature of mental states held by scientists and philosophers today. Before turning to Nagel's argument, you might want to learn a bit about functionalism. If so, click here and a new window will open up with an introduction to functionalism.

Functionalism: a theory of the mind

Now that you have a basic understanding of the theory of functionalism, here is Nagel's reason for thinking that a functionalist account of the mind will never be able to capture the fundamental nature of what it means to be conscious.

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism -- something it is like for the organism. We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experience nothing.*

* Footnote: Perhaps there could not actually be such robots. Perhaps anything complex enough to behave like a person would have experiences. But that, if true, is a fact which cannot be discovered merely by analyzing the concept of experience.

It is not analyzable in terms of the causal role of experiences in relation to typical human behavior -- for similar reasons. I do not deny that conscious mental states and events cause

behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis. . . . If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view. (Philosophical Review 83:392-393)

But maybe Nagel is too quick to give up on a physicalist reduction of consciousness. There are scientific methods currently being used to explore the nature of consciousness. Here is a brief introduction to the scientific study of consciousness that you might find helpful.

The Science of Consciousness

If an android is to be a person, must it have subjective experiences? If so, why? Further, even if we decide that persons must have such experiences, how are we to tell whether or not any given android has them? Consider the following example. Assume that the computing center of an android uses two different "assessment programs" to determine whether or not to perform a particular act. In most cases the two programs agree. However, in this particular case, let's assume, the two programs give conflicting results. Further, let us assume that there is a very complex procedure that the android must go through to resolve this conflict, a procedure taking several minutes to perform. During the time it takes to resolve the conflict, is it appropriate to say that the android feels "confused" or "uncertain" about whether to perform the act? If we deny that the android's present state is one of "feeling uncertain," on what grounds would we do so?

Colin McGinn considers this question when he asks: "Could a Machine be Conscious?" Could there be something that it is like to be that machine? Very briefly, his answer is: yes, a machine could be conscious. In principle it is possible that an artifact like an android might be conscious, and it could be so even if it were not alive, according to McGinn. But, he argues, we have no idea what property it is that makes US conscious beings, and thus we have no idea what property must be built into a machine to make it conscious. He argues that a computer cannot be said to be conscious merely by virtue of the fact that it has computational properties, merely because it is able to manipulate linguistic symbols at the syntactic level. Computations of that kind are certainly possible without consciousness. He suggests, further, that the sentences uttered by an android might actually MEAN something (i.e., they might REFER to objects in the world, and thus they might actually possess SEMANTIC properties) and yet still the android might not be CONSCIOUS. That is, the android might still lack subjective experiences; there might still be nothing that it is like to be that android. McGinn's conclusion, then? It is possible that a machine might be conscious, but at this point, given that we have no clue what it is about HUMANS that makes us conscious, we have no idea what we would have to build into an android to make IT conscious.

Hilary Putnam offers an interesting argument on this topic. If there existed a sophisticated enough android, Putnam argues that there would simply be no evidence one way or another

to settle the question whether it had subjective experiences or not. In that event, however, he argues that we OUGHT to treat such an android as a "conscious" being, for moral reasons.

His argument goes like this. One of the main reasons that you and I assume that other human beings have "subjective experiences" similar to our own is that they talk about their experiences in the same way that we talk about ours. Imagine that we are both looking at a white table and then I put on a pair of rose-colored glasses. I say, "Now the table LOOKS red." This introduces the distinction between appearance and reality, central to the discipline of epistemology. In such a context, I am aware that the subjective character of my experience ("the table APPEARS red") does not accurately reflect the reality of the situation ("the table is REALLY white"). Thus, we might say that when I speak of the "red table" I am saying something about the subjective character of my experience and not about objective reality. One analysis of the situation is to say that when I say that the table appears red, I am saying something like: "I am having the same kind of subjective experience that I typically have when I see something that is REALLY red."

The interesting claim that Putnam makes is that it is inevitable that androids will also draw a distinction between "how things APPEAR" and "how things REALLY are." Putnam asks us to imagine a community of androids who speak English just as we do. Of course, these androids must have sensory equipment that gives them information about the external world. They will be capable of recognizing familiar shapes, like the shape of a table, and they will have special light-sensors that measure the frequency of light reflected off of objects so that they will be able to recognize familiar colors. If an android is placed in front of a white table, it will be able to say: "I see a white table." Further, if the android places rose-colored glasses over its "eyes" (i.e., its sensory apparatus), it will register that the frequency of light is in the red spectrum and it will say, "Now the table LOOKS red," or it might say, "I am having a red-table sensation even though I know that the table is really white."

So what is the point of this example? Well, Putnam has shown that there is a built-in LOGIC to talk about the external world, given that the speaker's knowledge of the world comes through sensory apparatus (like the eyes and ears of human beings or the visual and audio receptors of a robot). A sophisticated enough android will inevitably draw a distinction between appearance and reality and, thus, it will distinguish between its so-called "sensations" (i.e., whatever its sensory apparatus reveals to it) and objective reality. Now of course, this may only show that androids of this kind would be capable of speaking AS IF they had subjective experiences, AS IF they were really conscious -- even though they might not actually be so. Putnam admits this. He says we have no reason to think they are conscious, but we also have no reason to think they are not. Their discourse would be perfectly consistent with their having subjective experiences, and Putnam thinks that it would be something close to discrimination to deny that an android was conscious simply because it was made of metal instead of living cells. In effect he is saying that androids should be given the benefit of the doubt. He says:
I have concluded . . . that there is no correct answer to the question: Is Oscar [the android] conscious? Robots may indeed have (or lack) properties unknown to physics and undetectable by us; but not the slightest reason has been offered to show that they do, as the ROBOT analogy demonstrates. It is reasonable, then, to conclude that the question that titles this paper ["Robots: Machines or Artificially Created Life?"] calls for a decision and not for a discovery. If we are to make a decision, it seems preferable to me to extend our concept so that robots are conscious -- for "discrimination" based on the "softness" or "hardness" of the body parts of a synthetic "organism" seems as silly as discriminatory treatment of humans on the basis of skin color. [p. 91]

Top 10 Robot Weaknesses

I mean, they're made of metal, they aren't slowed down by pesky questions of morality or ethics, and they are definitely resilient. This is what makes them such formidable political opponents. But the robot is not without its weaknesses, and thanks to J-tron's campaign we can now finally identify, in broad scope, what they are.

#1: An Awesome E-ffing Opponent. A sure-fire way to take down a robot? That's simple: Be Like Barack. There's not much one can do about their heritage at this point (trust me, if I could figure out a way to be half black, I would have done it years ago), but what you can do is try to be totally awesome and have a fair, intelligent fight (a nice layup never hurt anyone either). In this arena, the robot is no match.

#2: Robots make really dumb choices. The robot is not known for its good judgement. Anyone a Futurama fan? No? Well, below is all the evidence you need. JMCR suggests using this flawed judgement to your greatest advantage by letting the robot self-destruct on its own.

#3: This may come as a surprise to some, but robots generally have a hard time with human relationships, like marriage for example. Being the wife of a robot is a hard life. Physical and emotional distance, compounded by readily available funds for drugs and a harsh media spotlight that encourages concentration-camp chic, make for one weird robot spouse. How does this fact help defeat a robot? Just think of the scandals!

#4: It is a common misconception that robots are immortal. In fact, quite the opposite is true, and there is nothing more dangerous than an elderly, cantankerous robot who doesn't know he's past his prime. An effective tool here would be to constantly photograph the robot in unflattering light so as to constantly remind him (and everyone else) just how old he really is.

#5: Robots hate economics. It's a known fact. They're not that well-versed on the subject, and it's hardly an area where mavericky risk-taking is valued. Use the forces of supply and demand for good instead of evil; a simple chart should do the trick.

#6: Robots need energy. And not just any kind of energy; they need oil, and lots of it, no matter the cost. Too bad for them the world is running out, it's contributing to a massive environmental crisis whose far-reaching effects touch on agriculture, disease, and population densities, and we don't have any of it. If the robot doesn't adapt soon, it will die out. Tear....

#7: Cities. Cities are full of godless folk. Fancy lettuce-eating liberals who wouldn't know an honest day's work if it bit them on their privileged urban asses. The good news? Cities and robots are like oil and water. You wanna take down a robot? Lure and trap one in a big city and watch as socially responsible, eco-friendly pseudo-hippies listening to Girl Talk trample it into oblivion amid neon and spandex. The revolution will be podcasted.

#9: George W. Bush. The one and only. The ultimate kryptonite. Robots and retards, star-crossed partners whose actions are always more destructive together than they are apart. So what's the most effective way to best a robot in a duel? First, channel your inner Aaron Burr and drop some Hamiltons (robots love money), then allow him to partner with the mentally challenged and he won't be able to resist.

I hope this guide proves useful in all manner of robot attacks. Good luck.
