
Philosophy 1301, First Essay
Alan Hurtado, 800407974

In this paper I will argue against the possibility of uploading a person into a computer. Let's say we are going to upload Wynton, our person, to a computer. This computer has to have a program, or a series of programs, running in order to emulate Wynton's thoughts and his reactions to different inputs, just as the real Wynton would think or act. What do we need for this? Well, first of all, we need a database that contains all the knowledge that Wynton has up to this very day: every little thing that he could possibly recall, such as his background (past, history, ancestors), personal experiences (what does it feel like to get burned? Why is it different from a headache? What does it feel like to listen to music?), preferences, education, relationships, fears, current and even future projects (what am I going to do after I upload my mind to the computer?), and so on. Every entry in this database must have a unique index value; otherwise it would be useless and the program would not be able to recall it.

Now that the huge database is finished and indexed, we need an algorithm that simulates Wynton's ability to maintain a coherent conversation with someone based on what he knows. It must be able to recognize any question and respond to it just as the real Wynton would. Even if the interviewer asks for the same answer several times, the output of our program has to be the same as Wynton's would be in that same situation. Imagine our algorithm is finished, coded in a programming language, and compiled. This program, with the proper output devices, could easily fool someone who knows Wynton over the phone or in a chat, but is it a person? I wouldn't say so. It is still answering what we previously programmed it to respond. It seems that a person is more than a program that answers questions coherently based on previously gained knowledge.
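To make this idea concrete, here is a minimal sketch, in Python, of what such an indexed database and its canned answering routine could look like. The names and entries (MemoryStore, respond, the burn example) are my own illustrative inventions, not part of the thought experiment; the point is only that every output is a lookup of something we stored in advance.

# A minimal sketch of the idea above: every memory gets a unique index, and
# the conversation program maps an incoming question to a stored answer.
# All names and sample entries here are hypothetical illustrations.

class MemoryStore:
    def __init__(self):
        self._entries = {}      # unique index -> memory content
        self._next_index = 0

    def add(self, content):
        """Store a memory under a unique index so it can be recalled later."""
        index = self._next_index
        self._entries[index] = content
        self._next_index += 1
        return index

    def recall(self, index):
        return self._entries.get(index)


def respond(question, answers):
    """Return the pre-recorded answer for a question.

    The same question always yields the same output: the program only
    replays what it was given.
    """
    return answers.get(question.strip().lower(),
                       "I don't recall anything about that.")


# Usage: Wynton's knowledge reduced to indexed entries and canned answers.
memories = MemoryStore()
memories.add("Getting burned feels sharp and localized, unlike a headache.")

answers = {"what does it feel like to get burned?":
           "Sharp and localized, nothing like a headache."}

print(respond("What does it feel like to get burned?", answers))
print(respond("What does it feel like to get burned?", answers))  # identical output

Asked the same question twice, it prints the same answer twice; nothing else could happen.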

Let's build a body for it. Assume that we develop a more powerful computer and install it into a humanoid, Wynton-like piece of hardware, called Branford, which can interact physically with the world. Branford would be able to perceive light, temperature, textures, flavors, sounds, humidity, and all sorts of things that we humans perceive in our daily lives. Now we need a program far more complex than the one we have so far. We will have to update the former database to register Wynton's reactions to our new input channels. This new database must contain information on Wynton's perception of things, like the amount of light he can visually tolerate, recognition of objects and people, temperature thresholds (when does he feel cold or hot?), even information on how he moves, so the exact movement can be reproduced in different situations. Following the procedure, we write a program for each part of Branford's body so that it acts exactly like Wynton; it even has the capability of transforming proteins and carbohydrates into energy, equivalent to what Wynton's metabolism does in his body. (All of these instructions and all of this new information would significantly increase the size of our database and jeopardize the performance of the whole machine due to overheating, but let's imagine that nanotechnology is 50 years ahead of where it is now and we have no technological limitations for our project.)

Branford is now a physical and behavioral replica of his brother Wynton. It has the same weight, the same height, the same skin color; it likes and dislikes the same things; it even walks the dog every morning and greets its neighbors cordially just as Wynton would. Is Branford a person now? For most behaviorists, it could be considered a person. Maybe a smaller group of functionalists could be fooled and agree that it is indeed a person. But we know that Branford is still acting upon a series of commands and complex preprogrammed instructions. Imagine that we are convinced that Branford is a person and we close the project. What would happen in a couple of months? In a couple of years? Branford cannot process more information than what it already has, and eventually its robotic nature will become more and more evident. It would get stuck in time.
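As an aside, the perception database described above would amount to nothing more than a table of preprogrammed thresholds and canned reactions. Here is a hedged sketch, with channel names and numbers that I am inventing purely for illustration:

# A hedged sketch of the perception database: each input channel gets
# preprogrammed thresholds and a stored reaction. The channels and values
# below are invented for illustration only.

PERCEPTION_TABLE = {
    # channel: (lower threshold, upper threshold, reaction when exceeded)
    "light_lux":     (5, 10_000, "squint and look away"),
    "temperature_c": (12, 30,    "comment that it feels cold or hot"),
    "sound_db":      (0, 85,     "cover ears"),
}

def react(channel, value):
    """Look up the preprogrammed reaction for a sensor reading.

    Nothing here is learned: Branford only replays the reaction we stored.
    """
    low, high, reaction = PERCEPTION_TABLE[channel]
    if value < low or value > high:
        return reaction
    return "no reaction"

print(react("temperature_c", 35))   # -> "comment that it feels cold or hot"
print(react("light_lux", 200))      # -> "no reaction"

Whatever reading comes in, Branford can only replay the reaction we stored for that range.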

So, it still takes something else to be a person. The problem with Branford is that it is unable to learn new things. We could come up with an algorithm similar to how we learn things, such as trial and error, but that would be programmed too. Let's say that it learns to win at tic-tac-toe by registering all of its previous moves and keeping only the ones that lead toward a winning outcome. This can be thought of as follows:

- Player 1 places an [x] in any available cell of a 3x3 grid; that cell becomes unavailable.

- Player 2 places an [o] in any available cell of the same grid; that cell becomes unavailable.
- Repeat the process until all cells are unavailable or until the winning condition is met.

The cells of the grid are labeled by row (X1-X3) and column (Y1-Y3):

X1Y1  X1Y2  X1Y3
X2Y1  X2Y2  X2Y3
X3Y1  X3Y2  X3Y3

- The winning condition is met when all three cells of a row (X1, X2, or X3), a column (Y1, Y2, or Y3), or a diagonal ((X1Y1, X2Y2, X3Y3) or (X1Y3, X2Y2, X3Y1)) are filled with the same mark.

Branford would need to play through every possible line of moves, in both the player 1 and player 2 positions, to eliminate the moves that would make it lose; that way it would learn which sequences of moves make it win or, in the worst case, end in a tie. These instructions, sketched in code below, could make it seem as if it were learning from its mistakes, but as realistic as that may appear, it was already programmed to record its moves and eliminate the losing ones for its next match. It didn't figure it out by itself. It seems that no matter how complex we make Branford, it is still an invention of ours acting like a person by following the detailed sets of instructions that we gave it. There are things in human beings that cannot be measured and that therefore cannot be reduced to strings of 0s and 1s.
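Here is that sketch: a minimal Python illustration of the record-and-eliminate procedure. The function names and the random opponent are my own assumptions, not part of the thought experiment; the machine "learns" only in the sense that it refuses to repeat a line of play that it has already recorded as a loss.

# A minimal sketch of the record-and-eliminate procedure described above.
# Every step is a preprogrammed rule; the only "learning" is bookkeeping.

import random

# Rows, columns, and both diagonals of the 3x3 grid, using cell indices 0..8
# (cell index = 3 * (row - 1) + (column - 1), matching X1Y1 .. X3Y3 above).
WINNING_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows X1, X2, X3
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns Y1, Y2, Y3
    (0, 4, 8), (2, 4, 6),              # both diagonals
]

losing_sequences = set()   # move sequences (tuples of cell indices) that ended in a loss


def winner(board):
    """Return 'x' or 'o' if one winning line is filled with the same mark, else None."""
    for a, b, c in WINNING_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None


def leads_to_known_loss(sequence):
    """True if this sequence starts a line of play already recorded as a loss."""
    t = tuple(sequence)
    return any(loss[:len(t)] == t for loss in losing_sequences)


def choose_move(board, history):
    """Pick any available cell that does not continue a recorded losing line."""
    available = [i for i, cell in enumerate(board) if cell == " "]
    safe = [m for m in available if not leads_to_known_loss(history + [m])]
    return random.choice(safe or available)


def record_loss(history):
    """After a lost match, store the whole sequence so it is never repeated."""
    losing_sequences.add(tuple(history))


# One match: Branford places [x] against a random opponent placing [o].
# If Branford loses, the entire line of play is recorded and avoided later.
board, history = [" "] * 9, []
for turn in range(9):
    if turn % 2 == 0:                                   # Branford's move
        move = choose_move(board, history)
    else:                                               # opponent's move
        move = random.choice([i for i, c in enumerate(board) if c == " "])
    board[move] = "x" if turn % 2 == 0 else "o"
    history.append(move)
    if winner(board):
        if winner(board) == "o":
            record_loss(history)                        # eliminate this losing line
        break

Every apparent improvement from one match to the next is just the growth of the losing_sequences set, exactly the kind of bookkeeping we wrote in advance.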

Let's say that we further build a super-Branford that has far better sensors than Wynton and, consequently, than the old Branford. This super-Branford is able to detect chemical substances in the air, recognize any pitch below or above the human audible range, and even perceive colors with the exactitude of the color-matching computer in the most modern paint store. This is no different from the original Branford: it is much more useful, but it is no more a person than the old Branford. This new super-Branford is only dealing with more precise information, and it is still following instructions to interact with that information. Humans do not always need that precision; in fact, sometimes we avoid it. When tuning a piano, an orchestra, or a choir, the notes are not equally or mathematically spaced as one might think; some of them are flattened or raised by a few cents so that we hear them as in tune. If piano tuners used their tuner (a device that measures not only pitch but also the resonance and spectrum of the wave produced) on every one of the strings, you would hear that piano in tune in the middle range but clearly out of tune in the deep low and the high registers.
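For reference, the "cents" mentioned above are a standard unit: one cent is 1/100 of an equal-tempered semitone, and the deviation between a reference frequency and an actual frequency is 1200 * log2(actual / reference). A short illustrative computation follows; the 440 Hz and 442 Hz figures are arbitrary examples of mine, not measurements.

# One cent is 1/100 of an equal-tempered semitone; the deviation between two
# frequencies f_reference and f_actual is 1200 * log2(f_actual / f_reference).

import math

def cents(f_reference, f_actual):
    """Deviation of f_actual from f_reference, in cents."""
    return 1200 * math.log2(f_actual / f_reference)

print(round(cents(440.0, 442.0), 2))   # a string at 442 Hz is about 7.85 cents sharp of 440 Hz
print(round(cents(440.0, 440.0), 2))   # 0.0: exactly in tune with the reference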

My conclusion at this point is that any program, no matter how complex it is, is reducible to its simplest values and instructions. Every single output is written somewhere in its code. Even if it has a more precise perception of our reality than we do (take the color-matching machine or the advanced tuners), it is just following a series of instructions over more precise values; it is not capable of learning.
