Say I have a computer. It isn't a complex device; I don't need anything complex. But it needs a function, so I'll give it one. This computer is fed sheets of plastic, one in five of which is transparent. It has a simple sensor: a light on one side of the sheet, a light sensor on the other. The computer uses this to discern which sheets are transparent and which aren't, and records the results.
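A minimal sketch of that loop, in Python, might look like the following. The sensor here is a random stand-in (one sheet in five comes up transparent, as above), and THRESHOLD is a made-up number, not anything from a real device.

```python
import random

THRESHOLD = 0.5  # light level above which a sheet counts as transparent

def read_light_sensor() -> float:
    """Stand-in for the real sensor: roughly one in five sheets is transparent."""
    return 1.0 if random.random() < 0.2 else 0.0

def classify_sheet() -> bool:
    """True if light passes through the current sheet."""
    return read_light_sensor() > THRESHOLD

def run(sheets: int) -> list[bool]:
    """Feed `sheets` sheets past the sensor and record each result."""
    return [classify_sheet() for _ in range(sheets)]

if __name__ == "__main__":
    results = run(100)
    print(f"{sum(results)} of {len(results)} sheets read as transparent")
```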
Now, say I want this machine to be conscious. How do I go about it?
A conscious mind is aware of itself as that which is observing. We can easily make a computer that monitors its sensors and records its activity. But does this count as awareness?
So maybe we give the computer a basic imperative - "identify and record transparency in objects of sense data" - and program it with information about what its sensors do. It then has to decide to use its sensors. Of course it will choose to use them; but is the fact that it makes this decision a form of choice? I think we would be justified in saying that this machine has something like the consciousness of a plant: it acts on basic imperatives and carries out actions based on the results. But this doesn't seem anything like animal or human level consciousness.
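One way to model that "decision", purely as an illustration: give the machine a description of what each sensor measures and let it look up the one that serves its imperative. All the names here (SENSORS, choose_sensor and so on) are hypothetical, and the "choice" reduces to a table lookup, which is exactly why it feels so thin.

```python
# Hypothetical sketch: the machine knows what each sensor measures and
# "decides" which one serves its imperative. The decision is a lookup.

IMPERATIVE = "transparency"

SENSORS = {
    "light_gate":  {"measures": "transparency", "read": lambda: 1.0},
    "thermometer": {"measures": "temperature",  "read": lambda: 21.0},
}

def choose_sensor(imperative: str) -> str:
    """Pick the sensor whose description matches the imperative."""
    for name, info in SENSORS.items():
        if info["measures"] == imperative:
            return name
    raise LookupError("no sensor serves this imperative")

def act(record: list[float]) -> None:
    """Use the chosen sensor and record what it reports."""
    sensor = choose_sensor(IMPERATIVE)
    record.append(SENSORS[sensor]["read"]())
```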
Is this just a difference in the number of imperatives? What if we gave the device many more sensors - the ability to recognise complex shapes and to categorise them, for instance? What if we programmed it to monitor its power outlets, sensors, computational speed and heat?
We can imagine a self-conscious being who doesn't feel pain; in fact, some humans are born without this capability. We can also imagine a self-conscious being without the self-preservation drive. So neither of these things can be necessary for consciousness.
But maybe self-preservation is key. A high-level conscious being is theoretically capable of refusing to engage in an activity for the sake of its own preservation, or of seeing value in that activity and carrying it out anyway. Either way, it makes a judgment call based on values.
So is value where we see a basic form of consciousness develop? Say we want to prove this machine is conscious. Maybe the best way to do so is to make it psychotic. Let's say the machine is cooled sufficiently when it identifies most shapes, but is left to overheat when it identifies red triangles. The machine correlates the two factors and concludes that, to most effectively carry out its primary imperative - identifying complex shapes - it must fail to identify red triangles.
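A rough sketch of that correlation, with hypothetical thresholds and category names: the machine keeps a running record of how hot it got after identifying each kind of shape, and withholds identifications for any category that tends to coincide with overheating.

```python
# Hypothetical sketch of the overheating correlation: the machine tracks
# how hot it runs after identifying each shape category, and withholds
# identifications of categories that correlate with overheating.

from collections import defaultdict

OVERHEAT_TEMP = 80.0  # made-up temperature the machine treats as harmful

heat_after_identifying: dict[str, list[float]] = defaultdict(list)

def note_outcome(category: str, temperature: float) -> None:
    """Record how hot the machine ran after identifying this category."""
    heat_after_identifying[category].append(temperature)

def runs_too_hot(category: str) -> bool:
    """True if identifying this category has tended to overheat the machine."""
    temps = heat_after_identifying[category]
    return bool(temps) and sum(temps) / len(temps) > OVERHEAT_TEMP

def identify(category: str) -> str | None:
    """Report the shape, unless reporting it has 'learned' to hurt."""
    if runs_too_hot(category):
        return None  # fail to identify, to protect the primary imperative
    return category
```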
However, for the machine to test this, it has to not identify a red triangle. But in the act of withholding that identification, its own record gives it no way of knowing whether there was an identification to make. The machine must therefore be given a parallel memory that remains aware of red triangles, in addition to its primary memory, which can be taught to ignore them.
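Concretely, that two-track memory might look something like the sketch below: one record the machine can be taught to filter, and a parallel record that still registers everything the sensors saw. The names are mine, not anything the machine above actually has.

```python
# Hypothetical two-track memory: the primary record can be taught to
# omit red triangles, while the parallel record keeps everything.

SUPPRESSED = {"red triangle"}

primary_record: list[str] = []   # what the machine "admits" to having seen
parallel_record: list[str] = []  # what its sensors actually registered

def observe(shape: str) -> None:
    """Log every observation in parallel memory; filter the primary one."""
    parallel_record.append(shape)
    if shape not in SUPPRESSED:
        primary_record.append(shape)

def withheld_identifications() -> list[str]:
    """What the parallel memory knows that the primary record omits."""
    return [s for s in parallel_record if s in SUPPRESSED]
```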
Is this machine conscious? I don't know. We can't just rely on some argument from analogy. This computer doesn't have any of the same drives we do, and it will never communicate with us in any way. But it may be more conscious than a machine merely programmed to pass the Turing Test.
But consciousness still seems magical somehow, like something ephemeral that would be lost if we tried to give it to our mechanical descendants. There must be a key to this.
Saturday, 29 August 2009