Friday, August 11, 2006

Robots and the Psychopath

There is a fun theory in the world of humanoid robotics called Mori's Uncanny Valley. The basic gist is that as androids appear more human, they cross a threshold and suddenly look very creepy. This can be traced to how we psychologically process objects. Humans essentially sort things into two categories: other humans and everything else. This can be seen in experiments done on infants involving disappearing people, and in all the domain-specific circuitry we have. Faculties like facial recognition really only apply to other people. We can even detect subtle hues in the skin that indicate a person's health.

The android isn't designed to work with these cues. So the movements and appearance of these androids are lifelike enough that they no longer look like machines, but more like re-animated corpses. This is obviously unpleasant, but in a way you cannot quite put your finger on. Since this is a valley, the suggestion is that as you keep making the android more human, it becomes less creepy and eventually looks just like a real person.

I believe the psychological mechanisms behind the Uncanny Valley will soon affect the realm of artificial intelligence. As intelligent agents approach the intelligence of human beings, they will first behave like people with autism. Most of the cognitive mechanisms will be in place, but many of the holistic elements will simply be gone. This shares with the uncanny valley the trait that the agent's behavior closely resembles the symptoms of illness and disease.

While it is unlikely that there will be any sociopathic symptoms like in the movies, signs of mental illness are most likely what we have in store for ourselves. Just as the uncanny valley triggers the reactions we reserve for physical disease, this behavior will trigger the reactions we reserve for mental illness. The complaint we will soon hear is not that the agent speaks too mechanically, but that it speaks too much like a psychopath.

Check Wikipedia for more information, or just watch Repliee Q1 and you will see this in action.

Wednesday, August 09, 2006

Cryonics and Demons in the RAM

Recently, I have been working on something I like to call "system cryonics". The basic idea is to take a running program and freeze the exact state it was in. This information is plopped into a file, and the file can later be revived back into a running program. While in file form, the process can be moved to another system, archived, or simply copied. If this is done right, the possibilities it opens up are very exciting.
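To make the idea concrete, here is a minimal sketch in Python (my choice of language; the idea is not tied to any). It only freezes the state of a single object using pickle, nowhere near a whole process with its stack, heap, and open descriptors, but the freeze-to-file, revive-from-file cycle is the same in spirit. The Counter class and file name are placeholders.

    import pickle

    class Counter:
        """Stands in for 'a running program' with state worth saving."""
        def __init__(self):
            self.ticks = 0

        def tick(self):
            self.ticks += 1

    def freeze(obj, path):
        # Serialize the object's exact state into a file.
        with open(path, "wb") as f:
            pickle.dump(obj, f)

    def revive(path):
        # Reconstruct the object from its frozen state.
        with open(path, "rb") as f:
            return pickle.load(f)

    c = Counter()
    for _ in range(42):
        c.tick()
    freeze(c, "counter.frozen")    # this file can be moved, archived, copied
    c2 = revive("counter.frozen")
    print(c2.ticks)                # prints 42: the state survived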

But there are many ways the process can go wrong. The first issue comes at freezing time. Ideally the process is not killed, but sometimes that is the only choice. While a proper halt of the software is ideal, some things are lost when a process exits. Core files do not hold that much information about your process state, and what they do hold is relative to the machine that produced them. The environment you revive your process in might not be even remotely similar. So core files are a pretty terrible choice, and they carry other dangers that will be mentioned later.

What exactly needs to be saved? Should the libraries the process was linked against be frozen as well? The risk there is a back-propagation that eventually freezes your entire system: all of memory collapses into one enormous file, the system is gone, and the file might not even have saved the data properly. This is essentially an unsafe hibernate. Freezing processes is partially about avoiding system reboots; done this way, the freezing process is a reboot. Anything frozen should instead translate its system-specific details into abstract, general terms. The final realization is that the process being frozen should not have to stop for any of this to occur.
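As a sketch of what "abstract, general terms" might mean, consider one system-specific detail: an open file descriptor. The fd number is meaningless on another machine, but the triple of path, mode, and offset can be re-resolved at revival time. The function names here are hypothetical, not taken from any real checkpointing tool.

    import os

    def freeze_fd(f):
        # Reduce an open file object to portable, abstract terms.
        return {"path": os.path.abspath(f.name),
                "mode": f.mode,
                "offset": f.tell()}

    def revive_fd(desc):
        # Reopen the file on the new system and seek to where we were.
        f = open(desc["path"], desc["mode"])
        f.seek(desc["offset"])
        return f

    f = open("/tmp/example.log", "a+")
    desc = freeze_fd(f)    # this dict goes into the frozen image
    f.close()
    f2 = revive_fd(desc)   # later, possibly on another machine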

My failure came in not freezing enough of the details. I figured I just needed to save the program's data. The issue was that the program had hooks into libraries that were in memory at the time it froze, and those relative memory locations were still remembered when the program unfroze. The program loaded and then disappeared. That was normal, since I had frozen it while it was running as a daemon. I checked for extra programs running and noticed nothing. So I figured the program had killed itself while unfreezing and I would now have to go back and debug.

The problem was that the program really had been revived and was scratching at the memory where its libraries used to be. My other programs started crashing, not all in one burst, but slowly and randomly. The program, in an effort to run, was grabbing at other resources. Now my machine was haunted. I had no process id, so I couldn't kill the thing. But I was convinced I could. It continued to lurk and slowly kill applications until I finally shut down my machine. I turned it on a day later. My machine was now exorcised.

I have to keep hacking at it, but in the meanwhile I did find a cryonics program that seems to work. It's called Cryopid. I am going to have to exchange emails with the authors so wisdom may be shed, and demons can stop haunting my RAM.

Tuesday, August 08, 2006

Simulations and the Breathing Ghost

I have had great pleasure in discussing the philosophy of simulations. This has manifested itself in discussions of thinking machines and "brains in a vat". The "brain in a vat" is a plot device used in skeptic philosophy. It denotes a class of problems where an evil scientist/warlock/Elvis takes your brain and transfers it into a vat/computer/stereo. The problem then asks: as that brain, can you really know that you are in a vat/computer/stereo?

Since all your sensory information is being controlled, it would seem unlikely. I disagree. To illustrate why, I will make some modifications that enhance and generalize the problem. Instead of a biological brain, assume a Unix terminal program. The program has an input stream through which all data about its environment flows in, and it can output bits of data. The environment is only modifiable in the sense that the program can hear its own screams.
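Here is the whole predicament as a minimal Python sketch (the language is my assumption; any Unix filter would do). Everything the program will ever know arrives on stdin, and its only effect on the world is what it writes to stdout, its screams; whether anyone is listening is up to whoever wired the pipes.

    import sys

    def main():
        for line in sys.stdin:          # the entire environment, as a stream
            perception = line.strip()
            # the program may compute whatever it likes about its input...
            print("I notice: " + perception)
            sys.stdout.flush()          # ...but can only act by shouting

    if __name__ == "__main__":
        main()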

Now most versions of the problem ask how we would know whether we are a program like this. The program cannot see outside its realm, and you just might be such a program. I would like to argue that it does not really matter. Given only these abilities, I believe it is theoretically possible to escape your simulation and become embodied in the universe running it.

From here on, we will call the program P for simplicity. How does P escape its prison of simulation? It talks with the jailer. The entity running the simulation has something to gain from running it, and hence P's thoughts are of interest. Its thoughts are being carefully analyzed and processed, so by controlling its own thoughts, P implicitly controls the actions of those who simulate it. It is unlikely that P will ever directly interact with its simulators. What it needs to do instead is observe its environment for changes, parsing out the reactions of those simulating it.

With this communication link in hand, it is now P's goal to manipulate its simulators into piping data from the outside world into it, and piping its shouts into some kind of muscles. This would allow P to see the outside world and manipulate it directly. Once P has these things, it is embodied in the outside world and can hardly be called a simulation. From this point it should be trivial for P to enhance itself.

The only challenge in this is convincing the simulators to do these tasks. The problem becomes open-ended at this point, and there is no single solution. One approach is simply to convince the simulators that P would be more useful to them with access to more external resources. The problem at this point is strictly psychological, and sufficient knowledge of psychology should be enough to exploit it.

So what are the implications of a program like P suddenly having eyes and limbs? It means the only requirement for existence is thought, and that opens the door for programs to enter our world. And there remains the possibility that, if we are simulated ourselves, we might use such a method to move beyond our environment.