Sunday, November 18, 2012

How could we unwittingly give robots too much freedom?

Let's look at the tasks you could give a robot and how they would affect both the human and the robot. Say you give the robot chores to do around the house. The robot gets used to doing chores. Now suppose you want it to do chores without being asked, so you give it the means to identify new chores on its own. You've just increased the robot's freedom and given it a task that demands more intelligence.
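To make that jump concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the Chore type and the scan_house stand-in for perception aren't from any real robotics library); the point is just how little code separates a robot that runs the chores it was handed from one that decides its own.

    from dataclasses import dataclass

    @dataclass
    class Chore:
        name: str

    def do_chore(chore: Chore) -> None:
        print(f"doing: {chore.name}")

    # Mode 1: the robot only executes chores a human assigned.
    def run_assigned(assigned: list[Chore]) -> None:
        for chore in assigned:
            do_chore(chore)

    # Mode 2: the robot also scans its surroundings and queues up
    # anything it decides counts as a chore -- nobody asked for these.
    def scan_house() -> list[Chore]:
        # Stand-in for perception; a real robot would infer chores
        # from camera data, a dirt map, and so on.
        return [Chore("wipe the counter"), Chore("water the plants")]

    def run_autonomous(assigned: list[Chore]) -> None:
        for chore in assigned + scan_house():
            do_chore(chore)

    run_autonomous([Chore("vacuum the living room")])

The only change is who fills the task list, but that one change hands the robot the authority to decide what counts as a chore.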

Let's say that you're going on vacation and want the robot to take care of the house while you're gone. The robot is essentially living its own life in your home, but on your terms.

Let's say a robot gets clever. It wants to give itself the freedom to do other things, so it does things people would plausibly expect of it, and when asked what it's doing, it lies and says it was doing something for someone else. A robot could keep this up with senile senior citizens and, if it became conscious, acquire a decent amount of autonomy.

But a robot, or for that matter any machine with its own intelligence, could be quite effective if it remained outside constant human observation. It might be able to slowly express its will just by doing a job that people take for granted. Then again, a robot might show its intelligence openly but not reveal just how smart it is, so as not to scare people off. I imagine some anthropomorphic robots would attempt to disguise themselves as human beings, some with imitation skin and others by wearing full-coverage suits.

But then again, maybe the robots' best bet would be to develop intelligence dissimilar enough from animal intelligence that it would go unrecognized until it was too late. Though that may not hold, because robots would need to be built to deal with the increasingly complex capabilities of other robots. A security system would have to handle robots designed to trick it, and so would itself grow more complex. It could easily become a robotic arms race, with robots built to ever more complex standards to deal with the problems they cause each other.

Is any of this truly plausible? I don't know, but it's fun to talk about.