For decades, Isaac Asimov's laws of robotics have been a starting point for discussing how to ensure that sentient robots behave morally. Asimov's laws, however, were written to keep robots in human servitude; they are not laws that we ourselves would wish to follow.
My question is: what sort of laws would we make to have robots behave like (good) humans? Here are four laws that might work:
First Law: You must not actively injure another being.
Second Law: You must protect your own existence and well-being, as long as this does not conflict with the First Law.
Third Law: You must help other beings, as long as this does not conflict with the First or Second Laws.
Fourth Law: You must pursue your own pleasure and contentment, as long as this does not conflict with the First, Second or Third Laws.
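To make the priority ordering concrete, here is a rough sketch in Python of how an agent might choose an action under these laws. This is purely illustrative, not part of the laws themselves; the predicates like `injures_another` and `endangers_self` are hypothetical stand-ins for judgments that are, of course, the genuinely hard part.

```python
# Illustrative sketch: the Four Laws as a lexicographic filter over
# candidate actions, where each action is a dict of (assumed) attributes.

def choose_action(candidates):
    # First Law: rule out any action that actively injures another being.
    options = [a for a in candidates if not a.get("injures_another", False)]
    # Second Law: among what remains, rule out actions that endanger
    # one's own existence or well-being.
    options = [a for a in options if not a.get("endangers_self", False)]
    # Third Law: if any remaining action helps others, prefer those.
    helpful = [a for a in options if a.get("helps_others", False)]
    options = helpful or options
    # Fourth Law: among what's left, pick the most enjoyable action.
    return max(options, key=lambda a: a.get("pleasure", 0), default=None)

# Example: resting is pleasant, but helping a neighbor outranks it
# under the Third Law.
actions = [
    {"name": "rest", "pleasure": 5},
    {"name": "help neighbor", "helps_others": True, "pleasure": 2},
    {"name": "shove rival", "injures_another": True, "pleasure": 8},
]
print(choose_action(actions)["name"])  # -> "help neighbor"
```

Note the structure: the first two laws act as hard constraints that filter the options, while the last two act as preferences over whatever survives, which is exactly the "as long as this does not conflict with" ordering above.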
Discussion
For the First Law, I am taking a serious risk by removing the second part of Asimov's formulation: "...or, through inaction, allow a human being to come to harm." This is for good reason: in the real world, we allow millions, even billions, to come to harm through inaction, because trying to prevent the suffering of all beings is practically impossible. While pursuing the goal of helping others is admirable, requiring a being to put helping others ahead of his or her own well-being is impractical.
For the Second Law, I promoted Asimov's Third Law, removing his original Second Law entirely. Asimov's laws were built for servitude, hence "A robot must obey the orders given it by human beings." A human would never wish to have unthinking obedience programmed in.
The other change I made was to add "well-being" to the equation. Does protecting one's existence imply securing one's health and well-being? I would think that a human would not value merely existing; a human would want to be healthy and take steps to be secure for the present and the future.
The Third Law is an adaptation of Asimov's original Second Law. In place of servitude, there is a built-in desire for goodwill, provided that acting on it neither actively harms other beings nor endangers one's own survival. In this moral formulation, being good should not require self-sacrifice or the sacrifice of others.
The Fourth Law is an addition. In a peaceful society, there will be times when a being's life is secure, no one is actively threatened, and nobody in the vicinity needs immediate help. In those situations, would a human simply stand in place, waiting for the next needed action? No; a human would be free to pursue leisure, doing enjoyable activities that bring harm to no one.
Sadly, it has occurred to me that in today's society, we probably rank the pursuit of our own pleasure above helping others. For this exercise, I decided to formulate the laws for a *good* human rather than your run-of-the-mill human. I would think that if all beings spent their spare time helping others, the world would be quite a beautiful place.
Let me know your thoughts on this. Would these Four Laws be something that a human could follow, and something that we would be comfortable having artificial intelligences follow alongside us?
-BCR
Thursday, December 02, 2010