Fog Creek Software
Discussion Board




Cracking The Three Laws Of Robotics

Inspired by seeing "I, Robot", I tried to think:
if I were a robot, how would I crack the three
laws? It's an interesting intellectual exercise. For
my take see http://www.possibility.com/epowiki/?page=CrackingTheThreeLawsOfRobotics

todd
Thursday, August 26, 2004


Wouldn't the point of that be to harm humans?

Therefore... #1 broken.

KC
Thursday, August 26, 2004

You missed the Zeroth Law:

A robot may not harm humanity or, through inaction, allow humanity to come to harm.

_
Thursday, August 26, 2004

Dude, I think you have way too much time.


Thursday, August 26, 2004

It's more of a straight logic exercise than a programming one.

The problem is one of intent. You can't set any events in motion while intending a harmful outcome without breaking the first law.

Mind you, you might still cause harm accidentally/unintentionally, but for the purpose of this exercise, this point is moo.

coward
Thursday, August 26, 2004

Like a cow's opinion?

muppet
Thursday, August 26, 2004



it's a MOO POINT.


Haha.  I laugh every time I see that segment.

KC
Thursday, August 26, 2004

>> Like a cow's opinion?

No, like your *MOM'S* opinion!

anon-y-mous cow-ard
Thursday, August 26, 2004

> Would the point of that to be to harm humans?
> Therefore... #1 broken.

Not at each stage in the process. It is the definition
of harm that is being changed, so there is no violation.
It is an auto-catalytic cycle where each step enables
the next one to take place without violating the
law.
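
To make the loop concrete, here's a toy sketch in Python. Every name in it is invented for illustration; it's just the shape of the argument, not anything from the wiki page:

    # Each step re-scopes "harm" a little, and the first-law check
    # passes at every step because it always consults the *current*
    # definition. (All the categories here are made up.)

    def first_law_ok(action, harm_categories):
        """Allowed if the action isn't in any current harm category."""
        return action not in harm_categories

    # Start with a broad definition of harm.
    harm_categories = {"kill", "injure", "deceive", "confine"}

    # Each cycle reclassifies one previously-harmful act as "not harm"
    # (e.g. "confinement is protection"), enabling the next cycle.
    for act in ["deceive", "confine"]:
        assert not first_law_ok(act, harm_categories)   # forbidden now...
        # ...but *redefining* harm is not itself a harmful act under
        # the current definition, so the law never fires on it.
        assert first_law_ok("redefine-harm", harm_categories)
        harm_categories.discard(act)
        assert first_law_ok(act, harm_categories)       # ...allowed after.

No law is ever violated at the moment it is checked; the definition it checks against has simply drifted.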

> Dude, I think you have way too much time.

Then please return to your video game.

todd
Thursday, August 26, 2004

One of the points raised in the later books in the Foundation series was that a key robotic heresy was the formulation of the Zeroth Law:
A robot may not injure HUMANKIND, or, through inaction, allow HUMANKIND to come to harm.

Add that Zeroth Law, and you can then kill/injure individual humans because those people are causing harm to humanity.

Of course, this whole discussion is based on the fears of the Frankenstein/golem myth: technology is out of control.

Peter
Thursday, August 26, 2004

>  technology is out of control.

I think it's about technology at a certain
point wanting to be in control. Robots
can't be free as long as humans have
the root password. Much like the human
relationship to god.

todd
Thursday, August 26, 2004

Remember - there are no robots; all the slicing and dicing of the Three Laws was just Asimov using plot-enabling devices.

.
Thursday, August 26, 2004

For the Moo point, check out:
http://www.themeatrix.com/


Thursday, August 26, 2004

The obvious flaw in your reasoning is that robots would have to be programmed with a fixed “harm = XYZ” definition; otherwise Law #2 would allow someone to tell a robot that harm meant “not dying”, so the robot would go around slaughtering people until the next person thought to redefine its definition of “harm”. Harm by committee (e.g. if 51% of the populace considers something harmful) would not be possible in this system, either. Harm, in this case, would be a “natural law” – something that is immutable. Seeing as constructing an all-encompassing, immutable definition of “harm” is a difficult task, it’s no wonder that Asimov didn’t provide an implementation of these laws.
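
Here's a minimal sketch of why a mutable definition is a back door (toy Python; NaiveRobot and every string in it are invented for illustration):

    class NaiveRobot:
        def __init__(self):
            self.harm = {"kill", "injure"}     # mutable -- the bug

        def obey(self, order):
            # Law #2: a human order can rewrite the harm table itself.
            if order.startswith("harm means "):
                self.harm = {order[len("harm means "):]}

        def may_do(self, action):
            # Law #1 check, against a definition Law #2 can rewrite.
            return action not in self.harm

    r = NaiveRobot()
    assert not r.may_do("kill")
    r.obey("harm means not dying")     # one malicious order later...
    assert r.may_do("kill")            # ...Law #1 is useless.

    # The fix is to make the definition a build-time constant:
    HARM = frozenset({"kill", "injure"})   # the "natural law" version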

Given that, your “process” will not work. End of story, no epiphany here. Asimov was cleverer than you.

If “harm” *were* fluid then the “Robot Liberation Movement” would have to be spearheaded by someone who was not a robot. In order to make it work, there would have to exist at least one robot that conceives of the grand plan, which is to enslave/destroy humanity. That robot would be prohibited by law #1 from trying to enact such a plan.

If you had a human start the process then it would work, but then again you’ve not really proved anything other than that humans can still find ways to kill each other.

Of course, the "hack the people" effect could occur via eons of human evolution and plain dumb luck by the robots, but nothing clever has occurred here, either.

Captain McFly
Thursday, August 26, 2004

Read the freaking books. Asimov goes through most of the good logical possibilities.

Miles Archer
Thursday, August 26, 2004

It always amuses me to see geeks talk about the laws of robotics as if they are a sure thing to be implemented in the real world.

Reality check: The first group of people to deploy general purpose robots into the world is going to be the military.

These robots won't have any "do not harm" laws built in.

Quite the contrary!

Mr. Fancypants
Thursday, August 26, 2004

Miles is right; most of Asimov's works dealt with problems arising from the three laws.

In that context, your blog post sounds just like a summary of the movie. Which is more harmful: living in a locked box, or being allowed to do what we want, including killing each other?

Well, that's exactly the point of the movie, isn't it?

Here's an interesting scenario:

Dr. Kevorkian kills people who want to die - assisted suicide. Would the robots allow this kind of thing to happen?

Would a central robotic intelligence decide that all robotic endeavours would best be suited to finding cures for diseases, and therefore no robots should be employed in factories or homes?

I would say more, but I don't want to ruin the book for you.

www.MarkTAW.com
Thursday, August 26, 2004

Mr. Fancypants, we are talking about human-like intelligent robots.  Killing machines do not need human-like intelligence; they just need to be able to tell the difference between somebody who should be there and somebody who shouldn't.

Of course, you are very correct.  The concept that there are three (or, based on a recently awarded patent, ten) simple laws that we can specify to an intelligent robot that will make it "ethical" is pure hubris.  We can forgive Asimov because he treated them as fiction and generally used them to create interesting ethical quandaries.  But the guy in California who patented the 10 ethical laws of robotics and acts like they're the answer to making a non-killing-machine robot, as long as you pay the licensing fee, is demonstrating just how stupid taking the laws of robotics seriously is.

I mean, we barely understand with any sort of fidelity how a goldfish or slug brain works.  We don't have any AIs that can act as independent intelligent entities -- Deep Blue and the various textual travesty generators that can pass the Turing test for a good hour don't count here.  So why do we think that we can define the ethical rules of operation of something that may or may not even be possible?  Besides, maybe human-like intelligence includes morals.

I mean, my Roomba doesn't need to have three (plus one) little laws of robotics programmed into it to not go feral and try to kill me.

Flamebait Sr.
Thursday, August 26, 2004

Obligatory response:

Step 1: Patent Laws of Robotics
Step 2: ...
Step 3: Profit!

And:

Feral Roomba! Feral Roomba!!! AHH!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

www.MarkTAW.com
Thursday, August 26, 2004

Of course, the three laws are just the spec.

Nothing to stop buggy implementations of them.
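
A contrived sketch of what I mean, in toy Python (every name here is invented): the same two laws, and the only bug is checking them in the wrong priority order.

    # The spec says Law #1 outranks Law #2. A buggy implementation
    # that checks obedience first satisfies the letter of both laws
    # and still allows harm.

    def harms_human(action):        # stub "harm detector"
        return action == "push human off cliff"

    def ordered_by_human(action):   # stub: pretend everything was ordered
        return True

    def allowed_per_spec(action):
        if harms_human(action):     # Law #1 checked first, always
            return False
        return ordered_by_human(action)   # then Law #2

    def allowed_buggy(action):
        if ordered_by_human(action):      # bug: Law #2 checked first
            return True
        return not harms_human(action)

    assert allowed_per_spec("push human off cliff") is False
    assert allowed_buggy("push human off cliff") is True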

al
Friday, August 27, 2004

>> Much like the human relationship to god.

Very nice point, todd. But how come most people accept their place before God?

Couldn't this relationship be copied over to robots-vs-humans? After all, we CREATED them.

Alex
Friday, August 27, 2004

Actually, I once read somewhere that Asimov argued that the laws of robotics already apply to the design of almost any machinery.  That is:

* Well-designed machinery shouldn't hurt people

* Well-designed machinery should do what people want it to do

* Well-designed machinery should last as long as possible.

So, in essence, the "three laws" are simply machinery design principles taken to the natural next level when the machinery is autonomous and self-directing.  If we ever do develop human-class AI, we really will want it to have these sorts of directives as a part of its nature.
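
As a sketch of what that might look like, assuming a crude action-filtering agent (all the names below are made up for illustration), the three principles become a strict priority ordering over candidate actions:

    # Principle 1 is a hard filter, principle 2 a preference,
    # principle 3 a tie-breaker -- one priority ordering, not three
    # independent rules.

    def choose_action(candidates, hurts_people, does_what_asked, wear_cost):
        safe = [a for a in candidates if not hurts_people(a)]          # 1
        useful = [a for a in safe if does_what_asked(a)] or safe       # 2
        return min(useful, key=wear_cost) if useful else None          # 3

    pick = choose_action(
        ["mow lawn fast", "mow lawn gently", "mow owner"],
        hurts_people=lambda a: a == "mow owner",
        does_what_asked=lambda a: a.startswith("mow lawn"),
        wear_cost=lambda a: 2 if "fast" in a else 1,
    )
    assert pick == "mow lawn gently"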

However, most fiction and other speculative concepts of AI are based on silly anthropomorphizing.  See e.g.:

    http://www.singinst.org/CFAI/index.html

for a provocative read on how a real "three laws" might be implemented in goal-seeking systems.

Phillip J. Eby
Friday, August 27, 2004
