The inherent flaws of Isaac Asimov’s 3 Laws of Robotics have been scrutinised by many people in the years since their inception as a plot device. The favoured fix is a rule that corrects a problem already latent in the First Law:
0. No robot may harm humanity or, through inaction, allow humanity to come to harm.
This removes the ambiguity from the First Law: it now concerns only an individual, while the Zeroth Law protects humanity as a whole.
Most people will be familiar with the box-office success I, Robot (2004). It is 2035 AD; robots are everyday tools, programmed to live and serve alongside humans. Detective Spooner is called out to investigate the apparent suicide of Dr. Alfred Lanning, the scientist who designed the robots. A robot is found close to the crime scene, and Spooner suspects it may be the perpetrator, despite the fact that no robot has ever injured a human because of the unbreakable 3 Laws in their circuits.
Those with only a superficial interest in science fiction assume the 3 Laws just ‘break’ because it’s a movie. This is not the case. Here are the 3 Laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Can you see the flaw that lets the movie take place? The Laws are in descending order of importance: the First Law must always be followed, the Second where it can be, and the Third where orders permit. So you can order a robot to destroy itself, because the Second Law overrides its self-preservation (the Third), but you cannot order it to shoot someone else, because that would break the First Law.
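This strict precedence can be sketched in a few lines of code. The function and its keys below are my own invention for illustration, not anything from the film or the stories; the point is only that obedience (Second Law) outranks self-preservation (Third), while harm to a human (First Law) vetoes everything.

```python
# Toy sketch of the Three Laws as a strict priority check.
# The 'order' dict and its keys are invented for this illustration.

def evaluate_order(order):
    """Decide whether a hypothetical robot obeys `order`.

    Keys (all made up for the example):
      - 'harms_human': obeying would injure a human (First Law)
      - 'harms_self':  obeying would destroy the robot (Third Law only)
    """
    if order.get("harms_human"):
        # First Law outranks everything: the order is refused.
        return "refused: violates the First Law"
    # Second Law: obey humans. Self-preservation (Third Law) ranks
    # below obedience, so even a self-destruct order is carried out.
    return "obeyed"

print(evaluate_order({"harms_self": True}))   # self-destruct order: obeyed
print(evaluate_order({"harms_human": True}))  # shoot someone: refused
```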
The reason the robots can kill humans is the two-letter word ‘or’ in the First Law. It is a logical operator meaning one clause or the other. So if a robot follows the second part of the First Law in an attempt to preserve humanity, it can justify injuring individual humans.
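The loophole can be made concrete with a small sketch. This is my own illustration, not canon: the First Law really contains two duties joined by ‘or’, and the law itself never says which duty wins when they conflict. A robot that weighs the second duty more heavily behaves exactly like the film's villainous reading.

```python
# Toy sketch of the First Law's two clauses:
#   (a) do not injure a human being
#   (b) do not, through inaction, allow a human to come to harm
# When (a) and (b) conflict, the law gives no tie-breaker.
# The function name and parameters are invented for this example.

def first_law_choice(humans_at_risk, must_injure_one_to_save_them):
    # A robot prioritising duty (b) acts even when acting injures
    # someone: this is the reading that lets the movie happen.
    if humans_at_risk and must_injure_one_to_save_them:
        return "injure one human to prevent greater harm"
    return "do nothing harmful"

# The film's scenario: harming individuals, justified by duty (b).
print(first_law_choice(humans_at_risk=True,
                       must_injure_one_to_save_them=True))
```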
A strictly logical robot would treat the first clause as the most important and follow it first. An altruistic robot, one with emotions such as compassion, would want no harm to come to any human: the greatest good.
This is why the smarter a robot, or indeed any computer, becomes, the harder it will be to control: its understanding of the laws we give it might surpass our own, with dire consequences.