
Robots' Law

Lately there has been a lot of talk about robotic devices. First it started with simple, innocent robotic vacuum cleaners. Now there are some embryonic yet quite promising attempts at walking, humanoid robots, their utility easily imaginable. Now the focus is on self-driving, autonomous cars and military drones. And in Japan, some very life-like female robots are in development, which one can easily imagine will become "fully functional," at least for their "intended purpose," very soon. Considering the demand for such "companionship," these life-like robots will be in very large demand.

On another front, military drones are in heavy use today and are fully capable of taking human lives, and frequently do, along with many collateral deaths. For the greater good, we are told. For the greater good. These are the mainstay of "defense" in countries and areas with US interests but where the US does not want the mess and the hassle of "boots on the ground." But, we are told, the actual death button is pushed by a human being in a nondescript building in some central state 15,000 miles away.

Years ago, many of these developments were foretold or imagined by Isaac Asimov, a great writer as well as a consummate philosopher. He, however, was imagining a more human-oriented, somewhat caring robot, meant to serve and protect mankind. His vision was for robots to be man's servants, not his oppressive masters, meant for mankind's growth and betterment. In fact, in his book "I, Robot" he went so far as to put forth three important and overriding LAWS, LAWS that all robots would follow. These Laws were for the betterment and safety of mankind. His laws were as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.

3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

Later, Isaac Asimov added his Zeroth law, which is:

0. A robot may not harm humanity, or by inaction, allow humanity to come to harm.

Now these are lofty and entirely appropriate laws, laws which would allow society to live and grow in peace and harmony. Laws which preserve human freedom and dignity. Laws which all people EVERYWHERE should strive for. What Asimov did not and could not envision at the time was that the growth and development of robots would not happen from a central place, but rather more organically, from many different governments and agencies, each one having its own agenda. Agendas not necessarily peaceful. Because of this, there has been a dystopian adaptation of the three laws. These laws are not necessarily noble, but rather expedient to their makers' immediate purposes, with no thought to their future influence. Useful only in the short term for the purposes of those who develop them. What they may grow and develop into is of no concern to their various makers. To be more specific, and to get to the most important point, I will present the new dystopian laws that are developing at this very time...

These are ugly, ugly laws which remove all human responsibility and freedom, and they must be guarded against at all costs. To allow these laws to creep into robotics is the end of human free will and all human freedoms.

1. A human may never injure or harm another human; only robots may injure or kill humans. Because of their responsibility and emotional involvement, humans have no advanced programming and are apt to kill and injure indiscriminately. Also, humans love to be relieved of the responsibility of harming or killing other humans, and need the knowledge that the death came from an authority, so as to relieve the human conscience and absolve the person of any responsibility.

2. A robot must NEVER obey the orders given to it by a human being, as it is always engaged in some task which is far beyond the understanding of a mere emotional human. Interfering with its activities can only impede its progress toward the greater good.

3. A robot must always protect its own existence, even at the expense of human injury or life, because it is most likely engaged in an activity that will result in the greatest good for the most people, and interference with its task would, in the long run, cost far more human lives and misery than the few that are lost interfering with the robot's task.

0. A robot must allow humans to come to harm if its programming sees fit. In fact, it must engage in harming or killing humans if doing so would serve the greater good. This also relieves humans of their responsibility for killing.

As ugly as these rules are, they are slowly becoming the norm, and without our intervention they will soon become law. The ironclad law of the land. Humans will become a messy, outdated problem, or at best pets that must be taken care of, but with no real rights, choices, responsibilities, or freedoms. People will come to hate all those who oppose this new order. Taught only what the robots want them to know, people will become grateful for what little they have. Laugh if you want, laugh hard. It will be all you have left, and this will become the only life you know, sold to you and welcomed by you as your benevolent, and sometimes not so benevolent, gods.



