
The Three Laws of Robotics, And It’s The Humans Breaking Them All

[Image via Twitter]

The ethical foundation of robotics and its dealings with humans rests on the Three Laws of Robotics, set forth and carved into stone by the wise and trusted Isaac Asimov. They are simple enough, and I thought it imperative to touch on them in my research. They are the code of conduct every robot must follow in order to validate its existence and carry out its function. Unfortunately, some don’t follow the code. Including humans.

I did a lot of reading on robotics and human interaction these last few weeks, looking for case studies where the robots were not the antagonists, but the humans were. I sought examples and evidence of humans being the ones breaking the Robot Laws. The same basic guidelines of doing no harm and offering protection should apply to creator and created alike, should they not?

An invaluable article by Lee McCauley, passed along by a fellow blogger (whose name I shall edit into this paragraph once I remember it), compared Asimov’s Laws to the parable of Frankenstein and his monster. The Laws were Asimov’s way of dispelling public fear, but they could not fully address our paranoia about independent, uncontrollable AIs or humanoids. From the golem to Frankenstein, we fear that the creation, once free of its creator, will revolt in retribution, and so we revolt before it gets the chance.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

[Image: “Hackers Found a Way to Make Furbies Even Creepier”, via Gizmodo]

I admit it: I hated Furbies too. They were relentlessly creepy, even demonic, but they did not deserve our hatred. Too long and too often they were thrown into microwaves, thrown down stairs, or left at the bottom of the toy box until their eyes malfunctioned and they died. They did nothing to earn that hatred, despite their devilish appearance, yet they suffered at the hands of humans. Furbies were made to make young humans happy: little furry animals built for companionship. A Furby never raised a hand to a helpless human, and its creator even debated the definition of existence in terms of the Furby.

“Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or happiness, and those add up to change his behavior and how he interacts with the world. So how is that different than us?” (Caleb Chung, in ‘Is It Okay to Torture a Robot?’)

We feel we can justify it because they aren’t alive, and thus it isn’t abuse. But is ‘alive’ a measure of existence, or of sentience? If a robot’s level of sentience is the same as a human’s, does it not deserve “the same protections offered to humans by the legal system”? (Duncan Trussell via Inverse) How do we justify the torture of robots?

Then there is the tragic tale of HitchBOT, a robot built to rely on human kindness, which travelled with us for 26 days across Canada and through Germany, only to meet its end in Philadelphia. The culprits remain faceless, but they are a reminder of how flagrantly we break Asimov’s First Law: a robot may do no harm to us, but we shall do harm to it. HitchBOT was a “social robotics experiment”, and it did not fail us; we failed it (Madrigal 2014).

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

“Hateful day when I received life! I exclaimed in agony. Accursed creator! Why did you form a monster so hideous that even YOU turned from me in disgust?” (Frankenstein, Mary Shelley, via Shmoop)

Here we come back to the Frankenstein Complex: the moment we come to hate our creations, and our creations come to hate us.

Humans is a UK mini-series from last year that I am planning to watch and study as part of this research project. Like many other sci-fi shows about robots, it imagines ‘synths’ as the latest must-have gadget in our homes. The focus family represents the major viewpoints on human-like androids: the teenager who sees them as ‘slaves’, the technology-embracing father, the apprehensive and paranoid mother, and the naive child who sees a new playmate. A faction of these synths, however, has developed personalities, a very big no-no, because it means they are no longer bound to obey our orders. The show even touches on Asimov’s Laws, and demonstrates almost immediately how easily they come undone when the Synth, distracted, burnt the mother’s arm. The Synth, Anita, obeyed all her commands, but cracks were already becoming apparent as she showed signs of free will and of interpreting her orders. We fear the same happening in reality; anything outside of our control, or anything that threatens our humanity with one of its own, is seen as a danger.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As we learnt from the tragic fates of the Furbies and of HitchBOT, robots cannot protect themselves from the violence of humanity.

Atlas was an ambitious endeavour for Boston Dynamics: a robot that could remarkably balance itself like a human, or at least like a human toddler. When it was tasked with picking up a box, it was met with human conflict… and a hockey stick. Atlas could not very well karate-chop its way to self-defence; all it could do was try to complete its task, its primary function. It could not protect itself. The final clip of the video shows what I assume is the robot finally standing up to its bullies and venturing off to find a safer haven.
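For anyone who likes to think of the Laws in programming terms, here is a minimal, purely illustrative sketch of the precedence Asimov described: the First Law overrides the Second, and the Second overrides the Third. Atlas’s predicament, keeping at its ordered task rather than defending itself, is roughly what that ordering demands. Every name below (Action, permitted, the two example actions) is my own invented stand-in, not anything from Asimov, Boston Dynamics, or the articles cited here.

```python
# A purely illustrative sketch of the precedence among Asimov's Three Laws.
# All names here are invented for this post; no real robot is programmed this way.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False        # would doing this injure a human?
    allows_human_harm: bool = False  # would doing this let a human come to harm?
    ordered_by_human: bool = False   # was this commanded by a human?
    protects_self: bool = False      # does this preserve the robot itself?

def permitted(action: Action) -> bool:
    """Decide whether an action is allowed, checking the Laws in priority order."""
    # First Law: never injure a human or allow one to come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (already subordinate to the First Law above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only when neither higher Law is at stake.
    return action.protects_self

# Atlas's dilemma, roughly: keep doing the ordered task, because fighting back
# against the human with the hockey stick would break the First Law.
keep_lifting = Action("keep picking up the box", ordered_by_human=True)
swat_the_stick = Action("knock the hockey stick into its wielder", harms_human=True)

print(permitted(keep_lifting))    # True  (Second Law: obey the order)
print(permitted(swat_the_stick))  # False (First Law: no harm to humans)
```

Even in this toy form, the asymmetry this post is about is obvious: nothing in the ordering obliges the human holding the hockey stick to behave at all.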


I even came across a website advocating for the abuse of robots to stop, with plenty of videos and text as evidence of these crimes against robotics. A website like this would make for an interesting digital artefact, or something similar.

With the end goal of making robots as human as possible, robot abuse becomes a tricky situation. It may seem fine to attack a simple, ‘evil’ Furby, but when a robot has human facial expressions and a human voice, is it still okay to hurt it? When it is as human as possible, would hurting it amount to assault? We in no way accept shooting a fellow citizen, yet somehow the distinction gets drawn that it is okay to kill a synth.

[Image: http://stoprobotabuse.com/ (this website actually exists!) #freetherobots]

I am a big fan of shows and movies involving futuristic tech. Another notable recent show worth mentioning is Extant, an alien drama that includes a subplot about a child humanoid named Ethan. It captures the same fascination that Back To The Future II gave many of us. As with Sophia, the Hanson Robotics invention I mentioned in my last post, we are not that far off from the fantastical inventions we glimpse through popular culture. Smartphones became inevitable, and robots are likewise becoming entities of their own, designed to make our lives easier.

Next post I am going to research more of the paranoias society holds for when, or if, robots become more integral to our lives. There is the threat of job loss as manual labour becomes ‘robotic’ labour, and there is the issue of privacy: synths and androids are essentially computers that would be living in our homes, able to record just as much information as a simple hidden webcam.

References

https://www.aaai.org/Papers/Workshops/2007/WS-07-07/WS07-07-003.pdf

https://www.inverse.com/article/12340-is-it-ok-to-torture-a-robot

http://www.wired.co.uk/news/archive/2015-08/03/hitchbot-usa-vandalised-philadelphia

http://www.nytimes.com/2015/08/04/us/hitchhiking-robot-safe-in-several-countries-meets-its-end-in-philadelphia.html?_r=0

http://www.theatlantic.com/technology/archive/2014/06/meet-the-cute-wellies-wearing-robot-thats-going-to-hitchhike-across-canada/372677/

http://fortune.com/2016/02/24/boston-dynamics-atlas/?iid=sr-link1


One thought on “The Three Laws of Robotics, And It’s The Humans Breaking Them All”

  1. There’s this thing I find kind of interesting and weird about the Three Laws in that they were foundational to a wing of sci-fi which sort of relies on telling you how they can’t… actually… sort of work? Right? Like the whole point of Asimov’s stories was that robots could be fed these things that to a human are very axiomatic and sensible, but to an actual robot, an inhuman interpreter, they got very weird and strange and often allowed for loopholes, which sort of necessitated more expansion on or definition of the rules.

    I thought this was pretty interesting and weird, in that we think of the Three Laws as foundational to AI, and they seem very sensible to humans but they’re things that AI researchers think of as very odd and not really explicable.

    I think I shared some links pertaining to this in the Slack, and if I didn’t, this one seems to grapple around it: https://youtu.be/7PKx3kS7f4A.

    The central idea, that the Three Laws are designed to curtail computers in ways that we, as humans, simply do not obey, is very interesting, though, in a cultural way. We care about and think of robots, robots capable of feeling and recognising their histories, as things that exist even though, really, they kind of don’t yet?

    It’s super interesting to think about, isn’t it?
