
can we make robots ethical?

A major concern for the future of robots and AI is that our robots will turn out crappy because their creators, us fickle humans, are crappy. Like accidentally swearing in front of your one-year-old and having its first word be #$%&, we worry about passing on our least favourable qualities to our creations. The last thing anyone wants is a robot with anxiety or a god complex.

For robots to exist alongside us, they need to be safe for humans and not a threat to us, and we need to give them the qualities that won’t bring about the robopocalypse. We need to give them ethics.

“What happens, when these robots are forced into making ethical decisions….
…a robot left in an impossible double blind, how could it possibly equip an automated intelligence to cope with this type of complexity?”
“Can Robots Be Ethical” 
Waleed Aly and Scott Stephens, ABC Radio

Ethics is essentially dividing everything in the world into two categories: things that are right and things that are wrong. This seems fairly simple to translate into robots. Killing human = Bad, giving human coffee = Good. However, we also know ethics isn’t always that black and white. What we view as acceptable varies from person to person, and varies even more across cultures. Ethics involves a constant dialogue between ourselves and others as we are forced to make these categories. Do I save a family from a burning building I have just walked past, or continue with my mission to buy food? What would a robot do in this situation? Can it override its task once it is faced with an ethical decision? Should a driverless car be programmed to swerve and hit one person rather than two? Is that the ethical choice? (Newman, 2016)
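To make the "two categories" idea concrete, here is a deliberately naive sketch (my own illustration, not from any of the articles cited) of ethics as a lookup table, and of why it falls over the moment every available option is bad:

```python
# A naive "ethics as a lookup table" sketch (illustrative only).
# Every action is pre-labelled right or wrong -- the black-and-white
# framing described above.

RULES = {
    "kill human": "wrong",
    "give human coffee": "right",
    "save human from burning building": "right",
    "ignore human in danger": "wrong",
}

def judge(action: str) -> str:
    """Return the pre-programmed verdict, or admit the table has no answer."""
    return RULES.get(action, "no rule -- the robot has no idea")

# The easy cases work:
print(judge("give human coffee"))   # right
print(judge("kill human"))          # wrong

# But the driverless-car dilemma never fits the table: both options are bad,
# and labelling them both "wrong" still gives the car no way to pick one.
print(judge("swerve into one person"))   # no rule -- the robot has no idea
print(judge("stay on course, hit two"))  # no rule -- the robot has no idea
```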

An article published in Nature argued that logic is how we debate and decide on ethics, and so a logical program may be how we give robots ethics (Deng 2015). But logic rests on intelligence, so how much intelligence does a robot need for logic, and for ethics? UK roboticist Alan Winfield put a robot to the test of saving a human from falling off a cliff (real humans and real cliffs were not used in this experiment). The test subject saved its human every time with minimal logic, programmed with a stripped-down version of Asimov’s First Law: keep itself away from the cliff, and stop the other ‘human’ robots from going over it. When challenged to save two at the same time, it usually managed to save at least one, but it would sometimes go into a “dither” and save neither. The dither meant it needed more information and more decisions before it could act: if one human was a child and the other an adult, which do I save first? Even humans in this situation wouldn’t know what to pick.
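Winfield's real robots use an internal simulator to predict consequences, so the snippet below is not his code; it is just a toy sketch of why a minimal "save the human" rule can end up dithering when two humans are equally at risk:

```python
import random

# Toy sketch of the "dither" (illustrative only): with one human the choice
# is trivial, with two equally-at-risk humans the rule has no tiebreaker.

def choose_target(humans_at_risk):
    """Pick which human to move toward on this control cycle."""
    if not humans_at_risk:
        return None
    if len(humans_at_risk) == 1:
        return humans_at_risk[0]
    # No extra information (child vs adult, distance, etc.) to break the tie,
    # so the choice just flip-flops from cycle to cycle.
    return random.choice(humans_at_risk)

# Each control cycle the robot re-evaluates and may switch targets,
# burning time while both humans edge closer to the cliff.
target_history = [choose_target(["human_A", "human_B"]) for _ in range(6)]
print(target_history)  # e.g. ['human_B', 'human_A', 'human_A', 'human_B', ...]
# If it keeps switching, it reaches neither in time -- the "dither".
```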

Michael Fisher, a UK computer scientist, argues that an ethically bound system governing how robots function is paramount to reassuring a public that is “scared of robots when they are not sure what it is doing, or will do” in a given situation (Newman, 2016). However, there will always be a risk of harm to those around the robot, or around the driverless car that must swerve to avoid hitting someone and into the path of another vehicle. This kind of robot error is no different from human error when forced into a double bind.

Is having ethical robots the key to harmonious interactions between robots and humans? Instilling some degree of ethics in robots would mean giving them the autonomy to make these decisions on their own as situations arise. Would programmed ethics clash with Asimov’s Laws when a robot cannot save humans without endangering itself? What if the robot chooses wrong and does not act ethically? As Waleed Aly and Scott Stephens discussed, ethics are implied as rules, to be used in certain situations (2016).

How far would robots go in their logic of ethical choices to maximise happiness for all humans? Harvesting all the organs of one man to save five? Robots reaching the Singularity, the moment discussed in media and pop culture when they truly surpass us, is another worry. What then will it mean for ethics, without humans around to define it? People naturally follow social conventions, an element robots will always struggle to understand. The balance of ethics and morality, of self-preservation and acting selflessly, is perilous and susceptible to the human condition and how we act in our world (Caplan 2015). Is that balance something that can be translated into code, and could robots ever follow in our footsteps?
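To see how a crude "maximise happiness" rule reaches the organ-harvesting answer, here is a toy sketch (again my own illustration, not from any cited source) of a purely utilitarian chooser that only counts lives:

```python
# Toy utilitarian chooser (illustrative only): it picks whichever action
# leaves the most humans alive, and cares about nothing else.

def lives_saved(action):
    outcomes = {
        "do nothing": 1,                   # the healthy man lives, five patients die
        "harvest one man's organs": 5,     # five patients live, one man dies
    }
    return outcomes[action]

def choose(actions):
    """Maximise the single number we told it to care about."""
    return max(actions, key=lives_saved)

print(choose(["do nothing", "harvest one man's organs"]))
# -> "harvest one man's organs" -- the arithmetic is right, the ethics is not.
```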

[Image: The Descartes System, via Philosophy Now]

References & Further Viewing

Deng, B. (2015). “Machine ethics: The robot’s dilemma”, Nature. http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881

“Can Robots Be Ethical?”, Philosophy Now, Issue 110. https://philosophynow.org/issues/110/Can_Robots_Be_Ethical

Can we trust robots to make moral decisions?

“You say morals, I say ethics – what’s the difference?”, The Conversation. http://theconversation.com/you-say-morals-i-say-ethics-whats-the-difference-30913

Aly, W. & Stephens, S. (2016). “Can Robots Be Ethical?”, ABC Radio. https://radio.abc.net.au/programitem/pglxVLWkb6?play=true
