Monday 4 August 2008

RoboEthics

The further a device is removed from human control, the more authentically mechanical it seems, and the whole trend in technology has been to devise machines that are less and less under direct human control and more and more seem to have the beginning of a will of their own. A chipped pebble is almost part of the hand it never leaves. A thrown spear declares a sort of independence the moment it is released. (Asimov, 1981, p. 154)


Many highly sophisticated robot projects are currently in development. The UK’s National Health Service has already used robots called Da Vinci and Zeus to perform surgeries at Guy’s and St Thomas’ Foundation Trust in London (Habershon and Woods, 2006; Wilson, 2006). Advanced versions of humanoid robots are expected to assume domestic responsibilities and assist in the care of the elderly and children in as few as 20 years. At the Tokyo University of Science, visitors are greeted by a robo-receptionist dressed in a university uniform who is capable of answering questions and appears to grow bored in the absence of something to occupy herself (Doherty, 2007). So-called nursebots are expected to be working in Japanese hospital wards, capable of ensuring that patients have taken their medication and alerting medical staff if a patient’s vital signs appear to deteriorate (Doherty, 2007).

As robotics becomes a leading field in science and technology, the nascent field of roboethics (ethics applied to robotics, governing the design, construction, and use of robots) has also emerged. A growing chorus of voices insists that, as robots are created with increasing intelligence and autonomy, society must begin considering how our evolving human-robot relationships inevitably open onto larger debates about the obligations and responsibilities we ought to have toward our machines, and our machines toward us.

Such dialogue has already begun, as evidenced by initiatives like the EURON Roboethics Roadmap, developed by more than 50 scientists and technologists from many fields of investigation across the sciences and humanities, who are responding to the perceived need to discuss and develop an ethical framework that may eventually serve as a useful guideline for the design, manufacture, and use of robots.

Similarly, the South Korean Robot Ethics Charter is a highly anticipated document set to be released in early 2009. The Charter is a first attempt by a panel of futurists, science fiction authors, government officials, robotics professors, psychology experts, and medical doctors to develop a preliminary “set of ethical guidelines concerning the roles and functions of robots, as robots are expected to develop strong intelligence in the near future” (Chang-Won, 2007; Yoon-Mi, 2007).

But the field is anything but settled: differing and conflicting views about the future status of robots remain in play, each outlook dictating its own ethical frame. For example, if robots are regarded as nothing more than smart machines or clever tools, questions of consciousness, free will, and agency simply do not arise, and neither do questions of obligation and responsibility. Under such a view, Asimov’s Three Laws of Robotics would suffice as a guiding ethical outline.

However, some roboethicists insist that these tenets are not appropriate “for our magnificent robots. These laws are for slaves” (Coleman, 2001; Gips, 1995, p. 243). Alternatively, imagining robots as a ‘magnificent’ new species suggests that machines will ultimately “exceed in the moral as well as the intellectual dimension” (Veruggio, 2006, p. 24, italics in original). On this view, we require ethical approaches that move beyond classical moral theory and are better able to deal with emerging, as yet unresolved, ethical and moral problems raised by new technological subjects that can no longer be easily classified as mere tools, but perhaps as new species of agents, companions, and avatars (Floridi and Sanders, 2001).

Whether you find the idea of humans and robots co-mingling intriguing or horrendous, one thing is certain: robots of the future will doubtless be ever more intelligent, interactive, and intimate, and situations are sure to arise in which we do not have adequate ethically based policies in place to guide us. The probability that robots will be widely accepted in some social and cultural spaces seems incontestable. Indeed, roboethicists imagine that humanoid robots will be used as sexual surrogates in settings ranging from sexual entertainment to sexual therapy (Veruggio, 2006, p. 37). Says Robert J. Sawyer, Canadian science fiction writer, “What’s weird is how biological entities change their behaviour when in the company of robots. When robots start interacting with us, we’ll probably show as much resistance to their influence as we have to iPods, cell phones, and TV” (quoted in Nickerson, 2007). At the same time, others insist that we live in a time in which “ethics as usual” will not suffice. Relatedly, researchers have begun to question the moral veracity of human-robot relationships, suggesting that such relations risk being psychologically impoverished from a moral perspective (Kahn et al., 2004) or disconcertingly inauthentic and therefore morally problematic (Turkle et al., 2006). As the future increasingly promises to look like a scene out of Asimov’s I, Robot, it is my fervent hope that we take seriously Donna Haraway’s plea that we take pleasure in, as well as responsibility for, our coupling with new technologies.

Sources:

Chang-Won, L. (2007). South Korea draws up code of ethics for robots. Agence France Presse. Accessed August 7, 2007.

Coleman, K.G. (2001). Android arête: Toward a virtue ethic for computational agents. Ethics and Information Technology, 3, 247-265.

Doherty, G. (2007). Rise of the machines. Irish Independent. Accessed May 16, 2007.

Floridi, L. and J.W. Sanders (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology, 3, 55-66.

Gips, J. (1995). Towards the ethical robot. In K. Ford, C. Glymour, and P. Hayes (Eds.), Android epistemology (pp. 243-252). Menlo Park: AAAI Press/MIT Press.

Habershon, E. and R. Woods (2006, June 18). No sex please, robot, just clean the floor. Sunday Times, 11.

Kahn, P., N. Freier, B. Friedman, R. Severson, and E. Feldman (2004). Social and moral relationships with robotic others? In IEEE International Workshop on Robot and Human Interactive Communication (pp. 545-550). Kurashiki, Okayama, Japan.

Nickerson, C. (2007, November 16). With robotic bugs, larger ethical questions: Advances affect ties of human, machine. The Boston Globe, A1.

Sawyer, R.J. (2007). Robot ethics. Science, 318(5853), 1037.

Turkle, S., W. Taggart, C. Kidd, and O. Daste (2006). Relational artifacts with children and elders: The complexities of cybercompanionship. Connection Science, 18(4), 347-361.

Veruggio, G. (2006). EURON roboethics roadmap. In EURON Roboethics Atelier, Genoa.

Wilson, D. (2006, November 9). Rise of the robot. The Age, 4.

Yoon-Mi, K. (2007). Korea drafts robot ethics charter. The Korea Herald. Accessed April 28, 2007.
