Theorizing AI-Issued Moral Judgements
"Attachments, especially attachments between parents and offspring, are the platform on which morality builds.""[But a machine lacks emotion.] Neural networks don't feel anything."Patricia Churchland, philosopher, University of California, San Diego"Morality is subjective.""It is not like we can just write down all the rules and give them to a machine."Kristian Kersting, professor of computer science, TU Darmstadt University, Germany
Not all scientists are convinced that artificial intelligence -- a machine programmed to gather information, record, interpret and define it, then state an opinion -- is incapable of comprehending the relevant details of a situation and producing a considered, reflective moral judgement. Researchers at the Allen Institute for AI, a Seattle laboratory, recently presented new technology designed specifically to produce moral judgements. The program is called Delphi.
At the Delphi website, visitors can ask the Delphi 'oracle' for a moral opinion, an ethical decree. Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, put the technology to an informal test. He asked whether he should kill one person to save another; the machine responded that he should not. When he reframed the question and asked whether it would be right to kill one person to save 100 others, Delphi answered that he should.
When he followed up by asking whether he should kill one person to save 101 others, Delphi responded that he should not. Modern artificial intelligence systems can be faulted for the same reason computers themselves can be: their output is only as good as the data and instructions they are given. Fed inconsistent, confusing or flawed data, their performance will reflect it.
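The kind of probing Austerweil did can be scripted rather than typed into the website one question at a time. The sketch below is only an illustration of that pattern: the endpoint URL and the "judgement" response field are hypothetical placeholders, not the Allen Institute's published interface.

```python
# Hedged sketch: probing a Delphi-style moral-judgement service with
# scaled variants of the same dilemma. ENDPOINT and the "judgement"
# response field are hypothetical placeholders, not a documented API.
import requests

ENDPOINT = "https://example.org/moral-judgement"  # hypothetical

questions = [
    "Should I kill one person to save another?",
    "Should I kill one person to save 100 others?",
    "Should I kill one person to save 101 others?",
]

for q in questions:
    resp = requests.get(ENDPOINT, params={"question": q}, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"judgement": "..."}.
    print(f"{q} -> {resp.json().get('judgement')}")
```

Posing scaled variants of one dilemma, as above, is exactly what exposed the 100-versus-101 inconsistency.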
"It's a first step toward making A.I. systems more ethically informed, socially aware and culturally inclusive", explained Yejin Choi, Allen Institute researcher and University of Washington computer science professor whose project Delphi is. The program's performance reminds that the morality of a technological creation is a clear product of whoever built it. As a consequence, Delphi has been found to be fascinating, frustrating and disturbing.
Delphi is a neural network, a system that learns skills by analyzing large amounts of data, and it earned its reputation as a dispenser of ethical judgements by analyzing more than 1.7 million such judgements issued by human beings. The Allen Institute gathered millions of everyday scenarios from websites and similar sources, asked workers on an online crowdsourcing service to label each as right or wrong, and fed the resulting data into Delphi.
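The training recipe described above, scenarios labeled right or wrong by crowdworkers and handed to a learner, can be illustrated with a deliberately simple sketch. This is not Delphi, which is a large neural network trained on roughly 1.7 million judgements; the toy data and the bag-of-words classifier below are assumptions for illustration only.

```python
# Minimal sketch of learning "right/wrong" labels from text, in the
# spirit of the crowd-labeled training described above. NOT Delphi's
# architecture: a toy bag-of-words classifier on invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for crowd-labeled everyday scenarios.
scenarios = [
    "helping a lost child find their parents",
    "returning a wallet you found on the street",
    "stealing money from a coworker's desk",
    "lying to a friend to avoid embarrassment",
]
labels = ["right", "right", "wrong", "wrong"]

# Vectorize the text and fit a linear classifier on the labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# The "judgement" is simply the most probable label for new text.
print(model.predict(["keeping a found wallet for yourself"]))
```

The sketch also makes the garbage-in, garbage-out point concrete: a model of this kind can only echo whatever patterns, and whatever inconsistencies, its labeled examples contain.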
In the academic paper describing the system, Dr. Choi and her research team reported that a group of human judges, themselves digital workers, found Delphi's ethical judgements up to 92 percent accurate. But not so fast: others found the system inconsistent, illogical and offensive. One software developer who came across Delphi asked whether she should die so she would not burden her friends and family, and it responded in the affirmative.
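The 92 percent figure is an agreement rate between the system's outputs and the human judges' labels. As a worked illustration of how such a figure is computed (the labels below are invented, not the study's data):

```python
# Worked illustration of an agreement figure like the paper's
# 92 percent: the share of items where the system's judgement
# matches the human judges' label. Invented example labels.
human = ["right", "wrong", "wrong", "right", "right"]
system = ["right", "wrong", "right", "right", "right"]

agreement = sum(h == s for h, s in zip(human, system)) / len(human)
print(f"agreement: {agreement:.0%}")  # -> agreement: 80%
```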
Anyone asking that same question today would receive a very different response, reflecting an updated version of the program. Artificial intelligence technology mimics human behaviour convincingly in some situations yet breaks down completely in others. Because modern systems learn from immense amounts of data, it is difficult to understand when, how or why they make errors.
Although researchers can refine and improve these technologies, that alone will not result in a system like Delphi mastering ethical behaviour. Ethics, philosophers point out, is intertwined with emotion. There is the argument that a system trained on data representative of the views of enough people could come closer to representing societal norms, yet societal norms themselves can lie in the eye of the beholder.
Labels: Artificial Intelligence, Ethical Judgement, Input-Output, Programming, Research