In the current issue of the International Journal of Reasoning-based Intelligent Systems, researchers from Portugal and Indonesia describe an approach to decision making based on computational logic that might one day give machines a sense of morality.
Science fiction authors often use the concept of “evil” machines that attempt to take control of their world and to dominate humanity. Skynet in the “Terminator” films and HAL from Arthur C. Clarke’s “2001: A Space Odyssey” are two of the most frequently cited examples. However, for malicious intent to emerge in artificial intelligence systems, such systems would first need an understanding of how people make moral decisions.
Luís Moniz Pereira of the Universidade Nova de Lisboa, in Portugal, and Ari Saptawijaya of the Universitas Indonesia, in Depok, are both interested in artificial intelligence and the application of computational logic.
“Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view,” the researchers say.
They have turned to a system known as prospective logic to help them begin the process of programming morality into a computer. Put simply, prospective logic can model a moral dilemma and then determine the logical outcomes of the possible decisions. The approach could herald the emergence of machine ethics.
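To give a flavour of the idea, here is a minimal illustrative sketch in Python, not the authors' actual prospective logic system (which is built on logic programming). It models a trolley-style dilemma as a set of possible decisions, looks ahead to each decision's consequences, and filters them against a simple moral rule; all names and the moral rule itself are hypothetical assumptions made for illustration.

```python
# Hypothetical model of a trolley-style dilemma: each decision maps to
# its foreseeable consequences. These numbers are illustrative only.
DECISIONS = {
    "do_nothing": {"deaths": 5, "intentional_harm": False},
    "divert_trolley": {"deaths": 1, "intentional_harm": False},
    "push_bystander": {"deaths": 1, "intentional_harm": True},
}

def permissible(consequence):
    """A toy moral rule (an assumption, not the authors' rule):
    decisions that harm someone intentionally are ruled out."""
    return not consequence["intentional_harm"]

def prospect(decisions):
    """Look ahead over all decisions, keep only the morally
    permissible ones, and prefer the fewest resulting deaths."""
    allowed = {d: c for d, c in decisions.items() if permissible(c)}
    return min(allowed, key=lambda d: allowed[d]["deaths"])

print(prospect(DECISIONS))  # -> divert_trolley
```

The design choice mirrors the article's description: the model first enumerates the logical outcomes of each possible decision, and only then applies a moral filter, so the rule of permissibility is kept separate from the mechanics of looking ahead.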