Programming Hate Into AI Will Be Controversial, But Possibly Necessary
Zoltan Istvan | October 17, 2015
Therein lies the conundrum. For a consciousness to make value judgments, both liking and disliking functions (love and hate) must be part of the system. No one minds thinking about AIs that can love — but super-intelligent machines that can hate? Or feel sad? Or feel guilt? That is far more controversial, especially in the drone age, when machines control autonomous weaponry. And yet anything less — coding empathy into an intelligence without its counterpart — just creates a follower machine, a wind-up-doll consciousness.
In a report he wrote for the DoD titled “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Arkin argued that guilt, remorse, or grief could be programmed to occur. Source: http://ieet.org/index.php/IEET/more/lagrandeur20151002
<more at http://techcrunch.com/2015/10/17/programming-hate-into-ai-will-be-controversial-but-possibly-necessary/; related links: http://ieet.org/index.php/IEET/more/lagrandeur20151002 (Could Artificial Morals and Emotions Make Robots Safer? October 2, 2015) and http://hplusmagazine.com/2014/04/29/could-a-machine-or-an-ai-ever-feel-human-like-emotions/ (Could a machine or an AI ever feel human-like emotions? April 29, 2014)>