Will machines one day have intelligence and autonomy comparable to our own? How will we integrate them into our lives and our society, and, importantly, will they have rights and responsibilities in the moral sense? Colin Allen writes about the future of moral machines.
...the topic of machine morality is here to stay. Even modest amounts of engineered autonomy make it necessary to outline some modest goals for the design of artificial moral agents. Modest, because we are not talking about guidance systems for the Terminator or other technology that does not yet exist. Necessary, because as machines with limited autonomy operate more often than before in open environments, it becomes increasingly important to design a kind of functional morality that is sensitive to the ethically relevant features of those situations. Modest, again, because this functional morality is not about self-reflective moral agency, what one might call "full" moral agency, but simply about trying to make autonomous agents better at adjusting their actions to human norms. This can be done with technology that is already available or can be anticipated within the next 5 to 10 years.