The Case for Explicit Ethical Agents
AI Magazine 38 (4): 57--64 (December 2017)

Morality is a fundamentally human trait that permeates all levels of human society, from basic etiquette and the normative expectations of social groups to formalized legal principles upheld by societies. Hence, future interactive AI systems, in particular cognitive systems on robots deployed in human settings, will have to meet human normative expectations, for otherwise these systems risk causing harm. While interest in 'machine ethics' has increased rapidly in recent years, there are only very few current efforts in the cognitive systems community to investigate moral and ethical reasoning. And there is currently no cognitive architecture that has even rudimentary moral or ethical competence, i.e., the ability to judge situations based on moral principles such as norms and values and to make morally and ethically sound decisions. We hence argue for the urgent need to instill moral and ethical competence in all cognitive systems intended to be employed in human social contexts.
