Abstract

Morality is a fundamentally human trait that permeates all levels of human society, from basic etiquette and the normative expectations of social groups to formalized legal principles upheld by societies. Hence, future interactive AI systems, in particular cognitive systems on robots deployed in human settings, will have to meet human normative expectations, for otherwise these systems risk causing harm. While interest in 'machine ethics' has grown rapidly in recent years, there are only a few current efforts in the cognitive systems community to investigate moral and ethical reasoning, and there is currently no cognitive architecture with even rudimentary moral or ethical competence, i.e., the ability to judge situations based on moral principles such as norms and values and to make morally and ethically sound decisions. We hence argue for the urgent need to instill moral and ethical competence in all cognitive systems intended to be employed in human social contexts.
