Date: Friday, May 25, 2018
Place: NII, room 1208
Title: A Modular Framework for Computational Ethics – Reasoning about the Right or the Wrong according to different ethical principles
Speaker: Gauvain Bourgne (Sorbonne University)
The study of morality from a computational point of view has attracted rising interest from researchers in artificial intelligence. Indeed, the growing autonomy of artificial agents and the increasing number of tasks delegated to them urge us to address their capacity to process ethical restrictions and preferences, be it within their own internal structure or in interaction with human users. Fields as varied as healthcare and transportation pose ethical issues that are particularly pressing in this sense, as they may confront agents with decisions that carry immediate or severe consequences. Computational ethics can also help us better understand morality and reason more clearly about ethical concepts employed throughout philosophical, legal and technological domains.
In this context, we provide a modular architecture that allows for the systematic and adaptable representation of ethical principles. To achieve this, we present a novel, modular logic-based framework for representing and reasoning over a variety of ethical theories, based on a modified version of the Event Calculus and implemented in Answer Set Programming. The ethical decision-making process is conceived of as a multi-step procedure captured by four types of interdependent models, which allow the agent to assess its environment, reason over its accountability and make ethically informed choices. The overarching ambition of the presented research is twofold. First, to allow the systematic representation of an unbounded number of ethical reasoning processes, through a framework that is adaptable and extensible by virtue of its designed hierarchisation and standard syntax. Second, to avoid the pitfall of much current research in computational ethics that too readily embeds moral information within computational engines, thereby feeding agents with atomic answers that fail to truly represent the underlying dynamics. We aim instead to comprehensively displace the burden of moral reasoning from the programmer to the program itself.
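To give a flavour of the kind of encoding the talk builds on, the following is a minimal, purely illustrative sketch of the discrete Event Calculus in clingo-style Answer Set Programming. The domain, fluent and predicate names (e.g. administer_drug, permissible) are hypothetical examples chosen here, not the speaker's actual framework:

```
% Time points of the scenario.
time(0..3).

% Inertia: a fluent persists unless an occurring event terminates it.
holdsAt(F, T+1) :- holdsAt(F, T), not clipped(F, T), time(T), time(T+1).
clipped(F, T)   :- happens(E, T), terminates(E, F, T).

% Effect axiom: an occurring event initiates a fluent.
holdsAt(F, T+1) :- happens(E, T), initiates(E, F, T), time(T), time(T+1).

% Hypothetical example domain: an action and its effect.
happens(administer_drug, 0).
initiates(administer_drug, patient_treated, 0).

% A toy consequentialist judgment layered on top: an act is
% deemed permissible if it initiates a good state of affairs.
good(patient_treated).
permissible(E, T) :- happens(E, T), initiates(E, F, T), good(F).
```

In such an encoding, ethical principles are expressed as additional rules over the event narrative rather than hard-coded verdicts, which is the kind of separation between moral reasoning and the computational engine that the abstract advocates.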