RI: Small: Foundations of Ethics for Multiagent Systems

  • Singh, Munindar P. (PI)

Project Details

Description

The surprising capabilities of modern artificial intelligence (AI) technologies raise legitimate concerns about the power that intelligent agents potentially wield over human welfare. Ethics is inherently a multiagent concern, and thus a view of ethics limited to a single agent is inadequate. However, research into AI ethics today is dominated by work on individual agents. This project—dubbed Mae for Multiagent Ethics—adopts a sociotechnical stance in which agents (as technical entities) help autonomous social entities or principals (people and organizations). This multiagent conception of a sociotechnical system (STS) captures how ethical concerns arise in the mutual interactions of multiple stakeholders. These foundations would enable the realization of ethical STSs that incorporate social and technical controls to respect stated ethical postures.

Realizing the envisioned foundations requires new thinking, in three thrusts. First, Mae will develop a model of ethics geared toward STSs by incorporating ideas from philosophy and political science, especially value theory and the design of a society. Second, Mae will develop reasoning techniques to help identify potentially unethical outcomes of ethical interventions. Third, Mae will develop techniques for eliciting stakeholders' values in context, continually applying the above-mentioned analysis methods to help stakeholders refine their requirements so as to specify an acceptable STS. These techniques will include enhancements of formal modeling tools, such as model checkers, to accommodate reasoning about ethics, as well as agent-based social simulation applied to evaluate the ethicality of system outcomes arising from individual decision making. Mae's novelty lies in how it expands multiagent system modeling to include constructs such as norms and values; combines formal reasoning and social simulation to verify STS specifications with respect to a stated ethical posture; and helps specify an STS by identifying ethical tradeoffs between alternative STS specifications and capturing an STS's ethical posture in its social and technical architectures.
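To illustrate the flavor of agent-based social simulation for evaluating system-level outcomes of individual decisions, here is a minimal, purely illustrative Python sketch (not the project's actual method). Agents hold a `value_weight`, an assumed parameter capturing how strongly each agent weighs a shared norm against its private incentive to defect; the payoff constants are likewise assumptions for the sake of a runnable toy example.

```python
import random

class Agent:
    """A toy agent that weighs a shared norm against a private payoff."""

    def __init__(self, value_weight):
        # value_weight: strength of the agent's commitment to the norm's
        # underlying value (0 = purely selfish); an assumed parameter.
        self.value_weight = value_weight

    def decide(self, compliance_rate):
        # Comply when the weighted value of the norm, reinforced by how
        # many others currently comply, exceeds a fixed defection payoff.
        return self.value_weight * (0.5 + compliance_rate) > 1.0

def simulate(num_agents=100, rounds=50, seed=7):
    """Run repeated rounds of individual decisions; return the
    societal compliance rate after each round."""
    rng = random.Random(seed)  # seeded for reproducibility
    agents = [Agent(rng.uniform(0.0, 2.0)) for _ in range(num_agents)]
    compliance_rate = 0.5  # initial guess at societal compliance
    history = []
    for _ in range(rounds):
        decisions = [agent.decide(compliance_rate) for agent in agents]
        compliance_rate = sum(decisions) / num_agents
        history.append(compliance_rate)
    return history

if __name__ == "__main__":
    history = simulate()
    print(f"final compliance rate: {history[-1]:.2f}")
```

The aggregate compliance trajectory stands in for the kind of system-level "ethicality" metric that a simulation-based analysis might report back to stakeholders; a real study would replace this toy decision rule with norm and value constructs of the sort the project proposes.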

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Effective start/end date: 1/10/21 – 30/9/24

Funding

  • National Science Foundation: US$500,000.00

ASJC Scopus Subject Areas

  • Artificial Intelligence
  • Computer Science(all)
