Project Title:
Developing a Trust Governance Framework for AI in Emergency Healthcare

Grant Period:
01 Mar 2024 – 28 Feb 2027

Quantum:
S$339,137

Funding Source:
AI Singapore

Principal Investigator:
Julian Savulescu

Co-Investigators:
Marcus Ong (Duke-NUS), Brian Earp, Owen Schaefer, Liu Nan (Duke-NUS), Yoon Sung Won (Duke-NUS), Cheong Kang Hao (SUTD)

Collaborators:
Angela Ballantyne (University of Otago), Cameron Stewart (University of Sydney, Law), Michael Dunn (NUS), Sean Lam (Duke-NUS), Mayli Mertens (Duke-NUS), Chan Hui Yun (NUS), Toh Hui Jin (NUS), Ngiam Kee Yuan (NUHS)

Project Summary

In the high-pressure setting of Emergency Healthcare (EH), practitioners routinely make life-and-death decisions about which patients to prioritise for life-saving care and when to withdraw interventions such as resuscitation. They often make these urgent decisions with limited time, resources, and access to information. AI could assist this decision-making by rapidly diagnosing and assessing the prognosis of patients in need of EH. More rapid decision-making may improve clinical outcomes while contributing to safer, more effective, and cost-efficient delivery of EH services. However, it is essential that practitioners, patients, and the public trust AI to assist with making reasonable life-and-death decisions.

With the overarching goal of promoting trustworthy AI, this research aims to generate empirically informed and philosophically robust ethics and governance frameworks for the adoption of AI to support life-sustaining decisions in EH. The research takes an innovative, interdisciplinary approach that will engage stakeholders in exploring the ethical, legal, and trust issues around AI adoption. In collaboration with EH practitioners, data scientists, and healthcare professionals in Singapore, the research will consider four use cases of AI in EH: three involve decision support systems currently being developed at SingHealth for cardiac arrest conditions, and a fourth, forward-looking example involves using generative AI to synthesise and create ethical guidance. The research will consider these case examples within four themes that will:

  1. map the ethico-legal and philosophical literature,
  2. describe the attitudes and trust of key stakeholders,
  3. synthesise the literature and empirical findings using processes of Collective Reflective Equilibrium in Practice, and
  4. develop policy recommendations for implementing trustworthy AI in EH settings.

This research is innovative in its interdisciplinary approach and its multiple layers of integration between normative analysis and empirical research. In consultation with key stakeholders in the clinical, scientific, and patient communities, the research will test and iteratively refine interpretations of its findings. It will also be the first study internationally to explore the attitudes and trust of key stakeholders towards the use of AI to support decisions to limit or withdraw care in EH settings.