Centre for Biomedical Ethics at NUS Medicine awarded new Wellcome Trust Discovery Platform

Published: 09 May 2023

The Centre for Biomedical Ethics (CBmE) at the Yong Loo Lin School of Medicine, National University of Singapore (NUS Medicine) has been awarded one of eight new Discovery Platforms, the highest research award given by the Wellcome Trust, a London-based global charitable foundation focused on health research.

The CBmE will partner with the University of Oxford in the ANTITHESES Platform for Transformative Inclusivity in Ethics and Humanities, to address an urgent need for research that can engage meaningfully with the radical value disagreements, polarisation, and informational uncertainty that characterise contemporary medical science, practice, and policy.

The CBmE will receive around S$1.3 million from 2025 to 2030 to conduct research into Collective Reflective Equilibrium, a methodology that aims to derive public values from public opinion and integrate them with ethical theories, principles and concepts.

The eight new Discovery Research Platforms funded by the Wellcome Trust are aimed at addressing a range of practical, technological and methodological barriers holding up progress across a wide array of fields. Overcoming these barriers will enable researchers to ask even more creative and boundary-defying questions. With total funding of £73 million, the platforms will bring together researchers, teams and networks of collaborators to develop new tools, knowledge and capabilities, and to foster a positive and inclusive research culture in which researchers can do their best work.

“The world is becoming increasingly polarized. The most profound issue of our time is deep value disagreement. This Wellcome Trust Discovery Platform Award affords the National University of Singapore the opportunity to partner with the University of Oxford to make progress on developing an ethical framework for dealing and living with conflicting values. Dialogue and disagreement can be healthy. They can enable progress. But in a globalized world empowered by technology, moral disagreement can have dire existential consequences. We must make progress,” said Professor Julian Savulescu, Director of the Centre for Biomedical Ethics at NUS Medicine.

Pandemic Ethics: From COVID-19 to Disease X, led and edited by Professor Julian Savulescu, Director of the Centre for Biomedical Ethics at NUS Medicine

Prof Savulescu has led and edited a new book, Pandemic Ethics: From COVID-19 to Disease X.

According to estimates, there is a 25% chance of another global pandemic in the next decade; the World Health Organisation refers to this unknown future threat as “Disease X”. As the world prepares for scientific breakthroughs against it, it is important to be ethically prepared as well, according to the book’s editors, Prof Savulescu and Professor Dominic Wilkinson from the University of Oxford.

Pandemics raise the deepest ethical questions about the value of life. The risks they present affect different people differently, and the distribution of benefits and burdens must be weighed carefully. These are difficult decisions that must be made transparently, ethically and responsibly.

The book is thus aimed at arming policymakers with tools to make better decisions in the future. To do so, Prof Savulescu and Prof Wilkinson assembled a group of international experts in philosophy, ethics, law and economics to review lessons from COVID-19 for future pandemics.

Pandemic Ethics: From COVID-19 to Disease X is available for purchase from both Oxford University Press and Amazon.

The new Ethics Framework for Controlled Human Infection Model Studies

As a member of the UK Pandemic Ethics Accelerator, Prof Savulescu has also co-authored a new ethics framework for controlled human infection model (CHIM) studies.

CHIM studies involve intentionally exposing human subjects to known pathogens under controlled conditions; such studies were needed during the COVID-19 pandemic to accelerate vaccine development. They were ultimately conducted, but began more than a year after the WHO’s declaration of the COVID-19 pandemic.

This delay highlighted the need for clarity around the ethics of conducting CHIM studies, which can rapidly generate and evaluate foundational information about a pathogen and inform the development of potential medical countermeasures.

The report thus aims to establish a determinate decision procedure for conducting CHIM studies, both before and during pandemics. The report is available online in the Pandemic Ethics Accelerator’s library, and a summary of the report is available here.

The Ethical Dilemma of Responsibility in Large Language Models

Today, text generated by large language models (LLMs) like ChatGPT is widely lauded for its speed, accuracy, and usefulness. But when it comes to responsibility for their outputs, complex ethical issues arise.

To address this, researchers at the National University of Singapore and the University of Oxford, in collaboration with international experts, published a groundbreaking paper in Nature Machine Intelligence examining the ethical issues posed by LLMs.

The study revealed that LLMs like ChatGPT raise crucial questions regarding the attribution of credit and rights for useful text generation. This diverges from traditional responsibility debates, which have primarily focused on the harmful consequences of Artificial Intelligence (AI), and calls for an update to our concept of responsibility for LLMs.

A key finding of the research is that while human users of these technologies cannot fully take credit for positive results generated by an LLM, it still seems appropriate to hold them responsible for harmful uses, such as generating misinformation or being careless in checking the accuracy of generated text.

“We need guidelines on authorship, requirements for disclosure, educational use, and intellectual property, drawn from existing normative instruments and similar relevant debates. Norms requiring transparency are especially important, to track responsibility and correctly assign praise and blame,” added Prof Savulescu, senior author of the study.

The study, co-authored by an interdisciplinary team of experts in law, bioethics, machine learning, and related fields, delves into the potential impact of LLMs in critical areas such as education, academic publishing, intellectual property, and the generation of mis- and disinformation.

In addition to the topics above, CBmE will be hosting a Public Lecture on the Ethics and Regulation of Technology on 28 June 2023. The lecture will delve into the obstacles and opportunities presented by the regulation of technology and Artificial Intelligence, as well as rapidly advancing stem cell technologies.