Designing an assessment system for safe, effective, and empathetic practitioners

Submitted: 19 August 2025
Accepted: 30 September 2025
Published online: 7 October 2025, TAPS 2025, 10(4), 1-4
https://doi.org/10.29060/TAPS.2025-10-4/GP3858

Dujeepa D. Samarasekera1, Gominda Ponnamperuma2, Lee Shuh Shing1 & Han Ting Jillian Yeo1

1Centre for Medical Education (CenMED), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; 2Faculty of Medicine, University of Colombo, Sri Lanka

Abstract

Introduction: Medical education aims to produce healthcare professionals who are not only competent, but also able to perform effectively in clinical practice settings. Assessment systems are critical to achieving this by guiding learning, ensuring competence, and certifying readiness for independent practice. This article proposes a staged assessment approach that integrates both competence and performance to ensure safe and empathetic healthcare practice.

Methods: First, we analysed the strengths and limitations of the existing assessment methods and their roles in medical education. Then, we explored strategies to integrate diverse assessment tools into a cohesive assessment system capable of effectively and reliably evaluating the competencies required for developing holistic practitioners.

Results: Competence is assessed via structured assessment tools such as written assessments. Clinical performance in real-world settings is evaluated through supervised in-practice assessments (SuPs), including tools such as Direct Observation of Procedural Skills (DOPS) and Mini-Clinical Evaluation Exercises (mini-CEXs). Assessment tools used to evaluate performance rely on expert judgement, which, although subjective, is essential for evaluating non-cognitive skills such as empathy and professionalism.

Conclusion: This article outlines the design of a progressive assessment system, transitioning from objective assessment methods such as Multiple-Choice Questions (MCQs) to performance-focused methods, anchored by Entrustable Professional Activities (EPAs), using Workplace-Based Assessment tools and portfolios. The progression from early objective assessment tools to those that leverage expert judgement and situational specificity is highlighted as essential for preparing safe, effective, and empathetic healthcare practitioners.

Practice Highlights

  • Modern assessment systems focus on both competence in non-practice settings and performance in authentic clinical practice settings.
  • A combination of tools is required to assess clinical performance from the “knows” to the “is” level.
  • Expert evaluations provide qualitative insights into candidate performance.

I. INTRODUCTION

Traditionally, assessments in clinical education strived for standardisation, structure and objectivity. A single quantitative method, such as paper-based Multiple-Choice Questions (MCQs), was often used to assess a student’s knowledge. Similarly, Objective Structured Clinical Examinations (OSCEs) or long/short clinical cases were used to assess the psychomotor and affective domains related to clinical skills. To deliver healthcare effectively and empathetically, however, a broad range of skills must be cultivated. Over the years, there has been a gradual yet noteworthy transition from exclusively focusing on the development and assessment of competence in clinical skills to placing greater emphasis on enhancing clinical performance within specific clinical contexts (Hays et al., 2024).

Miller’s (1990) Pyramid of Clinical Competence illustrates this progression: from “knows” to “knows how”, “shows how” and “does”, with “is” added as the apex by Cruess et al. (2016). At present, medical and health professional training programmes judiciously select a combination of assessment methods to ensure learners are task-ready, empathetic, and safe for clinical practice. This article proposes and elaborates on a staged assessment approach in health professional training, progressing from the development of competence to the refinement of clinical performance within specific practice contexts. The core idea is that competence alone does not ensure effective clinical practice. Both competence and performance must be developed to ensure safe and compassionate care.

Figure 1: Diagram adapted from Cruess et al. (2016), “Amending Miller’s Pyramid to Include Professional Identity Formation”, illustrating the shift in focus from assessment of competence to assessment of performance as trainees progress to the later stages of training.

II. COMPETENCE AND ITS ASSESSMENT

As illustrated in Figure 1, competence or “Readiness to Practice” refers to an individual’s “ability”, encompassing knowledge, psychomotor or clinical skills, and attitudes, which together form the foundation of medical practice. Knowledge-based skills include problem-solving and clinical reasoning, psychomotor skills involve physical examinations and procedural techniques, while affective skills pertain to empathetic communication. Historically, our assessments have primarily focused on evaluating competence, employing a range of assessment tools such as the following.

Written assessments, such as MCQs and Modified Essay Questions (MEQs), are designed to evaluate the “knows” and “knows how” levels of Miller’s Pyramid. These assessments primarily focus on theoretical knowledge, including understanding disease pathophysiology, as well as the procedural steps involved in performing clinical skills and managing medical conditions.

In contrast, practical and competence-based assessments, such as OSCEs, evaluate psychomotor and affective competencies, including procedural skills, diagnostic reasoning, patient interaction and communication, in a controlled environment. Long and short cases, on the other hand, assess the same abilities within semi-controlled environments. These assessment formats target the “shows how” level of Miller’s Pyramid, emphasising the development and demonstration of clinical skills in structured, controlled testing settings.

A key feature of “shows how” assessment methods is that they promote standardisation and rubric-based judgement. Hence, they are “objective” and fairly reliable for assessing specific aspects of competence.

III. PERFORMANCE AND THE ROLE OF SUPERVISED IN-PRACTICE ASSESSMENTS (SuPs)

As illustrated in Figure 1, as learners progress from the early to the later or final stages of learning, the focus shifts from the assessment of competence to the assessment of performance. While the assessment of knowledge continues to play an important role, the emphasis increasingly moves towards ensuring that graduates are ready for clinical practice. Performance or “Quality in Practice” requires learners to apply their competence in dynamic, high-pressure clinical settings. These situations are both context-specific and situation-specific. In modern medical education, Entrustable Professional Activities (EPAs) anchor these authentic clinical tasks. EPAs focus on specific professional responsibilities, such as managing acute care or conducting patient handovers. These tasks are assessed by an “expert” using professional judgement. Entrustment decisions are based on evaluations from multiple experts (Cate & Schumacher, 2022).

Common tools used during SuPs include Direct Observation of Procedural Skills (DOPS), Case-Based Discussions (CBDs), multi-rater or 360 assessments, and Mini-Clinical Evaluation Exercises (mini-CEXs). These tools provide real-time feedback on a student’s or resident’s clinical performance in specific contexts. Collectively, they are also known as Workplace-Based Assessment tools (WBAs).

As students progress through clinical rotations or clerkships, these SuPs are compiled into an assessment portfolio. This portfolio includes case logs, feedback from supervisors and learner reflections. Together, these elements document the student’s longitudinal development. At certain time points, the portfolio is assessed by a Committee of Experts (CoE), and an Entrustment Decision is given. SuP assessments immerse learners in authentic clinical environments, enabling them to demonstrate how they apply the competence they have gained in clinical practice. The final judgement of a student’s or trainee’s performance and fitness for clinical practice should then rest on the CoE’s value judgement of the portfolio.

IV. ADVOCATING FOR EXPERT JUDGEMENT: HOLISTIC EVALUATION OF A LEARNER

Expert judgement by assessors conducting SuP assessments is commonly perceived to be subjective and bias-laden, as it shifts away from quantitative towards qualitative measures. However, we offer a different perspective on how SuP assessments can be triangulated with other, more “objective” assessment tools to formulate a complete evaluation of a learner.

Expert judgement made by assessors can synthesise multiple facets of task performance, such as clinical reasoning, empathy and professionalism, in a specific context into an interconnected evaluation, something that an objective assessment cannot measure authentically. Multiple ‘subjective’ evaluations by many experts often provide richer, more personalised feedback that helps learners understand their strengths and areas for improvement, promoting deeper learning and growth. At the same time, in WBA, if multiple cases (i.e., patients with varying disease conditions) in many situations/contexts are assessed by multiple expert assessors, neither the validity nor the reliability of the assessment is compromised.

Expert judgement is essential for performance assessments. Although often viewed as subjective, it is vital for evaluating attributes such as clinical reasoning, empathy, and professionalism. For example, in EPA-based assessments, experts determine whether learners can perform specific tasks independently, considering not just technical skills but also communication, prioritisation, and adaptability (Cate & Regehr, 2018). To ensure consistency, assessors require thorough calibration through training. Standardised tools, rating scales, and regular discussions among assessors enhance reliability and minimise bias.

Non-cognitive skills such as empathy and professionalism are essential for safe practice but challenging to assess. Portfolios that incorporate Multi-Source Feedback (MSF) provide avenues to evaluate these qualities, drawing on input from patients, peers, and supervisors. Reflective exercises encourage learners to explore biases, communication styles, and values, fostering self-awareness, empathy, and continued learning.

V. PRACTICAL CONSIDERATIONS

A. Balancing Objectivity and Subjectivity

The challenge lies in balancing “objective” assessments with “subjective” evaluations of performance. While MCQs and OSCEs provide standardised measures, expert judgement is crucial for situational assessments. Safeguards need to be in place to maximise the value of subjectivity while ensuring fairness and reliability. These include developing a structured rating scale, calibrating assessors on the scale by having them vocalise their thought processes, discussing biases, and drawing on judgements from many assessors and contexts before an assessment decision is made.

B. Resource Allocations

SuP assessments demand significant resources, including trained assessors, robust documentation systems, and protected time for feedback, particularly as in-the-moment judgements are transient and must be captured promptly. Institutions must prioritise these investments to sustain an effective assessment system.

C. Prioritising Transparency

Ensuring transparency of expectations and standards for all assessment tools for educators and learners is important. This involves clearly defining and effectively communicating the criteria for both “objective and subjective” components of the assessment process. Judgements should be documented and explained, with a clear linkage to observable behaviours or outcomes, to foster understanding and trust in the assessment process.

VI. CONCLUSION

Designing an assessment system to develop safe, effective, and empathetic practitioners requires a staged, integrated approach. Competency-based assessments build foundational skills, while SuP assessments evaluate task-specific performance through expert judgement. The gradual shift from competence to performance ensures learners are prepared for the complexities of clinical practice. By incorporating EPAs, expert feedback and portfolios, the system prepares graduates to deliver patient-centred, professional, and safe care.

Future innovations such as simulation-based assessments, AI-driven capture of assessor comments, and feedback systems hold promise for further improving the credibility, transferability, dependability and confirmability of assessment processes for health professional programmes. The ultimate goal is to prepare practitioners for high-quality, empathetic care in an evolving healthcare landscape.

Notes on Contributors

Dujeepa Samarasekera contributed to the concept and writing of the manuscript.

Lee Shuh Shing and Han Ting Jillian Yeo contributed to writing and editing the manuscript.

Gominda Ponnamperuma contributed to reviewing the manuscript.

Funding

This study has not received any funding.

Declaration of Interest

There are no conflicts of interest related to the content presented in the paper.

References

Cate, O. T., & Regehr, G. (2018). The power of subjectivity in the assessment of medical trainees. Academic Medicine, 94(3), 333–337. https://doi.org/10.1097/ACM.0000000000002495

Cate, O. T., & Schumacher, D. J. (2022). Entrustable professional activities versus competencies and skills: Exploring why different concepts are often conflated. Advances in Health Sciences Education, 27(2), 491–499. https://doi.org/10.1007/s10459-022-10098-7

Cruess, R. L., Cruess, S. R., & Steinert, Y. (2016). Amending Miller’s pyramid to include professional identity formation. Academic Medicine, 91(2), 180–185. https://doi.org/10.1097/ACM.0000000000000913

Hays, R. B., Wilkinson, T., Green-Thompson, L., McCrorie, P., Bollela, V., Nadarajah, V. D., Anderson, M. B., Norcini, J., Samarasekera, D. D., Boursicot, K., & Malau-Aduli, B. S. (2024). Managing assessment during curriculum change: Ottawa consensus statement. Medical Teacher, 1–11. https://doi.org/10.1080/0142159X.2024.2350522

Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67. https://doi.org/10.1097/00001888-199009000-00045

*Dujeepa D. Samarasekera
Yong Loo Lin School of Medicine,
National University of Singapore,
Block MD 11, #01-11,
Clinical Research Centre 10 Medical Drive,
Singapore 117597
Email: dujeepa@nus.edu.sg
