How to review a medical curriculum

Published online: 1 June, TAPS 2016, 1(1), 23-25
DOI: https://doi.org/10.29060/TAPS.2016-1-1/SC1009

Richard Hays

University of Tasmania, Australia

Abstract

A curriculum is an important component of a medical program because it is the source of information that learners, teachers and external stakeholders use to understand what learners will experience on their journey to recognition as a medical graduate. While many focus on and debate the content of a medical curriculum, with some suggestions that there should be national curricula for each jurisdiction or even a global curriculum for all medical programs, the curriculum content is only one factor to consider when designing, revising or accrediting a curriculum. Just as important are the alignment with the program’s mission and health workforce needs, the presence of agreed graduate outcomes, the theoretical bases of the curriculum, the prior learning of commencing students, the curriculum implementation models, the assessment of student progress and program evaluation processes. This paper presents a framework for this more holistic approach to reviewing a curriculum, proposing triangulation of information from several sources – documents, websites, learners, teachers and employers – and considering several accreditation standards that impact on curriculum design and delivery.

Keywords: Curriculum design; curriculum review; accreditation; social accountability; program evaluation

I. BACKGROUND

In medical education, the curriculum defines medical programs, guides teaching by faculty and informs students about what is required to become a doctor. For basic medical education, the outcome is recognition as a novice practitioner, and for subsequent levels there are more specific outcomes related to particular specialties. The term ‘curriculum’ is defined in the Oxford Dictionary as ‘the subjects comprising a course of study in a school or college’, which suggests an emphasis on the content, whereas learning may depend significantly on how the content is delivered, learned and assessed.

The pace of medical curriculum review has increased globally due to several factors. Many new medical programs have been established in response to growing populations and rising health care standards, particularly in developing nations. Whether purchased from existing institutions or developed locally, new curricula have to be designed, and most new programs face either mandatory or voluntary accreditation processes. Demographics are changing, particularly in developed nations, where populations are ageing and living with increasingly complex and chronic health care needs, requiring a larger and differently trained medical workforce (Duckett, 2005). Many universities are seeking efficiencies in program delivery, because the small group, clinician-led models preferred in medical education are expensive; leaders ask, perhaps not unreasonably, why medical education cannot be provided as effectively by less expensive methods, such as large group lectures supported by online resources and more junior faculty. We also find ourselves in what might be termed a ‘post-PBL’ environment, in which problem-based learning (PBL) programs have been criticised for gaps or lack of depth in anatomy, pathology and other foundation sciences, even though PBL models were developed in part to address the rapid increase in the knowledge base for medical practice by promoting peer-supported and self-directed learning (Dolmans et al., 2005). Can this knowledge explosion be managed differently?

Employers find that some medical graduates are not yet ‘work ready’: able to take responsibility for their actions or to contribute to safe patient care without (expensive) supervision and further training. Finally, regulators are becoming more vocal about challenges to the commonly used self-regulation model for the medical profession, amidst increasing complaints and concerns about competence and errors. Although most of these concerns relate to the communication skills and professional behaviours of a small minority, regulators are increasing the requirements for standards to be met by medical graduates outside the traditional scientific knowledge domains. As a result, there are increasing requirements for accreditation or formal recognition of medical programs by regulatory authorities to ensure that programs produce the graduates needed to provide medical care. Arguably, the strongest accreditation systems are conducted by the General Medical Council for the UK, the Australian Medical Council for Australia and New Zealand, and the Liaison Committee on Medical Education for the USA and Canada, but many other jurisdictions have, or are developing, strong accreditation processes. There are also global standards developed and promoted by the World Federation for Medical Education (WFME), which map reasonably well to most national standards. While the WFME is not an accrediting body, there are moves to mandate that accreditation standards and processes comply with the WFME global standards for graduates to be eligible for recognition across jurisdictional borders (Karle, 2006).

There are therefore two broad categories of curriculum review. The first is that conducted by medical schools, new and old, to develop, maintain or refresh curricula that are current and fit for purpose. This should be a continuous process, with changes based on some kind of evidence, ideally evaluation data. The second category is that conducted by regulatory bodies during accreditation processes, in which the curriculum is always a major focus. For both categories, a broader, more holistic view of a curriculum, rather than just content, should be adopted. This means that a curriculum review should seek information or data from much more than just descriptions of the subject content. This paper presents a framework for achieving this more holistic approach.

II. METHOD

This paper is based on an analysis of the structure of the standards and accreditation protocols of the General Medical Council, the Australian Medical Council, the Liaison Committee on Medical Education and the World Federation for Medical Education. In each case, medical programs are measured against several standards, of which only one might specifically address curriculum content, while others address delivery, assessment and evaluation. Sources of evidence for a curriculum review may therefore be found when considering almost all standards.

A. A framework for reviewing a medical curriculum

Although a curriculum should be well described in writing, such documents are only a single source of information about what is intended. Judgements about curriculum content and process are best made through triangulation of information and data from a combination of potential sources that reflect a wide range of issues, as summarised in Table 1. Most of these sources should be readily accessible, although accessing them requires both electronic access (through a guest log-in account) and a physical visit to inspect the facilities. Further information, particularly about implementation, can be obtained through observation of aspects of program delivery, such as teaching sessions and clinical examinations.

Constructive alignment of a curriculum, from the vision and mission through curriculum delivery and assessment, is important because it demonstrates that the curriculum is a more holistic, ‘connected’ entity. It shows that curriculum content, process and intended outcomes are planned and designed with an explicit intention to produce a particular kind of graduate. Ideally, the outcomes are the same as those of the accreditation body, although many schools will add some of their own. For example, while all schools in a particular jurisdiction may plan to produce ‘work ready’ graduates safe to enter postgraduate training, some may have additional outcomes relating to elite research performance or to meeting the needs of underserved populations, following the growing international trend towards social accountability (Boelen and Woollard, 2009).

There should be evidence of purposeful, theory-based educational design (Prideaux, 2003). There is a spectrum of pedagogical models, from separate subjects delivered to large groups by lectures, through to highly integrated (vertically and horizontally) programs delivered through interactive small groups following a case-based or problem-based learning model. While educators may have a preference for a particular model, all can work, so long as the content, delivery and assessment methods are done well. It is also important to design the curriculum content and process to match the learners’ characteristics at entry. For example, school leaver programs tend to be longer, placing adjustment to university life and introductory foundation sciences early, followed by more integrated, clinically immersed learning, whereas graduate entry programs commence with the assumption that students are ready to begin with the more integrated, clinically oriented approach.

An additional consideration is cohort size, because interactive, small group models are difficult to deliver unless group size is appropriate (typically a maximum of 8-10 students). This has implications for the physical facilities and the intranet-based Learning Management System (LMS), because small group, interactive learning requires larger numbers of tutorial rooms that are appropriately furnished and equipped, as well as accessible, flexible and interactive repositories of electronic learning resources.

Ideally, all learning outcomes are measurable – this may be a matter of wording – and then form the basis of assessment practices, such as method selection, blueprinting, item bank development and standard setting. It is important that an integrated curriculum has integrated assessment; otherwise students may focus on non-integrated sources (a ‘hidden curriculum’) rather than the curriculum itself. Finally, there should be evidence of evaluation processes that monitor curriculum content and delivery. A medical curriculum should be a continuously evolving entity, with decisions for change based on the best available evidence. Such evidence may come both from routine, annual or semester-based program-wide data on participation and from more reflexive exploration of specific questions or concerns that arise during academic years. There should be evidence of evaluation feedback being formally considered, with decisions to make changes, evidence that the changes have taken place, and participants advised of the results of the evaluation.

Curriculum features (table rows): aligned with vision and mission; measurable graduate outcomes; purposeful design; appropriate for admission point; suitability of facilities and LMS; aligned with assessment; evaluation explicit and built-in.

Information sources (table columns): website/LMS; program outline; unit/subject outlines; assessment reports*; faculty; students*; stakeholders; facilities tour.

Table 1. Framework for reviewing a medical curriculum

Table 1 includes the potential sources of information that should be sought when a curriculum is reviewed. It demonstrates the potential weakness of reviews based only on documents, because documents describe what is intended to take place, not necessarily what does take place. Hence speaking with faculty (including part-time clinical teachers), students, employers and regulators can provide different information that describes the curriculum-in-action. Also important is direct observation of teaching sessions of various types and of clinical assessment, both in the workplace and in objective structured clinical examinations (OSCEs). It is not unusual for implementation to vary widely due to local ‘modifications’, despite apparently similar, ‘standard’ descriptions.

III. CONCLUSION

Reviewing a curriculum should be a continuous activity to maintain currency and fitness for purpose. The review should adopt a more holistic approach that includes curriculum content, delivery and assessment practices, as well as resourcing. This paper presents a framework to guide curriculum reviewers on the issues to consider and the potential sources of information on which to base judgements.

Notes on Contributors

Richard Hays is an experienced medical educator with qualifications in both medicine and education. He has contributed to or led the design of several medical education programs and has also conducted formal medical program reviews at approximately 20 institutions in the United Kingdom, Europe and the Asia-Pacific region.

Ethical Approval

Ethical approval was not sought because no data are presented and there is no possibility of identifying individual patients or students.

Declaration of Interest

The author declares no conflict of interest, including financial, consultant, institutional and other relationships that might lead to bias.

References

Boelen, C. & Woollard, R. (2009). Social accountability and accreditation: a new frontier for educational institutions. Medical Education, 43, 887-894.

Dolmans, D., De Grave, W., Wolfhagen, I. & Van Der Vleuten, C. P. M. (2005). Problem-based learning: future challenges for educational practice and research. Medical Education, 39, 732-741.

Duckett, S. (2005). Health workforce design for the 21st century. Australian Health Review, 29, 201-210.

Karle, H. (2006). Global standards and accreditation in medical education: a view from the WFME. Academic Medicine, 81, S43-S48.

Prideaux, D. (2003). ABC of learning and teaching in medicine: Curriculum design. British Medical Journal, 326, 268-270.

 

*Richard Hays
University of Tasmania
Medical Science 2 Building, 17 Liverpool St, Hobart
Tel: (03) 6226 4721
Fax: (03) 6226 7704
Email: richard.hays@utas.edu.au
