PRE-CONFERENCE WORKSHOPS
On Saturday 24 and Sunday 25 February 2024, the Ottawa 2024 program will be dedicated to pre-conference workshops. The exciting range of workshops on offer allows participants to register for content and activities that fit their areas of interest, whether you are starting out in a field and need initial guidance on how to get started, or you wish to dig deeper into an area of particular focus.
The program has been arranged so that you can pursue a particular focus across the two days. Examples of themes include technology and assessment, programmatic approaches to assessment and assessment design for learning. You may also decide to ‘pick and mix’ your activity across a range of themes.
As a worked example (not intended to be comprehensive), if you are interested in psychometric evaluation of assessment data you might pick the Saturday morning workshop on Item Response Theory and then spend Saturday afternoon exploring the potential of the Rasch model for quality assurance and improvement. Sunday might see you registering for a different theme, such as going “behind” and “beyond” closed doors with the programmatic assessment workshops.
We encourage early registration for workshops to avoid disappointment, as popular choices may fill quickly. Workshop registration fees are in addition to the conference registration fees. A discount applies to participation in more than one workshop.
Please note that “ESMEA Essential Skills in Medical Education – Assessment” is a full two-day workshop. Registrants for this workshop must attend both days and cannot attend other pre-conference workshops.
Saturday 24 February 2024 – 9:00am to 12:30pm
PC01 | ESMEA Essential Skills in Medical Education - Assessment
Please note this is a full two-day workshop. Registrants for this workshop must attend both days and cannot attend other pre-conference workshops.
Workshop Facilitators: Prof Katharine Boursicot, Prof Sandra Kemp, Prof Jennifer Williams, Prof Brian Jolly and Dr Lucy Wilding
The ESMEA Course, accredited by AMEE, introduces the fundamental principles of assessment for healthcare professions educators who are involved with assessing undergraduate students, graduate trainees and practicing doctors, and who wish to gain a thorough foundation in assessment. After completing the course, participants will have acquired a vocabulary and a framework for understanding essential concepts in assessment, and familiarity with the principles for their practical implementation. This course is designed for health professional educators who want a comprehensive introduction to evidence-based assessment principles, and also for experienced educators who would like an update on contemporary best practice in assessment.
PC02 | A Practical Introduction to Item Response Theory
Workshop Facilitators: Stefan Schauber, Dr Dario Cecilio Fernandes
The Ottawa consensus framework for good assessment (Norcini et al. 2018) highlights several criteria for the quality of assessment. Especially in the context of high-stakes assessment, the criteria of validity, reliability and equivalence can be substantiated using psychometric methods. This workshop will facilitate an understanding of the basic features of Item Response Theory (IRT; DeMars 2010) and how it can practically help to evaluate aspects of the quality of assessments as well as support decision-making processes. In educational measurement, IRT has become the de facto standard for analyzing data from high-stakes tests. Furthermore, IRT is the foundation for many technology-enhanced assessments and, more specifically, for computerized adaptive testing. Hence, an understanding of the underlying statistical procedures will help practitioners capitalize on new developments in the area of digital assessments. Participants will be able to discuss the advantages and disadvantages of IRT and to interpret the most important outcomes of an IRT analysis.
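As a brief illustration, one of the most widely used IRT models, the two-parameter logistic (2PL) model, expresses the probability that candidate i answers item j correctly in terms of the candidate’s ability \theta_i, the item’s difficulty b_j and its discrimination a_j:

P(X_{ij} = 1 \mid \theta_i) = \frac{\exp\{a_j(\theta_i - b_j)\}}{1 + \exp\{a_j(\theta_i - b_j)\}}

Fitting a model of this kind to response data yields item-level difficulty and discrimination estimates that can support quality-assurance decisions, such as flagging items for review.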
PC03 | Decision Making in Competency-Based Medical Education: What Information is Needed for Competency Committees to Make Defensible Decisions?
Workshop Facilitators: Dr Shelley Ross, Dr Brent Kvern, Dr Kathrine Lawrence, Dr Cheri Bethune, Dr Keith Wilson, Dr Alison Baker, Dr Erich Hanel, Dr Annelise Miller, Dr Theresa van der Goes and Dr Karen Schultz
A crucial component of assessment in health professions education is making summative (high-stakes) decisions about how well a learner is progressing within their training program, and/or the learner’s level of competence. Much work has been accomplished across the health professions in defining the competencies that should be assessed during training in a variety of health professions education programs. However, there is still work to be done in two crucial areas: first, determining what kinds of assessment information need to be collected about a learner’s progression towards competence; and second, establishing effective ways for training programs to make sense of the assessment information collected. For many programs, the second area is of particular concern, as they struggle to determine appropriate processes for competence committees to combine various pieces of assessment data in order to arrive at defensible and accountable summative decisions. The workshop will discuss key concepts of defensible summative decision making, framed in the context of published evidence, theory, and best practices. These include: programmatic assessment, matching tools to purpose, rater cognition, and sensibly combining a variety of assessment tools to come to a progress decision about a learner.
PC04 | Improving Feedback Conversations in Clinical Assessment Through Intellectual Candour
Workshop Facilitators: A/Prof Margaret Bearman, Prof Elizabeth Molloy
Feedback conversations are important for learning through assessment, but unfortunately they often don’t deliver on this potential. Despite literature espousing active learner involvement in feedback, ‘interactions’ often take the form of well-intended supervisor monologues. This workshop describes the phenomenon of intellectual candour, or “…disclosure for the purpose of one’s own learning and the learning of others” (Molloy and Bearman 2018, p 1). We outline how educators judiciously disclosing their own uncertainties may influence feedback conversations for the better. In this workshop, illustrative examples of workplace assessment conversations will be presented, foregrounding how the healthcare assessment milieu influences learners’ reluctance to render themselves vulnerable. Participants will experiment with how ‘the moves’ of the educator can promote or stifle learner engagement in feedback, with a particular focus on intellectual candour.
PC05 | (Re)Designing Test Blueprints using Entrustable Professional Activities
Workshop Facilitators: Dr Carlos Gomez-Garibello, Dr Maryam Wagner and Dr Paola Fata
With the transition to competency-based medical education (CBME), training programs are incorporating Entrustable Professional Activities (EPAs) into their curriculum, teaching and assessment. Whilst the use of EPAs is becoming increasingly widespread, educators do not always capitalize on the opportunities to integrate EPAs throughout their programs. Even less frequent is the use of EPAs to develop blueprints for tests of clinical knowledge. We propose using an Evidence-Centred Design (ECD) framework to apply EPAs to the design of assessments and/or tests in a systematic and rigorous manner. The workshop will be delivered through plenaries, hands-on activities, and discussions. Part 1 will be devoted to teaching the basics of the creation of test blueprints, and the educational tenets of EPAs and their components. Part 2 involves illustrating an evidence-based approach (Mislevy et al, 2003) to using EPAs to develop a test blueprint. The process will be exemplified using the Canadian Association of General Surgeons’ (CAGS) formative exam. During Part 3 of the workshop, participants will be provided with a selection of EPAs, competencies, and a test blueprint template, so they can apply the principles covered in Part 2 to develop a test blueprint. Part 4 will engage participants in a discussion of how to apply these ideas and principles in their own educational contexts.
PC06 | The Realities of Programmatic Assessment – Tips and Pitfalls for Progression Decision Making
Workshop Facilitators: Dr Nidhi Garg, Tyler Clark, Dr Lauren O’Mullane and Prof Deborah O’Mara
Programmatic Assessment (PA) has been increasingly implemented in medical and health professional programs throughout the world since first being outlined by Schuwirth and Van der Vleuten (2011). While there is widespread support for the principles of PA, studies to date have been largely theoretical and some medical schools have implemented only some aspects of PA (Torre et al., 2020). Unintended consequences of PA can include increased data complexity, staff workload, resource requirements, persistent failure to fail, non-transparent decision-making and increased student stress (Ryan et al., 2023). Sydney Medical School implemented PA in 2020 alongside a curriculum reform. While there have been benefits to implementing a PA system, challenges also arose. This necessitated adaptations to the initial vision of PA to suit our context, in which progression decisions must be made for multiple large cohorts at a single time point.
Saturday 24 February 2024 – 1:30pm to 5:00pm
PC01 | ESMEA Essential Skills in Medical Education - Assessment
Please note this is a full two-day workshop. Registrants for this workshop must attend both days and cannot attend other pre-conference workshops.
Workshop Facilitators: Prof Katharine Boursicot, Prof Sandra Kemp, Prof Jennifer Williams, Prof Brian Jolly and Dr Lucy Wilding
The ESMEA Course, accredited by AMEE, introduces the fundamental principles of assessment for healthcare professions educators who are involved with assessing undergraduate students, graduate trainees and practicing doctors, and who wish to gain a thorough foundation in assessment. After completing the course, participants will have acquired a vocabulary and a framework for understanding essential concepts in assessment, and familiarity with the principles for their practical implementation. This course is designed for health professional educators who want a comprehensive introduction to evidence-based assessment principles, and also for experienced educators who would like an update on contemporary best practice in assessment.
PC07 | Applying a Framework for Systems of Assessment
Workshop Facilitators: Brownell Anderson, Prof John Norcini and Prof Anna Ryan
Education and practice in the health professions require knowledge, skills, and attitudes that cannot be captured in a single assessment. Separate measures are required, and these are frequently applied in isolation. In 2019 an updated version of the criteria for good assessment was published [1]. The most notable change was the addition of a framework for systems of assessment. Systems of assessment integrate this series of individual measures to achieve one or more purposes (e.g. feedback versus decisions, high versus no stakes) for one or more groups of stakeholders (e.g. students, faculty, patients, and regulatory bodies). The development and implementation of such systems is challenging, and there is little guidance available to help educators. After a brief review of the criteria from the 2018 Consensus Framework for Good Assessment, participants will work in small groups with scenarios describing a variety of different assessment situations. They will apply the framework to the scenarios and identify strengths, weaknesses, and improvements. The workshop will conclude with small group presentations and discussion.
PC08 | Applying the Rasch Model for Quality Assurance and Improvement in Assessments of Competency: Techniques for Exploring Item Quality, Examiner Behaviours and What Feedback Surveys are Really Telling You
Workshop Facilitators: Dr Imogene Rothnie, Dr Curtis Lee
This workshop provides participants with foundational knowledge and practical examples of applying the Rasch model (Rasch, 1960/1980) and its extension, the many-facet Rasch model (Linacre, 1988), for quality assurance and improvement of assessments in health professional programs. The Rasch model and its variants are grounded in Rasch measurement theory, which focuses on creating linear measures of latent constructs frequently targeted in assessing competence, e.g. knowledge, skills and, increasingly, examiner severity. The Rasch model can be used in the context of written and clinical assessments, in the development and analysis of survey data, and in the construction of measurement scales. The workshop does not require specialised statistical knowledge and is suitable for those wishing to begin or build on their existing knowledge of Rasch measurement theory and its application in assessments of clinical competence.
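As a brief sketch, in its simplest dichotomous form the Rasch model gives the probability that person n succeeds on item i as a function of the difference between person ability \beta_n and item difficulty \delta_i; the many-facet extension adds further parameters, for example the severity C_j of examiner j:

P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}, \qquad P(X_{nij} = 1) = \frac{\exp(\beta_n - \delta_i - C_j)}{1 + \exp(\beta_n - \delta_i - C_j)}

Because all parameters are estimated on a common logit scale, item difficulty, candidate ability and examiner severity can be compared on one linear measure, which is what makes the model attractive for quality assurance.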
PC09 | How to Understand Candidate Behaviour Patterns in Computer-Based Testing Using Visualisation “ClickMaps”
Workshop Facilitators: Dr Gil Myers, Dr Alison Sturrock, Prof Chris McManus
In-depth information on how candidates respond to single best answer items during a real, time-limited examination has been made possible by the advent of computer-based tests (CBT) (1). We have developed a programme to create a “ClickStream” (2) of keyboard/mouse movements. This allows us to visualise how students move between questions, view images, select answers, revise answers, and return to questions. Due to the precise timing of events and the volume of data produced, these events are challenging to interpret, so we have created visual “ClickMaps” (3) of behavioural patterns, allowing us to quickly evaluate detailed maps of candidate behaviour. We will discuss our ClickMaps and the inferences drawn from them, and outline different strategies used by candidates. We have evidence from multiple student groups about how candidates behave and the effect that their methods have on their outcomes. Collusion is a worry in high-stakes assessments, and the introduction of CBT has made this worry more acute. We will examine our ClickMap patterns in light of Acinonyx data and discussions around cheating.
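As a minimal illustrative sketch (hypothetical event format and field names, not the ClickStream programme described above), navigation events logged during a CBT could be aggregated into a question-to-question transition matrix and rendered as a simple heatmap of candidate movement, for example in Python:

from collections import defaultdict
import matplotlib.pyplot as plt

# Hypothetical event log: (seconds_into_exam, candidate_id, question_number, action)
events = [
    (12.4, "cand01", 1, "view"), (55.0, "cand01", 1, "answer"),
    (58.2, "cand01", 2, "view"), (120.7, "cand01", 5, "view"),
    (130.1, "cand01", 2, "answer"), (10.3, "cand02", 1, "view"),
    (70.9, "cand02", 2, "view"), (95.5, "cand02", 3, "view"),
]
n_questions = 5

# Group events by candidate, sort by time, and count moves between questions.
by_candidate = defaultdict(list)
for t, cand, q, _action in events:
    by_candidate[cand].append((t, q))

transitions = [[0] * n_questions for _ in range(n_questions)]
for seq in by_candidate.values():
    seq.sort()
    for (_, q_from), (_, q_to) in zip(seq, seq[1:]):
        if q_from != q_to:
            transitions[q_from - 1][q_to - 1] += 1

# Heatmap: rows = question moved from, columns = question moved to.
fig, ax = plt.subplots()
im = ax.imshow(transitions, cmap="viridis")
ax.set_xlabel("to question")
ax.set_ylabel("from question")
fig.colorbar(im, ax=ax, label="number of moves")
plt.show()

Real ClickMaps are far richer, but even a simple transition matrix of this kind shows at a glance which questions candidates revisit and in what order.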
PC10 | Using Theory and Evidence to Design and Implement Programmatic Assessment for Competency-Based Medical Education: Lessons Learned from Canadian Family Medicine Postgraduate Training
Workshop Facilitators: Dr Shelley Ross, Dr Brent Kvern, Dr Alison Baker, Dr Cheri Bethune, Dr Kathrine Lawrence, Dr Keith Wilson, Dr Erich Hanel, Dr Annelise Miller, Dr Theresa van der Goes and Dr Karen Schultz
Many health professions education programs globally are in the planning or implementation phase of adopting competency-based medical education (CBME). Concurrently, training programs are considering ways to improve approaches to assessment to better capture evidence about the competence of their learners. Programmatic assessment, first introduced to health professions education by van der Vleuten and Schuwirth in 2005, is an approach to assessment that shifts the focus away from summative examinations. Instead, the focus is on the longitudinal collection of multiple pieces of data about learner competence, collected through multiple different assessment tools. While many programs have embraced the idea of programmatic assessment, designing effective and trustworthy programmatic assessment is a serious challenge for many programs. How can those who plan assessment for health professions training programs ensure that they are assessing the right things, with the right tools, in the right way? Many programs have already done the work of identifying and describing the professional competencies that must be assessed during training. The next steps are to determine 1) which assessment tools are most appropriate to assess each competency, and 2) how to make sense of the assessment data that is collected. This interactive workshop will give participants the opportunity to work through the process of developing programmatic assessment, guided by a template in the form of a worksheet. Using the case example of Canadian family medicine training, participants will identify the enablers and barriers faced by accrediting bodies and individual programs in transforming assessment. We will introduce the basic principles of programmatic assessment, and give scenario prompts to facilitate frequent small group breakout discussions to consolidate the information shared and make it actionable. Using the worksheet, individuals and small groups will look at how to operationalize the didactic information about what factors to consider in designing programmatic assessment.
PC11 | Workplace Based Assessment Outside of Patient Care and Medical Knowledge
Workshop Facilitators: Dr Laura Culver Edgar, Raghdah Al Bualy and Dr Eric Holmboe
Assessment remains a major challenge for programs and faculty, especially for the competencies of professionalism, interpersonal and communication skills (e.g., teamwork), systems-based practice (e.g., quality improvement and patient safety, roles within the healthcare system), and practice-based learning and improvement (e.g., evidence-based practice/scholarship/life-long learning). All programs accredited by the Accreditation Council for Graduate Medical Education (ACGME) and the Accreditation Council for Graduate Medical Education – International (ACGME-I) are required to assess a similar set of subcompetencies, referred to as Harmonized Milestones, under each of these 4 core competencies. Using evidence-based principles and methods of “good assessment”, in conjunction with faculty development in these essential competencies, can improve overall programmatic assessment. The session will contain three sections. The introduction will enable participants to learn about the evidence-based process used to create the Harmonized Milestones and how these subcompetencies map to the medical education frameworks of other systems and countries. The second section will provide an overview of the principles of “good assessment” and the utility of the assessment methods used for each competency. Participants will discuss these tools and share how they have assessed these key competencies in their own contexts. The final section will review how to implement these tools through the creation of shared mental models. The session will close with small group discussion and report-outs on how programs can implement these tools and meet the challenges of assessing each of these competencies.
Sunday 25 February 2024 – 9:00am to 12:30pm
PC01 | ESMEA Essential Skills in Medical Education - Assessment
Please note this is a full two-day workshop. Registrants for this workshop must attend both days and cannot attend other pre-conference workshops.
Workshop Facilitators: Prof Katharine Boursicot, Prof Sandra Kemp, Prof Jennifer Williams, Prof Brian Jolly and Dr Lucy Wilding
The ESMEA Course, accredited by AMEE, introduces the fundamental principles of assessment for healthcare professions educators who are involved with assessing undergraduate students, graduate trainees and practicing doctors, and who wish to gain a thorough foundation in assessment. After completing the course, participants will have acquired a vocabulary and a framework for understanding essential concepts in assessment, and familiarity with the principles for their practical implementation. This course is designed for health professional educators who want a comprehensive introduction to evidence-based assessment principles, and also for experienced educators who would like an update on contemporary best practice in assessment.
PC12 | Assessing Cultural Safety in Primary Care Consultations for First Nations People
Workshop Facilitators: Prof Kay Brumpton, Dr Raelene Ward, Dr Rebecca Evans and Prof Tarun Sen Gupta
This medical education workshop aims to explore the assessment of cultural safety in primary care consultations for First Nations people by bringing together First Nations peoples, educators, academics, and primary care providers to share their experiences. Assessment of cultural safety in primary care consultations for First Nations patients is complex. Assessment needs to consider the defined components of cultural safety, educational theory, and the social, historical, and political determinants of health. Furthermore, cultural safety must be determined by First Nations peoples. However, current community-derived definitions of cultural safety are very broad and, whilst describing cultural safety, do not provide specific or measurable attributes to guide the assessment of health professionals (registrars or health professional students). This risks culturally safe care remaining intangible for medical learners. Consideration needs to be given to assessment design that amplifies First Nations peoples’ voices and reflects the complexity of cultural safety. The workshop will be facilitated by a multidisciplinary team of experts, including Indigenous healthcare professionals. Through their personal experiences and expertise, these facilitators will provide valuable insights into culturally safe practices and highlight the importance of respectful partnerships between healthcare providers and Indigenous communities.
PC13 | Behind Closed Doors: Making Defensible High Stakes Progression Decisions in Competency-Based Health Professions Education
Workshop Facilitators: Dr James Kwan, Dr Faith Chia, Dr Wee Khoon Ng, A/Prof Dong Haur Phua and Dr Tracy Tan
Groups such as progression review committees or clinical competency committees are tasked with making high-stakes summative decisions in undergraduate and postgraduate health professions education. Such committees are responsible for ensuring that learners have met the requirements to progress to the next stage of their training and that graduates of their programs are ready for appropriate levels of independent practice. They meet at regular intervals, review assessment information about individual learners from multiple sources, synthesise this information to judge learner performance against a set of performance standards, document the rationale for the decision, provide feedback to learners and implement a remediation action plan if appropriate. Despite the advantages of group decision making in sharing information to make informed decisions, monitoring learner performance over time and identifying struggling learners early, there is wide variability in group processes and the quality of decisions being made. Therefore, it is imperative that members of such groups undergo the necessary training to improve the quality and defensibility of high-stakes progression decisions for individual learners, as well as structured remediation plans with documentation of impact.
PC15 | Essential Assessment Workshops for Faculty Development in the Era of Diversity, Equity, and Inclusion (DEI)
Workshop Facilitators: Prof Ara Tekian and Prof John Norcini
Medical schools often offer their faculty educational materials or hands-on experiences in assessment, which generally have a positive effect on the quality of the educational program. However, the materials and workshops that are offered tend to focus on a few specific topics determined by the interests and expertise of staff or the traditions of the school, and issues of diversity, equity, and inclusion (DEI) are not necessarily considered. In this interactive workshop, five essential components of a complete faculty development program in assessment will be discussed in small and large groups, with practical examples and with attention to gender, race/ethnicity, sexual orientation, ability, and international medical graduates. This workshop itself will serve as an example of what participants might offer at their own institutions in the era of DEI. All participants will receive five templates for organizing the workshops.
PC16 | How to Redesign Assessment in Health Sciences Education to Include Advancing Modern Technologies
Workshop Facilitators: Dr Peter de Jong and Prof Lambert Schuwirth
In the field of assessment, technological developments are being introduced at high speed. Some of these new technologies can be used to complement or enhance traditional methods. However, in certain situations, technology can also replace or even transform existing methods, systems, or human roles. In those cases where the impact is high, it is important to redefine the value proposition of the assessment first, and subsequently to redesign the assessment program as a whole. Redesigning assessment in the health sciences to incorporate modern technologies can enhance the effectiveness, efficiency, and relevance of the evaluation process and will prepare students for a very different way of handling knowledge in the future. The session will start with an introduction to the topic, followed by a small group activity in which we will explore what impact technology can have on the goals and principles of testing. The groups will report back their views in a large group discussion. In a second small group activity, we will develop examples and scenarios for actually integrating modern technologies into assessment programs. The session will be concluded by drafting a summary and formulating take-home messages.
PC17 | Maximising the Learning from Case-Based Discussions: Assessing Clinical Decision-Making and Identifying Biases
Workshop Facilitators: Dr Ruth Hew, Dr David Mai and A/Prof Victor Lee
The Case-based Discussion (CbD), or chart-stimulated recall, is a well-described workplace-based assessment (WBA) tool. In its most basic form it can be used to identify concrete knowledge gaps. Used optimally, CbDs can help tease out a learner’s clinical reasoning and decision-making, identify biases and guide further learning. This makes the CbD an important tool in the assessment toolkit. The logistics of the CbD are as follows: 2-4 cases are identified by the learner and forwarded to the assessor, who then chooses one for discussion. The trainee presents a summary and synthesis of the case, the assessor probes their clinical reasoning, and the trainee can demonstrate their reflective skills and clinical decision-making and identify any potential biases. As the original documentation forms part of the assessment, the assessor can also comment on the trainee’s documentation skills. In this interactive workshop, participants will explore the components that make up an effective learning encounter using a CbD. The workshop conveners will use facilitated small group discussions, role plays, and video-simulated rating and calibration exercises to engage participants in exploring the important components of the CbD, to illustrate how to optimise the utility of the tool and to demonstrate how to use it as a tool for both learning and assessment.
PC18 | Optimising Formative Assessment on Professionalism Using Feedback – Coaching Model
Workshop Facilitators: A/Prof Diantha Soemantri and Rita Mustika
Professionalism is at the core of health professionals’ competencies; however, significant challenges still exist, especially in terms of how best to assess it. Since assessment drives learning, discussion and studies on how to assess professionalism are ongoing. Continuous feedback is currently considered the heart of assessment, aligned with the principles of programmatic assessment. Therefore, the assessment of medical professionalism should also take into account the importance of feedback. Formative assessment is one means of providing actionable feedback that students can use to improve their learning, in this case their professionalism attributes. Sargeant et al (2015) developed a feedback model called R2C2, which consists of four phases: build relationships, explore reactions, explore content, and coach for performance change. This model helps students reflect on the feedback they receive. In the last phase of the R2C2 model, students are coached using triggering questions to come up with action plans for continuous improvement. The coaching phase is therefore essential; hence we propose combining the R2C2 model with the GROW coaching model, which stands for Goal, Reality, Options, and Will. We believe that the GROW model will augment the coaching phase of R2C2, providing more thorough, in-depth, and structured ways to coach students to set goals, explore current situations and options for moving forward, and agree on specific action plans. In this way, the R2C2-GROW model will enhance the feedback delivered within the formative assessment of professionalism.
PC19 | OSCE and OSTE Stations to Address Racism and Other Biases – Planning and Implementation
Workshop Facilitators: Dr Elizabeth Kachur, Dr Alice Fornari and Dr Thanakorn (TJ) Jirasevijinda
Global efforts to promote Diversity, Equity and Inclusion (DEI) have highlighted the need for bias-reduction education and assessment at all training levels. This includes clinicians in practice who need these skills in patient care as well as teaching. Biases can target any personal characteristics, from race and gender to sexual orientation and immigration status. Whether they are implicit or explicit, they will affect our interactions with patients, families, learners, and colleagues. Medical professionals and teachers bear the responsibility to maintain awareness and work to mitigate the adverse effects of biases. Formative Objective Structured Clinical Exams/Exercises (OSCEs) have proven to be effective and efficient training tools for addressing complex interpersonal situations. The same is true for Objective Structured Teaching Exercises (OSTEs) that have become a quintessential faculty development tool. Whether they are for trainees or faculty, station encounters can lead to multi-source feedback and debriefings to support best practice strategies. Over the years a variety of OSCE and OSTE stations have been developed to address racism and other biases. They can be categorized as focusing on 1) patient encounters (e.g., sequelae of historic racism); 2) encounters with learners and colleagues (e.g., allyship related to gender discrimination); 3) managing bias and microaggressions when learners themselves are the target (e.g., rejection of care providers because of their personal characteristics). These are difficult stations for everyone involved (including simulated/standardized participants – SPs), but with adequate pre- and de-briefing, they can be a powerful preparation for future real-life scenarios. This Pre-Conference Workshop will address the opportunities and challenges of each station type. The goal is to help participants create and implement bias-related OSCE or OSTE stations that fit their particular program.
PC20 | The Case of the Quiet Learner – Does Being an Introvert Harm Your Clinical Assessments?
Workshop Facilitators: Dr Beth Bierer, Prof Elizabeth Molloy and Brownell Anderson
The role of bias in assessment has long been recognized as a source of measurement error. Recently, the emphasis on bias in assessment has focused on gender and race/ethnicity. The association between being an introvert (or being quiet as a sign of cultural respect) and performance assessment ratings has received minimal attention in the medical education literature, particularly in graduate medical education (GME). The American Psychological Association defines introversion as a personality trait which, like extraversion, exists as a continuum of attitudes and behaviors. Introverts often appear reserved and are socially and cognitively more reflective than their think-aloud extraverted colleagues. Available studies reveal that introverts receive lower scores on interpersonal behaviors in clerkships. This session will incorporate a difficult teaching/learner case discussion to delineate strategies to share with faculty and introverted learners faced with this assessment challenge. The session will utilize a modified “Morbidity and Mortality” format focused on an actual case of a third-year resident who, during their residency exit interview, revealed the perception that their naturally shy/quiet demeanor led to inaccurate assessments of their medical knowledge and decision-making ability. Participants will discuss the case and will be given the opportunity to ask for additional information before generating hypotheses and articulating what they would say/do during the resident’s exit interview, as well as possible long-term solutions for others. There will be a brief summary of the available literature from business and medical education regarding introverts’ behaviors and associated performance assessments.
Sunday 25 February 2024 – 1:30pm to 5:00pm
PC01 | ESMEA Essential Skills in Medical Education - Assessment
Please note this is a full two-day workshop. Registrants for this workshop must attend both days and cannot attend other pre-conference workshops.
Workshop Facilitators: Prof Katharine Boursicot, Prof Sandra Kemp, Prof Jennifer Williams, Prof Brian Jolly and Dr Lucy Wilding
The ESMEA Course, accredited by AMEE, introduces the fundamental principles of assessment for healthcare professions educators who are involved with assessing undergraduate students, graduate trainees and practicing doctors, and who wish to gain a thorough foundation in assessment. After completing the course, participants will have acquired a vocabulary and a framework for understanding essential concepts in assessment, and familiarity with the principles for their practical implementation. This course is designed for health professional educators who want a comprehensive introduction to evidence-based assessment principles, and also for experienced educators who would like an update on contemporary best practice in assessment.
PC21 | Assessing Assessment Authenticity: A Holistic Approach to Reviewing and Renewing the Authenticity of Assessments in a Changing World with Artificial Intelligence
Workshop Facilitators: Dr Thao Vu and Prof Paul White
This workshop is for educators, course designers and curriculum leaders to reflect on the authenticity of their program’s assessments in the contemporary health education context, explore what is working and what can be renewed, and create a personalised action plan for enhancing assessment authenticity. Participants will be engaged in an active learning experience underpinned by a Discover-Explore-Apply-Reflect (DEAR) instructional model, with interspersed theory bursts and small group application and reflective activities. Learning occurs in a DEAR active learning environment in which participants discover and explore research-based materials, then apply these materials to unpack their own perceptions of assessment authenticity in their contemporary education setting, self-evaluate the degree of authenticity of their assessments, and share their tools, challenges and unique action plans with other educators, course designers and curriculum leaders.
PC22 | Assessment for Inclusion in Health Professions Education
Workshop Facilitators: Dr Joanna Tai, Prof Rola Ajjawi, Prof Margaret Bearman, Dr Simon Fleming, A/Prof Rhea Liang and Prof Nalini Pather
The diversity of learners in health professional education is increasing as a result of concerted efforts to ensure the composition of the health workforce better reflects the general population. However, research has demonstrated that there is an attainment gap for minority-group students in the health professions which cannot be explained by student psychological and demographic factors (Lucey et al., 2020). Rather than persisting with a deficit view of learners from under-represented groups, we propose that designing assessment for inclusion is a crucial component of promoting equity in health professional education. Assessment for inclusion “recognises diversity in student learning, and endeavours to ensure that no student is discriminated against by virtue of features other than their ability to meet appropriate standards” (Tai et al. 2023, p. 484). Assessment for inclusion principles will be introduced along with relevant educational concepts such as Self-Determination Theory, and supporting research evidence. Participants will work in small groups to explore inclusion dilemmas and tensions in assessment, drawing on scenarios from multiple research projects into inclusion and sharing their own experiences. Participants will also consider issues of systems-level assessment design and identify individual and organisational strategies to implement assessment for inclusion.
PC23 | Beyond Closed Doors: Maximising the Educational Impact of High Stakes Learner Progression Decisions by Improving the Individual and the Institution
Workshop Facilitators: Dr James Kwan, Dr Faith Chia, Dr Wee Khoon Ng, A/Prof Dong Haur Phua, Dr Tracy Tan and A/Prof Subha Ramani
Groups such as progression review committees or clinical competency committees are tasked with making high-stakes summative decisions in both undergraduate and postgraduate health professions education. Such committees are responsible for ensuring that learners have met the requirements to progress to the next stage of their training and that graduates of their programs are ready for appropriate levels of independent practice. In addition to making these decisions, such committees are also responsible for providing feedback and coaching to promote learning and growth among their learners and, at a systems level, for improving the quality of the education program. However, there are significant gaps in how feedback data are utilised to promote individual learner development and practice change while concurrently being applied at a systems level for quality improvement of the educational program.
PC24 | Developing Feedback Literacy for Assessment: How Can We Take Collective Action?
Workshop Facilitators: Dr Christy Noble, Dr Matthew Sibbald and Prof Elizabeth Molloy
It is increasingly recognised that enhanced feedback processes related to assessment are enabled when the feedback literacy of both learners and educators is purposefully developed (Molloy et al., 2020). This realisation has generated a flurry of activity to design and implement pedagogical resources that aim to develop feedback literacy (e.g. workshops and learning modules) (Noble et al., 2020). With increasing activity in this space, there is a risk that we duplicate efforts, e.g. individual universities developing resources to enhance feedback related to common assessment tasks. Also, with resources contracting, we have an opportunity to take collective action (i.e. “action taken by a group in pursuit of members’ perceived shared interests” (Oxford Reference)) to optimise the benefits of improved feedback literacy and creativity through working collaboratively. This interactive workshop will be informed by the principle of collective action. In the first part of the workshop, we will describe the literature and evidence for different approaches to developing feedback literacy and reflect on our experiences from three universities. We will also facilitate a structured discussion to illuminate participants’ experiences of developing feedback literacy in different assessment contexts, e.g. classroom, simulation and clinical settings. In the second part of the workshop, participants will engage in interactive activities designed to nurture the sharing of ideas to extend the mission. A key goal will be to develop a framework for co-generating or sharing (and contextualising) practical resources to develop feedback literacy for assessment.
PC25 | Employing Logic Models to Align Program Planning, Delivery, and Inclusive and Multisource Assessments with Short- and Long-Term Outcomes
Workshop Facilitators: Dr Kimberly Dahlman, Dr Alana Newell, Prof Neil Osheroff and Dr Nancy Moreno
Logic models are valuable tools for program planning and evaluation. A logic model visually connects intended outcomes with the resources and activities that go into a program, depicting relationships among program inputs, planned work, measurable outputs or products, and short- and long-term outcomes or impacts for students or other learners. Thus, a well-constructed logic model can guide decision-making about data collection, assessment timepoints and overall program evaluation, supporting clear alignment between these measures and the desired outcomes. Importantly, these assessments and measurements need to be inclusive and multisource, considering near-peer, distal-peer and self-assessments. In this IAMSE-organized session, an experienced investigator and program evaluator team will guide participants through each of the elements of a logic model and provide templates, strategies, and practice in the development of well-aligned education program logic models.
PC26 | Learner Assessment in the Time of Artificial Intelligence: Friend or Foe?
Workshop Facilitators: Dr Maryam Wagner and Dr Carlos Gomez-Garibello
Recent and sustained developments in generative artificial intelligence (AI) tools, and their widespread accessibility, are challenging typical methods for the assessment of learners in health professions contexts. Learners are using AI tools such as ChatGPT to brainstorm ideas, answer multiple choice questions, and submit both edited and non-edited written materials (Hao, 2023). The pervasiveness of these tools and their uses has prompted many discussions about their impact on assessment, both positive and challenging. The workshop will be delivered through mini-plenaries, hands-on activities, and discussions. Part I of the workshop is a short plenary presentation introducing the audience to the various capabilities of artificial intelligence tools relevant to learner assessment. Part II of the workshop will engage participants in a jigsaw activity that guides them to explore key issues and topics of assessment in the context of AI, its risks and challenges, and the ethical and pragmatic implications for workplace-based assessment and research. The workshop will conclude with Part III, a facilitated discussion to synthesize the outcomes of the workshop.
PC27 | Making Sense of Work-Place Based Assessment
Workshop Facilitators: Prof Brian Jolly
In 2012 Jim Crossley and I published a paper (1) with this title. It tried to condense the research on, and ideas about, WBA at that time around four main concepts: first, that the right questions be asked; second, that they be phrased appropriately; third, that they be about important and measurable features; and finally, that they be directed towards the right people, i.e. those who would know about those measurable features. Depending on which database is used, it has been cited between 250 and 300 times. This interactive workshop will unpack those ideas, along with a narrative that describes how the paper nearly did not get written and how, when it did, some of its ideas became uncomfortable for some clinicians and academics. It raised the issues of interprofessional contribution to assessment and of how the questions should be expressed in the best way (not always agreed upon) to evaluate the particular skill or domain of interest. This workshop will critically reflect on the premises of the original paper in light of the emerging evidence and feedback from the application of WBA over the last decade. For example, we did not consider the patient’s voice in the assessment process. Participants will be given the opportunity to outline their challenges in assessment, whether adopting WBA is feasible for them, and any successes they cherish. The four concepts will be outlined and their usefulness discussed. The workshop will aim to help participants refine their own interests and pursuits (for example, as researchers, to shape the research questions that still need to be addressed about WBA, or as practitioners, to consider how they might develop their practice).
PC28 | Multimodal Innovative Assessment Strategies for Bioethics Competencies in Medical and Health Professions Education
Workshop Facilitators: Prof Russell D’Souza, Prof Mary Mathew, Prof Princy Palatty and Prof Joseph Thornton
Medical ethics involves the application of moral rules to situations exclusive to the medical world. It warrants sound moral reasoning that originates from the fundamentals of bioethics. Bioethics encompasses a multitude of disciplines, including medicine, health and law, and its execution involves personnel from various specialties such as physicians, researchers, lawmakers, politicians, and social scientists. Mere knowledge and technical skills are not sufficient; a profound understanding of bioethics principles, professionalism, and communication skills is also required. Traditionally, students have been expected to “catch” the bioethical aspects hidden within the curriculum. Current assessment strategies aim to abide by Bloom’s taxonomy and test the awareness and knowledge gained by the student. Imparting bioethics in medical and health professions education is an art, and it is challenging to identify the most appropriate method of instruction. Any educational system relies on assessment, yet there is no single effective assessment tool for bioethics and professionalism. Using diverse tools and assorted raters in multiple settings gives a clearer picture of what the student has captured. The assessment of competencies in bioethics relates to the cognitive, psychomotor (behavioural) and affective domains. The cognitive and psychomotor domains are amenable to assessment, with tools available. Still, the affective domain poses a greater challenge in validating bioethics, as it relates to internalization and actual practice, which are difficult to assess. A multi-modal approach is necessary to assess bioethics in medical and health professions education across varied situations. This workshop will offer newly developed formative and summative assessment strategies and tools to evaluate bioethics competencies in medical and health professions students in the affective, cognitive, and behavioural domains. Assessment of bioethics presently relies on standard assessment tools; this workshop will introduce newly developed and validated innovative assessment tools. This interactive workshop will have three parts. Part one will focus on the assessment tools, part two will deal with assessment implementation, and part three will concentrate on the analysis of the assessment. Participants will be given all the tools for assessment, and groups will be at liberty to choose an assessment strategy and administer it. They will debrief together on the analysis of the tool and then be assessed.
PC29 | Planning to Mitigate the Unintended and Undesired Consequences of Programmatic Assessment
Workshop Facilitators: Prof Anna Ryan, Dr Mike Tweed, A/Prof Glendon Tait and A/Prof Suzanne Schut
A programmatic system of assessment is focused on the longitudinal delivery of authentic assessment events involving different assessment formats, where data are accumulated against a meaningful framework. Designed to address some of the problems associated with traditional assessment systems, programmatic assessment aims to increase student engagement, reduce failure to fail, and provide rich feedback to support learner growth and development, while also allowing robust progress decisions. In most contexts where traditional assessment approaches are in place, a shift to programmatic assessment involves significant change. While the outcomes of such change should include the expected and intended consequences, change can also lead to unintended and undesired consequences such as increased assessment workload, increased student anxiety, unlimited opportunities to meet standards, devaluing of individual assessments (or topic areas) and unwieldy complexity of feedback and decision-making data.