Stay tuned for upcoming announcements of our lineup of captivating speakers and details about their presentations at Ottawa 2024.
Prof Lambert Schuwirth
Lambert Schuwirth is a Strategic Professor in Medical Education, College of Medicine and Public Health at Flinders University.
He graduated from Maastricht Medical School as an MD and he has been involved in medical education and medical education research since 1990. His main interest is in assessment of medical competence and performance, both in undergraduate and postgraduate training settings. He worked at Maastricht University for almost 20 years as assistant, associate and full professor in the Department of Educational Development and Research, before coming to Flinders in August 2011.
In 1991, he joined the Department of Educational Development and Research at Maastricht Medical School, taking up various roles in student assessment: Chairman of the Inter-university and the Local Progress Test Review Committee, the OSCE Review Committee and the Case-based Testing Committee. Since the early 2000s, he has been Chair of the overall Taskforce on Assessment. He has been an advisor on assessment to medical colleges in the Netherlands, Australia and the UK. In 2010, he chaired an international consensus group on educational research, the results of which were published in Medical Teacher.
Lambert Schuwirth MD, FANZAHPE
Strategic Professor of Medical Education
Chair Prideaux Health Professions Education
College of Medicine and Public Health
Distinguished Professor of Medical Education, Chang Gung University Taiwan
Professor of Medicine (adjunct)
Uniformed Services University for the Health Sciences USA
Assessment reform in a time of disruptive technological changes
Technological developments are causing disruptions to health-professions education and assessment, and generative AI is widely considered the most disruptive of these developments.
In times of rapid development, it can be tempting to react to this technology piecemeal or, worse, to try to forbid its use in assessment. But there is also growing consensus that technology and AI are here to stay, and that a fundamental rethink of our approaches to assessment is needed in this time of disruptive change.
One suggested direction is to reimagine assessment so that the learner, together with their use of technology, becomes the focus, and competence is seen from the perspective of distributed cognition rather than purely ‘biological’ cognition. The ‘biological’ cognition perspective treats the use of technology as cheating or cognitive offloading. Yet our modern students have this technology at their disposal, they will continue to have it as future healthcare professionals, and so will their clients, consumers or patients.
Another proposed avenue is a move towards assessment-for-learning programs, with a reduced focus on individual assessment artefacts or tests and less reliance on pure assessment-of-learning programs.
Assessment for learning is much more than merely the provision of feedback in an otherwise traditional assessment program, or purely formative assessment. Such programs include, for example, distributed assessments, meaningful collation of assessment information, interleaving, collaborative assessment, increased learner agency and of course a design from the perspective of distributed cognition.
In this keynote presentation the background and design principles of such an assessment-for-learning program will be presented. Participants will be challenged to critique current assessment approaches in their own health professional discipline to determine the extent to which they are promoting assessment for learning.
Dr Marcy Rosenbaum
University of Iowa Health Care
Marcy Rosenbaum is Professor of Family Medicine and Faculty Development Consultant for the Office of Consultation and Research in Medical Education at the University of Iowa Carver College of Medicine. She has been actively involved in teaching, curriculum development and conducting research on clinician-patient communication and health professions education for more than 30 years.
She oversees communication skills training for students, residents and practicing health care providers at the University of Iowa Hospitals and Clinics. She has also spent her career conducting research and directing programs focused on enhancing health professional faculty teaching skills in classroom and clinical settings. She has published extensively and facilitated train-the-trainers courses throughout the world. She is the immediate past president of EACH: International Association for Communication in Healthcare and founder and past Chair of tEACH, the teaching committee of EACH.
Exploring the complexity of communication skills and feedback (formative assessment) for healthcare learners
Health professional educators consistently note that providing learners with feedback (formative assessment) on their performance that is well received and influences their subsequent performance is a challenge. This talk will explore the particular challenges that accompany feedback conversations about learners’ clinical communication skills, both in formal communication learning sessions and in the immediate context of learners’ interactions with patients during supervised patient care (including as part of assessments such as WBAs and Mini-CEX). Questions to be considered include whether and how communication skills feedback differs from feedback on other clinical skills, and why and when communication skills feedback is challenging to accomplish effectively, particularly in workplace-based education. The assumptions underlying approaches to giving effective feedback on learner communication skills, as well as the research challenges in investigating the practices and outcomes of communication skills feedback, will be critically examined. This talk will be of interest to anyone involved in communication skills education and/or assessment for healthcare professional learners, whether as a classroom or clinical teacher, a curriculum and evaluation developer, a healthcare practitioner, and/or a healthcare education or practice researcher.
Prof Kevin Eva
University of British Columbia
Dr. Kevin Eva is Associate Director and Scientist in the Centre for Health Education Scholarship, and Professor and Director of Educational Research and Scholarship in the Department of Medicine, at the University of British Columbia. He completed his PhD in Cognitive Psychology (McMaster University) in 2001 and became Editor-in-Chief for the journal Medical Education in 2008.
Dr. Eva maintains a number of international appointments including Honorary Skou Professor at Aarhus University (Denmark), Honorary Professorial Fellow at the University of Melbourne (Australia), and visiting professor at the University of Bern (Switzerland).
The core theme of his diverse research interests is the question of how we can improve decision-making in the context of health professional training and practice. Awards for this work include the Karolinska Institutet Prize for Research in Medical Education (Sweden), an Honorary Fellowship from the Academy of Medical Educators (UK), the MILES Award for Mentoring, Innovation, and Leadership in Education Scholarship (Singapore), and the President’s Award for Exemplary National Leadership from the Association of Faculties of Medicine in Canada.
Assessment in health professional education: Unveiling successes, confronting challenges, and paving the way forward
In the closing keynote, Dr Eva will stimulate discussion on the status of assessment research that has been completed to date in health professional education including reflection on where successes have triggered effective change and translation to practice. He will juxtapose those stories against areas that we still don’t seem able to get right despite decades of effort in the face of ongoing evolutions in society, education, and technology. The audience will be encouraged to think about why that may be the case and strategies will be outlined for what we can do about it as a field. Through this conversation, researchers and practitioners will be challenged to think deeply about “what’s next?” as we strive to improve assessment for stakeholders ranging from students to administrators and patients.
Prof Iain Martin
Professor Iain Martin is Vice-Chancellor and President of Deakin University in Australia.
He came to Deakin from the position of Vice-Chancellor of Anglia Ruskin University in the United Kingdom. Prior to that, he was Deputy Vice-Chancellor Academic at the University of New South Wales. Professor Martin spent a number of years at the University of Auckland in New Zealand with positions including Professor of Surgery, Dean of the Faculty of Medical and Health Science, and Deputy Vice-Chancellor with responsibility for external and strategic partnerships.
Professor Martin grew up in the United Kingdom and attended the University of Leeds where he completed his medical degree, Doctorate and Master of Education.
Since being appointed as Deakin’s Vice-Chancellor in 2019, he has served as Chair of the Barwon Regional Partnership and Chair of the Australian Technology Network (ATN) of Universities. Since January 2022, he has served as Chair of the Victorian Vice-Chancellors’ Committee and is a member of the CSIRO’s Australian Centre for Disease Preparedness Advisory Group.
In March 2023, Professor Martin was reappointed as Deakin’s Vice-Chancellor for a further five-year term and continues to lead the implementation of the University’s strategic plan, Deakin 2030: Ideas to Impact.
Generative AI and assessment in health professional education – Moral panic, ethics, wisdom and technology, what wins out?
The very rapid evolution of AI, especially the use and adoption of transformer-based large language models (LLMs), is touching every aspect of education and assessment.
ChatGPT very publicly highlighted their spread, with the platform reaching 100 million users in less than two months. Late 2022 and 2023 saw what can only be described as panic in many educational settings about the impact of these models on education, most notably assessment.
From the perspective of a university educational environment, this talk will explore how we account for AI, and in particular LLMs, in the way we frame, design and deliver meaningful, valid and accurate assessment in health professional education, now that these tools are ubiquitous.
The moral panic over the fact that these LLM-based tools can ‘pass’ professional exams in medicine and law should perhaps be reframed: why is there no moral panic that we are relying on assessments that currently available LLMs can ‘pass’?
This talk will deliberately aim to be provocative and challenging in exploring the interface of assessment design, ethics, wisdom and technology as we consider what may be possible, what should be done, and how we know what our students know.
A/Prof Suzanne Schut
Delft University of Technology
Suzanne Schut is an Assistant Professor in Educational Science at Delft University of Technology. After graduating, she started working as a teacher and assessment consultant in higher education. In 2014, she joined the medical education community and completed her PhD research on programmatic assessment at the School of Health Professions Education (SHE), Maastricht University. Her main research interest is in adaptive expertise and workplace-based assessment, more specifically in teacher-student relationships within the assessment environment. Her activities in the Department of Educational Development and Research were focused on student assessment. She chaired the assessment review committee of the MD program and is currently a member of the assessment expert panel of the World Health Organization (WHO). Over the years, she has been privileged to work with many students, teachers, and other more knowledgeable partners, gaining extensive experience in designing and implementing assessment programs and facilitating faculty development courses on assessment and feedback in different settings and countries. Recently, she made the transition to the field of teacher education, aiming to challenge and enhance her understanding of the impact of assessment on students and teachers by investigating assessment approaches and interpersonal assessment relationships in a different field and context.
Whole-system assessment approaches, like programmatic assessment, aim to improve high-stakes decision-making processes while simultaneously benefiting and supporting learning. Our interest in using assessment for more than decision-making and accountability purposes may be higher than ever. However, problems and concerns about the utility of assessment for learning persist, and in practice learners most likely perceive any assessment as a high-stakes hurdle rather than a learning opportunity. A strong sense of agency has the potential to engage learners more actively in the assessment process and to positively influence the perceived authenticity of assessment, learners’ feedback receptivity, and their willingness to learn from assessment. Although many scholars in the field emphasize the value and importance of learners’ agency for continuous development, and learners’ active involvement in learning processes aligns with current views on learning, conventional hegemony prevails in the context of assessment.
During this presentation, I’ll seek, in interaction with the audience, to further explore and discuss the challenges of using assessment simultaneously for formative and summative purposes. Based on the current state of research, I’ll focus on how whole-system approaches, and the concept of assessment as a continuum of increasing stakes, are perceived by key stakeholders at the frontline of our assessment practices: learners and teachers. How do these perceptions influence learning and teaching? If we aim to purposefully integrate assessment and learning through the implementation of whole-system approaches, does it make sense to treat learners’ active involvement in assessment any differently from their role and involvement in learning? What is the potential for learners’ agency in assessment, and should we create more conditions in which learners are empowered and may empower themselves? The overall discussion will focus on when and how whole-system approaches afford a more meaningful impact on learning, and if and how we can overcome the more undesirable influences of assessment.