Predictive validity of the Dundee multiple mini-interview
Abstract
Context
The multiple mini-interview (MMI) is the primary admissions tool used to assess non-cognitive skills at Dundee Medical School. Although the MMI shows promise, more research is required to demonstrate its transferability and predictive validity, for instance, relative to other UK pre-admissions measures.
Methods
Applicants were selected for interview based on a combination of measures derived from the Universities and Colleges Admissions Service (UCAS) form (academic achievement, medical experience, non-academic achievement and references) and the UK Clinical Aptitude Test (UKCAT) in 2009 and 2010. Candidates were selected into medical school according to a weighted combination of the UKCAT, the UCAS form and MMI scores. Examination scores were matched for 140 and 128 first- and second-year students, respectively, who took the 2009 MMIs, and 150 first-year students who took the 2010 MMIs. Pearson's correlations were used to test the relationships between pre-admission variables, examination scores and demographic variables, namely gender and age. Statistically significant correlations were adjusted for range restrictions and were used to select variables for multiple linear regression analysis to predict examination scores.
Results
Statistically significant correlations ranged from 0.18 to 0.34 (0.23 to 0.50 after correction for range restriction). Multiple regression confirmed that the MMI remained the most consistent predictor of medical school assessments. No scores derived from the UCAS form correlated significantly with examination scores.
Conclusions
This study reports positive findings from the largest undergraduate sample to date. The MMI was the most consistent predictor of success in the early years of medical school across two separate cohorts. The UKCAT and UCAS forms showed minimal or no predictive ability. Further research in this area appears worthwhile, including longitudinal studies, replication at other medical schools and more detailed analysis of knowledge, skills and attitudinal outcome markers.
Introduction
It is widely accepted that so-called ‘non-cognitive’ or ‘non-academic’ attributes (such as interpersonal skills and moral reasoning) are important for medical school selection in addition to academic achievement.1 The multiple mini-interview (MMI), developed and introduced at McMaster University in 2004, has since been adopted by Dundee as the primary pre-admissions measure for this purpose. Other UK schools are increasingly following suit.
MMIs aim to assess a broad array of candidates' personal characteristics through ratings from multiple snapshots of behaviour in an objective structured clinical examination (OSCE)-like rotational approach. This type of interview was first introduced by Eva et al.2 in response to the need for an interview process with psychometric properties more robust than those of most traditional interviews. By sampling a broader range of content with multiple independent interviewers, MMIs can offer a more accurate picture of a candidate's behaviour.3 With compelling evidence on reliability and other satisfactory psychometric properties from the USA, Australia and the UK,2, 4-8 MMIs continue to be adopted across medical and dental schools worldwide.
Attention has now shifted to the ability of MMIs to predict performance in medical school and beyond. A number of studies have demonstrated statistically significant and practically relevant relationships with future performance.9-11 Eva et al.11 first showed that MMI scores significantly predicted mean OSCE scores (standardised β = 0.44) among 45 medical students. The same cohort was followed by Reiter et al.,9 who found that MMI scores correlated with a range of clerkship performance measures (r = 0.28–0.57) and licensing examination measures (r = 0.37–0.39), and that this relationship retained its predictive power after controlling for other variables. Eva et al.10 once again followed the same cohort, together with an additional group of postgraduate residents (n = 22), through another licensing examination and found significant correlations of 0.35 and 0.36, respectively.
Although these studies have successfully demonstrated predictive validity, it is clear that more research is needed as the majority of this work was based on the same small Canadian cohort. Furthermore, the MMIs used in the predictive validity studies were heavily weighted towards ethical decision-making and the authors acknowledge that MMIs developed elsewhere may be aimed at different characteristics.10 Therefore, the body of evidence examining the predictive validity of MMIs would benefit from an analysis of different and larger cohorts and from outside of North America.
Although it is certainly beneficial to consider the usefulness of MMIs, the same expectation should be set for all admissions measures.9 It is therefore important to consider the predictive ability of MMIs relative to other pre-admissions measures. Dundee, like most UK medical schools, considers scores derived from Universities and Colleges Admissions Service (UCAS) form components, namely personal statements, references and academic achievement, in addition to an aptitude test in the form of the UK Clinical Aptitude Test (UKCAT).12
UCAS personal statements present a biography of non-academic achievements, work experience and, usually, a justification for career choice. Presented as free text, they are challenging to score consistently and are subject to a range of influences, such as social opportunity. Neither personal statements nor references have been shown to predict success in medical school. Ferguson et al.13 and Siu and Reiter14 reviewed predictors of success in medical school and found a lack of evidence that personal statements or references have any predictive value for subsequent achievement. Wright and Bradley15 also found that scores derived from the personal statement not only failed to predict medical school examination performance, but were also biased towards applicants from more advantaged socio-economic backgrounds. Although existing data do not favour the use of personal statements or references, there are not enough studies to draw definitive conclusions.13
The UKCAT (http://www.ukcat.ac.uk) is an intelligence test that ‘assesses a range of mental abilities identified by university medical and dental schools as important’.16 It joins other established tests for selection into medical schools, such as the Graduate Australian Medical Schools Admissions Test (GAMSAT), the Medical College Admission Test (MCAT; for US candidates) and the BioMedical Admissions Test (BMAT; used by some English medical schools), although it is distinct in aiming purely to assess aptitude, with no knowledge-based component. Although the literature suggests that the MCAT, GAMSAT and BMAT each have some success in predicting future performance, this level of success has not so far been replicated with the UKCAT. Lynch et al.17 examined the predictive validity of the UKCAT at two Scottish medical schools and found that it did not predict Year 1 performance. Similarly, Yates and James18 investigated whether the UKCAT predicted performance during the first 2 years of medical school at Nottingham University and found that it had poor predictive value. To date, only Wright and Bradley15 have presented evidence of predictive validity; they found UKCAT scores to be predictive of Year 1 and 2 knowledge-based examination scores at Newcastle University.
Medical school selection in the UK has to work with a range of markers that seek to assess educational achievement (e.g. school grades), fluid intelligence (aptitude testing), motivation and other reported non-academic achievements (statements) and interview scores. The first two are thought to reflect largely cognitive abilities and the latter two non-cognitive, but we are cognisant that this divide is debated and that these instruments are seen as imperfect.
The predictive validity of the Dundee MMIs and other pre-admissions measures can now be evaluated for the first (2009–2010) and second (2010–2011) MMI cohorts, the former of which had, at the time of the study, completed 2 years of medical school. This study extends the findings of Dowell et al.7 and adds to the body of evidence by examining the relationships of MMIs and other pre-admissions measures with performance in medical school examinations. It takes advantage of an appreciably larger sample size, a younger student population and a geographically distinct cohort relative to other published studies. It aims to establish which aspects of the selection process can be justified in terms of predictive validity for knowledge-based and OSCE examinations in early medical school, and whether MMIs are useful in the UK.
Methods
Admissions tools
Dundee Medical School's admissions process is similar to that of most other UK medical schools, as described by Parry et al.12 UCAS forms were examined first for minimum academic qualifications. In total, 1278 and 1553 applicants applied to the standard 5-year medical course at Dundee and met the minimum academic requirements in 2009 and 2010, respectively. All applications were scored by one experienced member of the medical school's admissions team, with a second member of the team reviewing those close to the cut-off point. Numerical scores were assigned to UCAS form components, namely medical work experience and non-academic achievement (both derived from the personal statement), academic qualifications and references. For analysis purposes, a non-academic score total was created, consisting of an aggregate of the references, medical experience and non-academic experience scores. Widening Access markers were also considered for 44 (3.50%) and 53 (3.40%) local applicants in 2009 and 2010, respectively. A combined weighted outcome of UCAS form and UKCAT scores was then used to select candidates for interview.
At the MMI stage, candidates rotated around 10 seven-minute stations. MMI content was developed from a predefined set of non-cognitive attributes determined by the medical school's admissions committee: interpersonal skills and communication (including empathy), logical reasoning and critical thinking, moral and ethical reasoning, motivation and preparation to study medicine, teamwork and leadership, and honesty and integrity. Cronbach's alpha reliabilities for the 2009 and 2010 MMIs were 0.70 and 0.69, respectively. Further details on the development of these MMIs are provided in Dowell et al.7 Offers were then made to candidates based on a combined weighted MMI and pre-interview score, with the MMI score assigned the heavier weighting.
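For reference, Cronbach's alpha for an instrument with k stations, station score variances σ²ᵢ and total score variance σ²ₓ is given by the standard formula below; this is stated here for the reader's convenience and is not reproduced from the original paper.

```latex
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{X}}\right)
```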
Medical school examinations
Two standardised assessments (written and OSCE) are completed at the end of each semester of Year 1; two are completed at the end of Year 2. Raw percentage scores at first sitting for individual assessments were used. Appendix 1 shows a description of medical school examinations for each year. There were no appreciable differences in examination content, format or curriculum between the years. Table 1 shows Cronbach's alpha reliabilities for each assessment.
Examination | 2009 cohort | 2010 cohort |
---|---|---|
Year 1 | ||
Semester 1 written | 0.87 | 0.82 |
Semester 1 OSCE | 0.75 | 0.78 |
Semester 2 written | 0.88 | 0.87 |
Semester 2 OSCE | 0.78 | 0.66 |
Year 2 | ||
Written | 0.91 | – |
OSCE | 0.66 | – |
- OSCE = objective structured clinical examination
2009 MMI cohort
In total, 452 candidates sat the 2009 MMIs, of whom 147 enrolled in the programme. The MMI comprised six traditional ‘one-to-one’ stations and four interactive task-based stations. Admissions and examination scores were matched for 140 of 160 (87.5%) Year 1 and 128 of 158 (81.0%) Year 2 students, representing 95.2% and 87.1% of all enrolled 2009 MMI candidates in Years 1 and 2, respectively. The remaining students had either deferred entry from the previous admissions cycle, withdrawn from medical school or repeated a year. Post hoc power analysis confirmed that the sample sizes were sufficient to achieve 76% and 74% power to detect a correlation effect size of 0.20 at the 0.05 probability level in Years 1 and 2, respectively.
2010 MMI cohort
In total, 477 candidates sat the 2010 MMIs, of whom 150 enrolled in the programme. The MMI comprised five traditional ‘one-to-one’ stations and five interactive task-based stations. Admissions and examination scores were matched for 150 of 163 (92.0%) Year 1 students, representing the entire cohort of enrolled 2010 MMI candidates. As in 2009, the remaining students had either deferred entry from the previous admissions cycle, withdrawn from medical school or repeated a year. Post hoc power analysis confirmed that the sample size was sufficient to achieve 80% power to detect a correlation effect size of 0.20 at the 0.05 probability level.
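As an illustration only, the sketch below (using a hypothetical helper, `correlation_power`, written for this purpose) approximates the power of a test of a correlation via the Fisher z transformation. Assuming a one-sided test at α = 0.05, it gives values close to the 76%, 74% and 80% reported above, although the software and settings actually used for the post hoc analyses are not stated in the paper.

```python
from math import atanh, sqrt

from scipy.stats import norm


def correlation_power(r, n, alpha=0.05, one_sided=True):
    """Approximate power to detect a population correlation of size r with n
    pairs, using the Fisher z (normal) approximation."""
    z_r = atanh(r)                                  # Fisher z of the target correlation
    se = 1.0 / sqrt(n - 3)                          # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - z_r / se)          # P(observed z exceeds the critical value)


# Sample sizes for the 2009 Year 1, 2009 Year 2 and 2010 Year 1 analyses
for n in (140, 128, 150):
    print(n, round(correlation_power(0.20, n), 2))  # -> 0.77, 0.73, 0.79
```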
Consent was collected from applicants for educational research utilising their admissions data and confirmation was obtained from the University Ethics Committee (UREC 12166) that approval was not required for this analysis of routinely collected data.
Analysis
Data were analysed with SPSS Version 17.0 for Windows (SPSS, Inc., Chicago, IL, USA). Independent variables were UCAS academic, UCAS non-academic, UKCAT and MMI scores. The few students with missing data for a given variable were omitted from the statistical analyses involving that variable. Pearson's correlations were used to test relationships between pre-admissions variables, examination scores and demographic variables, namely gender and age.
After consideration of the issues surrounding family-wise error corrections as they relate to the multiple comparisons made, we elected not to correct.19, 20 Our results should be read in this context and actual p values are provided to allow the reader to consider the likelihood of type I error.
Histograms and plots were used to confirm that relationships were linear and that the data were normally distributed. Correlations were used to select variables for multiple linear regression analysis to predict examination scores. A significance level of p ≤ 0.05 was required for a variable to be included in a multiple regression model.
Correlations were adjusted for range restriction and are referred to in this study as ‘unrestricted’ correlations. Statistical significance was determined before correcting the correlations. This adjustment is common in predictive validity studies and is carried out to counter the underestimation of correlations that occurs when the observed sample is not representative of the population of interest.21, 22 In the present study, scores from admissions tools were used to select medical school candidates, for whom examination results were then compared. The range of scores was therefore restricted in this sample, as medical students' scores on admissions tools will, by definition, be higher than those of the overall applicant pool.
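The correction formula itself is not printed in the paper. For reference, the standard correction for direct range restriction on a single predictor (the form usually attributed to Thorndike's Case 2 and commonly used in the predictive validity literature cited21, 22) is shown below, where r is the observed (restricted) correlation, s the restricted standard deviation of the predictor in the selected sample and S its unrestricted standard deviation in the applicant pool. Whether exactly this form, or a multivariate variant reflecting selection on the composite score, was applied is an assumption.

```latex
r_{u} \;=\; \frac{r\,(S/s)}{\sqrt{1 - r^{2} + r^{2}\,(S/s)^{2}}}
```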
Forward stepwise multiple regression was performed for each assessment: the predictor with the highest simple correlation with the outcome was entered in step 1. If this predictor significantly improved the model's ability to predict the outcome, it was retained in the model and the program searched for a second predictor with the largest semi-partial correlation with the outcome (step 2). This procedure allows the contribution of each independent variable to the model's ability to predict assessment performance to be seen. Where there was only one significant predictor for an assessment, the result of the stepwise regression was equivalent to a simple linear regression. Levels of F to enter and F to remove were set to correspond to p levels of 0.05 and 0.01, respectively. To eliminate the possibility of suppressor effects, backwards elimination was used to confirm that no significant relationships were missed by forward inclusion.23
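The original analysis was run in SPSS. As an illustration only, the sketch below reimplements comparable forward-entry logic in Python using a p-value threshold rather than F-to-enter/F-to-remove values, with hypothetical column names (`ukcat`, `mmi`, `ucas_acad`, `ucas_nonacad`, `gender`, `age`, `exam`) standing in for the study variables; it is not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm


def forward_stepwise(df, outcome, candidates, p_enter=0.05):
    """Greedy forward selection: at each step, add the candidate predictor whose
    coefficient has the smallest p value, provided it is below p_enter."""
    selected = []
    remaining = list(candidates)
    while remaining:
        p_values = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            model = sm.OLS(df[outcome], X, missing='drop').fit()
            p_values[var] = model.pvalues[var]
        best = min(p_values, key=p_values.get)
        if p_values[best] >= p_enter:
            break                       # no remaining predictor improves the model
        selected.append(best)
        remaining.remove(best)
    return selected


# Hypothetical usage with a data frame of admissions and examination scores:
# df = pd.read_csv('cohort_2009.csv')
# predictors = ['ukcat', 'mmi', 'ucas_acad', 'ucas_nonacad', 'gender', 'age']
# print(forward_stepwise(df, 'exam', predictors))
```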
Model statistics are provided for each regression analysis. Independent variables were the admissions tools together with demographic variables, namely age and gender. Independent variables were included in the multiple regression analyses only if they correlated significantly with assessment scores.
The strengths of correlations were compared using Cohen's effect size interpretations (small ≥ 0.10; medium ≥ 0.30; large ≥ 0.50)24 and the US Department of Labor, Employment and Training Administration's guidelines for interpreting correlation coefficients in predictive validity studies (‘unlikely to be useful’ < 0.11; ‘dependent on circumstances’ 0.11–0.20; ‘likely to be useful’ 0.21–0.35; ‘very beneficial’ > 0.35).25
Results
Table 2 shows Pearson's r correlations between admissions tools and examination scores before and after correction for range restriction. Statistically significant correlations are highlighted in bold.
Cohort | Examination | UCAS academic r | UCAS academic r_u | UCAS academic p | UCAS non-academic r | UCAS non-academic r_u | UCAS non-academic p | UKCAT r | UKCAT r_u | UKCAT p | MMI r | MMI r_u | MMI p
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2009 Year 1 | Semester 1 written | −0.07 | −0.18 | 0.84 | 0.03 | 0.05 | 0.74 | 0.25 | 0.34 | 0.01 | 0.14 | 0.18 | 0.11
 | Semester 1 OSCE | −0.05 | −0.13 | 0.84 | 0.07 | 0.11 | 0.41 | 0.18 | 0.24 | 0.03 | 0.19 | 0.24 | 0.02
 | Semester 2 written | −0.10 | −0.26 | 0.84 | −0.02 | −0.02 | 0.86 | 0.14 | 0.19 | 0.11 | 0.26 | 0.33 | 0.01
 | Semester 2 OSCE | −0.02 | −0.05 | 0.84 | 0.02 | 0.03 | 0.83 | −0.01 | −0.01 | 0.94 | 0.34 | 0.43 | 0.01
2009 Year 2 | Written | −0.09 | −0.23 | 0.30 | 0.02 | 0.02 | 0.86 | 0.05 | 0.07 | 0.54 | 0.18 | 0.23 | 0.04
 | OSCE | −0.11 | −0.28 | 0.23 | 0.05 | 0.08 | 0.58 | 0.12 | 0.16 | 0.17 | 0.27 | 0.35 | 0.01
2010 Year 1 | Semester 1 written | 0.09 | 0.19 | 0.27 | −0.09 | −0.12 | 0.29 | 0.15 | 0.20 | 0.07 | −0.01 | −0.01 | 0.35
 | Semester 1 OSCE | 0.04 | 0.09 | 0.59 | −0.01 | −0.02 | 0.87 | −0.03 | −0.04 | 0.68 | −0.05 | −0.07 | 0.55
 | Semester 2 written | 0.06 | 0.13 | 0.48 | 0.01 | 0.01 | 0.93 | −0.02 | −0.03 | 0.84 | 0.02 | 0.03 | 0.76
 | Semester 2 OSCE | −0.03 | −0.06 | 0.71 | 0.05 | 0.07 | 0.55 | −0.03 | −0.04 | 0.70 | 0.35 | 0.50 | 0.00
- r_u = correlation corrected for range restriction.
- Statistically significant correlations are in bold.
- UCAS = Universities and Colleges Admissions Service; UKCAT = UK Clinical Aptitude Test; MMI = multiple mini-interview; OSCE = objective structured clinical examination
Table 3 shows multiple regression statistics for each assessment where there was a significant correlation with an admissions tool.
Cohort | Assessment | Step | R² | F | p | Predictor | β | Stand β | p
---|---|---|---|---|---|---|---|---|---
2009 Year 1 | Semester 1 written | – | 0.06 | 8.81 | 0.004 | UKCAT | 0.36 | 0.25 | 0.004
 | Semester 1 OSCE | Step 1 | 0.03 | 4.81 | 0.030 | UKCAT | 9.99 × 10⁻⁵ | 0.18 | 0.030
 | | Step 2 | 0.07 | 4.75 | 0.010 | UKCAT | 9.71 × 10⁻⁵ | 0.18 | 0.033
 | | | | | | MMI | 1.79 × 10⁻³ | 0.18 | 0.034
 | Semester 2 written | – | 0.06 | 9.71 | 0.002 | MMI | 3.03 × 10⁻³ | 0.26 | 0.002
 | Semester 2 OSCE | Step 1 | 0.11 | 17.61 | 0.000 | MMI | 2.56 × 10⁻³ | 0.34 | 0.000
 | | Step 2 | 0.17 | 13.78 | 0.000 | MMI | 2.61 × 10⁻³ | 0.34 | 0.000
 | | | | | | Gender | −0.03 | −0.23 | 0.003
2009 Year 2 | Written | Step 1 | 0.05 | 6.21 | 0.014 | Gender | −3.54 | −0.22 | 0.014
 | | Step 2 | 0.09 | 6.12 | 0.003 | Gender | −3.86 | −0.24 | 0.007
 | | | | | | MMI | 0.18 | 0.21 | 0.018
 | OSCE | Step 1 | 0.07 | 9.99 | 0.002 | MMI | 0.14 | 0.27 | 0.002
 | | Step 2 | 0.15 | 10.72 | 0.000 | MMI | 0.15 | 0.30 | 0.000
 | | | | | | Gender | −2.65 | −0.27 | 0.001
2010 Year 1 | Semester 2 OSCE | Step 1 | 0.12 | 21.02 | 0.000 | MMI | 2.00 × 10⁻³ | 0.35 | 0.000
 | | Step 2 | 0.16 | 13.56 | 0.000 | MMI | 2.00 × 10⁻³ | 0.33 | 0.000
 | | | | | | Gender | −0.02 | −0.18 | 0.021
 | | | | | | Age a | ns | ns | ns
- a Age did not meet the inclusion criteria.
- OSCE = objective structured clinical examination; UKCAT = UK Clinical Aptitude Test; MMI = multiple mini-interview; Gender was coded as female = 0 and male = 1
2009 MMI cohort: Year 1
Of the 140 matched students, 41.4% were male and 58.6% were female; the average age was 20.80 years (standard deviation [SD] = 2.40). Statistically significant correlations ranged from 0.18 to 0.34, and from 0.24 to 0.43 after correction for range restriction. UKCAT scores showed significant positive correlations with the Semester 1 written examination and OSCE. MMI scores showed significant positive correlations with three of the four examinations. MMI correlation magnitudes were generally larger than those of all other admissions scores.
Multiple regression analysis revealed statistically significant predictors. UKCAT scores explained 6% of the variance in the Semester 1 written examination and MMI scores explained 6% of the variance in the Semester 2 written examination. UKCAT and MMI scores together explained 7% of the variance in the Semester 1 OSCE. MMI scores and gender explained 17% of the variance in the Semester 2 OSCE.
2009 MMI cohort: Year 2
Of the 128 matched students, 40.6% were male and 59.4% were female; the average age was 21.60 years (SD = 2.21). Statistically significant correlations ranged from 0.18 to 0.27, and from 0.23 to 0.35 after correction for range restriction. UCAS and UKCAT scores showed no significant correlations with examination scores. The MMI showed significant positive correlations with both examinations.
Multiple regression analysis revealed statistically significant predictors. MMI scores and gender explained 9% of the variance in the written assessment and 15% of the variance in the OSCE.
2010 MMI cohort
Of the 150 matched students, 43.3% were male and 56.7% were female; the average age was 20.80 years (SD = 2.70). There was a single significant positive relationship, between MMI and Semester 2 OSCE scores (r = 0.35; 0.50 after correction for range restriction).
Multiple regression analysis revealed statistically significant predictors. MMI scores and gender explained 16% of the variance in the Semester 2 OSCE.
Discussion
This study had a number of limitations, namely the length of follow-up, the nature of the outcome markers available and the reduced likelihood of finding positive associations due to range restriction. Follow-up into clinical training is ongoing for these cohorts, when it is hoped that markers of professionalism can be added to the academic outcome measures. It may also prove possible to link these data with more detailed school achievement records, but it remains worthwhile to analyse the predictive validity of the actual scores used in selection in order to assess the utility of the process.
This study presents limited evidence for the validity of the UKCAT, whose scores showed significant positive correlations in only two of 10 assessments, both within 1 year. However, these results must be viewed with caution, given the potential for type I errors as a result of multiple comparisons. It is notable that Semester 1 examinations are more focused on recall of knowledge than those in Semester 2, which test application of knowledge and clinical skills. This may explain the pattern observed with UKCAT scores and is consistent with the results obtained by Wright and Bradley,15 who showed a positive relationship between UKCAT and knowledge-based examination scores, but not OSCE scores, which appeared to diminish over the early years.
Scores derived from the UCAS form appear to be less valid selection tools, with no significant positive correlations. The lack of statistically significant associations between UCAS non-academic and examination scores is consistent with the available evidence, which suggests that references and scores derived from the personal statement are not predictive of medical school outcomes. This calls into question whether their continued use is justified.
The lack of significant positive correlations between academic achievement and examination scores was somewhat surprising, but possibly explicable. Although academic achievement has traditionally been the best predictor of medical school success, its effect may be difficult to detect when most students gain near-maximum scores on this measure. It is therefore possible that continued reliance on measures of academic achievement provides diminishing returns.14 The significant body of published literature demonstrates that measures of academic achievement are the most consistent predictors and dictates that further research is necessary before conclusions are drawn from our comparatively limited data.13, 26, 27 However, this highlights the need to review academic scoring in the UK context, especially with the introduction of A* grades at A-level.
However, this study does provide important evidence of the validity of the MMI by demonstrating that it was the most consistent predictor of success in medical school examinations across two separate cohorts and years. MMI scores correlated significantly with six of the 10 examination sittings, with magnitudes ranging from 0.24 to 0.50 (unrestricted), accounting for between 5.70% and 25.00% of the variance in students' examination scores. Multiple regression also confirmed that the MMI remained the most consistent predictor of success, accounting for between 5% and 17% of the variance in assessment scores alone or in combination with candidates' gender. It is unsurprising that correlations were lower or absent in the first year, where assessments (even the OSCE) were more highly knowledge orientated.
Although the size of these correlations can be described as moderate, it has been asserted that measures with even modest predictive validity could add considerable value to selection systems where the ratio of applicants to places is large and the importance of sound selection decisions is high.28 After adjustment for range restriction, these coefficients can be described as ‘likely to be useful’ or ‘very beneficial’.25 Correlations were largest for the OSCE assessments, perhaps because certain components, such as communication skills, or more generally an ability to ‘perform under pressure’, are common to both the MMI and the OSCE.
Although the results of this research are compelling in favour of MMIs, further research in this area is necessary. The short duration of follow-up is the primary limitation of this study, and continued longitudinal study of these and future cohorts will establish the utility of admissions measures across the medical school years and into postgraduate training. The body of evidence investigating the predictive power of MMIs would also benefit from results from other medical schools, particularly those measuring different non-cognitive attributes. Finally, future work should also investigate the ability of the UKCAT and MMIs to predict the specific cognitive and non-cognitive attributes they were designed to assess, such as interpersonal communication skills.
This study demonstrates that it is possible to operate a reliable and valid MMI with the younger student population in the UK. It has demonstrated this with statistically robust assessments and relatively large sample sizes compared with previously published validity studies. It is hoped that as medical schools worldwide continue to adopt the MMI approach, more evidence will emerge to support its usefulness as a robust component of selection systems and increasingly refine its format.
Contributors
AH and JD both contributed to the study conception, data analysis, interpretation, drafting and revision of the paper, and approved the final manuscript for publication.
Acknowledgements
Ben Kumwenda for assistance with data collection.
Funding
none.
Conflicts of interest
none.
Ethical approval
not required.
Appendix 1
Examination | Description |
---|---|
Year 1 | |
Semester 1 written | Multiple choice question examination consisting of: anatomy, biomedical (this consists of biochemistry, physiology, pharmacology), disease mechanisms (this consists of pathology, immunology, microbiology, genetics), psychosocial (public health/behavioural science), safe medical practice (including ethics) and integrated teaching |
Semester 1 objective structured clinical examination | A 50-station, 1 minute per station, 1 question per station assessment of core, clinically relevant anatomy. Utilises pinned prosections, models, osteology specimens and imaging material (plain radiography, contrast studies, computed tomography and magnetic resonance imaging). Questions are designed to test applied anatomical knowledge and understanding within a context of clinical relevance |
Semester 2 written | Extended Matching Items consisting of dermatology, haematology, cardiovascular, psychosocial, ethics and safe medical practice |
Semester 2 objective structured clinical examination | 12 stations of 6 minutes each. Competencies: physical examination, history taking, communication, practical procedures. Situations: cardiovascular, doctors, patients and community (DPaC), haematology, safe medical practice |
Year 2 | |
Written | Endocrinology, gastroenterology, anatomy, musculoskeletal (rheumatology), musculoskeletal (orthopaedics), renal/urology, respiratory |
Objective structured clinical examination | 15 stations of 8 minutes each on: endocrinology, gastrointestinal, DPaC, musculoskeletal (orthopaedics), musculoskeletal (rheumatology), emergency medicine, renal/urology, respiratory |