Patients' global ratings of student competence. Unreliable contamination or gold standard?

Med Educ. 2002 Dec;36(12):1117-21. doi: 10.1046/j.1365-2923.2002.01379.x.

Abstract

Purpose: To determine whether global ratings by patients are valid and reliable enough to be used within a major summative assessment of medical students' clinical skills.

Method: In 11 stations of an 18-station objective structured clinical examination (OSCE), where a student was asked to educate or take a history from a patient, the patient was asked, 'How likely would you be to come back and discuss your concerns with this student again?' These 11 opinions were aggregated into a single patient opinion mark and correlated with other measures of student competence. The patients were not experienced in student assessment.

Results: A total of 204 students undertook the OSCE. The reliability of patient opinion across all 11 stations, measured as Cronbach's alpha, was 0.65. The correlation between the aggregated patient ratings and the total OSCE score was good (r = 0.74; P < 0.001) and was stronger than the correlation between any single OSCE station and the total OSCE score. It was also stronger than the correlation between the aggregated patient opinion and tests of student knowledge (r = 0.47).
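The aggregation-and-reliability step described above can be sketched as follows. This is a minimal illustration, not the authors' analysis: the rating data here are synthetic (a hypothetical 204-student by 11-station matrix on an assumed 1-5 scale), and only the study's design parameters (204 students, 11 stations) are taken from the abstract.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (students x stations) matrix of patient ratings."""
    k = ratings.shape[1]                          # number of stations (items)
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each station's ratings
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the aggregated mark
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 204 students, 11 stations, ratings on a 1-5 scale.
# A shared "competence" signal plus station-level noise mimics correlated items.
rng = np.random.default_rng(0)
ability = rng.normal(3.0, 0.6, size=(204, 1))
ratings = np.clip(np.round(ability + rng.normal(0.0, 0.8, size=(204, 11))), 1, 5)

# Aggregate the 11 opinions into a single patient opinion mark per student,
# then check internal consistency of the 11 items.
patient_opinion_mark = ratings.sum(axis=1)
alpha = cronbach_alpha(ratings)
```

With real data, the aggregated `patient_opinion_mark` would then be correlated (e.g. Pearson's r) against the total OSCE score and knowledge-test scores, as reported in the results.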

Conclusion: It is known that patients can reliably complete checklists of clinical skills and that doctors can reliably provide global ratings of students. We have now shown that, by controlling the context, asking the right question and aggregating several opinions, untrained patients can provide a reliable and valid global opinion that contributes to the assessment of a student's clinical skills.

MeSH terms

  • Clinical Competence / standards*
  • Education, Medical, Undergraduate / standards*
  • Educational Measurement / methods*
  • Humans
  • Medical History Taking / standards*
  • New Zealand
  • Patient Satisfaction*
  • Professional-Patient Relations
  • Reproducibility of Results
  • Students, Medical