There are two tests I can use to see if this patient has cancer. Which one is best? How do I know? How can I compare them? These were just some of the thoughts going through the candidate's mind as he stared at the paper in the academic viva at national selection!
If only they'd listened to Rob Radcliffe, who is on hand to explain how you do just that using receiver operating characteristic (ROC) curves: a really easy way to compare the performance of tests, and probably the most useful contribution to medicine to have had its origin in WWII radar technology.
Starting with a review of sensitivity and specificity (see http://schoolofsurgery.podomatic.com/entry/2014-05-02T00_31_49-07_00 for full revision), Rob shows how sensitivity and specificity vary with the cut-off point for a test, demonstrates the best and the worst tests you could design, and shows you how to construct a ROC curve. Real-life examples are discussed, along with how to compare tests visually from their curves, and how this comparison can be quantified (and so the best-performing test found statistically) using the Area Under the Curve (AUC).
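For readers who like to see the mechanics, here is a minimal sketch (not from the podcast) of the process Rob describes: sweep the cut-off point across a test's results, record sensitivity and 1 − specificity at each threshold, and sum the area under the resulting curve with the trapezoidal rule. The `scores` and `labels` data are made up purely for illustration.

```python
# Hypothetical test results (higher score = more suspicious for disease)
# and true disease status (1 = cancer, 0 = no cancer). Illustrative only.
scores = [0.1, 0.2, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0,   0,   0,    1,   0,   1,   1,   0,   1,   1]

def roc_points(scores, labels):
    """Sweep the cut-off over every observed score and record
    (1 - specificity, sensitivity) at each threshold."""
    points = []
    thresholds = sorted(set(scores), reverse=True)
    for t in [float("inf")] + thresholds:   # start at (0, 0)
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        sensitivity = tp / (tp + fn)        # true positive rate
        specificity = tn / (tn + fp)        # true negative rate
        points.append((1 - specificity, sensitivity))
    return points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

print(f"AUC = {auc(roc_points(scores, labels)):.2f}")  # → AUC = 0.84
```

A useless test would hug the diagonal (AUC ≈ 0.5, no better than tossing a coin), while a perfect test would rise straight up the left edge (AUC = 1.0), which is exactly why AUC lets you compare two tests with a single number.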
This is the clearest explanation you will find anywhere for this commonly used comparison (check out the Wikipedia page on the topic if you don't believe me). It is essential knowledge, as ROC curves feature often in the medical literature, in exams and in academic interviews.
Rob Radcliffe was a maths teacher in a former life and is now a trainee in Urology in the East Midlands, UK.