The development of a Patient Reported Outcome Measure (PROM) is a lengthy process. It begins with discussions with an expert panel and a patient panel, and a review of existing instruments, creating a long list of possible questions for the clinical setting under study. Those questions are then tested with patient groups, initially to identify ambiguities of language and setting, and then to test the questions formally. Statistical analysis identifies questions that appear to examine the same aspect of the condition (redundancy), have poor test-retest agreement (instability), or are unresponsive or internally inconsistent. Creating a scoring system demands that the instrument has adequate range, sensitivity to change and variability, without floor or ceiling effects (the range of scores must be wide enough to accommodate most patients without bunching at the lower or upper limits of the score range). The instrument is further reviewed by expert panels of clinicians and patients to ensure that it is adequate and complete (face validity), but further field testing is required to confirm its value in practice, particularly against existing instruments. For the interested reader, the papers describing the development of the Musculoskeletal Function Assessment (Martin DP 1996) or the Oxford Knee Score (Dawson J 1998) provide further insight into the process.
PROMs have, however, been around for long enough that most readers of the JBJS [Br] will feel they understand them well enough to assess research that uses such instruments, without needing to revisit the details of how the instruments are created. When it comes to using them in clinical practice, however, matters quickly become more difficult. It is all very well seeing a certain intervention lead to a mean change in an outcome score that is significantly better than another intervention, or than placebo or natural history. That tells us that the intervention is probably good and we should consider using it. Or not.
But what is the relevance of an isolated score, or change, to an individual patient? At a personal level, it is this sort of information that the clinician would like to be able to use, and the usual complaint about PROMs is that they are fine for research, fine for registries and studies of large groups, but of no relevance to the individual clinician or the patients she is seeing in the clinic after a Total Knee Replacement (TKR) or Total Hip Replacement (THR).
This paper attempts to put these tools into the hands of clinicians. The authors examined the pre- and post-operative (six-month) Oxford Knee Score (OKS) for 1784 TKR patients and Oxford Hip Score (OHS) for 1523 THR patients treated in a single, large NHS Treatment Centre in England. These represent around half of the joint replacement patients treated over the study period (2004-9), the remainder not having complete PROM sets. At six months, patients were also asked a standard satisfaction question, scored on a visual analogue scale from 0 (not satisfied) to 100 (very satisfied), with a score of over 50 taken as "satisfied", in keeping with other studies. Statistical analysis allowed identification of the cut points, in absolute score and in change in score on the OKS/OHS, that represented a Patient Acceptable Symptom State (PASS).
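One common way to derive such a cut point (offered here as an illustrative sketch, not necessarily the authors' exact method) is to treat each candidate score as a diagnostic test for the satisfied/not-satisfied label and choose the value maximising Youden's J (sensitivity + specificity − 1). All scores and satisfaction labels below are invented for illustration.

```python
# Hypothetical sketch: derive a PASS-style cut point on a post-operative
# score by maximising Youden's J against a binary satisfaction label.
# Data are invented; this is not the paper's actual dataset or method.

def best_cut_point(scores, satisfied):
    """Return (cut, J): the cut maximising Youden's J, where the 'test'
    is score >= cut and the reference standard is the satisfied flag."""
    best = (None, -1.0)
    for cut in sorted(set(scores)):
        tp = sum(1 for s, ok in zip(scores, satisfied) if s >= cut and ok)
        fn = sum(1 for s, ok in zip(scores, satisfied) if s < cut and ok)
        tn = sum(1 for s, ok in zip(scores, satisfied) if s < cut and not ok)
        fp = sum(1 for s, ok in zip(scores, satisfied) if s >= cut and not ok)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (cut, j)
    return best

# Invented six-month OKS values and satisfaction (VAS > 50) labels:
oks = [18, 22, 25, 28, 30, 33, 35, 38, 40, 42]
sat = [False, False, False, True, False, True, True, True, True, True]
cut, j = best_cut_point(oks, sat)
print(cut, round(j, 2))  # -> 33 0.83
```

With this toy data, a six-month OKS of 33 or above best separates the satisfied from the unsatisfied patients.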
Given that PROMs are supposed to reflect the patient's perception of their function, and that the OKS/OHS are specific to patients with knee or hip problems, it is something of a surprise that there was only a moderate correlation between the patient satisfaction score and the OKS total score (Spearman rank correlation 0.57) and the OHS (Spearman rank correlation 0.49), with similar correlations between satisfaction and change in OKS/OHS. So, although 89% of TKR patients and 93% of THR patients were declared satisfied, and those whose scores improved were more likely to be satisfied than those whose scores did not, the satisfaction score per se does not correlate strongly with the PROM total scores or with changes in PROM scores. Given that 55% of patients whose OKS/OHS scores had remained unchanged, or had even worsened, reported being satisfied, it is clear that satisfaction is a very nebulous concept. The authors acknowledge this challenge, and demonstrate a critical approach to setting the satisfaction threshold: thresholds of 40 or 60, as opposed to 50, had little practical effect upon the total and change cut points in OKS/OHS. Combining the satisfaction score with PROM data allows satisfaction to be assessed in the setting of a clinically relevant measure.
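For readers unfamiliar with the statistic, the Spearman rank correlation reported here is simply the Pearson correlation computed on the ranks of the two variables rather than their raw values. A minimal sketch, using invented data (not the study's):

```python
# Minimal Spearman rank correlation: rank both variables (with midranks
# for ties), then compute the Pearson correlation of the ranks.
# The OKS-change and satisfaction values below are invented.

def ranks(values):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

oks_change = [5, 12, -2, 20, 8, 15, 0, 18]       # invented change in OKS
satisfaction = [40, 70, 30, 95, 60, 80, 55, 85]  # invented VAS 0-100
print(round(spearman(oks_change, satisfaction), 2))  # -> 0.98
```

Because it works on ranks, the statistic captures any monotonic relationship, not only a linear one, which is why a moderate value such as 0.57 really does indicate substantial disagreement between score and satisfaction.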
By defining the expected improvement for a given initial score, the success of a procedure can be defined in a quantitative way for an individual patient. The tables allow surgeons and patients to base expected total scores and changes in score upon the baseline score, and these expectations can be revisited at the six-month review.
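One way such a look-up table might be built (an assumption for illustration, not the authors' published method) is to band patients by baseline score and report the mean six-month score and mean change within each band. The band width and all patient data below are invented.

```python
# Hypothetical sketch: band pre-operative scores and tabulate the mean
# six-month score and mean change per band. Data are invented.
from collections import defaultdict

def expectation_table(pre, post, band_width=8):
    """Map baseline-score band -> (mean six-month score, mean change)."""
    bands = defaultdict(list)
    for p0, p6 in zip(pre, post):
        lo = (p0 // band_width) * band_width
        bands[(lo, lo + band_width - 1)].append((p6, p6 - p0))
    return {
        band: (sum(s for s, _ in vals) / len(vals),
               sum(c for _, c in vals) / len(vals))
        for band, vals in sorted(bands.items())
    }

pre_op = [10, 12, 15, 18, 20, 22, 26, 28]   # invented baseline OKS
six_mo = [30, 28, 34, 36, 38, 40, 42, 44]   # invented six-month OKS
for band, (mean_post, mean_change) in expectation_table(pre_op, six_mo).items():
    print(band, round(mean_post, 1), round(mean_change, 1))
```

At the six-month review, a patient's actual score can then be compared with the mean for their baseline band, giving the quantitative, individualised benchmark the commentary describes.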
Are your patients getting the benefit that you and they might expect from their hip or knee replacement? Use of PROMs may allow you and your patients to assess this objectively, and this interesting paper supports this personalised approach, using the OHS/OKS. PROMs are not just for research.
AG Sutherland MD(Hons), FRCSEd(Tr&Orth), Senior Lecturer in Orthopaedics, Deakin University Medical School.