“Rethink questions to patients in general practice and focus more on improving primary care”

By Tommaso Manacorda

The “Friends and Family Test”, seeking patients’ views, has created anxiety among practices but shed little light on patients’ concerns. It’s time for a rethink, suggests our study.

The many ways in which patients can now give feedback to GPs should offer a rich source of information for those keen to improve their services. However, when we evaluated the latest addition to patient consultation in England – the “Friends and Family Test” of patient satisfaction – we found that “more” did not necessarily equate with “better”.

We found that general mistrust of the FFT process, combined with inappropriate framing of its core question, is undermining the initiative. But we also identified ways both to bolster trust and to make patient feedback more useful for improving primary care.

Two important pieces of learning emerge from examining this initiative. First, the issues raised by the “Friends and Family Test” could usefully prompt a rethink about how views are gathered from patients about their primary care experiences. Our main recommendation is that the FFT should be revised, although discontinuing it should also be considered, as its reception within general practice has been largely negative.

Second, GPs need more encouragement and guidance to tackle inadequacies revealed by patients’ experience. We found that commitment to quality improvement is uneven across primary care and remains a low priority in some practices, often those where patient participation groups (PPGs) meet rarely and patient surveys are conducted infrequently. Nurturing a rigorous improvement culture is at least as important as getting the questions posed to patients right.

Wording of the FFT
The “Friends and Family Test” puts a single question to patients about their general practice: “We would like you to think about your recent experience of service. How likely are you to recommend our GP practice to friends and family if they needed similar care and treatment?” Answers are recorded on a five-point scale from “extremely likely” to “extremely unlikely”. Additionally, patients may be asked to comment on their reasons for the score they have given.

General practices across the NHS are required to make the FFT available to patients after every contact, collecting the data by whichever method suits them best. Most use handwritten cards, though tablet kiosks and online apps are also used.

Our evaluation, involving 42 practices and 118 interviews with clinicians, practice staff and patients’ representatives, found two sets of problems with the FFT. The first concerned the usefulness of the information it produced. The second problem lay with how general practices understood the FFT’s purpose and how they engaged with it.

The experience of hospitals, where the FFT was first introduced, showed that the quantitative scores were statistically unreliable. Because the FFT does not draw on a representative sample and is vulnerable to selection bias, there is no way to tell whether the scores reflect the views of all patients, so the metrics could not be used to compare providers on quality. When the FFT question was subsequently rolled out into primary care in 2014, widespread awareness of these statistical limitations contributed to unease in general practices about the approach.

In our study, the FFT question was deemed inappropriate by most interviewees. Many, particularly in rural areas, lacked any real choice of general practice, so the question of where they might send friends or family made little sense. Patients also found it difficult to compare their personal care with what someone else might receive, because that depends on individual factors such as age, sex and existing health conditions. Most interviewees suggested using a more straightforward question.

The additional space provided for further comment did potentially offer some useful feedback. However, staff in general practices felt that patients’ anonymity, combined with the vagueness of many comments, made them difficult to act on: comments often did not make clear which service the patient had received, from whom, or the precise nature of the issue that concerned them. For example, a patient complained about a “terrible phone service”, but staff said it was difficult to respond because they did not know who the complainant was, whom the patient had spoken to or what they needed.

Most patient feedback collected by practices was positive, but anonymity and vagueness made it difficult to identify and reward good practice. Positive feedback would often lift staff morale. However, the inability to act on complaints was often reported to be very frustrating for staff.

Even when the additional comments gathered by the FFT did point to specific issues, they were still considered of little value because the issues were already known to staff from other sources, such as practice surveys and patient participation groups.

Professionals mistrust the process
The second issue with the FFT concerned the negative reception it received from primary care professionals. Some feared that it left them vulnerable to hostile patients. Someone might, for example, be denied an antibiotic for sound clinical reasons but then mark the practice down unfairly. The practice would have no way to question the scoring.

Monthly reporting of scores to the Department of Health (DH) and NHS England (NHSE) added to professional concerns that the FFT process was, as some claimed, “a stick with which to beat General Practice”. Even though the Government had stated that it would not use FFT scores to rank practices, the fear of unfairness remained unassuaged, which disheartened hard-working professionals.

As a result, although all the practices in our evaluation had made the FFT available to patients, few felt committed to, or “owned”, the process. Even though practices had been assured that the test was intended for local quality improvement, not regulation (or criticism) by national bodies, they remained doubtful, particularly because of the monthly reporting. One GP, not realising that the scores were meant for the practice’s own use, even asked their practice manager during a joint interview: “Do we have to open the box?”

Shift to reporting quality improvements
We recommend that monthly reporting of FFT scores be stopped and replaced, perhaps annually, by a qualitative report on local quality improvements. That would help restore practices’ trust that central bodies are not “spying” on them.

Particularly valuable aspects of the FFT are the chance for patients to comment briefly and quickly on their experiences, and the opportunity for practices to collect such feedback rapidly and easily. This comment facility could, for example, be kept within an FFT that had a less confusing core question, and patients could be encouraged to be more specific about particular services (e.g. phone consultations, clinics for chronic diseases, immunisation), making it easier to identify and address the aspects of care they want improved.

The key issue in the long term will be whether, and to what extent, patients’ views contribute to making services better. Commitment to quality improvement was found to be uneven across practices. A revised FFT might play a useful role in addressing this problem: being easy to implement, it is a feasible option particularly for smaller practices with less capacity for data collection. But more detailed guidance is needed on how to ensure that patient feedback leads to service improvement. It will be important to make clear that the Government’s priority is aligned with that of GPs in being focussed on securing higher quality primary care.

Tommaso Manacorda is a Research Fellow at the London School of Hygiene and Tropical Medicine. His report “Implementation and use of the Friends and Family Test as a tool for local service improvement in NHS general practice in England” is published by PIRU and co-authored by Dr Bob Erens, Professor Sir Nick Black and Professor Nicholas Mays.

This commentary summarises an independent report commissioned and funded by the Policy Research Programme of the Department of Health for England, via its core support for the Policy Innovation Research Unit, with additional funding provided for data collection from the main sample of general practices. The views expressed are those of the authors and not necessarily those of the Department.