I recently hosted a Google Hangout on Air entitled Patient Reviews of Physicians: The Wisdom of the Crowd? (presented by The Harlow Group LLC in association with The Society for Participatory Medicine).
I spoke with Niam Yaraghi (Center for Technology Innovation, The Brookings Institution) and Casey Quinlan (Mighty Casey Media) following their interesting back-and-forth online on the question of whether and how patient reviews of physicians can add value. Please take a look at the posts that preceded the hangout. Here are the initial post and reaction: Niam’s post and Casey’s post – as well as Niam’s follow-up post and Casey’s follow-up post.
Please feel free to watch the Hangout on Air. Further below, I’ve shared a version of my introduction to the hangout, as well as some of the takeaways suggested by Niam and Casey (and taken away by me). We could use the wisdom of the crowd in refining the takeaways in the comments.
The challenge to the community is manifold. We need to do a better job of identifying:
- Types of cases in which the patient is an expert in the clinical sense (where we can rely on patient assessments of quality of clinical care)
- Useful measures (for both process and outcome)
- Better approaches to clarifying patient preferences (because we can’t score a process or an outcome if we don’t know what to score it against)
- On a related note, checklists for patients to use in making the most of encounters with the health care system
More on these points below.
The whole thing was kicked off by a post of Niam’s that included this passage:
Since patients do not have the medical expertise to judge the quality of physicians’ decisions in the short run and are neither capable of evaluating the outcomes of such decisions in the long run, their feedback would be limited to their immediate interaction with medical providers and their staff members. Instead of the quality of the medical services, patients would evaluate the bedside manners of physicians, decor of their offices and demeanor of their staff. This is why a physician’s online rating is not a valid measure of his or her medical expertise.
This, shall we say, inflamed Casey’s ire, as an engaged patient and patient activist. She noted that in many cases a patient with a chronic condition is in fact more expert in her condition — and certainly in the ins and outs of what works or doesn’t work in managing her condition — than a clinician new to the case. There is an oft-cited statistic that it takes 17 years for new medical science to filter from journal article to accepted everyday practice. Nobody wants to wait that long for her health care to catch up with the state of the art. An engaged patient is more likely to do the research, do the legwork, and surface ideas directly relevant to her case. Some clinicians, of course, are open to the notion that they can “Let Patients Help.”
Niam followed up with another post, noting that while some patients may be experts on their own conditions, others may not be — thus posing, essentially, the question of “how do I evaluate the reviewer?” This is a problem that should be familiar to any one of us who shops online for anything. The key issue Niam raised in his follow-up, though, was this: An instrument is valid when it measures what it was intended to measure. He noted some studies that concluded that patient satisfaction is not necessarily tied to improved clinical outcomes.
(As an aside, there was a post on The Health Care Blog picking up on Niam’s perspective on patient reviews on Yelp, and showing that Yelp reviews are highly positively correlated with CAHPS results. With all due respect to the author, since both the reviews and the CAHPS surveys are largely based on patient experience — and not clinical quality process or outcome measures — the correlation does not seem to undercut Niam’s point. It does not address directly the broader question of whether a patient can be an expert on his or her own condition. The post does, however, point up the fact that physician-level predictive quality measures are as rare as hen’s teeth. This is in fact a problem that would be great to dig into.)
There are a whole lot of things that get measured and reported that are not necessarily tied to improved clinical outcomes — consider the reaction of top-tier medical centers every time some ranking is published showing that they are lower in quality than another provider, which is usually some variation on the following theme: “We serve sicker patients, so the results are skewed.”
Casey, in her follow-up post, confirmed that she is not suggesting that patient reviews should be the sole metric guiding choice of clinician … so I think she and Niam have at least some area of agreement.
What we need are metrics to guide rational choice of provider. Going with one’s gut is perhaps an imperfect approach, though for many of us it often seems to be the best we can manage. We certainly have a lot of measures and a lot of data on these measures rattling around out there — but they don’t necessarily enable us to better answer the question: Which doctor should I go to?
So, back to our questions, as outlined above. I’m seeding the post with a few stabs at answers, but I am throwing the questions open to comment — please pile on.
1. What are some types of cases in which the patient is an expert in the clinical sense (where we can rely on patient assessments of quality of clinical care)?
A couple of examples come to mind:
- The patient with a chronic condition who is more knowledgeable about her condition and the latest research regarding therapies and other approaches to managing the condition than is her new doctor.
- The patient whose condition was misdiagnosed (and therefore effectively left untreated) by three doctors before Doctor #4 correctly diagnosed and treated it.
Each of the patients in these cases is qualified to review her doctor(s) not just in terms of bedside manner, but in terms of clinical quality of care. I would appreciate the wisdom of the crowd in identifying additional “cases” such as these. With a bank of such cases at hand we may be better able to build a framework for clinical quality ratings by patients.
2. What are some useful process measures to use?
There may be some value in standardizing the measures we use in reviewing physicians, even from the process perspective – or at least the domains to consider. Niam suggested the following domains:
- Quality of communication between patient and care provider
- Quality of teamwork among the members of the medical team, as observed by the patient
- Following basic rules of infection control
- Reviewing prior medical records of patients
- Following up with the patient and making sure that he/she has completely understood medical orders and can comply with them
- Listening to patients and addressing their concerns during visits
What other measures like these should we be considering? These are perhaps a step up from the “I like my doctor because she is polite and on time” sort of reviews, but: Is there a correlation between good rankings on these metrics and good outcomes? I am on a life-long search for a handful of good measures that would prove to be predictive of everything else that matters; I am not convinced that these take us very far down the path in that direction.
3. What are useful outcome measures to use, and who is able to rate a clinician according to these measures?
The classic example of the disconnect between a clinician’s view of quality and the patient’s is the old saw about the orthopedic surgeon who pronounced a leg healed, ignoring the fact that the patient had died. (Sorry, orthopedic surgeons; it’s just a hypothetical example ….)
- Patient satisfaction with the outcome
In the end, the only metric that matters is patient satisfaction. Why? Because care must be delivered to address patient needs, patient preferences. The optimal treatment for two patients with similar clinical presentations may be entirely different, based on family issues and personal preferences (for example, treating a terminal illness differently for a patient who wants to walk his daughter down the aisle in six months vs. one who wants only to have a good death).
4. What combined process and outcome measures should we be using to rate quality?
The desired outcome ought to be determined by taking patient desires and preferences into account. In many situations, success in achieving clinical goals will be largely determined by whether the patient has had sufficient voice in determining those goals. We need better approaches to clarifying patient preferences before embarking on courses of treatment. We can’t score the process or the outcome unless we know the patient’s views on the process and outcome. (Consider the work of the Dartmouth Preference Lab). A basic step on the way to clarifying these preferences is ensuring that patients are making the most of encounters with the health care system, which may be enabled in some situations by the use of checklists (here are two examples offered by Casey — one and two).
We have a tool that can be used to assess the level of a patient’s activation and engagement: the Patient Activation Measure. But of course that activated patient needs to engage with a receptive clinician. There ought to be a parallel tool that we could use to measure the clinician’s receptivity to and engagement with an activated patient — a tool that should include a measure that can identify the clinician who is able to activate a patient.
We may have veered slightly away from the narrow question of whether patients can rate providers in a useful way, and into the broader question of what might be the most useful set of quality measures for providers.
Bottom line: Any global assessment of provider quality must take into account care goals identified through an examination of patient preferences. Please help flesh out our thinking on this subject by adding your voice to the conversation. Comment here or at e-patients.net.
The Harlow Group LLC
Health Care Law and Consulting
A version of this post first appeared on e-patients.net, the blog of the Society for Participatory Medicine. David Harlow chairs the Society’s public policy committee.
Sue Ann says
We want to be identified for our good quality and good communication – we are competing against a big multispecialty practice that is notoriously unresponsive. But if everything goes smoothly with us, nobody says anything. It’s only when something gets fouled up (and of the complaints we have had – I’m keeping track – about 90 percent stem from a problem on the parent’s side: insurance misunderstanding, local politics, etc.) that they post an unhappy item on our Facebook page or a rating on a site.
Unfortunately, parents of our patients are better at identifying LACK of good quality care (including good communication, etc.), than identifying GOOD quality of care. That is substantiated by research. It seems we humans aren’t so good at appreciating the absence of a negative.
Gilles Frydman says
Can you provide references for the research you are mentioning, please?
It doesn’t fit at all with what I have witnessed for close to 20 years in our online communities, where people just don’t have either the time or the inclination to write negative comments about a specific doctor. OTOH there is never a lack of comment about good doctors or true specialists in a specific disease or even a subtype of a given disease. From my vantage point, patients and caregivers who belong to good communities are the best informed, most unbiased source of knowledge about the current state of clinical care. Such communities know all the real experts and know their specific strengths and weaknesses.
Sue Ann says
Sure. I got it from my husband who quoted it from his pediatric CME. I will get a reference for you after I get it from him.
David Harlow says
From the mailbag:
I enjoy your blog and think this one has too many moving parts to avoid tossing a couple of ideas on the pile. Before taking my current role I spent 9 years teaching strategy inside healthcare, and my wife is an NP (geriatric specialty) who does house calls in the inner city of Indianapolis for homebound seniors. These thoughts are fairly random, but all of them have some bearing on our ability to measure healthcare quality.
It will be very hard to get providers to participate in providing meaningful quality metrics voluntarily until they feel assured they won’t be used against them in a lawsuit. Tort reform should be front and center in this debate.
Without both cost and quality data I cannot measure value, and there is almost no transparency to meaningful cost or quality numbers for patients (hence this discussion). The status quo in healthcare wants very much to keep it that way for as long as possible. Remember, one person’s waste in healthcare is another person’s business model. If you doubt this, ask yourself this question: “If providers and patients successfully drive down the cost of healthcare, who gets to keep that savings?” Why?
Information is power and the best keepers of information for the first 50 plus years of FFS medicine have been the insurance companies.
The current baby boomer providers have never been measured publicly and most of them are leading healthcare organizations as we transition from our current system to something different (financially this transition is a necessity). This will not be a natural process for them (and their peers) to embrace.
Patients will take a more aggressive look at quality when good data is available and it feels like they’re spending their own money (versus 3rd party payment). When was the last time you cared about the quality of something that seemed free (it’s feeling less and less free)?
How do you hold a provider accountable for results when patients are non-compliant for a variety of reasons (apathy, dementia etc.)?
These are just a few of the many issues to add to the giant mess we’ve created in a system that pays for every activity and discounts the enormous contributions of cognitive time with a patient by some of the most gifted people in our society, healthcare providers.
Thank you for all you do to help move the needle!
David Harlow says
Some additional comments from e-patient Carly Medosch, who I met IRL at MedCity ENGAGE in Bethesda, July 2015: https://storify.com/healthblawg/how-to-frame-patient-reviews-of-physicians