Published in Vol 7, No 1 (2018): Jan-Jun

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/9350.
The Validity of Online Patient Ratings of Physicians: Analysis of Physician Peer Reviews and Patient Ratings

Short Paper

1University of New Hampshire, Durham, NH, United States

2Graduate College, Kennesaw State University, Kennesaw, GA, United States

3Department of Urology, Weill Cornell Medicine, New York, NY, United States

*all authors contributed equally

Corresponding Author:

Yiyun Zhou, BSc, MA

Graduate College

Kennesaw State University

1000 Chastain Road

Kennesaw, GA

United States

Phone: 1 4043331288

Email: yzhou20@kennesaw.edu


Background: Information from ratings sites is increasingly informing patient decisions related to health care and the selection of physicians.

Objective: The current study sought to determine the validity of online patient ratings of physicians through comparison with physician peer review.

Methods: We extracted 223,715 reviews of 41,104 physicians from 10 of the largest cities in the United States, including 1142 physicians listed as “America’s Top Doctors” through physician peer review. Differences in mean online patient ratings were tested between physicians who were listed and those who were not.

Results: Overall, no differences were found in online patient ratings based upon physician peer review status. However, statistically significant differences were found for four specialties (family medicine, allergists, internal medicine, and pediatrics), with online patient ratings significantly higher for physicians listed as a peer-reviewed “Top Doctor” than for those who were not.

Conclusions: The results of this large-scale study indicate that online patient ratings were consistent with physician peer review for four nonsurgical, primarily in-office specializations, but were not consistent with peer review for specializations such as anesthesiology. This result indicates that the validity of patient ratings varies by medical specialization.

Interact J Med Res 2018;7(1):e8

doi:10.2196/ijmr.9350


In a 2016 study, the Pew Research Center found that 84% of all adults in the United States use online ratings sites to inform their product or service purchase decisions [1]. The same is true for health care: patients increasingly access online ratings sites to inform their health care decisions, with online ratings emerging as the most influential factor for choosing a physician. In a 2017 study by the National Institutes of Health, 53% of physicians and 39% of patients reported visiting a health care rating website at least once [2]. Overall, physicians indicated that the numerical results from these ratings websites were valid approximately 53% of the time, while patients indicated that they thought the ratings were valid 36% of the time [2].

RateMDs.com, HealthGrades.com, and Vitals.com are three frequently visited health care provider ratings websites, with over 2.6 million, 6.1 million, and 7.8 million reviews, respectively [3-5]. For these three sites, numeric rating scales range from 1 (poor) to 5 (excellent) and cover perceptions of physician knowledge, helpfulness, punctuality, and staff. Most patients give physicians positive ratings: one study reported that over 90% of all ratings were positive [6] and another reported that as the frequency of ratings increased, the average mean rating increased [7].

Extending the findings of the study by the National Institutes of Health, we sought to determine the validity of online patient ratings through comparison with physician peer review, defined in this study through Castle Connolly Medical. Specifically, we tested whether mean online patient ratings, by specialty, were higher for physicians nominated by their peers as one of “America’s Top Doctors,” as reported by Castle Connolly Medical, than for those who were not. If online patient ratings were consistent with Castle Connolly Medical, ratings for listed physicians would be higher than for those not listed, thereby providing support for the validity of physician online review sites to inform health care-related decisions.


The basis for physician peer review selected for the current study is Castle Connolly Medical, a private consumer research firm that distinguishes top providers both nationally and regionally through a peer nomination process that involves over 50,000 providers and hospital and health care executives. Castle Connolly Medical receives over 100,000 nominations each year and a physician-led research team awards top providers from these nominations [8]. Lists are generated for each health care specialty as well as most subspecialties.

Several studies have similarly selected physician peer review through Castle Connolly Medical as a basis to assess the validity and role of patient online ratings sites, including an assessment for hand surgeons in the United States [9], as well as a more general correlation of physician attributes and ranking of hospital affiliations with peer review results [10]. Other studies have found alternative domain-specific objective measures to corroborate online review sites with relevant tangible outcomes, like restaurant ratings with patron visits [11].


This study examined 223,715 reviews of 41,104 unique (nonduplicated) physicians from 10 of the largest cities in the United States (Atlanta, Boston, Chicago, Dallas, Washington DC, Los Angeles, Miami, New York, Philadelphia, and San Francisco). Reviews were extracted in January 2017. Of these physicians, 1142 were included as “America’s Top Doctors” in the Castle Connolly Medical rankings. The number of ratings and physicians evaluated makes this study the largest-scale evaluation of its kind to date. The profile of the overall sample is provided in Table 1. Specific elements extracted included doctor name, rating (numeric), number of reviews, specialization, source (ratings site), city, and state. To mitigate issues related to “fake” reviews as well as influential observations, we excluded any physician with fewer than three reviews and any specialization with fewer than five reviews. Of the physicians with reviews, 16,525 had fewer than three reviews, leaving a final analyzed sample of 24,579 physicians (Multimedia Appendix 1).
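To make the exclusion and comparison steps concrete, the following is a minimal sketch in Python (pandas/SciPy), not the authors’ actual pipeline. It assumes a hypothetical file, reviews.csv, with one row per physician and columns specialization, avg_rating, num_reviews, and top_doctor (1 if listed by Castle Connolly Medical, else 0); the paper does not report which statistical test was used, so a Welch two-sample t-test is shown purely for illustration.

    # Illustrative sketch only; column and file names are hypothetical.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("reviews.csv")

    # Exclusions described in the Methods: physicians with fewer than three
    # reviews, and specializations with fewer than five reviews overall.
    df = df[df["num_reviews"] >= 3]
    reviews_per_spec = df.groupby("specialization")["num_reviews"].sum()
    df = df[df["specialization"].isin(reviews_per_spec[reviews_per_spec >= 5].index)]

    # Compare mean ratings of listed vs not-listed physicians within each specialty.
    for spec, grp in df.groupby("specialization"):
        listed = grp.loc[grp["top_doctor"] == 1, "avg_rating"]
        not_listed = grp.loc[grp["top_doctor"] == 0, "avg_rating"]
        if len(listed) > 1 and len(not_listed) > 1:
            t, p = stats.ttest_ind(listed, not_listed, equal_var=False)
            print(f"{spec}: listed mean={listed.mean():.2f}, "
                  f"not listed mean={not_listed.mean():.2f}, p={p:.3f}")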

As shown in Multimedia Appendix 1, four specializations demonstrated differences in average online patient ratings between physicians included in Castle Connolly Medical’s listing of “America’s Top Doctors” and those not listed: allergists, family medicine, internists, and pediatricians. For each of these specializations, physicians listed in Castle Connolly Medical received a higher rating than those not listed. The remaining specializations exhibited little difference between listed and unlisted physicians.

Table 1. Rated physicians by source.
Ratings source    Number of physicians^a    Number of reviews    Average rating (1-5)
HealthGrades      17,385                    113,427              3.97
RateMDs           19,631                    72,228               3.83
Vitals            4088                      38,060               4.06
Total             41,104                    223,715              3.91

^a Nonduplicated, unique number of physicians.
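As a quick arithmetic check (a reader-level illustration, not part of the original analysis), the Total row’s average rating is consistent with the per-source averages weighted by the number of physicians at each source:

    # Hypothetical check of Table 1: the overall average rating (3.91) matches
    # the per-source averages weighted by the number of physicians per source.
    physicians = [17385, 19631, 4088]   # HealthGrades, RateMDs, Vitals
    avg_rating = [3.97, 3.83, 4.06]
    weighted = sum(n * r for n, r in zip(physicians, avg_rating)) / sum(physicians)
    print(round(weighted, 2))  # 3.91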


Principal Findings

This study sought to determine the validity of patient ratings of physicians by comparing mean online ratings, by specialty, between physicians who had been nominated by their peers as one of “America’s Top Doctors,” as reported by Castle Connolly Medical, and those who had not. We found that four specializations demonstrated differences in ratings between physicians included in Castle Connolly Medical’s listing of “America’s Top Doctors” and those not listed: allergists, family medicine, internists, and pediatricians. Taken together, these results suggest that the validity of patient online reviews of physicians varies by specialization. This finding has implications for how patients make choices related to health care.

Physicians have been inundated with mandates for attaining the “triple aim” of reducing costs while improving patient experience and quality [12]. In doing so, many have moved to a model of “patient-centered care,” which seeks to form continuous patient-physician relationships [13]. Thus, some practices have simultaneously begun to direct attention at both the nature of the relationship and the quality of that encounter. Given that these efforts appear to be directed primarily at “primary care” and “in-office” settings, our finding that patient reviews are valid for specializations that could be characterized as primarily “in-office” is not unexpected.

Within the context of promoting competition, information transparency needs to be both complete and understood. The present findings suggest that online patient ratings accomplish neither of these market objectives. In fact, shopping behavior driven by such ratings may negatively influence quality-of-care outcomes: care continuity is associated with many positive health outcomes, including decreased hospitalizations, fewer emergency room visits, lower health care costs, and improved use of preventive care services [14]. Conversely, evidence indicates that patients who experience more fragmented primary care also have patterns of care that deviate more from established best practice guidelines and result in higher overall health care costs. Negative reviews could thus promote “doctor shopping” based on incomplete or nonfactual information, leading to more fragmented care continuity and potentially less optimal health outcomes [15,16].

Health systems have called for more holistic approaches to treating patients and for placing measurable value on attributes such as trust and continuity of care [17]. In a recent issue of JAMA Internal Medicine, physicians discussed the role that standardized quality assessment tools play in care practice and the need to be thoughtful when constructing such measures [18]. Physician rating websites have utility, but they are imperfect proxies for competence [19,20]. If such questions have arisen about standardized best practice measurement, even greater questions exist about unstandardized and undefined open assessments such as online patient reviews, particularly in specialties where the patient has limited direct experience with their health care provider (eg, anesthesiology).

Limitations

The basis selected for physician peer review in this study, Castle Connolly Medical, is not immune to challenge; while the organization does not receive payments or petitions, physicians have publicly questioned the “lobbying” efforts that some colleagues undertake to be included in its lists. However, no objective truth in determining a “good” or “bad” physician has been established. Other studies have explored alternative assessments of physician performance (eg, clinical outcomes, costs to treat, board certifications) and have acknowledged a variety of issues and limitations related to associating reviews with performance [21,22].

The current study only incorporated average numerical results for physicians (rather than an individual numeric rating for each review) from the three ratings sources; text from reviews was not analyzed. While the patterns and general findings would likely not change based upon text analysis, the text may provide additional insights regarding frequently occurring terms or relevant patterns for interested researchers.

We were not able to ascertain details about the individuals providing the ratings. Specifically, this study did not consider patients’ insurance type, which could affect how a patient experiences the service provided relative to perceived value; those with higher direct out-of-pocket costs via copays and/or high deductibles may be more cost sensitive and therefore more likely to “shop” for health care in the face of these payments.

Conclusions

A deceptive review or set of reviews related to a hotel visit is an inconvenience, but decisions based on deceptive or poorly informed patient reviews of a health care provider could have dire consequences for an individual using those reviews to inform health care-related decisions. Online ratings sites will likely continue to grow and expand across all segments of the economy. The results of this large-scale study indicate that while patient ratings were consistent with physician peer review for specialties such as allergy and pediatrics, they were not consistent with peer review for specializations characterized by less direct patient contact (eg, anesthesiology). This result may indicate that patients are not sufficiently knowledgeable to provide informed physician ratings for some medical specializations, potentially leading other information seekers to less-qualified providers.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Overall mean ratings by specialization for physicians listed and not listed in Castle Connolly Medical.

PDF File (Adobe PDF File), 95KB

  1. Smith A, Anderson M. Pew Research Center. 2016 Dec 19. Online shopping and e-commerce: online reviews   URL: http://www.pewinternet.org/2016/12/19/online-reviews [WebCite Cache]
  2. Holliday AM, Kachalia A, Meyer GS, Sequist TD. Physician and patient views on public physician rating websites: a cross-sectional study. J Gen Intern Med 2017 Jun;32(6):626-631. [CrossRef] [Medline]
  3. RateMDs. 2017. Doctor reviews and ratings   URL: http://www.ratemds.com [WebCite Cache]
  4. Vitals. 2017.   URL: http://www.vitals.com [WebCite Cache]
  5. HealthGrades. 2017. Review your doctor   URL: https://www.healthgrades.com/review [WebCite Cache]
  6. Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013 Feb 01;15(2):e24 [FREE Full text] [CrossRef] [Medline]
  7. Grabner-Kräuter S, Waiguny MKJ. Insights into the impact of online physician reviews on patients' decision making: randomized experiment. J Med Internet Res 2015 Apr 09;17(4):e93 [FREE Full text] [CrossRef] [Medline]
  8. Castle Connolly. 2018. Top Doctors   URL: https://www.castleconnolly.com [accessed 2018-03-26] [WebCite Cache]
  9. Trehan SK, DeFrancesco CJ, Nguyen JT, Charalel RA, Daluiski A. Online patient ratings of hand surgeons. J Hand Surg Am 2016 Jan;41(1):98-103. [CrossRef] [Medline]
  10. Wiley MT, Rivas RL, Hristidis V. Provider attributes correlation analysis to their referral frequency and awards. BMC Health Serv Res 2016 Mar 14;16:90 [FREE Full text] [CrossRef] [Medline]
  11. Gadidov B, Priestley JL. Does Yelp matter? Analyzing (and guide to using) ratings for a quick serve restaurant chain. In: Srinivasan S, editor. Guide to Big Data Applications. New York: Springer International Publishing; 2018:503-522.
  12. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014;12(6):573-576 [FREE Full text] [CrossRef] [Medline]
  13. Detsky AS. What patients really want from health care. JAMA 2011 Dec 14;306(22):2500-2501. [CrossRef] [Medline]
  14. Frandsen BR, Joynt KE, Rebitzer JB, Jha AK. Care fragmentation, quality, and costs among chronically ill patients. Am J Manag Care 2015 May;21(5):355-362 [FREE Full text] [Medline]
  15. Frost A, Newman D. Health Care Cost Institute. 2018. Spending on shoppable services in health care   URL: http://www.healthcostinstitute.org/files/Shoppable%20Services%20IB%203.2.16_0.pdf [WebCite Cache]
  16. Desai S, Hatfield LA, Hicks AL, Chernew ME, Mehrotra A. Association between availability of a price transparency tool and outpatient spending. JAMA 2016 May 03;315(17):1874-1881. [CrossRef] [Medline]
  17. Friedberg M, Chen P, Van BK, Aunon F, Pham C, Caloyeras J, et al. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Rand Health Q 2014;3(4):1 [FREE Full text] [Medline]
  18. Goitein L, James B. Standardized best practices and individual craft-based medicine: a conversation about quality. JAMA Intern Med 2016 Jun 01;176(6):835-838 [FREE Full text] [CrossRef] [Medline]
  19. Murphy GP, Awad MA, Osterberg EC, Gaither TW, Chumnarnsongkhroh T, Washington SL, et al. Web-based physician ratings for California physicians on probation. J Med Internet Res 2017 Aug 22;19(8):e254 [FREE Full text] [CrossRef] [Medline]
  20. Okike K, Peter-Bibb TK, Xie KC, Okike ON. Association between physician online rating and quality of care. J Med Internet Res 2016 Dec 13;18(12):e324 [FREE Full text] [CrossRef] [Medline]
  21. Trzeciak S, Gaughan JP, Bosire J, Mazzarelli AJ. Association between medicare summary star ratings for patient experience and clinical outcomes in US hospitals. J Patient Exp 2016 Mar;3(1):6-9 [FREE Full text] [CrossRef] [Medline]
  22. Liu J, Matelski J, Cram P, Urbach D, Bell C. Association between online physician ratings and cardiac surgery mortality. Circ Cardiovasc Qual Outcomes 2016 Nov;9(6):788-791 [FREE Full text] [CrossRef] [Medline]

Edited by G Eysenbach; submitted 07.11.17; peer-reviewed by S Nargundkar, M Aaron, S Bidmon, A Herrmann, F Rothenfluh, S Grabner-Kräuter; comments to author 17.12.17; revised version received 27.12.17; accepted 19.02.18; published 09.04.18

Copyright

©Robert J McGrath, Jennifer Lewis Priestley, Yiyun Zhou, Patrick J Culligan. Originally published in the Interactive Journal of Medical Research (http://www.i-jmr.org/), 09.04.2018.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.i-jmr.org/, as well as this copyright and license information must be included.