The Value of Patients' Handwritten Comments on HCAHPS Surveys/PRACTITIONER APPLICATION [Journal of Healthcare Management]
EXECUTIVE SUMMARY Some patients write comments on their Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys, but survey vendors do not record them, and the value of this anecdotal information is not well understood. However, many rating websites contain both numerical ratings and anecdotal comments from consumers who wish to share their experiences, and the option to write comments enhances the appeal of these survey forums. Recent research shows that numerical ratings do not sufficiently capture the range of consumer experiences and that comments contain additional information that complements survey responses. In this study, we investigate the contribution of anecdotal comments on HCAHPS surveys to the prediction of two global outcome measures: overall hospital rating and intention to recommend. HCAHPS surveys were collected retrospectively from 589 inpatients at two community hospitals, whose answers to the HCAHPS questions plus any handwritten comments were entered into a database. Nearly 20% of the surveys contained at least one written comment. A content analysis was performed, and comments were classified as positive, negative, neutral, or mixed. Regression analyses showed that negative comments significantly affected patients' overall hospital rating and intention to recommend the hospital. After adjusting for their quantitative ratings on the HCAHPS questions, we found that patients who wrote negative comments gave the hospitals significantly lower satisfaction and intention scores. Consistent with prior research, our study showed that the information contained in numerical HCAHPS composite measures was enhanced by patients' commentary.
In addition, quantitative HCAHPS ratings appear to underestimate the feelings of people who write negative comments, validating practices at hospitals that use surveys containing negative anecdotes in quality improvement initiatives.
INTRODUCTION The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey leaves no space for patients to write comments, yet in this study, we found that almost 20% of patients at two community hospitals wrote comments onto HCAHPS questionnaires following their inpatient stays. Hospital Compare, the public's means of access to provider quality data, does not display patient comments, and to our knowledge, comments written on these surveys are not recorded or analyzed. Thus, prior to this study, it was unknown whether comments provide important information about patients' experience that is not captured by the standard HCAHPS questions, or whether they merely lend context to the quantitative survey data.
Improving the patient experience has become increasingly important since the Centers for Medicare & Medicaid Services (CMS) began to link reimbursement to hospitals' HCAHPS scores as part of the Hospital Value-Based Purchasing (HVBP) program. However, of all the variables in HVBP, administrators believe that HCAHPS scores are the most difficult measures to improve (HealthLeaders, 2012). As the search for ways to improve patients' experience has become more urgent, healthcare executives have turned to information from patients' comments to pinpoint problems more effectively and to elicit tangible suggestions for improvement from the medical, nursing, and support staffs (Green & McKeever, 2007). For years, comments on satisfaction surveys have been examined by hospital administrators, who have used them to gain insight into operational successes and failures that are sometimes not apparent from reading quantitative analyses of survey questions (Press Ganey, 2009). This activity suggests that anecdotal comments contain useful information above and beyond the quantitative scores these surveys provide. In this study, we examined the relative contribution of both qualitative comments and quantitative ratings to the prediction of overall hospital rating and intention to recommend the hospital. The importance of understanding patients' experiences goes beyond the need to achieve incentives offered through the HVBP program, as increased patient satisfaction has been linked to improved health outcomes (Sofaer & Firminger, 2005).
BACKGROUND Quantitative and Qualitative Assessments of Patients' Experience To involve consumers in choosing facilities that offer high-quality care, hospitals are required to publicly report quality information, including data from their HCAHPS surveys, which summarize patients' experience; online access to these findings is available through the Hospital Compare website (see http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalCompare.html). However, recent findings indicate that few consumers know about or use this information to make informed choices among alternative providers, despite significant investments made by CMS to collect, analyze, and publicly report the data (Dolan, 2008; Fung, Lim, Mattke, Damberg, & Shekelle, 2008; Hibbard, 2008). Even if consumers were aware of hospital quality data and wanted to use it in their decision-making process, they may find it inconsistent and difficult to understand (Rothberg, Morsi, Benjamin, Pekow, & Lindenauer, 2008).
At the same time, the popularity of online review sites has increased dramatically in recent years (Moe & Schweidel, 2012). These websites allow people to rate and comment on a variety of products and services, and when buying products such as cameras and smartphones or choosing service destinations such as hotels and vacation spots, consumers routinely search for information about others' experiences and use it to make decisions (Berger & Schwartz, 2011; Chevalier & Mayzlin, 2006). The popularity of these online resources has been enhanced by the inclusion of anecdotal feedback from users, which is encouraged so that consumers can share their opinions and rate the quality of their experiences (Berger & Schwartz, 2011; Chen & Xie, 2008; Chevalier & Mayzlin, 2006; Lagu, Hannon, Rothberg, & Lindenauer, 2010; Lee & Bradlow, 2011; Mudambi & Schuff, 2010; Pavlou & Dimoka, 2006; Reese, 2010).
Because consumers have access only to quantitative summaries of HCAHPS ratings on Hospital Compare, the absence of patients' anecdotal feedback differentiates hospital quality reports from other third-party review sites in the United States that post ratings of consumer goods and services. Notably, the United Kingdom's National Health Service program NHS Choices encourages people to submit comments about hospitals to a website that resembles a commercial online review site (see http://www.nhs.uk/servicedirectories/Pages/HospitalCommentInput.aspx?servicetype=hospital&searchtype=hospitalcommentsearch). However, the representativeness of the responses is an issue, given the self-selected sample and small numbers of consumers rating each hospital. Like many operators of online review sites, the NHS allows contributors to add anecdotal comments to their answers to standard questions, but unlike many other third-party sites, NHS hospitals are permitted to respond to the comments.
Lagu and Lindenauer (2010) suggest that the limitations placed on patients' opportunity to provide anecdotal information hinders popular acceptance of healthcare quality reporting websites, and they recommend that Hospital Compare add a section for patients to provide qualitative feedback. Patients are showing more interest in reading others' feedback about their providers as well as contributing their own comments (Jain, 2010; Lagu et al., 2010), and if these commentaries can be shown to contain valid data, the argument for making them part of public quality reporting initiatives would take on greater urgency. However, to our knowledge, no investigations of the information contained in patients' anecdotal comments on HCAHPS surveys have been conducted.
Incremental Contribution of Patients' Anecdotal Comments Prior research studies conducted in a variety of settings have demonstrated significant correlations between quantitative satisfaction scale ratings and the tone and theme of consumers' written comments about products and services they have purchased (Decker & Trusov, 2010; Hansen, Kreiter, Rosenbaum, Whitaker, & Arpey, 2003; Hogarth & Hilgert, 2004; Santuzzi, Brodnick, RinehartThompson, & Klatt, 2009; Schweidel, Moe, & Boudreaux, 2012; Siegrist, 2011; Tranter, Grégoire, Fullam, & LafFerty, 2009). Generally, people who give favorable ratings say positive things, and vice versa, though evidence indicates that consumers with very positive or very negative attitudes toward a product or service are more likely to post comments online than are those whose ratings fall in the middle of the scale (Dellarocas & Narayan, 2006). In offline environments, Anderson (1998) found that consumers with the most negative opinions of a brand were more likely to engage in negative word of mouth. These findings suggest that consumers add anecdotal comments to explain their reasons for giving a particular rating to a product or service they experienced, and this practice has become more common as online product review sites and satisfaction surveys have proliferated. Decker and Trusov (2010) further suggest that quantitative ratings serve as numerical representations of words and phrases consumers use in qualitative evaluations, implying that structured analyses of textual data could supplant standard survey-based measures of consumer sentiment (Archak, Ghose, & Ipeirotis, 2011; Lee & Bradlow, 2011; Schweidel et al., 2012).
However, if patients' volunteered comments on the HCAHPS survey merely correlate with their quantitative ratings, they add little to our understanding of patient satisfaction or experience, other than helping to explain higher or lower numerical scores. Some researchers have found that ordinary quantitative measures in satisfaction surveys do not fully capture the customer's experience (Archak et al., 2011; Drain & Clark, 2004; Qu, Zhang, & Li, 2008) because the tools used to assess patients' experience lack the precision of those that measure providers' production and delivery of the service (Golder, Mitra, & Moorman, 2012). Similarly, the rating categories presented in the HCAHPS questionnaire items may not be sufficient to capture the patient's experience, especially if the stay involved multiple departments (e.g., emergency department and medical/surgical floor), several employees (e.g., day nurse and night nurse), or longer stays that necessitated complex treatment and interactions with many individuals at the hospital. For example, one respondent in our study checked two answers to HCAHPS Question 2 (During this hospital stay, how often did nurses listen carefully to you?) and wrote the following explanation: "Day: Never; Night: Always." The HCAHPS scoring protocol calls for this patient's response to be discarded and coded as missing data because the patient gave two answers to the same question (CMS, 2012). However, the patient's response is genuine and offers an important insight, illustrating what Pavlou and Dimoka (2006, 398) term "fine-grained" information, which cannot be captured by numerical ratings, as the patient intended to give credit to the nurses who listened carefully and criticize those who did not.
As the preceding example demonstrates, adding qualitative feedback to structured survey responses reveals dimensions to the service experience that survey designers may not have considered or could not include in the questionnaire due to limitations on length and complexity (Lee & Bradlow, 2011). In a study of patients who interacted with three kinds of physicians (emergency medicine, hospitalist, and specialist), Wild et al. (2011) administered the physician communication questions from HCAHPS and also conducted in-depth interviews, which revealed three additional themes that went beyond the HCAHPS items to provide a more complete picture of how different physician encounters affected patients' experiences. In particular, patients' anecdotal feedback revealed that system issues, which are not addressed in the HCAHPS survey, affect patients' ratings of physician communication.
That patients sometimes try to communicate relevant information beyond the standard survey questions suggests that some portion of the unexplained variance in regressions of outcome measures on quantitative survey ratings could be due to the limited capability of survey questions to assess the full range of patient experiences. Several studies have shown that while HCAHPS survey composite measures are significant predictors of two global outcome measures (overall hospital rating and intention to recommend), much variance remains unexplained (e.g., Elliott et al., 2012; AHRQ, 2003; Otani, 2006; Otani, Waterman, & Dunagan, 2012).
We hypothesized that patients' written comments add to the accuracy in predicting the two global outcome measures on the HCAHPS survey above and beyond the explanatory power of the numerical scores on individual HCAHPS questions. This study is unique in two important respects. First, most published analyses of HCAHPS data compare hospitals cross-sectionally; in this study, the patient is the unit of analysis, and we examine individual patients' experiences at two community hospitals. Second, to our knowledge, this is the first study to investigate qualitative comments that individual patients have voluntarily added to their HCAHPS surveys.
METHODS A sample of 589 HCAHPS surveys was obtained from two community hospitals of a health system in the northeastern United States. The surveys were administered during the second and third quarters of 2009. After completing data entry required for public reporting, the survey contractor shipped photocopies of all completed HCAHPS questionnaires to the hospitals. All patient identifying information was removed from the photocopied questionnaires before they were processed for this study. From these photocopies, the HCAHPS data were manually entered into a database, along with all comments patients wrote on the questionnaires. Of the 589 questionnaires received, 116 (19.7%) contained at least one handwritten word. Because no space is provided for comments, patients used the margins or the space between questions to write their anecdotal feedback.
Following commonly used content analysis procedures, two independent coders, working without access to the quantitative survey responses, coded the comments. The coders followed the common protocol used in textual analysis studies whereby the content is given a specific code representing theoretically derived categories (Krippendorff, 1980; Pang & Lee, 2008; Pavlou & Dimoka, 2006). In this study, we employed a categorical coding scheme used in prior research on patient satisfaction (Press Ganey, 2009; Santuzzi et al., 2009) whereby patients' comments were classified into one of four categories:
1. Positive
2. Neutral
3. Negative
4. Mixed
A sample of comments by code classification appears in Table 1.
One comment score was created for every respondent. However, 58.6% of those who wrote comments made more than one comment on their surveys, necessitating coding rules for establishing one overall comment score for each patient. When multiple valenced (positive or negative) comments were consistent within each survey, the survey was scored in that direction (e.g., two positive comments resulted in a positive overall comment score for that patient). When a valenced comment occurred along with a neutral comment, the valenced comment outweighed the neutral comment and either a positive or a negative comment score was assigned. When a positive and a negative comment occurred on the same survey, a mixed comment score was given. The coders achieved 82% agreement; discrepancies were resolved by the principal investigator.
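The combination rules described above can be expressed as a short procedure. The Python sketch below is illustrative only; the function name and category labels are ours, not part of the study's coding protocol, and a single comment coded as mixed is treated as carrying both valences.

```python
def overall_comment_score(comments):
    """Collapse a patient's per-comment codes into one overall score.

    `comments` is a list of codes: "positive", "negative", "neutral",
    or "mixed". Valenced codes outweigh neutral ones; opposing
    valences on the same survey yield "mixed".
    """
    codes = set(comments)
    has_pos = "positive" in codes or "mixed" in codes
    has_neg = "negative" in codes or "mixed" in codes
    if has_pos and has_neg:
        return "mixed"
    if has_pos:
        return "positive"
    if has_neg:
        return "negative"
    return "neutral"
```

For example, a survey with one positive and one neutral comment scores as positive, while a survey with one positive and one negative comment scores as mixed, matching the rules stated above.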
ANALYSIS AND RESULTS Descriptive statistics of the content coding are presented in Table 2. Respondents used fewer words when making neutral comments because many of the comments were simple explanations or clarifications that respondents wanted to communicate. For example, some respondents provided additional information about their ethnic status (e.g., "Italian," "Jamaican"), others explained why they did not answer a question (e.g., "don't remember"), and some provided additional detail (e.g., "1-night stay" in response to Question 8: During this hospital stay, how often were your room and bathroom kept clean?). Note that Question 27 asks respondents to specify the language they mainly speak at home if it is other than English or Spanish, and it is the only space provided for written answers on the HCAHPS survey. These answers were coded as neutral. Respondents who expressed negative or mixed comments used significantly more words than did those expressing neutral comments.
In several surveys, missing data were noted, sometimes because the patient simply failed to answer a question and at other times because the question became irrelevant depending on the patient's situation or need. An example of the latter occurred when patients were not given any new medication(s) in the hospital (HCAHPS Question 15), making the following two questions irrelevant, which patients left blank. Most of the surveys contained at least one missing response. One way to handle this problem is to delete all surveys that contain missing data, but as the preceding example indicates, this option would eliminate patients who have answered most or all of the relevant questions in good faith and we would be deprived of their responses. The problem was handled instead by employing a multiple imputation procedure (Graham, 2009) whereby missing values are imputed for each case, and thus a complete data set can be used.
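The study used a full multiple-imputation procedure (Graham, 2009); the simplified Python sketch below illustrates only the core idea of producing several completed copies of a variable rather than discarding cases with missing values. Drawing imputations from a normal approximation of the observed values is a deliberate simplification of the model-based draws an actual procedure would use.

```python
import random
import statistics

def impute_multiple(values, m=5, seed=0):
    """Create m completed copies of `values`, a list in which
    missing entries are represented by None.

    Each missing entry is filled with a random draw from a normal
    approximation of the observed values, so the imputed data sets
    differ from one another, as in multiple imputation.
    """
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    mu = statistics.mean(observed)
    sigma = statistics.stdev(observed)
    return [
        [v if v is not None else rng.gauss(mu, sigma) for v in values]
        for _ in range(m)
    ]
```

In practice the m completed data sets would each be analyzed, with the estimates then pooled; the point here is simply that patients who answered most questions in good faith are retained.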
The comment score variable is not normally distributed, so dummy codes were created for the analysis, with no comment coded as zero. Thus, the estimated coefficients resulting from the regression analysis effectively compared the positive, neutral, negative, and mixed codes to the no comment category (Cohen, Cohen, West, & Aiken, 2002). The HCAHPS data were aggregated into subscales, resulting in six composite measures: communication with nurses, communication with doctors, responsiveness of hospital staff, pain management, communication about medicines, and discharge information (Giordano, Elliott, Goldstein, Lehrman, & Spencer, 2010), along with two individual items relating to the hospital environment: clean room/bathroom and quiet room. These measures were used as quantitative independent variables in a regression analysis. Ordinary least-squares regression was performed on the HCAHPS measures and the comment score dummy-coded variables. The two dependent variables were overall hospital rating and intent to recommend, which correspond to HCAHPS Questions 21 and 22. Such attitude and intention variables have been used as outcome measures in similar analyses performed in a variety of studies (e.g., Elliott et al., 2012; Otani, Kurz, Burroughs, & Waterman, 2003; Otani et al., 2012; Press Ganey, 2009; Qu et al., 2008; Rathert, May, & Williams, 2011).
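The dummy coding described above can be illustrated with a small helper. The function name is ours; the key point, as in the study's setup, is that a patient who wrote no comment (the reference category) receives zeros on all four indicators, so each regression coefficient compares one comment category against writing no comment.

```python
def dummy_code(comment_score):
    """Return indicator variables for the four comment categories.

    `comment_score` is "positive", "neutral", "negative", "mixed",
    or None for a patient who wrote no comment. The no-comment
    reference category is coded as all zeros.
    """
    categories = ["positive", "neutral", "negative", "mixed"]
    return [1 if comment_score == c else 0 for c in categories]
```

These four indicator columns would then enter the ordinary least-squares model alongside the six HCAHPS composite measures and the two hospital-environment items.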
Results from our study show that negative comments significantly affect the prediction of the two global outcome measures (Table 3). Coefficients for negative comments were significant for both the overall hospital rating (p < .011) and intent to recommend (p < .004) measures. Patients who wrote negative comments gave significantly lower overall hospital rating and intent to recommend scores than did those who wrote no comments. The coefficients for positive, neutral, and mixed comments were not significant, indicating that these comments did not contribute more information than quantitative ratings did. These findings are further illustrated in Figure 1 and Figure 2, which demonstrate that patients who wrote negative comments have a more negative overall opinion of the hospital.
The coefficients of some HCAHPS measures were significant in both models, including communication with nurses, clean room/bathroom, quiet room, and discharge information, while the coefficients of other measures were significant in one model but not the other. The coefficients for responsiveness of hospital staff and pain management were significant in Model 1 (overall hospital rating) but not in Model 2 (intent to recommend), while new medicines explained was significant in Model 2 but not in Model 1 and communication with doctors was not significant in either model. These discrepancies suggest that the two global metrics do not measure the same construct, though they are correlated (r = .74, p < .001).
DISCUSSION The results show that patients' anecdotal comments add to the prediction of overall hospital rating and intention to recommend the hospital beyond the HCAHPS composite measures, supporting the hypothesis of this study. This finding is consistent with the notion that (1) anecdotal feedback contains information that numerical ratings do not capture and (2) rating scales do not completely assess people's experiences (Pavlou & Dimoka, 2006). Santuzzi et al. (2009) raised the concern that hospital staff may give too much credence to qualitative patient comments because they are more salient than quantitative ratings. The present study suggests that if anything, quantitative HCAHPS ratings understate the feelings of people who choose to report negative experiences and indicate that they are more dissatisfied than their responses to individual HCAHPS questions would indicate.
Relatively few people give negative ratings, and even fewer (5% in our study) write negative comments. Negative experiences, however, exert a stronger influence on overall satisfaction and intention than do positive experiences (Otani et al., 2003) and thus merit special attention. This study shows that beyond the negative experiences captured by low numerical scores on quantitative rating scales, the patients who wrote negative comments apparently attempted to communicate an even deeper level of dissatisfaction or frustration, and administrators need to take them seriously. For staff trying to improve patient care and service, negative comments on surveys prove useful because they give context to numerical data and provide powerful real-life examples of problems that staff can understand and address.
Research and Policy Implications In Hospital Compare, patients' anecdotal comments are not reported and thus not available for consumers to use when choosing among providers. However, standardized measures of hospital quality have limitations that discourage consumers from using them (Lagu & Lindenauer, 2010). Rothberg et al. (2008) found that hospital rankings published on several different websites varied and sometimes offered conflicting results, which can confuse consumers who hope to rely on the data to make decisions. Furthermore, the numerical ratings sometimes do not show much difference between hospitals, especially when ratings are combined and reported as aggregate scores, making it difficult for consumers to see the value in hospital report cards such as HCAHPS and discouraging them from using the information to make decisions about hospital providers (Hibbard, Greene, Sofaer, Firminger, & Hirsh, 2012). When such ambiguity exists, consumers engage in heuristic decision making, in which anecdotal information plays a more important role, and people rely on hearsay from friends and relatives (Fagerlin, Wang, & Ubel, 2005; Huppertz & Carlson, 2010).
Increasing evidence shows that people like to write comments when they give feedback on surveys and that consumers rely on those comments to make purchase decisions (Chen & Xie, 2008; Chevalier & Mayzlin, 2006). The fact that nearly 20% of respondents wrote comments suggests that a number of patients want to express themselves beyond the response options on the HCAHPS survey, and their comments have value. Given the results of this study, several policy questions arise:
* Should space for comments be provided on HCAHPS surveys?
* Should individual comments be shared online?
* Should summaries be posted online?
Further research should help inform the debate about these questions. This study suggests that the HCAHPS questionnaire should at least offer some space for patients' commentary, and researchers should apply systematic analytical tools to better understand how the comments relate to patient experiences. The more difficult question is whether to post comments (edited or unedited) in Hospital Compare, given (1) the expense and effort required to accomplish this task, (2) the difficulty of ensuring that individual institutions are not characterized unfairly, and (3) the need to guide consumers in using this information. However, given that such a small proportion of patients use the data currently displayed in Hospital Compare and considering the policy goal is to get consumers to use HCAHPS data to make informed decisions, this idea needs further study.
Managerial Implications The finding that patients' comments contribute to the prediction of global outcome measures should give administrators greater confidence to use anecdotal feedback and encourage them to take patients' negative comments seriously. This feedback can assist hospital staff in correcting problems and thus restoring consumer satisfaction and confidence in the provider (DeMatos, Henrique, & Rossi, 2007). However, hospitals do not receive timely information about a problem through written surveys such as HCAHPS because the information becomes available months after the patient has been discharged, so specific follow-up actions are all but impossible. From a hospital staff perspective, a particular negative patient experience is often not well remembered and circumstances are difficult to re-create, so the learning that could result in quality improvement is suboptimal. The ability to collect and respond to negative patient comments in a timely way is critical to improving the patient experience, but such actions usually occur outside the written HCAHPS survey of discharged patients.
Ideally, hospitals should receive feedback soon after the incident that prompted a negative comment to allow staff to accurately recall the problem experienced by the patient (Luxford, Safran, & Delbanco, 2011). HCAHPS surveys conducted by telephone allow organizations to capture anecdotal comments, and recorded patient comments collected during a telephone survey can be made available to the hospital soon after the interview and then forwarded to the patient representative or advocate. Patient representatives serve on grievance committees, which play a significant role in providing meaningful follow-up to patients' comments from satisfaction surveys (Charters, 1993).
In the two hospitals where this study was conducted, comments that meet predefined criteria (i.e., involve multiple departments, complex problems, or frequent themes) are forwarded to the appropriate managers. Furthermore, these comments are added to the next grievance committee agenda. This practice is important for two reasons: It provides the organizational infrastructure for reviewing anecdotal comments, and it confirms the value of negative anecdotal comments, given that they trigger the committee's attention, further extending implications of the present study's results. Many tools used by the hospital quality review committee are also used by the hospital grievance committee, promoting synergy between the efforts to improve both the patient experience and patient quality outcomes. Quality and patient experience no longer are treated as separate aspects of patient care and have become integrated under the overall umbrella of optimal patient outcomes. The best results occur in hospitals that create environments where responsibility for patient experience is owned by all, not just the patient representative (Matthews & Poole, 2012).
LIMITATIONS This study involved only two community hospitals and used HCAHPS data collected over a short duration. A larger sample of hospitals that includes academic, tertiary care, and specialty institutions would make the findings more generalizable. Furthermore, because patients undergo more highly involved procedures at academic medical centers and tertiary care facilities than at the community hospitals used in this study, the former set of patients may volunteer more written comments and thus provide a more complete picture of patients' experience.
In addition, our outcome measures consisted of only the global hospital rating and intention measures contained in the HCAHPS survey. While these are commonly used metrics in studies of this kind, future research should expand the range of outcomes investigated to include measures of patient compliance, changes in health status, readmission, and other indicators of care quality.
Finally, this study relied on HCAHPS surveys administered by mail, which contain no space for patients to handwrite comments, depressing the number of anecdotal comments received. Although we are aware of no published research on the incidence of anecdotal comments in HCAHPS surveys administered by telephone, the very nature of a person-to-person interview may prompt more patients to add commentary to their HCAHPS survey responses. These comments may provide additional insight into patient experience, and the present research should be expanded to incorporate such feedback.
PRACTITIONER APPLICATION Charles F. Bombard, RN, FACHE, CPHQ, director, Quality Improvement, Tampa General Hospital, Tampa, Florida The importance of survey comments is the central theme of Huppertz and Smith's research and analysis. I could not agree more that survey comments provide critical additional information to hospital staff, enabling decision making on the basis of both quantitative and qualitative information.
The importance that Tampa General Hospital's leaders place on survey comments is reflected in the fact that, for each type of survey our vendor distributes, we have a comment contract in place whereby the vendor supplies us with a report that cites each written comment that is made on a patient's returned survey, regardless of where the comment appears on the survey.
The patient experience coordinator at Tampa General reviews patient comment reports as they are received from the vendor and prepares a weekly report to the leadership team as well as a monthly report for managers with patient satisfaction responsibilities. Comments of a negative nature that require follow-up are investigated by the appropriate operational manager. If the patient has provided his/her name and telephone number on the survey (the vast majority do), the manager contacts the patient, obtains details of the problem, and works with staff to resolve the issue. The manager then prepares a report for his/her director or vice president summarizing the issue and action taken.
Results from the monthly patient satisfaction comment report are shared with staff in the relevant operational area. Staff members mentioned by name in a positive comment are awarded a $25 gift certificate to the hospital's gift shop, where they may purchase items or gift cards to local restaurants and merchants.
A separate weekly report on physician-specific comments that reflect either positive or negative physician encounters with patients is prepared and sent to the chief medical officer (CMO). These comments are shared with the physicians involved, and clinical or communication problems that come to light are investigated by the CMO or associate CMO.
Huppertz and Smith's point that the HCAHPS survey does not provide space for comments is worth noting. One action we took to overcome this perceived shortfall was to add a brief list of questions (developed with our vendor) at the end of the HCAHPS survey, followed by several blank lines and space for the patient's name and telephone number (clearly marked as optional) to prompt survey recipients to write about their hospital experience. This process falls within HCAHPS guidelines, and it has paid dividends for us, as up to 55% of our returned surveys contain some type of commentary.
Getting a complete picture of how patients view their hospital experience from both a quantitative and a qualitative perspective is valuable, as numbers do not tell the whole story. Patients' comments help us understand the "why" behind the survey's numerical response. While numbers can give a general feeling of the patient's perception of care, the patient's comments provide a more accurate representation of his satisfaction because he took the time and effort to write about it. If the patient feels so strongly about an issue that he writes it down, then we need to take it to heart.
REFERENCES
Agency for Healthcare Research and Quality (AHRQ). (2003, December 23). HCAHPS three-state pilot study analysis results. Retrieved from http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/downloads/Hospital3State_Pilot_Analysis_Final200512.pdf
Anderson, E. W. (1998). Customer satisfaction and word of mouth. Journal of Service Research, 1(1), 5-17.
Archak, N., Ghose, A., & Ipeirotis, P. G. (2011). Deriving the pricing power of product features by mining consumer reviews. Management Science, 57(8), 1485-1509.
Berger, J., & Schwartz, E. M. (2011). What drives immediate and ongoing word of mouth? Journal of Marketing Research, 48(5), 869-880.
Centers for Medicare & Medicaid Services (CMS). (2012, March). Introduction to HCAHPS survey training [Slide presentation]. Retrieved from http://www.hcahpsonline.org/files/March%202012%20HCAHPS%20Intro%20Training%20Session%202%203-6-2012.pdf
Charters, M. (1993). The patient representative role and sources of power. Hospital and Health Services Administration, 38(3), 429-442.
Chen, Y., & Xie, J. (2008). Online consumer review: Word-of-mouth as a new element of marketing communication mix. Management Science, 54(3), 477-491.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word-of-mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345-354.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Hillsdale, NJ: L. Erlbaum Associates.
Decker, R., & Trusov, M. (2010). Estimating aggregate consumer preferences from online product reviews. International Journal of Research in Marketing, 27(4), 293-307.
Dellarocas, C., & Narayan, R. (2006). A statistical measure of a population's propensity to engage in post-experience online word-of-mouth. Statistical Science, 21(2), 277-285.
DeMatos, C. A., Henrique, J. L., & Rossi, C. A. V. (2007). Service recovery paradox: A meta-analysis. Journal of Service Research, 10(1), 60-77.
Dolan, P. L. (2008). Patients rarely use online ratings to pick physicians. American Medical News, 51, 24.
Drain, M., & Clark, P. A. (2004, July/August). Measuring experience from the patient's perspective: Implications for national initiatives. JHQ Online. Retrieved from http://smartpatient.me/wp-content/uploads/2012/10/nahqce_artide214.pdf
Elliott, M. N., Lehrman, W. G., Beckett, M. K., Goldstein, E., Hambarsoomian, K., & Giordano, L. A. (2012). Gender differences in patients' perceptions of inpatient care. Health Services Research, 47(4), 1482-1501.
Fagerlin, A., Wang, C., & Ubel, P. A. (2005). Reducing the influence of anecdotal reasoning on people's health care decisions: Is a picture worth a thousand statistics? Medical Decision Making, 25(4), 398-405.
Fung, C. H., Lim, Y., Mattke, S., Damberg, C., & Shekelle, P. G. (2008). Systematic review: The evidence that publishing patient care performance data improves quality of care. Annals of Internal Medicine, 148(2), 111-124.
Giordano, L. A., Elliott, M. N., Goldstein, E., Lehrman, W. G., & Spencer, P. A. (2010). Development, implementation, and public reporting of the HCAHPS survey. Medical Care Research and Review, 67(1), 27-37.
Golder, P. N., Mitra, D., & Moorman, C. (2012). What is quality? An integrative framework of processes and states. Journal of Marketing, 76(2), 1-23.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576.
Green, L., & McKeever, J. (2007). Monitoring the physician experience. Marketing Health Services, 27(2), 34-37.
Hansen, K., Kreiter, C. D., Rosenbaum, M., Whitaker, D. C., & Arpey, C. J. (2003). Long-term psychological impact and perceived efficacy of pulsed-dye laser therapy for patients with port-wine stains. Dermatologic Surgery, 29, 49-55.
HealthLeaders Media. (2012, July). Value-based purchasing: Facing the HCAHPS hurdle. Retrieved from http://www.healthleadersmedia.com/intelligence-impact-analysis/275699?utm+source=ONandutm_medium=emailandutm_campaign=EO107921
Hibbard, J. H. (2008). What can we say about the impact of public reporting? Inconsistent execution yields variable results. Annals of Internal Medicine, 148(2), 160-161.
Hibbard, J. H., Greene, J., Sofaer, S., Firminger, K., & Hirsh, J. (2012). An experiment shows that a well-designed report on costs and quality can help consumers choose high-value health care. Health Affairs, 31(3), 560-568.
Hogarth, J. M., & Hilgert, M. A. (2004). Numbers versus words: A comparison of quantitative and qualitative data on satisfaction with complaint resolution efforts. Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 17, 103-116.
Huppertz, J. W., & Carlson, J. P. (2010). Consumers' use of HCAHPS ratings and word-of-mouth in hospital choice. Health Services Research, 45(6), 1602-1613.
Jain, S. (2010). Googling ourselves: What physicians can learn from online rating sites. New England Journal of Medicine, 362(1), 6.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage.
Lagu, T., Hannon, N. S., Rothberg, M. B., & Lindenauer, P. K. (2010). Patients' evaluations of health care providers in the era of social networking: An analysis of physician-rating websites. Journal of General Internal Medicine, 25(9), 942-946.
Lagu, T., & Lindenauer, P. K. (2010). Putting the public back in public reporting of health care quality. Journal of the American Medical Association, 304(15), 1711-1712.
Lee, T. Y., & Bradlow, E. T. (2011). Automated marketing research using online customer reviews. Journal of Marketing Research, 48(5), 881-894.
Luxford, K., Safran, D. G., & Delbanco, T. (2011). Promoting patient-centered care: A qualitative study of facilitators and barriers in healthcare organizations with a reputation for improving the patient experience. International Journal for Quality in Health Care, 23(5), 510-515.
Matthews, R., & Poole, J. (2012, September). Improving the patient experience to build customer loyalty. Seminar presented at American College of Healthcare Executives, Atlanta (GA) Cluster program.
Moe, W. W., & Schweidel, D. A. (2012). Online product opinions: Incidence, evaluation, and evolution. Marketing Science, 31(3), 372-386.
Mudambi, S. M., & Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. MIS Quarterly, 34(1), 185-200.
Otani, K. R. (2006). Enrollees' global rating process of health care with the national CAHPS benchmarking database. Health Care Management Review, 31(3), 205-212.
Otani, K., Kurz, R. S., Burroughs, T. E., & Waterman, B. (2003). Reconsidering models of patient satisfaction and behavioral intentions. Health Care Management Review, 28(1), 7-20.
Otani, K., Waterman, B., & Dunagan, W. C. (2012). Patient satisfaction: How patient health conditions influence their satisfaction. Journal of Healthcare Management, 57(4), 276-293.
Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2), 1-90.
Pavlou, P. A., & Dimoka, A. (2006). The nature and role of feedback text in online marketplaces: Implications for trust building, price premiums, and seller differentiation. Information Systems Research, 17(4), 392-414.
Press Ganey. (2009). Hospital pulse report: Patient perspectives on American health care. Retrieved from http://www.pressganey.com/Documents_secure/Pulse%20Reports/Hospital_Pulse_Report_2009.pdf?viewFile
Qu, Z., Zhang, H., & Li, H. (2008). Determinants of online merchant rating: Content analysis of consumer comments about Yahoo merchants. Decision Support Systems, 46, 440-449.
Rathert, C., May, D. R., & Williams, E. S. (2011). Beyond service quality: The mediating role of patient safety perceptions in the patient experience-satisfaction relationship. Health Care Management Review, 36(4), 359-368.
Reese, S. (2010). Consumer experience provides worthwhile quality data. Managed Healthcare Executive, 20(2), 27-28.
Rothberg, M. B., Morsi, E., Benjamin, E. M., Pekow, P. S., & Lindenauer, P. K. (2008). Choosing the best hospital: The limitations of public quality reporting. Health Affairs, 27(6), 1680-1687.
Santuzzi, N. R., Brodnick, M. S., Rinehart-Thompson, L., & Klatt, M. (2009). Patient satisfaction: How do qualitative comments relate to quantitative scores on a satisfaction survey? Quality Management in Healthcare, 18(1), 3-18.
Schweidel, D. A., Moe, W. W., & Boudreaux, C. (2012). Social media intelligence: Measuring brand sentiment from online conversations (Report No. 12-100). Cambridge, MA: Marketing Science Institute.
Siegrist, R. B., Jr. (2011). What drives patient sentiment? [Web log post]. Retrieved from http://www.pressganey.com/improvingHealthCare/improvingHCBlog/blogPost/11-07-26/What_Drives_Patient_Sentiment.aspx
Sofaer, S., & Firminger, K. (2005). Patient perceptions of quality of health services. Annual Review of Public Health, 26, 513-559.
Tranter, M. A., Grégoire, M. B., Fullam, F. A., & Lafferty, L. J. (2009). Can patient-written comments help explain patient satisfaction with food quality? Journal of the American Dietetic Association, 109(12), 2068-2072.
Wild, D. M. G., Kwon, N., Dutta, S., Tessier-Sherman, B., Woddor, N., Sipsma, H., ... Bradley, E. H. (2011). Who's behind an HCAHPS score? Joint Commission Journal on Quality and Patient Safety, 37(10), 461-467.
John W. Huppertz, PhD, associate professor, Union Graduate College, Schenectady, New York, and Robert Smith, FACHE, vice president, Integration and Coordination, St. Peter's Health Partners, Troy, New York
For more information about the concepts in this article, contact Dr. Huppertz at [email protected]
(c) 2014 Health Administration Press