IMPROVING THE QUALITY OF CONSUMER RESEARCH: REPORTING OF SURVEY RESEARCH

Author

Eric P. Brass

Affiliations

Department of Medicine, Harbor-UCLA Medical Center, Torrance, CA 90502

Abstract

Survey research may provide important insights into consumer behaviors, knowledge, and attitudes. This information can in turn inform current decision making and support new hypotheses worthy of testing. Rigorous conduct and reporting of survey research are essential to maximize the utility of these studies to the research and clinical communities. Suggested best practices for reporting survey research are enumerated in this report. These include a clear statement of the survey's purpose, a detailed explanation of the sampling plan employed, a description of the population studied, documentation of the survey instruments used, and complete, unbiased reporting of survey results. Acknowledgement of potential sources of bias and of the limitations of the work is an integral component of discussing survey results.

INTRODUCTION

Survey methodologies are often used to better understand a group’s knowledge, perceptions, opinions, or attitudes on topics of interest. Research employing survey instruments can be thought of broadly as a form of observational or qualitative research, distinct from interventional or experimental research. Survey research rarely involves hypothesis testing, but it is nonetheless valuable both in informing the design of future, more in-depth research and in supporting more immediate decision making. As such, the publication of high-quality survey research should be encouraged. At the same time, to maximize their utility, publications reporting survey data must contain the key information a reader needs to properly understand and use the results. An understanding of these same issues is equally important in designing survey instruments and the studies that use them.

The following is designed to introduce some key features of this type of work to those without formal training in survey research methodologies, from the perspective of research reporting (Table 1). It is based largely on methodology reviews published by others (see Kuper et al1, Dixon-Woods et al2, Stang3, Rubenfeld4, Boynton et al5,6, and references therein for some examples). It is hoped that this will help new investigators and increase the value of work based on this important research method for both authors and readers.

[Table 1]

STATE THE PURPOSE FOR CONDUCTING THE SURVEY

The authors’ intent in fielding the survey should be clear to the reader in order to provide proper context. If a specific hypothesis is being tested, for example a hypothesized difference in responses between two groups of people, this should be explicitly stated. If the purpose is more general or descriptive, the rationale for asking these questions, in the chosen cohort, should be made explicit.

DEFINE THE PATIENT POPULATION SAMPLED

Survey results are highly dependent on the cohort surveyed. Thus, any paper reporting survey results must carefully define the population sampled. This definition is critical for the reader, who must decide whether the results can be generalized and, if so, how narrowly or broadly. The description of the cohort begins with how participants were identified and recruited. This process, termed sampling7, will influence all aspects of the research, from the quantitative methods used to describe the results to the discussion of representativeness in the research report. It is often desirable to over-sample cohorts of special interest as determined by the research question. In this case, the rationale and method for oversampling should be presented, along with a statement of whether the results were adjusted to reflect this sampling plan. The use of incentives, if any, to encourage participation by contacted subjects should be described. The prospective sample size determination and its justification should also be clearly stated. The number of subjects contacted in order to recruit the final number of respondents is also important for assessing how representative the participants are likely to be.
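
Such an adjustment typically takes the form of design weights. The following is a minimal sketch, assuming a stratified design in which one stratum was deliberately oversampled; all counts and population sizes are hypothetical.

```python
# Minimal sketch: correcting for deliberate oversampling with design weights.
# All population sizes and counts below are hypothetical illustrations.

population = {"stratum_A": 90_000, "stratum_B": 10_000}  # known stratum sizes
sampled    = {"stratum_A": 300,    "stratum_B": 300}     # stratum_B oversampled
positives  = {"stratum_A": 60,     "stratum_B": 150}     # 'yes' responses

# Design weight per stratum: population size / number sampled.
weights = {s: population[s] / sampled[s] for s in population}

# Unweighted estimate ignores the sampling plan and over-represents stratum_B.
unweighted = sum(positives.values()) / sum(sampled.values())

# Weighted estimate rescales each stratum to its share of the population.
weighted = (sum(positives[s] * weights[s] for s in population)
            / sum(sampled[s] * weights[s] for s in population))

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# unweighted: 0.350, weighted: 0.230
```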

Key demographic descriptors of the population should be fully reported. The relevant demographic parameters will vary with the survey; for example, while age, race, and gender may always be important, characteristics such as health insurance status may matter specifically in surveys of self-care behaviors. Where possible, comparing the recruited cohort with those who declined, or with the full population of interest, can be valuable. This is important because the sampled cohort may differ from the population of broader interest (see, for example, Richiardi et al8), with obvious implications for the generalizability of the results. The increasing use of internet-based recruitment and survey administration may heighten concerns about whether the cohort is representative of the general population of interest3,8. Reporting of internet surveys requires additional information, including how participants were notified of the survey, complete response rates (including partially complete or blank surveys), and the methods used to verify respondents’ identities. Clarity on these cohort description issues in manuscripts, together with appropriate discussion, will decrease the likelihood of over-extrapolation of the survey results.

DESCRIBE IN DETAIL THE SURVEY INSTRUMENT

The methods of data acquisition should be clearly delineated. If a written survey or scripted interview was used, the full instrument should be available to the reader. If inclusion in the manuscript is impractical, the full survey should be available as supplemental material through the journal’s web site, or the authors should commit to providing it upon request. Because the answers to individual items may change with the context in which they are presented, only availability of the full instrument allows proper use of the results by readers. For example, a long questionnaire likely decreases participation rates and affects the accuracy of answers3; thus, the test characteristics and response profile of a single item drawn from such a questionnaire might change if it were used in a more focused instrument. Similarly, provision of the full instrument and the specifics of item construction will allow the reader to appreciate previously unrecognized bias or cueing.

Cueing may occur when the wording of one item provides information or bias relevant to a subsequent item. For example, if an item asks explicitly whether drug X has been used in the past 24 hours, a subsequent open-ended question about the use of all medications may be biased toward the inclusion of drug X relative to other medications.

Health literacy is a major consideration when communicating with consumers on health-related issues9,10, and the same consideration applies to survey tools addressing health topics. Investigators should consider formal evaluation of the literacy demands of the survey instruments employed, particularly when the instruments are self-administered by the respondent. Similarly, in some cases defining the health literacy of respondents may also be important.
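
Where such a formal evaluation is undertaken, readability formulas are one common starting point. The sketch below computes an approximate Flesch-Kincaid grade level for an item; the syllable counter is a crude heuristic used only for illustration, and dedicated readability tools should be preferred in practice.

```python
import re

def count_syllables(word: str) -> int:
    """Crude vowel-group heuristic; real readability tools count syllables better."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

item = "In the past week, how many days did you take your blood pressure medicine?"
print(f"Approximate reading grade level: {flesch_kincaid_grade(item):.1f}")
```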

Information on prior use of the instrument, its external validity (for example, relationship to outcomes or established clinical assessments), internal consistency (agreement among related items within the instrument), accuracy, and test-retest reliability should be provided when available. The absence of such information does not invalidate the results of a fielded survey, but the potential limitations arising from its absence should be discussed in any report.
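
As one illustration, internal consistency is often summarized with Cronbach's alpha. A minimal sketch, using hypothetical Likert-scale responses:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    item_scores: one inner list per item, each holding that item's
    score for every respondent (same respondent order in every list).
    """
    k = len(item_scores)
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent total
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 5-point Likert responses: 3 items x 6 respondents.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```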

Surveys often ask the respondent to describe past behaviors (for example, medications used in the past week). Whenever responses are based on recall, the time frame that forms the basis of the desired response should be clear to the respondent and included in the description of the survey. In general, the shorter the recall period required of the respondent, the more reliable the responses can be considered.

Any ethical issues germane to the study should be articulated, such as the study of vulnerable populations or the use of potentially stressful questions. Approval of the study by the appropriate Human Subjects Review Committee should be confirmed, as applicable. In this context, sources of funding and any potential conflicts of interest should be enumerated in the manuscript.

DESCRIBE HOW THE SURVEY WAS ADMINISTERED

It should be clear how the survey administrator interacted with participants, for example in responding to participants’ queries. Time limitations and the environment in which the surveys were completed are also important for understanding potential biases in responses. The spoken language of participants and the language used in the survey are important, particularly in international research. Verbal presentation vs. paper-and-pencil vs. computer administration may affect responses differently across cohorts, and the mode used must therefore be clearly stated.

The manuscript should also state explicitly the time period during which the survey was conducted. Because survey results may be used by others, including for comparative purposes, and because both the profile of survey participants and their responses are likely time dependent3,11, this anchoring information is essential.

PRESENT THE DATA IN AN UNBIASED MANNER

In general, survey results should be presented completely and objectively. Results from multiple choice, quantitative, visual analog scale, or Likert-scale questions can be readily summarized using response rates or descriptive statistics such as the mean, range, or standard deviation, as appropriate. Distribution histograms or discrete result tabulations can provide increased granularity for key items. Confidence intervals around the study’s point estimates can convey the uncertainty in the estimates attributable to the sample size, but not that arising from other sources of experimental variability.
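
As a worked example of this sample-size-based uncertainty, the sketch below computes a 95% Wilson score interval for a response proportion; the counts are hypothetical, and the Wilson interval is one of several reasonable choices for proportions.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - halfwidth, center + halfwidth

# Hypothetical item: 120 of 400 respondents chose a given option.
lo, hi = wilson_ci(120, 400)
print(f"30.0% (95% CI {lo:.1%} to {hi:.1%})")
```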

Open-ended questions present a challenge when summarizing results. Optimally, categories for open-ended responses are defined prospectively, which then facilitates reporting. Importantly, the methods for adjudicating the assignment of open-ended responses to pre-defined categories should be as unbiased as possible. The introduction of bias can be subtle: for example, if there are ‘good’ and ‘bad’ possible responses, strict criteria for categorizing a response as ‘bad’ will, by default, increase the number of ‘good’ responses. Once categories for scoring open-ended questions have been developed, it is often useful to have two or more independent referees score the responses. Where scoring discrepancies exist, resolution methods should be defined (for example, discussion to consensus among referees, use of the majority score, or referral to an additional referee).
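
When two independent referees score the responses, a chance-corrected agreement statistic such as Cohen's kappa can document how consistently the categories were applied before discrepancies are resolved. A minimal sketch, with hypothetical categorizations:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - chance-expected agreement) / (1 - expected)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical categorizations of 10 open-ended responses by two referees.
ref1 = ["good", "bad", "good", "good", "bad", "good", "bad", "good", "good", "bad"]
ref2 = ["good", "bad", "good", "bad",  "bad", "good", "bad", "good", "good", "good"]
print(f"kappa = {cohens_kappa(ref1, ref2):.2f}")  # 0.80 raw agreement, kappa ~0.58
```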

A similar bias can be introduced when characterizing responses to multiple-choice items. If five options are offered to the respondent, one considered ‘good’ and four negative or ‘bad’, post-administration grouping of the responses as good vs. bad may be biased toward a ‘bad’ characterization. Full reporting of the ungrouped responses in this situation therefore allows an unbiased description of the results.

Most survey-based studies face the challenge of missing data. Data may be missing because a participant did not respond to a question or because the response could not be interpreted. When surveys are administered to subjects on multiple occasions over time, some subjects may miss follow-up sessions and thus not complete one or more post-baseline assessments. How missing data will be handled should be defined prospectively, particularly when comparisons between groups or over time are planned. To ensure clarity, the denominator (total number of responses) should be provided whenever data are presented, and any deviation of the denominator from the number of subjects recruited should be explained.
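
A minimal sketch of such per-item denominator reporting; the responses are hypothetical, with None marking a missing or uninterpretable answer:

```python
# Each dict holds one respondent's answers; None marks a missing answer.
responses = [
    {"q1": "yes", "q2": "no"},
    {"q1": "no",  "q2": None},
    {"q1": "yes", "q2": "yes"},
    {"q1": None,  "q2": "no"},
]

recruited = len(responses)
for item in ("q1", "q2"):
    answered = [r[item] for r in responses if r.get(item) is not None]
    n = len(answered)                       # item-specific denominator
    yes = sum(a == "yes" for a in answered)
    print(f"{item}: {yes}/{n} ({yes/n:.0%}) responded 'yes'; "
          f"{recruited - n} of {recruited} recruited did not answer")
```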

Unless the study predefined a specific hypothesis and testing strategy, formal statistical testing should be minimized in the reporting of survey results. When numerical differences are observed between groups of respondents there is an obvious temptation to test whether these differences are ‘significant’. However, this post-hoc use of statistical inference is fraught with risk, as numerical imbalances will inevitably be observed. Because survey instruments typically contain many different items, the selective use of statistical testing, without recognition of the implications of multiple testing and of the biased, data-driven (that is, based on the observed difference) application of the test, risks misinterpretation of the results. If this type of statistical testing is employed, it should be acknowledged as descriptive or hypothesis-generating rather than formal inferential testing, and a statement should be made as to whether adjustments were or were not made for multiple comparisons.
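
If an adjustment for multiple comparisons is made, the Holm-Bonferroni step-down procedure is one common choice. A minimal sketch, with hypothetical p-values:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down adjustment: test p-values in ascending
    order against alpha/(m), alpha/(m-1), ... and stop at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values also fail
    return rejected

# Hypothetical p-values from post-hoc comparisons across survey items.
pvals = [0.003, 0.04, 0.02, 0.30]
print(holm_bonferroni(pvals))  # [True, False, False, False]
```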

DISCUSS THE DATA IN THE CONTEXT OF THE FIELD AND THE STUDY’S LIMITATIONS

Few surveys are conducted in an intellectual vacuum, and it is therefore important that survey results be discussed in a manner that integrates the findings with previous work and with the larger issues the survey addresses. This includes citing relevant prior work and comparing and contrasting the current and previous findings. Importantly, the impact of methodological differences should be discussed. All studies have limitations, and a critical discussion of a study’s limitations not only increases the likelihood that the results will be properly utilized but also enhances the credibility of the authors.

CONCLUSIONS

Studies employing survey instruments and other forms of qualitative research provide important information to the academic, business, regulatory, and policy communities. However, misinterpretation of study results may easily occur. Authors have a responsibility to employ best practices when conducting and publishing their work to ensure the maximum utility of their research.
Correspondence to: Eric P. Brass, M.D., Ph.D. Center for Clinical Pharmacology, Harbor-UCLA Medical Center, 1124 W. Carson Street, Torrance, CA 90502. Phone: 310-222-4050  email: ebrass@ucla.edu

Disclosures: The author is a consultant to several pharmaceutical companies on issues related to nonprescription and prescription drug development.

References

  1. Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ. 2008;337:a1035.
  2. Dixon-Woods M, Shaw RL, Agarwal S, Smith JA. The problem of appraising qualitative research. Qual Saf Health Care. 2004;13:223-225. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1743851/?tool=pubmed
  3. Stang A. Appropriate epidemiologic methods as a prerequisite for valid study results. Eur J Epidemiol. 2008;23:761-765. https://www.ncbi.nlm.nih.gov/pubmed/19016334
  4. Rubenfeld GD. Surveys: an introduction. Respir Care. 2004;49:1181-1185. https://www.ncbi.nlm.nih.gov/pubmed/15447800
  5. Boynton PM, Greenhalgh T. Selecting, designing, and developing your questionnaire. BMJ. 2004;328:1312-1315. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC420179/?tool=pubmed
  6. Boynton PM. Administering, analysing, and reporting your questionnaire. BMJ. 2004;328:1372-1375.
  7. American Association for Public Opinion Research. Best Practices. https://www.aapor.org/Best_Practices/1480.htm (Accessed July 8, 2010).
  8. Richiardi L, Baussano I, Vizzini L, Douwes J, Pearce N, Merletti F. Feasibility of recruiting a birth cohort through the Internet: the experience of the NINFEA cohort. Eur J Epidemiol. 2007;22:831-837.
  9. Gazmararian JA, Baker DW, Williams MV, et al. Health literacy among Medicare enrollees in a managed care organization. JAMA. 1999;281:545-551. https://www.ncbi.nlm.nih.gov/pubmed/10022111
  10. Davis TC, Federman AD, Bass PF 3rd, et al. Improving patient understanding of prescription drug label instructions. J Gen Intern Med. 2009;24:57-62. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2607498/?tool=pubmed
  11. Tolonen H, Helakorpi S, Talala K, Helasoja V, Martelin T, Prattala R. 25-year trends and socio-demographic differences in response rates: Finnish adult health behaviour survey. Eur J Epidemiol. 2006;21:409-415.
