
The function of the Results section is to summarize general trends in the data without comment, bias, or interpretation. The results of statistical tests applied to your data are reported in this section, although conclusions about your original hypotheses are saved for the Discussion section.

Tables and figures should be used when they are a more efficient way to convey information than verbal description. They must be independent units, accompanied by explanatory captions that allow them to be understood by someone who has not read the text. Do not repeat in the text the information in tables and figures, but do cite them, with a summary statement when that is appropriate.  Example:

Incorrect: The results are given in Figure 1.

Correct: Temperature was directly proportional to metabolic rate (Fig. 1).

Please note that the word "Figure" is almost never written out in full in an article; it is nearly always abbreviated as "Fig." and capitalized. Tables are cited in the same way, although "Table" is not abbreviated.

Whenever possible, use a figure instead of a table. Relationships between numbers are more readily grasped when they are presented graphically rather than as columns in a table.
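For instance, the temperature example above could be shown as a simple plot rather than as columns of numbers. The following is a minimal sketch in Python using matplotlib, with entirely hypothetical values, of one way such a figure might be prepared:

```python
# Minimal sketch: plotting a relationship (hypothetical data) instead of tabulating it.
import matplotlib.pyplot as plt

temperature_c = [10, 15, 20, 25, 30]         # hypothetical temperatures (°C)
metabolic_rate = [2.1, 3.0, 4.2, 5.1, 6.3]   # hypothetical metabolic rates (arbitrary units)

fig, ax = plt.subplots()
ax.plot(temperature_c, metabolic_rate, "o-")
ax.set_xlabel("Temperature (°C)")
ax.set_ylabel("Metabolic rate (arbitrary units)")
fig.savefig("fig1_metabolic_rate.png", dpi=300)  # export for use as Fig. 1
```

The trend is immediately visible in the plot, whereas the same five pairs of numbers in a table would have to be read and compared one by one.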

Data may be presented in figures and tables, but this is not a substitute for a verbal summary of the findings. The text should be understandable by someone who has not seen your figures and tables.

1. All results should be presented, including those that do not support the hypothesis.

2. Statements made in the text must be supported by the results contained in figures and tables.

3. The results of statistical tests can be presented in parentheses following a verbal description.

Example: Fruit size was significantly greater in trees growing alone (t = 3.65, df = 2, p < 0.05).

Simple results of statistical tests may be reported in the text as shown in the preceding example. The results of multiple tests may be reported in a table if that increases clarity. (See Section 11 of the Statistics Manual for more details about reporting the results of statistical tests.) It is not necessary to provide a citation for a simple t-test of means, paired t-test, or linear regression. If you use other tests, you should cite the text or reference you followed to do the test. In your Materials and Methods section, you should report how you performed the test (e.g., using the statistical analysis tools in Excel).
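As an illustration only, the fruit-size example above could be analyzed and formatted for in-text reporting with a short script. The sketch below uses Python and SciPy with hypothetical measurements; it is not a required procedure, and an Excel-based analysis, as mentioned above, is equally acceptable.

```python
# Minimal sketch: a two-sample t-test on hypothetical fruit-size data,
# formatted for in-text reporting rather than pasted as raw software output.
from scipy import stats

solitary_trees = [7.1, 8.4, 7.9, 8.0]   # hypothetical fruit sizes (cm), trees growing alone
grouped_trees = [5.2, 6.0, 5.5, 5.8]    # hypothetical fruit sizes (cm), trees growing in groups

result = stats.ttest_ind(solitary_trees, grouped_trees)   # pooled (equal-variance) t-test
df = len(solitary_trees) + len(grouped_trees) - 2          # degrees of freedom for the pooled test

print(f"(t = {result.statistic:.2f}, df = {df}, p = {result.pvalue:.3f})")
```

Only the concise parenthetical statement produced at the end belongs in the text, following a verbal description of the finding.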

It is NEVER appropriate to simply paste the output from statistical software into the Results section of your paper. The output generally reports more information than is required, and it is not in a format appropriate for a paper.

This is the second of a two-part series on qualitative research. Part 1, in the December 2011 issue of the Journal of Graduate Medical Education, provided an introduction to the topic, compared characteristics of quantitative and qualitative research, identified common data collection approaches, and briefly described data analysis and quality assessment techniques. Part 2 describes in more detail the specific techniques and methods used to select participants, analyze data, and ensure research quality and rigor.

If you are relatively new to qualitative research, some references you may find especially helpful are provided below. The two texts by Creswell 2008 and 2009 are clear and practical.1,2 In 2008, the British Medical Journal offered a series of short essays on qualitative research; the references provided are easily read and digested.3–8 For those wishing to pursue qualitative research in more detail, a suggestion is to start with the appropriate chapters in Creswell 2008,1 and then move to the other suggested texts.9–11

To summarize the previous editorial, while quantitative research focuses predominantly on the impact of an intervention and generally answers questions like “did it work?” and “what was the outcome?”, qualitative research focuses on understanding the intervention or phenomenon and exploring questions like “why was this effective or not?” and “how is this helpful for learning?” The intent of qualitative research is to contribute to understanding. Hence, the research procedures for selecting participants, analyzing data, and ensuring research rigor differ from those for quantitative research. The following sections address these approaches. Table 1 provides a comparative summary of methodological approaches for quantitative and qualitative research.

Data collection methods most commonly used in qualitative research are individual or group interviews (including focus groups), observation, and document review. They can be used alone or in combination. While the following sections are written in the context of using interviews or focus groups to collect data, the principles described for sample selection, data analysis, and quality assurance are applicable across qualitative approaches.

Quantitative research requires standardization of procedures and random selection of participants to remove the potential influence of external variables and ensure generalizability of results. In contrast, subject selection in qualitative research is purposeful; participants are selected who can best inform the research questions and enhance understanding of the phenomenon under study.1,8 Hence, one of the most important tasks in the study design phase is to identify appropriate participants. Decisions regarding selection are based on the research questions, theoretical perspectives, and evidence informing the study.

The subjects sampled must be able to inform important facets and perspectives related to the phenomenon being studied. For example, in a study looking at a professionalism intervention, representative participants could be considered by role (residents and faculty), perspective (those who approve or disapprove of the intervention), experience level (junior and senior residents), and/or diversity (gender, ethnicity, other background).

The second consideration is sample size. Quantitative research requires statistical calculation of sample size a priori to ensure sufficient power to confirm that the outcome can indeed be attributed to the intervention. In qualitative research, however, the sample size is not generally predetermined. The number of participants depends on the number required to fully inform all important elements of the phenomenon being studied. That is, the sample size is sufficient when additional interviews or focus groups do not result in identification of new concepts, an end point called data saturation. To determine when data saturation occurs, analysis ideally occurs concurrently with data collection in an iterative cycle. This allows the researcher to document the emergence of new themes and also to identify perspectives that might otherwise be overlooked. In the professionalism intervention example, as data are analyzed, the researchers may note that only positive experiences and views are being reported. At that point, a decision could be made to identify and recruit residents who perceived the experience as less positive.
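Although saturation is a judgment rather than a calculation, a simple tally can help document it. The sketch below is a hypothetical illustration in Python of tracking how many new codes each successive interview contributes and flagging when two interviews in a row add none; the interviews, codes, and two-interview threshold are invented for illustration, not a prescribed rule.

```python
# Hypothetical illustration: monitoring data saturation during iterative analysis
# by recording how many previously unseen codes each new interview contributes.
from typing import Dict, List, Set

coded_interviews: Dict[str, List[str]] = {           # invented interview codes
    "interview_01": ["faculty_engagement", "real_cases", "time_to_reflect"],
    "interview_02": ["real_cases", "peer_views", "faculty_engagement"],
    "interview_03": ["peer_views", "time_to_reflect"],
    "interview_04": ["faculty_engagement", "real_cases"],
}

seen: Set[str] = set()
interviews_without_new_codes = 0
for name, codes in coded_interviews.items():
    new_codes = set(codes) - seen                    # codes not identified in earlier interviews
    seen.update(new_codes)
    interviews_without_new_codes = 0 if new_codes else interviews_without_new_codes + 1
    print(f"{name}: {len(new_codes)} new code(s)")
    if interviews_without_new_codes >= 2:            # illustrative stopping point, not a rule
        print("No new codes in two consecutive interviews; saturation may be near.")
        break
```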

The purpose of qualitative analysis is to interpret the data and the resulting themes, to facilitate understanding of the phenomenon being studied. It is often confused with content analysis, which is conducted to identify and describe results.12 In the professionalism intervention example, content analysis of responses might report that residents identified the positive elements of the innovation to be integration with real patient cases, opportunity to hear the views of others, and time to reflect on one's own professionalism. An interpretive analysis, on the other hand, would seek to understand these responses by asking questions such as, “Were there conditions that most frequently elicited these positive responses?” Further interpretive analysis might show that faculty engagement influenced the positive responses, with more positive features being described by residents who had faculty who openly reflected upon their own professionalism or who asked probing questions about the cases. This interpretation can lead to a deeper understanding of the results and to new ideas or theories about relationships and/or about how and why the innovation was or was not effective.

Interpretive analysis is generally seen as being conducted in 3 stages: deconstruction, interpretation, and reconstruction.11 These stages occur after preparing the data for analysis, ie, after transcription of the interviews or focus groups and verification of the transcripts with the recording.

  1. Deconstruction refers to breaking down data into component parts in order to see what is included. It is similar to content analysis mentioned above. It requires reading and rereading interview or focus group transcripts and then breaking down data into categories or codes that describe the content.

  2. Interpretation follows deconstruction and refers to making sense of and understanding the coded data. It involves comparing data codes and categories within and across transcripts and across variables deemed important to the study (eg, year of residency, discipline, engagement of faculty). Techniques for interpreting data and findings include discussion and comparison of codes among research team members while purposefully looking for similarities and differences among themes, comparing findings with those of other studies, exploring theories that might explain relationships among themes, and exploring negative results (those that do not confirm the dominant themes) in more detail. (A brief illustrative sketch of the deconstruction and interpretation steps follows this list.)

  3. Reconstruction refers to recreating or repackaging the prominent codes and themes in a manner that shows the relationships and insights derived in the interpretation phase and that explains them more broadly in light of existing knowledge and theoretical perspectives. Generally, one or two concepts will emerge as central or overarching, and others will appear as subthemes that further contribute to the central concepts. Reconstruction requires contextualizing the findings, ie, positioning and framing them within existing theory, evidence, and practice.
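As a purely illustrative sketch, and not a substitute for the interpretive judgment described above, the deconstruction and interpretation steps can be pictured as coded transcript segments that are tallied and then compared across a study variable such as residency level. All codes and participants below are hypothetical.

```python
# Hypothetical illustration: codes produced during deconstruction are compared
# across a study variable (junior vs senior residents) during interpretation.
from collections import Counter
from typing import List, Tuple

coded_segments: List[Tuple[str, str]] = [   # (code, residency level) pairs, all invented
    ("faculty_engagement", "junior"),
    ("real_cases", "junior"),
    ("faculty_engagement", "senior"),
    ("time_to_reflect", "senior"),
    ("faculty_engagement", "senior"),
    ("peer_views", "junior"),
]

# Interpretation aid: how often does each code appear within each group?
for level in ("junior", "senior"):
    counts = Counter(code for code, lvl in coded_segments if lvl == level)
    print(level, dict(counts))
```

Such a comparison might prompt the kind of interpretive question raised earlier, for example whether faculty engagement is mentioned more often by one group than the other, and why.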

Within qualitative research, two main strategies promote the rigor and quality of the research: ensuring the quality or “authenticity” of the data and the quality or “trustworthiness” of the analysis.8,12 These are similar in many ways to ensuring validity and reliability, respectively, in quantitative research.

 1. Authenticity of the data refers to the quality of the data and data collection procedures. Elements to consider include:

  • Sampling approach and participant selection to enable the research question to be addressed appropriately (see “Selecting Participants” above) and reduce the risk of a biased sample.

  • Data triangulation refers to using multiple data sources to produce a more comprehensive view of the phenomenon being studied, eg, interviewing both residents and faculty and using multiple residency sites and/or disciplines.

  • Using the appropriate method to answer the research questions, considering the nature of the topic being explored, eg, individual interviews rather than focus groups are generally more appropriate for topics of a sensitive nature.

  • Using interview and other guides that are not biased or leading, ie, that do not ask questions in a way that may lead the participant to answer in a particular manner.

  • The researcher's and research team's relationships to the study setting and participants need to be explicit, eg, describe the potential for coercion when a faculty member requests his or her own residents to participate in a study.

  • The researcher's and team members' own biases and beliefs relative to the phenomenon under study must be made explicit, and, when necessary, appropriate steps must be taken to reduce their impact on the quality of data collected, eg, by selecting a neutral “third party” interviewer.

 2. Trustworthiness of the analysis refers to the quality of data analysis. Elements to consider when assessing the quality of analysis include:

  • Analysis process: is this clearly described, eg, the roles of the team members, what was done, timing, and sequencing? Is it clear how the data codes or categories were developed? Does the process reflect best practices, eg, comparison of findings within and among transcripts, and use of memos to record decision points?

  • Procedure for resolving differences in findings among team members: this needs to be clearly described.

  • Process for addressing the potential influence the researchers' views and beliefs may have upon the analysis.

  • Use of a qualitative analysis software program: if one was used, how was it used?

In summary, this editorial has addressed 3 components of conducting qualitative research: selecting participants, performing data analysis, and assuring research rigor and quality. See Table 2 for the key elements for each of these topics.

JGME editors look forward to reading medical education papers employing qualitative methods and perspectives. We trust these two editorials may be helpful to potential authors and readers, and we welcome your comments on this subject.

Joan Sargeant, PhD, is Professor in the Division of Medical Education, Dalhousie University, Halifax, Nova Scotia, Canada.

1. Creswell JW. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. 3rd ed. Thousand Oaks, CA: Sage; 2009.

2. Creswell JW. Educational Research: Planning, Conducting and Evaluating Quantitative and Qualitative Research. 3rd ed. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall; 2008.

3. Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ. 2008;337:a288.

4. Lingard L, Albert M, Levinson W. Grounded theory, mixed methods, and action research. BMJ. 2008;337:a567.

5. Hodges BD, Kuper A, Reeves S. Discourse analysis. BMJ. 2008;337:a879.

6. Reeves S, Albert M, Kuper A, Hodges BD. Why use theories in qualitative research? BMJ. 2008;337:a949.

7. Reeves S, Kuper A, Hodges BD. Qualitative research methodologies: ethnography. BMJ. 2008;337:a1020.

8. Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ. 2008;337:a1035.

9. Corbin J, Strauss A. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 3rd ed. Thousand Oaks, CA: Sage; 2008.

10. Liamputtong P, Ezzy D. Qualitative Research Methods. 3rd ed. Victoria, Australia: Oxford University Press; 2009.

11. Miles MB, Huberman AM. Qualitative Data Analysis. 2nd ed. Thousand Oaks, CA: Sage; 1994.

12. Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.