A Brief Introduction to Grounded Theory Research Methods
Grounded Theory approaches to research are so called because contributions to knowledge are not generated from existing theory, but are grounded in the data collected from one or more empirical studies. In this chapter, I have described Grounded Theory as an approach rather than a method, as there are many alternative methods that may be employed. Figure 1 presents a guiding process for Grounded Theory, adapted from Lowe (1995), Pidgeon and Henwood (1996), and Dey (1999). This process model is described as reflexive because it centers on surfacing and making explicit the influences and inductive processes of the researcher.
Figure 1: A Reflexive, Grounded Theory Approach.
The Grounded Theory approach (Glaser, 1978, 1992; Glaser & Strauss, 1967; Strauss, 1987; Strauss & Corbin, 1998) is designed "to develop and integrate a set of ideas and hypotheses in an integrated theory that accounts for behavior in any substantive area" (Lowe, 1996, p. 1). In other words, a Grounded Theory approach involves the generation of emergent theory from empirical data. A variety of data collection methods may be employed, such as interviews, participant observation, experimentation, and indirect data collection (for example, from service log reports or help desk e-mails).
The uniqueness of the Grounded Theory approach lies in two elements (Glaser, 1978, 1992; Strauss & Corbin, 1998):
Theory is based upon patterns found in empirical data, not from inferences, prejudices, or the association of ideas.
There is constant comparison between emergent theory (codes and constructs) and new data. Constant comparison confirms that theoretical constructs are found across and between data samples, driving the collection of additional data until the researcher feels that "theoretical saturation" (the point of diminishing returns from any new analysis) has been reached.
There is not space in this chapter for a thorough introduction to all of the many techniques for Grounded Theory analysis. The Grounded Theory approach is complex and is ultimately learned through practice rather than prescription. However, there are some general principles that characterize the approach, and these are summarized here. Some very insightful descriptions of how to perform a Grounded Theory analysis are provided by Lowe (1995, 1996, 1998) and Urquhart (1999, 2000). Most descriptions of Grounded Theory analysis employ Strauss's (1987; Strauss & Corbin, 1998) three stages of coding: open, axial, and selective coding. These stages gradually refine the relationships between emerging elements in collected data that might constitute a theory.
Data Collection
Initial data collection in interpretive, qualitative field studies is normally conducted through interviewing or observation. Interviews or recorded (audio or video) interactions and incidents are transcribed: written in text format or captured in a form amenable to the identification of sub-elements (for example, video may be analyzed second by second). Elements of the transcribed data are then coded into categories of what is being observed.
Open Coding
Data is "coded" by classifying elements of the data into themes or categories and looking for patterns between categories (commonality, association, implied causality, etc.). Coding starts with a vague understanding of the sorts of categories that might be relevant ("open" codes). Initial coding will have been informed by some literature reading, although Glaser and Strauss (1967) and Glaser (1978) argue that a researcher should avoid the literature most closely related to the subject of the research because reading this will sensitize the researcher to look for concepts related to existing theory and thus limit innovation in coding their data. Rather, the researcher should generate what Lowe (1995) calls a "topic guide" to direct initial coding of themes and categories based upon elements of their initial research questions. Glaser (1978) provides three questions to be used in generating open codes:
What is this data a study of?
What category does this incident indicate?
What is actually happening in the data? (p. 57)
For example, in studying IS design processes, I was interested in how members of the design group jointly constructed a design problem and defined a systems solution. So my initial coding scheme used five levels of problem decomposition to code transcripts of group meetings: (1) high-level problem or change-goal definition, (2) problem subcomponent, (3) system solution definition, (4) solution subcomponent, and (5) solution implementation mechanism. I then derived a set of codes to describe how these problem-level constructs were used by group members in their discussions. From this coding, more refined codes emerged to describe the design process.
The unit of analysis (element of transcribed data) to which a code is assigned may be a sentence, a line from a transcript, a speech interaction, a physical action, a one-second sequence in a video, or a combination of elements such as these. It is important to clarify exactly what we intend to examine in the analysis and to choose the level of granularity accordingly. For example, if we are trying to derive a theory of collective decision-making, then analyzing parts of sentences that indicate an understanding, misunderstanding, agreement, disagreement, etc. may provide a relevant level of granularity, whereas analyzing a transcript by whole sentences may not. A useful way to start is to perform a line-by-line analysis of the transcribed data and to follow Lowe (1996), who advises that the gerund form of verbs (ending in -ing) should be used to label each identified theme, to "sensitize the researcher to the processes and patterns which may be revealed at each stage" (Lowe, 1996, p. 8). Strauss (1987) suggests that the researcher should differentiate between in vivo codes, which are derived from the language and terminology used by subjects in the study, and scientific constructs, which derive from the researcher's scholarly knowledge and understanding of the (disciplinary, literature-based) field being studied. This is a helpful way of distinguishing constructs that emerge from the data from constructs that are imposed on the data by our preconceptions of what we are looking for.
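For readers who like to keep their coding records in software, the following sketch suggests one very simple way of recording open codes against transcribed segments. It is a minimal illustration only, written here in Python; the class and field names (OpenCode, Segment, and so on) are hypothetical and do not correspond to any qualitative analysis package, but the gerund-form labels and the distinction between in vivo codes and scientific constructs follow the discussion above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class OpenCode:
        label: str        # gerund-form theme label, e.g., "negotiating scope"
        in_vivo: bool     # True if the label uses the subjects' own language
        note: str = ""    # brief reminder of why the code was assigned

    @dataclass
    class Segment:
        source: str       # e.g., "design meeting 3, line 42"
        text: str         # the unit of analysis (line, utterance, action)
        codes: List[OpenCode] = field(default_factory=list)

    # Line-by-line open coding of a (hypothetical) design-meeting transcript
    segment = Segment(
        source="design meeting 3, line 42",
        text="We can't fix the reporting module until we know what managers actually need.",
    )
    segment.codes.append(OpenCode(label="defining the problem", in_vivo=False))
    segment.codes.append(OpenCode(label="hearing 'what managers actually need'", in_vivo=True))

However the records are kept, the essential point is that every code remains traceable to the segment of data that grounded it.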
Axial Coding
Axial coding is the search for relationships between coded elements of the data. Substantive theories emerge through an examination of similarities and differences in these relationships, between different categories (or subcategories), and between categories and their related properties. Strauss (1987) suggests that axial coding should examine elements such as antecedent conditions, interaction among subjects, strategies, tactics, and consequences. Strauss and Corbin (1998) liken this process to fitting the parts of a jigsaw puzzle together. They argue that, by asking the questions who, when, where, why, how, and with what consequences, the researcher can relate structure to process. Glaser (1978) suggests applying the "six Cs": causes, contexts, contingencies, consequences, covariances, and conditions. Whichever approach is taken (we are not limited to just one), we should carefully note the emergence of insights and explicitly reflect on how these insights are bounding the research problem through selecting some categories and not others. This can be achieved through the generation of theoretical memos.
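To make the idea of relating categories concrete, the sketch below records a single axial-coding relationship, using Glaser's six Cs as the permitted relationship types. It is an illustrative fragment only: the CategoryRelation structure and the example category names are invented for this chapter rather than drawn from any analysis tool, and a researcher could equally label the links using Strauss and Corbin's who, when, where, why, how, and with-what-consequences questions.

    from dataclasses import dataclass
    from typing import List

    # Glaser's (1978) "six Cs" used here as the permitted relationship types
    SIX_CS = {"cause", "context", "contingency", "consequence", "covariance", "condition"}

    @dataclass
    class CategoryRelation:
        from_category: str
        to_category: str
        relation: str            # one of the six Cs
        evidence: List[str]      # identifiers of the segments supporting the link

        def __post_init__(self):
            if self.relation not in SIX_CS:
                raise ValueError(f"unknown relation type: {self.relation}")

    # Example: a relationship noticed while comparing coded design-meeting data
    link = CategoryRelation(
        from_category="defining the problem",
        to_category="proposing a system solution",
        relation="condition",
        evidence=["design meeting 3, lines 40-55"],
    )

Whether such links are kept in software or on index cards, each one should point back to the data segments that support it, so that it can be revisited during constant comparison.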
Theoretical Memos
Theoretical memos "are the theorizing write-up of ideas about codes and their relationships as they strike the analyst while coding" (Glaser, 1978, p. 83). They reflect emerging ideas concerning relationships between data categories, new categories, and properties of these categories, cross-category insights into the process, mention of relevant examples from the literature, and many other reflections. They provide a way to capture those insights that we want to explore further and should be treated as a resource, triggering further constant comparison. Glaser (1978) recommends that a researcher should always interrupt coding to memo an idea that has just occurred to them. But constructs and relationships noted in theoretical memos must be related to other data in other samples for verification. At the end of the day, theoretical insights must be supported by further data analysis, or there is no theory—just speculation.
Selective Coding
"Selective coding is the process of integrating and refining categories" (Strauss & Corbin, 1998, p. 143) so that "categories are related to the core category, ultimately becoming the basis for the Grounded Theory" (Babchuk, 1996). Glaser (1992) emphasizes the importance of "core" categories: categories which lie at the core of the theory being developed and "explain most of the variation in a pattern of behavior" (p. 75). The Grounded Theory analysis process often involves moving up and down levels of analysis, to understand one core category at a time (Lowe, 1996). It is important to explicitly state the research analysis objectives before and during coding. Detailed objectives of the analysis—as distinct from the overall research problem—may well change as emerging insights become significant. A search for different types of theoretical models will lead to different category structures. For example, a process model involves stages of action, so the core categories would reflect these stages, with subcategories and properties reflecting elements such as process stage-triggers, or states by which it is judged that the process is ended. A factor model, on the other hand, would focus on cause and effect: core categories that reflect antecedent conditions, influences on, and consequences of the construct being explored.
Research Iteration and Constant Comparison
Unlike more pre-designed research, data collection and analysis are interrelated: the analyst "jointly collects, codes and analyzes his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges" (Glaser & Strauss, 1967, p. 45). This process is referred to as "theoretical sampling" (Glaser & Strauss, 1967). Grounded Theory generation is highly iterative, constantly cycling between coding, synthesis, and data collection. The generation of theory is achieved through constant comparison of theoretical constructs with data collected from new studies. Constant comparison lies at the heart of the Grounded Theory approach and differentiates a rigorous Grounded Theory analysis from inductive guesswork. The researcher must continually ask whether the analysis of new data provides similar themes and categories to previous data or whether other patterns emerge. Constant comparison requires continual research into the meaning of the developing categories through further data collection and analysis. The researcher may interview new respondents, study the situation in a different group of people, or observe the same group over a different period of time. As the analysis proceeds, new themes and relationships emerge, and the researcher will find themselves recoding earlier data and reconceptualizing relationships between data elements. Urquhart (1999) provides an especially useful description of how codes and categories evolve and change to reflect reconceptualizations of core theoretical elements. Some of the ideas or relationships that constitute a part of the theory may originate from other sources, such as insights from readings or a eureka flash of inspiration. Strauss and Corbin (1998) also suggest that literature (such as reports of other studies) may be used as a source of data for analysis. Whatever the source of the inspiration, Glaser and Strauss (1967) note that:
"The generation of theory from such insights must then be brought into relation with the data, or there is great danger that theory and empirical world will mismatch" (p. 6).
Grounded Theory closure is guided by the concept of saturation. Theoretical saturation is reached when diminishing returns from each new analysis mean that no new themes, categories, or relationships are emerging and new data confirm findings from previous data. At this point, it should be possible to abstract a formal theory from the findings.
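The overall cycle of theoretical sampling, coding, and constant comparison can also be summarized procedurally, as in the sketch below. The callables passed to the function stand for manual research activities (collecting data, coding and comparing it, integrating insights, choosing the next sample); they are placeholders rather than a real programming interface, and the loop exists only to make the stopping condition, theoretical saturation, explicit.

    # A schematic of the collect-code-compare cycle. The arguments are
    # placeholders for manual research activities, not a software API.

    def grounded_theory_cycle(collect_data, code_and_compare, integrate,
                              choose_next_sample, initial_sample, max_rounds=20):
        categories = {}                  # emerging categories and relationships
        sample = initial_sample
        for _ in range(max_rounds):
            data = collect_data(sample)
            new_insights = code_and_compare(data, categories)   # constant comparison
            if not new_insights:         # no new themes, categories, or relations:
                break                    # theoretical saturation has been reached
            categories = integrate(categories, new_insights)    # may mean recoding earlier data
            sample = choose_next_sample(categories)             # theoretical sampling
        return categories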
Progress from Substantive to Formal Theory
Glaser and Strauss (1967) differentiate substantive theory from formal theory by associating substantive theory generation with empirical research, whereas formal theory is associated with theoretical or conceptual work. Substantive theories are seen as emergent: by saturating oneself in the analysis of appropriate data, with the direction and quantity of data collection driven by emerging patterns in the data rather than by a predetermined research "design," one can generate original theories concerning human behavior (Glaser & Strauss, 1967). The ultimate end of Grounded Theory research, however, is to generate formal theories: theories that may be generalizable at an abstract level. A formal theory can only emerge from sufficient data analysis in sufficient cases for the researcher to be sure that they are not merely describing the case in a single situation. A single Grounded Theory research study would not be expected to generate formal theory. Formal theory emerges over time (Glaser, 1978) and with reflection (Strauss & Corbin, 1998). It derives from the conceptual abstraction of a substantive theory across multiple research studies.
So the process of Grounded Theory analysis moves:
from an open coding of data to axial coding through the identification of core categories of the data,
through the use of theoretical memos to capture insights on how categories are related,
to the analysis of "networks" of interactions between categories (and their properties),
to the construction of substantive theory, through a rigorous analysis of how core categories (and network models) fit with new data.
Over a period of time (often years), enough studies may be conducted to justify the proposal of a formal theory.