Issues, Controversies and Problems of Grounded Theory
Inductive Theory Generation and Emergence
One of the major criticisms of Grounded Theory is that it is not "scientific" (deductive) in its analysis of the data but based on inductive conclusions from a superficial analysis of collected data. But research in psychology tells us that all human reasoning is a balance of deductive and inductive reasoning (Simon, 1957). It is through inductive inference, based on our experience of the world, that we survive. If we put our hand on a hot stove and are burned, we learn that hot stoves will burn us. It is then through deduction from empirical evidence that we can identify and avoid hot stoves (this object has the expected shape of a stove and it is turned on). Learning depends upon inductive-deductive cycles of analytical thinking.
So inductive research techniques are not indefensible per se. In fact, they form the basis for most of the qualitative coding methods used, for example, in qualitative case study analysis. Inductive analysis is treated as suspect because it introduces subjectivity into research, so the findings can be challenged, from a positivist perspective, as not measured from, but subjectively associated with, the situation observed. Strauss and Corbin (1998) recognize the role of inductive reasoning in Grounded Theory generation and deal with it as follows:
We are deducing what is going on based on data but also based on our reading of that data along with our assumptions about the nature of life, the literature that we carry in our heads, and the discussion that we have with colleagues. (This is how science is born.) In fact, there is an interplay between induction and deduction (as in all science). … This is why we feel that it is important that the analyst validate his or her interpretations through constantly comparing one piece of data to another. (pp. 136–137)
The use of constant comparison between emerging theoretical constructs and new data can be used to switch from inductive to deductive thinking to "validate" our constructs. But, as Glaser (1992) observes, there are two parts to constant comparison. The first is to constantly compare incident to incident and incident to theoretical concept. The second is to ask the "neutral" coding question: "What category or property of a category does this incident suggest?" (Glaser, 1992, p. 39). From the use of the word "neutral," Glaser obviously views this as a deductive process. But Pidgeon (1996) questions the assumption that qualitative researchers can directly access their subjects' internal experiences and so derive an objective coding scheme from the subjects' own terms and interpretations. He observes that some inductive use of existing theory is required, particularly at the beginning of analysis, to guide the researcher's understanding of the situation and so to guide them in what data to collect. The "emergence" of theory thus results from the constant interplay between data and the researcher's developing conceptualizations—a "flip-flop" (Pidgeon, 1996) between new ideas and the researcher's experience (deductive ↔ inductive reasoning). This process is better described as theory generation than theory discovery. (A minimal sketch of this comparison loop is given after the Walsham quote below.)

Although the issue of familiarity with the literature in one's field is contentious, Dick (2000) makes an interesting point: in an emergent study, the researcher may not know which literature is relevant, so it is not always feasible to read the relevant literature until the study is in progress. In acknowledging the emergence of findings, it is important to understand that most non-Grounded-Theory approaches are not as planned and linear as they appear when their findings are published. Many researchers are highly critical of any approach that is not "guided" by a planned schema (a research instrument). But an incredibly useful insight on research in general is best summarized by a quote from Walsham (1993):
The actual research process did not match the linear presentation of this book whereby theory is described first, empirical research happens next, results are then analyzed and conclusions are drawn. Instead, the process involves such aspects as the use of theoretical insights at different stages, the modification of theory based on experience, the generation of intermediate results that lead to the reading of a different theoretical literature and the continuing revision or new enactment of past research results. (p. 245)
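Returning to the mechanics of constant comparison described earlier in this section, the loop can be made a little more concrete with a minimal sketch in Python. This is my own illustration rather than any procedure prescribed by Glaser or Strauss; the Category class, the suggests() keyword test, the sample incidents, and the way new categories are named are all hypothetical stand-ins for the analyst's conceptual judgement.

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """An emerging theoretical concept, built up from coded incidents."""
    name: str
    properties: set = field(default_factory=set)
    incidents: list = field(default_factory=list)

def suggests(incident: str, category: Category) -> bool:
    """Stand-in for the analyst's judgement: does this incident suggest this
    category or one of its properties? Here, a crude keyword-overlap test."""
    words = set(incident.lower().split())
    return bool(words & ({category.name.lower()} | {p.lower() for p in category.properties}))

def constant_comparison(incidents: list) -> list:
    """Compare each incident with the emerging categories (incident to concept)
    and file it against every category it suggests, or let a new category emerge."""
    categories = []
    for incident in incidents:
        matches = [c for c in categories if suggests(incident, c)]
        if matches:
            for c in matches:                      # refine existing concepts
                c.incidents.append(incident)
        else:                                      # a new category "emerges"
            categories.append(Category(name=incident.split()[0], incidents=[incident]))
    return categories

# Hypothetical incidents, coded in one pass of the loop above.
emerging = constant_comparison([
    "resistance to the new system",
    "resistance expressed through workarounds",
    "managers enforcing system use",
])
```

Naming a new category after the first word of an incident is, of course, only a placeholder: in practice the analyst supplies the conceptual label, and the comparison is interpretive rather than lexical.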
Judging Theoretical Saturation
One of the consequences of employing a highly iterative (and sometimes recursive) approach to data analysis and synthesis is the difficulty of judging when to stop. In the generation of Grounded Theory, data analysis is not an end in itself (as in other research approaches) but drives the need for further investigation, instigating new research questions and directions. It is very easy to fall into a state of hopeless confusion or, paradoxically, to terminate data collection and analysis before any rigorous support for theoretical insights has been obtained (in which case, the approach provides inductive insights rather than Grounded Theory). The point at which theoretical saturation (Glaser & Strauss, 1967) is achieved is best described as the point at which diminishing returns are obtained from new data analysis or refinement of coding categories. The point of diminishing returns comes when (and only when) theoretical constructs fit with existing data and the comparison of theoretical constructs with new data yields no significant new insights. Grounded Theory is continuing to gain acceptance in the IS field, but criticisms and suspicions of Grounded Theory are often well-deserved: many analyses appear to have been terminated because of publication deadlines, boredom, or exhaustion. Such studies only serve to undermine efforts to formalize rigor differently for qualitative research.
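As a hedged illustration of the "diminishing returns" criterion (my own sketch, not a procedure from the Grounded Theory literature), theoretical saturation can be thought of as a stopping rule: data collection and coding continue until several consecutive batches of new data add nothing to the coding scheme. The batch structure, the code_batch function, and the patience threshold below are assumptions made purely for the sketch.

```python
def saturated(batches, code_batch, patience=3):
    """Illustrative stopping rule for theoretical saturation: return True once
    `patience` consecutive batches of new data have added no new categories or
    properties to the emerging scheme. `code_batch` stands in for the analyst
    coding one batch of data and returning the set of codes it yields."""
    known_codes = set()
    unproductive = 0
    for batch in batches:
        new_codes = code_batch(batch) - known_codes
        if new_codes:
            known_codes |= new_codes
            unproductive = 0           # new insight: keep collecting and comparing
        else:
            unproductive += 1          # diminishing returns
            if unproductive >= patience:
                return True            # constructs fit; nothing new is emerging
    return False                       # the data ran out before saturation
```

The judgement of what counts as a "significant new insight" remains, of course, the researcher's; no numeric threshold can substitute for it.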
Formalization of Data Coding and Analysis
At the core of the debate between Glaser (1992) and Strauss (1987; Strauss & Corbin, 1998) is the notion of whether theory emerges from flexible, inductively guided data analysis or whether theory is derived as the result of applying structured, analytical methods. Glaser (1992) argues that Grounded Theory emerges from categories and patterns suggested by informants and by socially constructed realities. Glaser views Strauss's application of a specific coding scheme (the categorization of causal conditions, context, action/interactional strategies, and consequences) as "forcing" theoretical constructs and challenges the resulting theories as being more descriptive than processual or structural. Strauss emphasizes applying "canons of good science" (Babchuk, 1996) to data analysis and coding, while Glaser argues that the codes should emerge from the data. To be fair to Strauss's position, Strauss (1987) does argue that his procedures should be considered as rules of thumb to be used heuristically, and he advises researchers to modify the scheme as required. But Glaser (1992) makes the point that, in an endeavor to make Grounded Theory "rigorous," the researcher may well filter out elements within the data that might lead to a theory that would change the way we view the world.
Both authors appear to agree that the emergence of theory from data is central to employing a Grounded Theory approach. So the two authors are not diametrically opposed; the issue appears to be one of how to ensure rigor in the process of data analysis and selection. Glaser emphasizes the emergent, inductive nature of Grounded Theory generation and recommends constant comparison with the data and self-reflection (reflexivity on our role and influences in the research process) as a way of ensuring quality. Strauss emphasizes the need to apply rigorous, repeatable methods to data selection and analysis and recommends the structuring of method around formal coding schemes as a way of ensuring consistency and quality. The debate appears to boil down to whether the researcher believes that their work should be defended from a positivist or interpretive perspective. This is discussed in the next section.
The Objectivist-Subjectivist Debate
To employ Grounded Theory rigorously, it is important to understand that, like the case study method, this approach may be used successfully to support both positivist and interpretive research. The main area of debate between the positivist and interpretive positions lies in their respective definitions of "reality"—the objectivist-subjectivist debate (Burrell & Morgan, 1979; Walsham, 1993). The positivist position argues that reality is "out there," waiting to be discovered, and that this reality is reflected in universal laws that may be discovered by the application of objective, replicable, and "scientific" research methods. The interpretive position argues that the world is subjective and reality is socially constructed (Lincoln & Guba, 2000). The phenomena that we observe are only meaningful in terms of individual experience and interpretation: one person's shooting star may be another person's alien spacecraft. "Truth" is constructed within a community of research and practice interests, across which "knowledge" is defined and valued (Latour, 1987). This "consensus theory" thus reflects a shared reality (Lincoln & Guba, 2000).

The distinction between the two worldviews of positivism and interpretive research is particularly critical when deriving Grounded Theory, as it is based in empirical data collection and analysis. In Glaser and Strauss (1967), the authors talk of the "discovery" of Grounded Theory; they clearly view these laws as "out there," waiting to be discovered (a positivist perspective). But it is apparent from both authors' later work that they have questioned and modified this view to some extent. Strauss and Corbin (1998, pp. 157–158) give an example where one of the authors found that "something seemed awry with the logic" of her theory concerning the management of high-risk pregnancies by mothers-to-be. The researcher realized that she was defining risk from her perspective as a health professional and understood that she needed to define risk intersubjectively, from the point of view of her subjects, in order to understand their behavior. Her research subjects' perceptions of their level of risk differed from her own assessment, and they often assessed the same risk differently at different times during their pregnancy. This understanding reflects an interpretive research position: that a phenomenon (or research "variable," to use positivist language) cannot be defined objectively, according to a set of absolute criteria, but must be defined from a specified point of view. Phenomena need to be understood both externally and internally to a situation for a theory to be internally consistent. This distinction is critical for the Grounded Theory researcher performing interpretive, qualitative field studies and forms the basis of the reflective, inductive-deductive research cycle that is required for learning (Schön, 1983).
The existence of multiple perspectives is an important issue for interpretive research (Klein & Myers, 1999). We must be sensitive to different accounts of "reality" given by different participants in the research rather than trying to discover universal laws of behavior by fitting all the accounts to a single perspective. Often, the interesting element of social theories derives from accounting for differences between accounts of a process rather than from similarities. Strauss and Corbin (1998) stress the importance of internal consistency. A theory should "hang together" and make sense not to an "objective" external observer but to an observer who shares, intersubjectively, in the meanings of phenomena as perceived by the research subjects. To achieve this, we must report our findings in context, consistently, and with sufficient detail to allow our readers to share the subjects' experiences of the phenomena that we report.
Grounded Theory involves the generation of theory from an analysis of empirical data. We need to be absolutely clear, as researchers, about our beliefs about the nature of those theories, to guide appropriate data collection and analysis. If we use the positivist criteria of external validity to guide a qualitative study, we must apply "objective" definitions of the phenomena under study; this will exclude subjects' own perceptions of the phenomena. But if we abandon positivist criteria, we must substitute alternative notions of rigor that are equally demanding and that reflect the same notions of quality as those used in positivist research.
The positivist and interpretive worldviews represent two extremes of a spectrum that may be considered incommensurable: people experiencing one of these "life-worlds" can never understand the perspectives of the other. Different researchers strive in different ways to overcome the incommensurability of the two philosophical positions. But it must be said that there are some very muddled or unexamined views concerning the nature of Grounded Theory research to be found in the IS literature. In abandoning a positivist research method, many researchers appear to believe that they can also abandon the rigorous application of method completely. Many "Grounded Theory" studies appear to report loosely associated, inductive insights that cannot be justified by any notion of rigor or evidence. The interpretive Grounded Theory researcher must consider the defensibility of their work more deeply than the positivist researcher, as interpretivism does not yet have a body of knowledge and tradition embedded in formalized procedures for how to perform rigorous, interpretive research.
Quality and Rigor in Qualitative Grounded Theory Research
Lincoln and Guba (2000) argue that qualitative research cannot be judged on the positivist notion of validity but should rather be judged on an alternative criterion of trustworthiness. This assertion is justified on the basis that the positivist worldview is incommensurable with the interpretive worldview. Thus, different criteria of rigor and quality need to be developed to reflect the very different assumptions that interpretive researchers hold about the nature of reality and appropriate methods of inquiry. Interpretive alternatives to the four traditional quality measures used in positivist research are summarized in Table 1, which is adapted from the criteria suggested by Miles and Huberman (1994) and Lincoln and Guba (2000). The criteria for rigor discussed here do not constitute an exhaustive set but are selected on the basis of agreement across some reputable, knowledgeable, and reflective references on qualitative research.
Table 1. Positivist quality criteria and their interpretive alternatives

| Positivist criterion | Interpretive alternative |
| --- | --- |
| Objectivity: findings are free from researcher bias. | Confirmability: conclusions depend on the subjects and conditions of the study rather than on the researcher. |
| Reliability: the study findings can be replicated independently of context, time, or researcher. | Dependability/Auditability: the study process is consistent and reasonably stable over time and between researchers. |
| Internal validity: a statistically significant relationship is established, to demonstrate that certain conditions are associated with other conditions, often by "triangulation" of findings. | Internal consistency: the research findings are credible and consistent to the people we study and to our readers. For authenticity, our findings should be related to significant elements in the research context/situation. |
| External validity: the researcher establishes a domain in which findings are generalizable. | Transferability: how far can the findings/conclusions be transferred to other contexts, and how do they help to derive useful theories? |
The substitution of alternative criteria for rigor in interpretive studies is not intended to imply that rigor is to be abandoned in favor of "interpretation." On the contrary, the interpretive criteria of confirmability, auditability, authenticity, and transferability become paramount to making any claim to rigor. At every stage of the process, the researcher should subject their findings to both personal and external views on the basis of these criteria. Each of these issues is taken, in turn, to discuss criticisms of the Grounded Theory approach when it is used in qualitative field studies and to understand how quality and rigor may be maintained in interpretive, qualitative Grounded Theory generation.
Objectivity vs. Confirmability
We have discovered that the generation of Grounded Theory is not and cannot be totally objective. An important question to ask, therefore, is whether this makes theory generated in this way more or less confirmable (and therefore useful) than that generated by deductive, hypothesis-based research methods. One response is that, while the weakness of qualitative, inductive approaches to research lies in the data-analysis stage of the research life cycle, quantitative, hypothesis-based approaches are weakest in the research initiation and data selection stages. Even if the quantitative researcher is rigorously objective in their application of a consistent coding scheme and in the statistical analysis of data, inductive reasoning is involved in the selection of the research instrument and the selection or design of an appropriate coding or measurement scheme to operationalize the research instrument.
As Silverman (1993) observes:
No hypotheses are ever "theory free." We come to look at things in certain ways because we have adopted, either tacitly or explicitly, certain ways of seeing. This means that, in observational research, data-collection, hypothesis-construction and theory-building are not three separate things but are interwoven with each other. (p. 46)
The claims to truth and knowledge provided by prior literature are socially constructed and so remain unquestioned (Latour, 1987). Overall, qualitative inductive approaches are no more subjective than quantitative deductive approaches; subjectivity is merely introduced at a later, more visible stage of the research life cycle than in hypothesis-testing research approaches. The formalized ways in which we manage subjectivity are problematic only because they are based on positivist assessments of rigor. We need to substitute reflexive self-awareness for objectivity.
Reliability vs. Dependability/Auditability
Let me pose a question:
If two researchers are presented with the same data, will they derive the same results if they use the same methods, applied rigorously?
To answer this question, it is important to question our assumptions about reality. If we understand reality as being "out there"—that what we see and measure when we collect "data" exists independently of our interpretation of the situation (or of the influence that our presence imposes)—then we would naturally answer "of course they would." If we understand reality as being socially constructed—that what we see is our interpretation of the world and that what others report to us is their interpretation—then we would answer "of course they would not." In that "of course" lies the internal conflict that we all tussle with as researchers, because all of us understand the world in both ways at once.
So far, I have treated positivist and interpretive worldviews as though they are opposing and incompatible. Intellectually, they are incommensurable. The problem is that humans are subjective, inconsistent beings, who are quite capable of taking different positions at different times on different issues without realizing the inherent contradictions. So, to ensure dependable and authentic findings, we need to establish clear and repeatable procedures for research and to reflect on the position that we take as we perform them. In that way, we can at least minimize the impact of subjectivity on the process. This does not mean that we have to have highly structured procedures based on inflexible, preexisting theoretical frameworks. But we do need to understand (and to be able to define) what our data selection, analysis, and synthesis procedures actually are. We need to constantly reflect on and record the means by which we reach our theoretical constructs and the detailed ends that these means achieve.
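One concrete way to "record the means by which we reach our theoretical constructs" is to keep an audit trail of dated analytic memos. The sketch below is only a suggestion of what such a record might capture; the field names and the example entry are my own invention, not a standard instrument.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AnalyticMemo:
    """One entry in an audit trail of coding decisions (illustrative only)."""
    when: date
    data_referenced: str        # which transcript or field note prompted the decision
    decision: str               # code created, merged, renamed, retired, ...
    rationale: str              # the reasoning, in the researcher's own words
    alternatives_considered: str = ""

# A hypothetical entry: the data reference and decision are invented examples.
memo = AnalyticMemo(
    when=date.today(),
    data_referenced="Interview 7, lines 112-140",
    decision="Merged 'workarounds' into 'resistance strategies'",
    rationale="Repeated incidents suggest workarounds are one form of resistance.",
)
```

Whatever form such a trail takes, the point is that another researcher (or the original researcher, months later) can trace how each theoretical construct was reached, which is what dependability and auditability require.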
Internal Validity vs. Internal Consistency
It is probably in its rejection of the notion of internal "validity" that interpretive research attracts its most virulent critics. Validity in deductive, hypothesis-based research is ensured by statistically testing correlations between data variables and by ensuring a statistically significant sample population. Such notions of mathematical proof have no equivalent in qualitative interpretive research because (a) collected data represent social constructs rather than measurable physical phenomena, and (b) data analysis is recognized as subjective and inductive-deductive rather than as deductively objective. However, the idea of internal consistency may be used instead (Strauss & Corbin, 1998) to ask: do all the parts of the theory fit with each other, and do they appear to explain the data? As a way of answering this question, the criteria of credibility and authenticity may be substituted for internal validity (Miles & Huberman, 1994).
While rigor is viewed as a quality to be desired in positivist research, the interpretive position on positivist views of rigor can be summarized by the Webster dictionary definition of the term: "the quality of being unyielding or inflexible." It is important to avoid falling into a hierarchical coding scheme by default, as this type of scheme is too often used to fit the data to an individual's preconceived notions of how its elements should relate (see Alexander, 1966, for a fascinating discussion of this tendency in architectural planning). Additionally, Urquhart (2000) reinforces (with feeling) the Glaser and Strauss (1967) observation that lower-level categories tend to emerge relatively quickly, with higher-level categories emerging much later through the integration of concepts. A hierarchical coding scheme discourages the reordering of concepts and tends to act as a disincentive to think radically about reconceptualization of the core categories previously identified.
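Purely as an illustration of this point (the data structures and code labels below are invented, not drawn from Urquhart or Glaser and Strauss), compare a coding scheme stored as a rigid tree with a flat set of codes whose tentative relationships are recorded separately.

```python
# A rigid hierarchy: each code's position is fixed by nesting, so promoting
# "workarounds" to a core category later means restructuring the whole tree.
hierarchical_scheme = {
    "system adoption": {
        "resistance": {"workarounds": {}, "non-use": {}},
        "training": {},
    },
}

# A flat scheme: codes are peers, and tentative relationships are recorded
# separately, so higher-level categories can be integrated, reordered, or
# reconceptualized without disturbing the codes themselves.
codes = {"system adoption", "resistance", "workarounds", "non-use", "training"}
relationships = {
    ("workarounds", "is a property of", "resistance"),
    ("resistance", "is a strategy within", "system adoption"),
}
```

In the flat form, promoting or demoting a category is a matter of revising the recorded relationships; in the tree, it means restructuring the scheme itself, which is exactly the disincentive to reconceptualization noted above.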
To achieve credible research, we need to constantly question where the theoretical constructs that we have adopted have come from. Whichever approach we take to the coding and analysis of data, we need to implement it reflectively and to reexamine it critically. We need to employ representational techniques that permit an explicit examination of the relationships between data elements on a periodic basis and to constantly question the assumptions that led us to search for those relationships.
External Validity vs. Transferability
Eisenhardt (1989) comments that the objective of hypothesis-testing (positivist) research is to randomly test samples from a large population, while the aim of Grounded Theory research is to deliberately select specific samples (cases) that will confirm or extend an emerging theory. So it should be understood that Grounded Theory claims to generalizability do not even reside in the same universe, never mind reflect the same worldview, as those of deductive, hypothesis-based research. Taking an interpretive Grounded Theory approach leaves us with a significant question of how widely our theory can be applied, given that the process is interpretive and, as we saw above, subjective. How can we make a claim to be generating generalizable theory from an external reality that we do not believe exists independently of ourselves? One of the best resolutions of this issue lies in understanding the detailed objectives of our analysis, for which Lowe (1998) provides a wonderfully comforting description:
The social organization of the world is integrated. This means that everything is already organised in very specific ways. The grounded theorist's job is to discover these processes of socialisation. There is no need for preconceived theorising because all the theoretical explanations are already present in the data. (p. 106)
As interpretive researchers, we reject the "universal laws" (positivist) notion of reality in favor of discerning socially constructed norms and relationships that are located in a particular culture or context. Claims for transferability and fit between contexts must therefore arise through identifying similarities in factors that are part of the theoretical model and are consistent between the different contexts for which the theory fits. Ultimately, we need to recognize that interpretive researchers cannot make the same claims to generalizability as positivist researchers and that to do so opens our research to attack, because we would then be defending our research from a different worldview than that which governed the way in which it was performed. In using the language of positivism (e.g., claims to "triangulation" of findings or claims for validity or universal generalizability), we lay ourselves open to criticisms of not following positivist methods to ensure these criteria are met. Positivism and interpretivism have no common language of quality or rigor.

The findings from multiple data samples may be compared across contexts (for example, using multiple case studies for which contextual factors are similar). However, once any part of the method is admitted to be inductive, it becomes difficult to make claims for generalizable findings without investigating very large numbers of samples (case studies) across which findings can be compared statistically, and this may take years with such labor-intensive studies. Statistical correlations between intersubjectively defined constructs are also meaningless, from both a positivist and an interpretive perspective. This issue is often fudged in publication: the generally accepted minimum number of case studies for comparison appears to be four (unless each study includes a huge number of quantitative samples, which is rarely the case), a figure that is indefensible from either worldview on any grounds except pragmatism. As a replacement for external validity, in qualitative research we could substitute the notion of external consistency. We need to adopt the discourse of transferable findings rather than that of generalizable results.