Wednesday, October 5, 2022

Integrating the Quantitative and Qualitative

1994

One way of resolving the quantitative versus qualitative methods dispute is to recognize that evaluation has a special content and that this content is more important than the methodologies.

Integrating the Quantitative and Qualitative

Ernest R. House

Evaluation has now reached a certain maturity. The field has established itself as a worthwhile profession, valuable for what it can contribute to modern societies. I expect its role, activities, and influence to be much greater in the next fifty years. Furthermore, evaluation has become a discipline, two essential features of which are forums for critical debates within the field and an elaborated conceptual structure in which key issues can be "joined," fitted together in such a way that they can be debated productively. Scholars do not have to agree on issues, but they must be able to discuss them extensively in their disagreements. Otherwise, it is not possible for the disciplinary discourse to progress (House, 1993).

One particularly contentious issue has been the dispute over quantitative and qualitative methods. If we have a discipline, we should be able to debate this issue productively. I do not expect such debates to be unemotional, only that they be reasoned, so that progress can be made. Certainly, there is a long history to this dispute, going back many decades (Smith, 1983; Bannister, 1987; Hammersley, 1989; Ross, 1991). What prospects are there for resolution? I am going to argue that methodology depends primarily on the subject matter of what is investigated, plus certain background assumptions that are made. Debates about methodology are productive only if the subject matter is considered first. In evaluation, this subject matter is the determination of the merit or worth of something. Findings from quantitative and qualitative methods come together in the content of what is said, which is represented in the narrative of the study. Content is most important. Historically, methodology has been greatly over-emphasized, at the expense of content. Obsession with the quantitative-qualitative dispute indicates a continued fixation on methods. Methods are important, but they should play a facilitative role. Hence, the quantitative-qualitative dispute is dated and directs attention away from other important issues.

The Subject Matter of Evaluation

Research methodology depends primarily on the nature of the subject matter of the discipline, the content, the object of what one is trying to investigate. In astronomy, for example, it is rather difficult to do intervention experiments. The subject matter precludes it. On the other hand, the subject matter does lend itself to precise prediction. As the content under investigation changes, as black holes are discovered, for example, the methods of investigation may also change: new methods may be needed to deal with reformulated content.

What is the subject matter of evaluation? What is there to find out? What there is to find out is how good something is. Evaluation is the determination of the merit or worth of something, according to a set of criteria, with those criteria (often but not always) explicated and justified. Scriven (1980) has provided the basic logic: X is good, bad, better than Y, and so on, in the following way, according to these criteria along these dimensions, for these reasons. This statement and its variations are the core logic of the discipline, regardless of what approach one employs. Of course, such statements can be difficult to construct and complex in their embodiments. How to arrive at justifiable evaluative judgments is the knowledge and craft of the field.

I have suggested also that evaluations are constructed and presented as arguments and that these arguments can vary significantly, including those that feature quantitative or qualitative data (House, 1977; Dunn, 1982). Evaluators, like scientists, use facts, numbers, logic, stories, and metaphors (the latter often construed as models) to construct their arguments (House, 1983; McCloskey, 1990). The arguments vary in part according to the audiences to whom they are addressed, so that evaluative argument is not only about a subject matter but also for particular audiences. There is an analytic distinction between the form of the argument and the content, which in evaluation is a complex version of "X is good, bad, in this way." There is a content to be conveyed, a real world condition to be determined, even though the methods of determination, the statement of the condition, and the arguments may differ. There is a content to evaluation that is no more arbitrary than that in other disciplines. To say something is good is no more arbitrary than to say something is big, although it may be more difficult to defend. An elephant is big compared to other animals (usually the implicit comparison) but small compared to an office building. Establishing evaluative judgments requires defining the context of the judgment, as well as the dimensions of merit and standards of performance, though often the context is assumed or implicit.

Of course, these judgments often involve multi-dimensional criteria and conflicting interests. For example, the judgment, "The Reagan economic policies were very good for the upper 1% of the population but very bad for the bottom 10%," is probably defensible, but it requires considerable work to justify. One would have to show that not only did the bottom 10% not do well economically, but that the gains of the top 1% did not help the bottom 10% in the long run either. Programs and policies can be good or bad simultaneously, taking different criteria, perspectives, points of view, interests, and time frames into consideration. But this amounts to refinement and elaboration of evaluative reasoning, not its abnegation.

The things that we evaluate also differ in content. Policies are not the same as products, and neither is the same as persons or programs. Evaluating economic policies addresses different content than evaluating television sets. Different expertise is required, and probably different methods as well. In program evaluation, exactly how one conceives and defines a program makes a difference. Matters of content make a difference in which methods should be employed. Nonetheless, the basic evaluative reasoning holds.

What also holds is that the evaluator should strive to reduce biases in making such judgments. Some bias reduction techniques are the same for both quantitative and qualitative approaches, but other techniques differ. Quantitative evaluators must be especially careful with sampling and statistics, since so much depends on them. Similarly, qualitative evaluators must be careful with interviews and narratives. On the other hand, quantitative evaluators cannot totally neglect the narratives of their studies, any more than qualitative evaluators can be oblivious to whom they interview.

Underlying Assumptions

Methodology also depends on ontological and epistemological assumptions that are made about the nature of reality and the best ways of gaining access to that reality, so that knowledge about it can be formulated. These background assumptions provide a framework, much of it implicit, for making methodological choices (Shadish, Cook, and Leviton, 1991). There are choices about exactly what to investigate, how open-ended the study should be, how important the views of participants are, how to collect, analyze, and interpret data, what arguments to employ, and how to present results. (Of course, most of us employ the methodology we learned in graduate school, unaware of the assumptions and history behind it.)

The quantitative-qualitative dispute has been registered most strongly by some as an irreconcilable conflict between the assumptions of the positivist and interpretivist paradigms (Guba and Lincoln, 1989). An interpretivist summarized the differences this way: "One approach takes a subject-object position on the relationship to subject matter; the other takes a subject-subject position. One separates facts and values, while the other perceives them as inextricably mixed. One searches for laws, and the other seeks understanding. These positions do not seem to be compatible given our present state of thinking" (J. K. Smith, 1983, p. 12). These differences play out as dichotomies of objectivity versus subjectivity, fixed versus emergent categories, outsider versus insider perspectives, facts versus values, explanation versus understanding, and single versus multiple realities. Conceived this way, the two approaches are not compatible. However, neither of these so-called paradigms is fully adequate. The positivist tradition is correct to stress that there are causal tendencies at work in social life and to insist that these tendencies may be opaque to the agents' spontaneous understanding. Where it errs is in reducing these tendencies to empirical regularities and in the account it gives of how to discover them (Bhaskar, 1979). The view that scientific inferences involve no extra-theoretical or extra-observational judgments, but are facts methodically inferred from an uninterpreted reality, is not correct.

The interpretivist tradition, on the other hand, is correct to point out that the social sciences deal with a pre-interpreted reality, one already understood through the concepts of intentional social actors, concepts similar to those through which researchers must grasp it. Where it errs is in reducing knowledge to just the modalities of this relationship, as if there were nothing more (Bhaskar, 1979). It neglects external causes and conditions, unintended consequences, and the internal contradictions of social beliefs, rules, and actions (Howe, 1985, 1992).

The dichotomy between the two traditions rests on a misunderstanding of science (Toulmin, 1982). One must take account of both the beliefs and rules of participants and the causes of social practices. There must be both insider and outsider perspectives. A better set of assumptions would include the following:

  • The real world is complex and stratified so that one is always discovering more complex layers of reality to explain other layers.
  • Society does not exist outside individual actions; rather, social actors produce and reproduce social structures, consciously and unconsciously, which influence their actions in turn.
  • Human action is intentional, including a capacity for monitoring and second-order monitoring (the monitoring of monitoring, that is, evaluation).
  • There are no incorrigible foundations for science, such as sense impressions or pristine facts. Rather, knowledge is social and historical.
  • Scientific explanation is explanation of how causal structures of different kinds produce events.
  • The regularity theory of causation based on assumptions of invariant regularities is incorrect.
  • Social science knowledge depends on understanding the meaningful social world of participants.
  • There is no rigid fact-value distinction: value claims can be established in ways similar to factual claims.

Integration of Methods

The investigation of such a complex social reality sometimes leads to using multiple research methods, of which the quantitative and qualitative constitute entire families (Fetterman, 1988). Quantitative studies are more precise, explicit, and predetermined, and assume that the relevant variables can be identified in advance and validly measured. They direct attention to variables of interest, reduce distractions, permit fine discriminations, and facilitate concise analysis and management of data (Howe, 1985, 1988). They use mathematical models as simplified representations of substantive problems, so that results depend not only on proper analysis, but also on the fit between the model and problem (House, 1977; Trochim, 1986; see Cordray, 1986, for an example).

Qualitative studies rely on more provisional questions, data collection sites, people to interview, and things to observe. They assume less in advance, including which variables are relevant, are more open-ended, sensitive to context, and likely to be focused on the intentions, explanations, and judgments of participants (Howe, 1985). In sociology, for example, Giddens (1984) has said that qualitative methods help elucidate the frames of meaning of the actors and investigate the context of action, while quantitative methods help identify the bounds of knowledgeability of the actors and specify the institutional order, the more structural aspects of social life.

Nonetheless, even though the methods are distinct, the findings from them blend into each other in the content. When examined closely, quantitative data turn out to be composites of qualitative interpretations, though these may be hidden by extensive data processing (Giddens, 1984). According to Campbell (1974), qualitative knowledge of the local context is necessary for generating plausible alternative explanations, describing the program, constructing a narrative history, presenting data collection procedures, and summarizing results, even in quantitative studies. Reichardt and Cook (1979, p. 23) say, "Quite simply, researchers cannot benefit from the use of numbers if they do not know, in common sense terms, what the numbers mean."

Furthermore, at the level of inference, any conceptual theory, scheme, or hypothesis presupposes substantive qualitative beliefs that play an inescapable role in drawing conclusions (Howe, 1985). Inferences depend on substantive relationships. If one examines the content of any particular evaluation, only a portion of it will be derived from the methods themselves. All approaches rely heavily on common sense, prior experience, and the logic of the situation (Huberman, 1987).

For example, consider an evaluation of a reading program in which mixed methods are used. Standardized test scores are collected in a randomized control-group design, and students are interviewed after the program about what they have learned. Suppose the test score comparison indicates there is no learning gain, but the interviews indicate that the new program's students have a deeper understanding of the subject matter. Does the evaluator say, "Well, quantitatively the program is a failure, but qualitatively it was successful"? Or, "The program is a failure in the quantitative world-view, but a success in the qualitative world-view"? This hardly seems satisfactory.

Rather, the evaluator will search for reasons for the discrepancy. Did the test measure the deeper understandings? Were the interviews biased because of the way the questions were presented, analyzed, or interpreted? Were different criteria used? The evaluator would invoke basic evaluative reasoning and relevant content to reconcile the conflicting findings. The findings would be integrated to produce an overall conclusion (which does not mean that they would converge necessarily). A serious discrepancy requires justification.

How is it possible for these things to blend together? The findings from whatever methods come together in the content, which is presented as an argument, a narrative, or even a story. Kidder and Fine (1987, p. 69) contend that, "All research is a form of story telling, some more obvious than others. Randomized experiments are the least obvious.... Nonetheless, beneath the technical language is a story about how people behave under various conditions." Stories have at least three events joined in such a way that the first precedes the second in time, the second precedes the third, and the second causes the third (McCloskey, 1990). "John was poor, won the lottery, and became rich." Compare this to, "The treatment causes the effect for persons (or units) X in condition Y," which is the generic causal statement of validity typologies (Mark, 1986, p. 50).

Furthermore, the particular arguments, narratives, and stories of evaluators must concern good and bad, and they must be true. Of course, we don't call our studies "stories," with the unreliability the term suggests. Rather, we call them "experiments" or "plausible interpretations" or "case studies." I believe that they are in fact arguments, even when they are in story form. In any case, the content is presented in the narrative, with narrative styles varying from the scientific to the literary. And when we simply present numbers or other uninterpreted data, such as test scores, audiences provide their own narratives as to what the data mean.

There is a further question as to whether employing different methods leads an evaluator to different findings. For example, a qualitative evaluator will assemble interview data that contains content a set of test scores does not, and vice versa. The evaluators will have arrived at different information, led there by their respective approaches. On the other hand, this information must be analyzed and interpreted within the logic of evaluation as to whether the program is good or bad, and the findings should fit together, unless the evaluators have used different criteria of merit.

Within the logic of evaluation, if evaluators use different criteria, they will arrive at different conclusions. Do different methods lead evaluators to different criteria? I am inclined to say "yes," in the sense that those who want to emphasize criteria derived from participants will more likely employ qualitative techniques. However, this is a difference in the formulation and content of the evaluation and should be discussed according to whether participant criteria should be employed, not submerged in differences in methods. Methodology is the wrong focus. Where interactions between content and method do obtain, that would argue for multiple, complementary methods, not for the existence of multiple realities.

Clearly, I do not believe that quantitative and qualitative methods represent distinct paradigms which incorporate incommensurate world views. Qualitative methods have opened new areas of content, but this content is part of the same world. Without elaborating, let me state the accepted philosophical position on incommensurability: "The dominant metaphor of conceptual relativism, that of differing points of view, seems to betray an underlying paradox. Different points of view make sense, but only if there is a common coordinate system on which to plot them; yet the existence of a common system belies the claim of dramatic incomparability" (Davidson, 1982, p. 67).

There is also the important practical problem of how to combine methods in studies. Reichardt and Cook (1979) contend that combining methods strengthens individual studies, and Smith (1986) has delineated circumstances in which combining methods is particularly appropriate. Light and Pillemer (1984) have suggested specific ways of combining such results in literature reviews, and Greene and her colleagues (1989) have studied features of mixed method studies. Linn has suggested that quantitative and qualitative methods could be related to each other iteratively, with the researcher going back and forth, progressively clarifying the findings of one with those of the other. We need more good examples (for example, Smith and others, 1976, 1986; Trend, 1979; Maxwell and others, 1986). We are restricted here only by our ingenuity and resources, although there are significant limitations to multiple method studies (Mark and Shotland, 1987; Shotland and Mark, 1987). They are not panaceas.

So, in evaluation, the findings of quantitative and qualitative methods are integratable at the level of content. In the social sciences, findings from different methods are integratable at the level of theory, with each social science having its own methods of investigation (Howe and Eisenhart, 1990). Hence, the choice does not have to be between a mechanistic science and an intentionalist humanism, but rather one of conceiving science as the social activity that it is, an activity that involves considerable judgment, regardless of the methods employed. Even the natural sciences are critical interpretations of their subject matter, but this does not mean that they cannot be rational and objective (Toulmin, 1982).

The Paradigm Wars

Finally, how can we account for our own actions in the "paradigm wars"? As with all wars, there is a history. Early in their development, the American social sciences shied away from certain issues of content because of strong political, social, and ideological pressures (Ross, 1991). Instead, they focused on methodology as the way to value-free, politics-free, and trouble-free findings that were consistent with an implacable belief in American exceptionalism, the idea that America was so special it would not have the same social problems as other countries. This position gave rise to a virulent scientism, a fixation on methods as the center of social research (Bannister, 1987).

Science was conceived as the employment of certain methods and procedures, a methodological activity, rather than an intellectual, communal one. When particular methods were revealed as inadequate, which they always were, the reaction was to invent new methods. The result was a turning away from issues of content (and value) to an over-emphasis on methodology. Chomsky once captured this turn by calling behaviorist psychology a methodology without a subject matter: "Many people think of psychology in terms of its tests and experimental methods. But one should not define a discipline by its procedures. It should be defined, in the first place, by the object of its investigation. Experimental or analytic procedures must be devised in order to shed light on this object. Behaviorist psychology, for example, excels in its experimental techniques, but it has not properly defined its object of inquiry, in my opinion. Thus it has excellent tools, very good tools...but nothing very much to study with them" (Chomsky, 1977, p. 46). This misplaced emphasis on method carried over to the new field of evaluation. Thirty years ago, quantitative methods alone were deemed sufficiently objective for evaluation.

The reaction to the mistakes and excesses of positivism was interpretivism, with its own excesses. Over-emphasis on method led to definition by opposition: if one method was quantitative, the other was qualitative; if one was objective, the other was subjective (Guba and Lincoln, 1989). This schism was reinforced by strong emotion: the difficulty of establishing the legitimacy of qualitative methods in the face of formidable resistance increased the stridency with which they were advanced (Lincoln, 1990, 1991; Sechrest, 1992). Like any group that feels suppressed, qualitativists advocated their position passionately and often in excess.

Historically, the legitimacy of qualitative approaches was secured by distinguishing between the two approaches, attacking the quantitative and defending the qualitative. The success of this strategy makes it more difficult to abandon the dispute now that qualitative methods are legitimated. On the other side, some quantitative evaluators have been somewhat disingenuous in pretending that the establishment of qualitative methods was anything other than a long, hard-fought struggle.

Now there are those who believe that qualitative methods are the way to the promised land, that qualitative methodology will lead to a new world of promise and joy. This is the same millennial hope transferred to new methodologies, reaffirming the long-standing belief in the transformative powers of methodology. Our obsession with the quantitative-qualitative dispute reflects our continued fixation on method. In fact, all research methods are everyday work tools, likely to get your hands dirty. Methodology is important, but it is no substitute for content. There is no guaranteed methodological path to the promised land, and there is nothing mystical or transformative about methods of any kind. You can kiss a frog if you want, hoping it will turn into a handsome prince, but when you open your eyes, you will find you are kissing a frog.

References

Bannister, R. C. Sociology and Scientism: The American Quest for Objectivity, 1880-1940. Berkeley: University of California Press, 1987.

Bhaskar, R. The Possibility of Naturalism. Atlantic Highlands, NJ: Humanities Press, 1979.

Campbell, D. T. "Qualitative Knowing in Action Research." Paper presented at the annual meeting of the American Psychological Association, New Orleans, September 1974.

Chomsky, N. Language and Responsibility. New York: Pantheon, 1977.

Cordray, D. S. "Quasi-Experimental Analysis: A Mixture of Methods and Judgment." In W. M. K. Trochim (ed.), Advances in Quasi-Experimental Design and Analysis. New Directions for Program Evaluation, no. 31, 1986.

Davidson, D. "On the Very Idea of a Conceptual Scheme." In M. Krausz and J. W. Meiland (eds.), Relativism: Cognitive and Moral. Notre Dame, IN: University of Notre Dame Press, 1982, 66-80.

Dunn, W. N. "Reforms as Arguments." In E. R. House, S. Mathison, J. Pearsol, and H. Preskill (eds.), Evaluation Studies Review Annual, vol. 7. Beverly Hills, CA: Sage, 1982, 83-116.

Fetterman, D. M. "Qualitative Approaches to Evaluating Education." Educational Researcher, 1988, 17, 8, 17-22.

Giddens, A. The Constitution of Society. Berkeley: University of California Press, 1984.

Greene, J. C., Caracelli, V. J., and Graham, W. F. "Toward a Conceptual Framework for Mixed-Method Evaluation Designs." Educational Evaluation and Policy Analysis, 1989, 11, 3, 255-274.

Guba, E. G. and Lincoln, Y. S. Fourth Generation Evaluation. Newbury Park, CA: Sage, 1989.

Hammersley, M. The Dilemma of Qualitative Method: Herbert Blumer and the Chicago Tradition. London: Routledge, 1989.

House, E. R. The Logic of Evaluative Argument. CSE Monograph Series in Evaluation, no. 7. Los Angeles: UCLA Center for the Study of Evaluation, 1977.

House, E. R. "How We Think About Evaluation." In E. R. House (ed.) Philosophy of Evaluation. New Directions for Program Evaluation, no. 19, 1983, 5-25.

House, E. R. Professional Evaluation: Social Impact and Political Consequences. Newbury Park, CA: Sage, 1993.

Howe, K. R. "Two Dogmas of Educational Research." Educational Researcher, 1985, 14, 8, 10-18.

Howe, K. R. "Against the Quantitative-Qualitative Incompatibility Thesis or Dogmas Die Hard." Educational Researcher, 1988, 17, 8 10-22.

Howe, K. R. "Getting Over the Quantitative-Qualitative Debate." American Journal of Education, 1992, 236-256.

Howe, K. R. and Eisenhart, M. "Standards for Qualitative (and Quantitative) Research: A Prolegomenon." Educational Researcher, 1990, 19, 4, 2-9.

Huberman, A. M. "How Well Does Educational Research Really Travel?" Educational Researcher, 1987, 16, 1, 5-13.

Kidder, L. H. and Fine, M. "Qualitative and Quantitative Methods: When Stories Converge." In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35, 1987, 57-75.

Light, R. J. and Pillemer, D. B. Summing Up: The Science of Reviewing Research. Cambridge, MA: Harvard University Press, 1984.

Lincoln, Y. S. "The Making of a Constructivist: A Remembrance of Transformations Past." In E. G. Guba (ed.). The Paradigm Dialog. Newbury Park, CA: Sage. 67-87, 1990.

Lincoln, Y. S. "The Arts and Sciences of Program Evaluation." Evaluation Practice, 1991, 12, 1, 1-7.

Mark, M. M. "Validity Typologies and the Logic and Practice of Quasi-Experimentation." In W. M. K. Trochim (ed). Advances in Quasi-Experimental Design and Analysis. New Directions in Program Evaluation, no. 31, 47-66, 1986.

Mark, M. M. and Shotland, R. L. "Alternative Models for the Use of Multiple Methods." In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35, 1987, 95-100.

Maxwell, J. A., Bashook, P. G., and Sandlow, L. J. "Combining Ethnographic and Experimental Methods in Educational Evaluation: A Case Study." In D. M. Fetterman and M. A. Pittman (eds.), Educational Evaluation: Ethnography in Theory, Practice, and Politics. Beverly Hills, CA: Sage, 1986, 121-143.

McCloskey, D. N. If You're So Smart: The Narrative of Economic Expertise. Chicago: University of Chicago Press, 1990.

Reichardt, C. S. and Cook, T. D. "Beyond Qualitative Versus Quantitative Methods." In T. D. Cook and C. S. Reichardt (eds.), Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage, 1979, 7-32.

Ross, D. The Origins of American Social Science. Cambridge: Cambridge University Press, 1991.

Scriven, M. The Logic of Evaluation. Pt. Reyes, CA: Edgepress, 1980.

Sechrest, L. "Roots: Back to Our First Generations." Evaluation Practice, 1992, 13, 1, 1-7.

Shadish, W. R., Jr., Cook, T. D., and Leviton, L. C. Foundations of Program Evaluation. Newbury Park, CA: Sage, 1991.

Shotland, R. L. and Mark, M. M. "Improving Inferences from Multiple Methods." In M. M. Mark and R. L. Shotland (eds.), Multiple Methods in Program Evaluation. New Directions for Program Evaluation, no. 35, 1987, 77-94.

Smith, J. K. "Quantitative versus Qualitative Research: An Attempt to Clarify the Issue." Educational Researcher, 1983, 12, 3, 6-13.

Smith, M. L. "The Whole Is Greater: Combining Qualitative and Quantitative Approaches in Evaluation Studies." In D. D. Williams (ed.), Naturalistic Evaluation. New Directions for Program Evaluation, no. 30, 1986, 37-54.

Smith, M. L., Gabriel, R., Schott, J., and Padia, W. L. "Evaluation of the Effects of Outward Bound." In G. V. Glass (ed.), Evaluation Studies Review Annual, vol. 1. Beverly Hills, CA: Sage, 1976.

Toulmin, S. "The Construal of Reality: Criticism in Modern and Postmodern Science." Critical Inquiry, 1982, 9, 1, 93-111.

Trend, M. G. "On the Reconciliation of Qualitative and Quantitative Analyses: A Case Study." In T. D. Cook and C. S. Reichardt (eds.), Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage, 1979, 68-86.

Trochim, W. M. K. "Editor's Notes." In W. M. K. Trochim (ed.), Advances in Quasi-Experimental Design and Analysis. New Directions for Program Evaluation, no. 31, 1986, 1-7.

Thanks to Anne Colgan, Margaret Eisenhart, Ken Howe, Bob Linn, Felix Rasco, Sharon Rallis and Chip Reichardt for helpful comments.

Ernest R. House is professor of education at the University of Colorado at Boulder.
