Tuesday, October 11, 2022

2012

WORK MEMOIR
Ideas and Influences

Ernest R. House

In 1967, I was sitting in a university office talking to Gene Glass, my statistics instructor, when Bob Stake walked in. Glass said, “Do you know Ernie House?” Stake said, “He doesn’t know I know him, but I know who he is.” Glass said, “Ernie has a gold mine here, a big evaluation project.” Thus began my evaluation career. I was finishing graduate study at the University of Illinois and had been asked to evaluate the statewide Illinois Gifted Program.

How did I get here? I began by trying to change education, convinced by my experiences that there was something amiss in the education system. My scholarly career has been marked by two themes: educational innovation and evaluation, including the processes, politics, policies, and values of both. The two merge in evaluating education innovations. Gradually, I moved more into evaluation.

The purpose of this chapter is to identify the influences on my ideas. Memory is a biased instrument, not balanced in this case by other accounts of events. To approach the task, I have outlined my career activities and identified what influenced them at that time. It’s one way of putting distance between now and then, to reduce the danger of rewriting history by projecting current ideas backward. The approach hasn’t been as successful as I had hoped. In a multidecades career, you interact with hundreds of people, engage in dozens of projects, and write far too many papers. My first draft wasn’t very readable—too many names, too many places, too many ideas. The early version did prove one thing: The influences were numerous, varied, and complex. To make this chapter more readable, I have omitted many, and there is much that I regret leaving out.

I have identified several types of influences. First, there are those scholars a generation or so ahead of me in the field. Second, there are scholars from other disciplines, especially philosophy, political science, history, and economics. Third, there are a few friends and colleagues who shaped my ideas by reviewing my work in the early drafts. Fourth, there are colleagues I met a few times a year and had productive exchanges with. Fifth, there are the diffuse effects of spending time in other countries.

During my career, the social context—indeed, society itself—changed greatly. Evaluation gained its impetus in 1965 with passage of the Great Society legislation, which mandated evaluation for some education programs for the first time. Through the 1960s and 1970s, the role of evaluation was to legitimate government activities by evaluating them. (“We aren’t sure this will work, but we will evaluate it to see.”) In the 1980s, Reagan reversed 50 years of the New Deal and Great Society by privatizing, deregulating, and discrediting government endeavors. The private sector could do things better, he maintained. In the 1990s, these trends continued, with Clinton trying to convert government’s role to managing rather than producing services. In the new century, Bush embarked on even more radical privatizing and deregulating of policies. Hence, while evaluation began in 1965 by serving the public interest, by 2010, evaluation itself was being privatized to serve private interests in some cases. These are remarkable changes, against which my career played out. To simplify the analysis, I have divided the era into four periods corresponding roughly to the decades, each period typified by a different ethos.

LEARNING THE CRAFT AND CREATING NEW IDEAS (1967–1980)

In 1967, I prepared for the Illinois evaluation by putting all the evaluation papers I could find in a cardboard box and reading them in a month. There wasn’t much. I established an advisory panel, including Stake, Egon Guba, and Dan Stufflebeam. Their advice proved invaluable. I used Stake’s (1967) “countenance” model of evaluation to plan the study. In 4 years, I learned evaluation from the bottom up, aided by a talented team consisting of Steve Lapan, Joe Steele, and Tom Kerins. We worked with program managers, hundreds of school districts, including Chicago, the state education agency, and the Illinois legislature. These interactions convinced me that evaluation was highly political, not a common idea at the time. Following that, Stufflebeam, Wendell Rivers, and I conducted a review of the Michigan Accountability Program, touted as a national model. This reinforced my sense of the ubiquity of politics.

In fact, I asked myself, was it all politics? The possibility disturbed me. Surely, there must be a way to adjudicate what evaluators did. I saw a review of Rawls’s (1971) work on social justice. Maybe this was what I needed. If politics was about who got what, an ethical framework might help evaluators grapple with the politics. I wrote an article on justice in evaluation. It’s difficult to imagine the incredulity of people in the field. What could justice possibly have to do with evaluation? The two terms didn’t belong in the same sentence. Some did see the relevance. Don Campbell sent for copies, and Glass included the article in the first Annual (Glass, 1976).

Rawls’s theory hypothesizes an “original position” in which people decide what principles of justice they should adopt. He arrives at one principle securing basic civil liberties and another stipulating that if inequalities are allowed, these inequalities should benefit the least advantaged. This conception was more egalitarian than the dominant utilitarian view. I discussed how the principles might apply to evaluation. The import was to bring social justice into consideration. Even if evaluators disagreed with Rawls, they needed to think about how what they were doing affected others, particularly the disadvantaged. Evaluation was more than politics.

During the 1970s, the quantitative–qualitative debate heated up. Along with others, I defended the legitimacy of qualitative studies. Again, I looked for a broader perspective and found a work reviving the classical discipline of rhetoric (Perelman & Olbrechts-Tyteca, 1969). I conceived that evaluations were arguments in which evaluators presented evidence for and against and that in making such arguments they might use both quantitative and qualitative data. Evaluation was more than methods. These ideas gained quick acceptance. I received personal messages from Lee Cronbach and Guba that the ideas had changed their thinking. Cronbach recast the validation of standardized tests as arguments, and Guba advanced naturalistic evaluation much further in work with Lincoln.

Meanwhile, our innovative center at Illinois, led by Stake and Tom Hastings, was experimenting with case studies, influenced by Barry MacDonald at East Anglia. In the Illinois evaluation, we had collected 40 different kinds of information on a stratified random sample of local gifted programs. How should we put that together? Bob encouraged us to combine these data into "portrayals." After writing some cases, I gave a folder of data to a colleague, who asked, "From what angle do I write this?" I said he didn't need an angle; he should just read the material and put it together. The result was incoherent. I realized you must have a point of view to make sense of the data.

The framework this time was to see evaluations as using voice, plot, story, imagery, metaphor, and other literary elements, based on ideas from literary theory, linguistics, and cognitive science. I applied these concepts to case studies and scientific studies. Even scientific studies tell a story. One example was a sociological analysis of research on drunk driving, showing how the studies had changed the image of drunk drivers from people who have one drink too many to falling-down habitual drunks. This change in image prompted strong legislation. Such elements I called "the vocabulary of action." They motivate people to act. The deeper idea is one of coherence and meaning, of conveying powerful, shared values through metaphors, images, and nonliteral means. Evaluation is more than literal truth.

In the 1970s, the field expanded rapidly. There were at least 60 evaluation models. Examining them, I saw that they were similar. I posited eight basic approaches and analyzed how these differed in methods and assumptions. From there, I critiqued the approaches against evaluative criteria—meta-evaluation, the evaluation of evaluations. I included all these papers in my 1980 validity book, generalizing that truth, beauty, and justice were three broad criteria by which evaluations could be judged (House, 1980). Evaluations should be true, coherent, and just. Untrue, incoherent, and unjust evaluations are invalid. You need adequacy in all three areas. In each case, I had encountered a practical problem and looked to other disciplines to provide insights.

What about change in education? The Illinois Gifted Program was a complex, very effective innovation. In 1974, I published a book on the politics of educational innovation, drawing on the Illinois study and on quantitative geography about how innovations spread (House, 1974). Educators do not respond to new ideas as rationalistic research and development models of change anticipate. Teachers blend new ideas with old practices, heavily influenced by the colleagues around them. The distinction is between reforms that enhance teacher skills and reforms that replace teacher practices with techniques handed down from authorities—craft versus technology. Adding to the craft perspective, Lapan and I wrote a book of advice for teachers (House & Lapan, 1978). In our view, the key to educational innovation was to influence teacher thinking and, through that, teacher practice. It wasn't advisable to ignore how teachers conceive their work.

In those early years, I participated in several projects that influenced me in the long term. One was a study of change in a Chicago school by a team that included Dan Lortie and Rochelle Mayer. This study deepened my insights about how complex school social structures are and how that affects reform. Another project was a 4-month visit to East Anglia, where I established connections with MacDonald and his colleagues, who were working on democratic evaluation via case studies. A third was a critique of the Follow Through program evaluation with Glass, Decker Walker, and Les McLean. Our panel concluded that the Follow Through findings depended on how closely programs fit narrowly defined outcome measures rather than broader criteria. Our conclusion: There was no simple answer as to which early childhood program was the best. Our critique dealt a blow to the presumption that government could conduct large evaluations to determine definitive answers for everyone everywhere. Evaluation findings don’t generalize that easily.

META-EVALUATION: CRITIQUING POLICIES AND PRACTICES (1980–1990)

After his election in 1980, Reagan began privatizing and deregulating many government functions. Concern about the public interest began giving way to private interests, backed by claims that the private sector would be more effective. I began the 1980s by conducting two high-profile meta-evaluations. The New York City mayor's office asked me to "audit" the evaluation of their controversial Promotional Gates Program, in which students were retained at grade level if they did not achieve prescribed scores on standardized tests. Those doubting the program's efficacy insisted on an outside audit of the school district's evaluation. Political pressures were intense. As I testified at a city council meeting, "No one in New York City seems to trust anyone else." Bob Linn, Jim Raths, and I wrote confidential reports for the chancellor's and mayor's offices. The district evaluation had problems such as failing to account for regression to the mean, thus claiming test gains when there were none. After a few rocky encounters, the district administrators decided that we were trying to help, and the evaluators corrected the errors. Eventually, the Village Voice obtained our confidential reports and featured them in a front-page story.

The second meta-evaluation was a critique of the evaluation of Jesse Jackson's PUSH/Excel program. Eleanor Farrar and I thought that the evaluators imposed on PUSH/Excel an inappropriate program model, one too rationalistic for a motivational enterprise. PUSH/Excel was like a church or coaching program, featuring loosely connected inspirational activities. Indeed, athletic teams rely heavily on similar motivational activities. (Also, the headlines generated by the evaluation—Jesse Jackson took government money and did not do what he said—did not match the findings.) I later wrote a book about the PUSH/Excel program, emphasizing the central issue of race (House, 1988).
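The regression-to-the-mean error is worth a concrete illustration. Below is a minimal simulation sketch, not drawn from the New York data; the score scale, error variance, and retention cutoff are illustrative assumptions. It shows that students selected because they scored below a cutoff will average higher on a retest even when a program has no effect at all, which is the kind of spurious "gain" the audit flagged.

```python
import numpy as np

# Illustrative assumptions: true ability ~ N(50, 10), test error ~ N(0, 8),
# and students scoring below 40 on the pretest are retained (selected).
rng = np.random.default_rng(0)
n = 100_000
true_ability = rng.normal(50, 10, n)
pretest = true_ability + rng.normal(0, 8, n)
retained = pretest < 40

# Posttest with NO program effect: same ability, fresh measurement error.
posttest = true_ability + rng.normal(0, 8, n)

print(f"Retained group pretest mean:  {pretest[retained].mean():.1f}")
print(f"Retained group posttest mean: {posttest[retained].mean():.1f}")
# The posttest mean comes out several points higher than the pretest mean even
# though nothing changed: low scorers were selected partly for bad luck on the
# pretest, and that luck does not repeat. Reading the difference as a program
# "gain" overstates effectiveness unless regression to the mean is modeled.
```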

The chief evaluator for PUSH/Excel was Charles Murray, who published Losing Ground (Murray, 1984) a few years later. This work claimed that Great Society programs made their beneficiaries worse off rather than better. Murray estimated the effects of Great Society programs by comparing before-and-after data in several areas. Unfortunately, Murray's data analyses were badly flawed. In the education analysis, I discovered that he had used nonstandardized means for his critical measures, an egregious error. Bill Madura and I demonstrated that his analysis of unemployment was incorrect and misleading, a result achieved by leaving out key employment data. Murray's analyses seemed shaped to fit his message rather than the other way around. In spite of severe scholarly shortcomings, the book's message drew raves from neoconservatives and from a White House eager to discredit the Great Society efforts.

Losing Ground set the tone for the coming decades of ideological studies purporting to be scholarly. Neoconservatives found that they could publish, in political journals, findings that did not meet rigorous scholarly standards, and that the media would interpret these findings as social science—especially if the studies had lots of numbers. Journalists did not have the capacity to assess the statistics. Privately funded conservative think tanks became major sources of reform ideas. Education reforms became increasingly punitive, imposed on teachers and students and justified by pseudo–social science.

During the 1980s, I extended the craft perspective by writing papers on teacher thinking, teacher appraisal, and how to improve the insights of teachers as they direct their classrooms, coauthored with Lapan, Sandra Mathison, and Robin McTaggart. Cronbach (1982) influenced how we construed the validity of teacher inferences. Ultimately, I did not extend this work as far as I had intended, which was to integrate the craft perspective with evaluation thinking. Seeking educational improvement through enhancing teacher skills was being supplanted. Coercing teachers with standardized test scores became the reform focus for the next several presidents.

Looking back on the 1980s, perhaps I spent too much time fighting neoconservative ideas. In retrospect, many of these scholars were not influenced by rational argument. Rather, they were funded to produce certain findings, and they did. Ideological positions are not affected by discordant data. The privatizing, deregulating, and de-professionalizing policies they supported are taking their toll now in financial crises, a deteriorating infrastructure, and increasing social discord and stratification. Sometimes you have to take a stand even when you know your view won’t prevail.

EXPLORING EVALUATION FRONTIERS (1990–2000)

During the Clinton years, privatization and deregulation continued—for example, repealing the Glass–Steagall Act separating investment banking from other banking activities and refusing to regulate the burgeoning derivatives trade—which led directly to the financial crisis of 2008 (Roubini & Mihm, 2010; Stiglitz, 2010). Clinton and Gore also tried "reinventing" government by making it the manager rather than the producer of social services. In such a scheme, evaluators would supply information to managers.

During this decade, I explored the institutional nature of evaluation. I spent 3 months in Spain, a culture different from any I had experienced. Curiosity led me to the Annales historians, particularly Braudel's (1981, 1982, 1983) history of capitalism as an institution developing over centuries. These ideas provided a map across time and societies in which I could place my own society and evaluation. I portrayed evaluation as a developing social institution in Professional Evaluation (House, 1993). My idea was that at some stage of capitalist development, government activities must be further justified and that professional evaluation emerges to play a legitimating role (which is how mandated evaluation of Great Society programs began).

In 1993, Sharon Rallis and Chip Reichardt attempted to end the quantitative–qualitative dispute and asked me to speak on the topic at the American Evaluation Association (AEA). I had used scientific realism as a framework for integrating approaches (Bhaskar, 1975; House, 1991). If there is a substantive real world (and not just different perceptions), quantitative and qualitative inquiries must be ways of looking at the same thing and hence compatible at some level. There is no reason to claim the superiority of one method over another. Methods of inquiry depend on which aspect of reality one is investigating. Methods differ depending on the substance explored, but there is one complex reality of which evaluators are a part. Being immersed in that reality affects how people think about it. Indeed, actively participating enables people to think about it.

A new adventure began when Ken Travers at the National Science Foundation asked me to assist his research and evaluation unit. I considered the National Science Foundation the best federal agency and was not disappointed. I served on committees, interviewed staff, and became a participant observer of how evaluation works inside the agency. In addition to its own evaluations, the unit oversaw the first review of science, math, and technology education across all federal departments. Practical problems, like finding contractors, led me to transaction cost economics as an explanation for how evaluation markets function. I developed a framework to appraise prospective innovations using the factors that characterize transaction costs in some markets (bounded rationality, opportunism, and asset specificity), based on Williamson's (1985) work, for which he later won the Nobel Prize in Economics in 2009 (House, 1998).

At the end of the decade, I concentrated on values and democratic evaluation. During the 1990s, Ove Karlsson spent considerable time in Colorado discussing evaluation politics with me. Continued contacts in Sweden and Norway reinforced Scandinavian egalitarian ideas. In 1999, Ken Howe and I published a book on values in evaluation, bringing together ideas on social justice, Karlsson's Scandinavian egalitarianism, the pragmatism of Dewey and Quine, the British ideas of MacDonald, and work on deliberative democracy by political scientists and philosophers (House & Howe, 1999). Among evaluators, Scriven's (1976) influence was particularly strong regarding the objectivity of value judgments. Many evaluators view value judgments as subjective. In our conception, evaluators can arrive at (relatively) unbiased evaluative conclusions by including the views, perspectives, and interests of relevant stakeholders; conducting a dialogue with them; and deliberating together on the results. Evaluative findings can be "objective" in the sense of being relatively free of biases, including stakeholder biases as well as more traditional biases.

An additional rationale for the approach derives from considering hundreds of years of racism in the United States. Racism has not gone away; it has gone underground. In my experience, in a racist democracy racism takes disguised forms because citizens do not want to admit discrimination even to themselves. Treating minority students as explicitly “different” is no longer acceptable in most places. What happens is that policies and programs are promulgated that purport to help the students but, in fact, disadvantage them further. At some level, they are treated as different in ways that are damaging. In other words, there is considerable self-deception. One remedy is to have minority interests represented in evaluations to guard against such possibilities.

SEMIREFLECTING IN SEMIRETIREMENT (2000–2010)

In the new century, Bush pushed through even more radical privatizing and deregulating policies. Attention to private interests, rather than the public interest, became paramount. In education, privatization, deregulation, and de-professionalization crossed new boundaries. Private foundations and other agents of concentrated wealth promoted and sponsored many of these changes. As income and wealth distribution became increasingly unequal, those with power found it important to differentiate education to match an increasingly stratified social class structure. Even evaluation began to be privatized and controlled by private entities for their own ends (for additional influences, see Glass, 2008).

I began the century at the Center for Advanced Study in the Behavioral Sciences, introducing evaluation to colleagues there by explaining how changing conceptions of causes, values, and politics had shaped the field. I handled causes and values analytically, but I presented politics in a case study, a storytelling technique I later transformed into fiction by writing a novel about evaluation politics (House, 2007). I portrayed the political and ethical challenges evaluators face in what I call an educational novel, fiction deliberately constructed to educate students on substantive issues while entertaining them. My major evaluation project of the decade was monitoring the Denver bilingual program. Denver schools were under federal court order to provide Spanish language services for 15,000 immigrant children who did not speak English. Judge Matsch needed someone to monitor the implementation of the program agreed to by the school district and the plaintiffs—the Congress of Hispanic Educators and the U.S. Justice Department. I anticipated an intensely political evaluation that might employ deliberative democratic principles.

I established a committee representing the contending parties. As I collected data from schools, I fed this information to the committee. We discussed progress in implementing the program, and when we had significant disagreements, we collected more data to resolve them. As evaluators, we insisted on standards for data collection and analysis, but the stakeholders shaped the evaluation in part. In my view, the findings should be more accurate since we tapped the knowledge of those in and around the program, as well as traditional data sources.

During the study, acrimony among stakeholders lessened, and implementation proceeded in an orderly manner, albeit slower than planned. At the end, there were still disagreements, but we also had a successful implementation informed by data. For a few years, I had been considering semiretirement so that I could spend more time overseas, do other writing, and focus more on financial investing. I began investing in the early 1990s, when I first thought about retirement (following a long meeting in which faculty members complained about not being appreciated). There are remarkable similarities between evaluation and investing, and I have derived many insights about evaluation from the finance and economics literature. Like evaluation, investing requires controlling emotions and evaluating situations in which there is overwhelming yet incomplete information. (I also wanted to leave something for my descendants other than several filing cabinets of reprints.)

At the same time, the evaluation community was important to me. It had shaped my working and social life. Staying involved with a few articles, speeches, and activities helped me stay in touch. Each year, I spend several months overseas—for example, northern winters in Australia, a place I admire for its egalitarianism, levelheadedness, and laid-back lifestyle. In 2006, Gary Henry and Mel Mark asked me to talk at AEA about the consequences of evaluation. I had been wondering about the frequent renunciations of findings from pharmaceutical drug evaluations. What was wrong with these studies? On investigation, I discovered that drug companies had gained control over many aspects of the evaluations and used their influence to produce findings favorable to their drugs, sometimes producing incorrect findings. Conflict of interest of the evaluators had become a threat—in fact, a serious threat to the field. This was another effect of privatization and unrestrained self-interest.

At the end of the decade, Leslie Cooksy, president of AEA, chose the quality of evaluation as the 2010 conference theme, citing my 1980 validity book on truth, beauty, and justice. The occasion enticed me to look back at work I had done over the years, reflections I have elaborated here. Truth, beauty, and justice are still appropriate criteria for judging the validity of evaluations, even for drug evaluations conducted 30 years later, though the social context has changed and the meaning of truth, beauty, and justice has shifted.

LOOKING BACK AND LOOKING AHEAD

Looking back, what influenced my ideas? I built directly on the ideas of some scholars, both those within evaluation, like Stake and Scriven, and those outside, like Rawls, Braudel, and Williamson. A few friends and colleagues shaped my ideas by reacting directly to my work, notably Glass, Lapan, and Howe, my Colorado colleague, and on occasion MacDonald and Karlsson overseas. The sociologist Dave Harvey, a hometown friend, provided valuable guidance over the years by reminding me where I came from. The influence of these people is greatly underestimated in this account. My work would have been much worse without questions like “This doesn’t make sense,” even if some of it still does not make sense. There were also useful discussions with certain colleagues, including Marv Alkin, editor of this volume (for a sample, see Alkin, 1990). And there was the influence of spending time overseas, especially in England, Spain, Sweden, and Australia.

As I become older, I find it important to listen to younger scholars. Having a long career means that you have made many mistakes, learned many lessons, and solved many problems. However, as the social context changes, these lessons become less relevant. This effect is noteworthy in finance. Having learned the secrets of investing success in a U.S.-centric world, investing gurus are having a difficult time adjusting to a global economy focused on Asia. Cronbach (1982) said that generalizations decay. The trouble is you’re not sure which ones.

One of my traits has been a strong interest in new ideas, especially new concepts that explain puzzling phenomena, and in arranging those concepts into coherent patterns. Seeking coherence in explanations, in the meaning of phenomena, and in the meaning of life has been a driving motive. How do these things fit together? What do they mean? Once I find the answers, I tend to lose interest and move on (not a good scholarly trait). These tendencies are matters of personality as much as mind. Introducing me at the Canadian Evaluation Society in 2004, Alan Ryan said, "Throughout his long and distinguished career, Ernest House has continuously stressed the moral responsibility of evaluators. His social activist perspective has time and again alerted us to the dangers of being seduced by the agendas of those in power." This personality trait comes from my family. My mother was the best person I ever knew. My father and his four brothers were the toughest. Sometimes I see things others don't see and will say things others are afraid to say.

Of course, as we know, being outspoken comes at a cost. Keynes (1936/1997) wrote, “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally” (p. 158). Career risk is a major vulnerability of professionals. Professionals fear damaging their careers by taking stands different from their colleagues or contrary to those wielding power. I have been threatened with lawsuits and loss of my job and offered thinly veiled bribes. No doubt I would have won more prizes, had better jobs, and made more money if I had played along. But that’s not who I am.

As I look at those colleagues who shaped evaluation in its early decades, many have been people similarly willing to risk their careers by exploring uncharted ideas and, most important, by taking a principled stand against those subverting evaluation's integrity. Looking ahead to an ethically challenged era in which private interests trump the public interest, the pressures to compromise evaluations will intensify. Defending the integrity of the field will require more than intellect; it will require character.

REFERENCES

Alkin, M. C. (1990). Debates on evaluation. Newbury Park, CA: Sage.

Bhaskar, R. (1975). A realist theory of science. Sussex, England: Harvester Press.

Braudel, F. (1981, 1982, 1983). Civilization and capitalism: 15th-18th century. New York, NY: Harper & Row.

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, CA: Jossey-Bass.

Glass, G. V. (Ed.). (1976). Evaluation studies review annual (Vol. 1). Beverly Hills, CA: Sage.

Glass, G. V. (2008). Fertilizers, pills, and magnetic strips. Charlotte, NC: Information Age.

House, E. R. (1974). The politics of educational innovation. Berkeley, CA: McCutchan.

House, E. R. (1980). Evaluating with validity. Beverly Hills, CA: Sage. (Spanish edition: Evaluación, ética y poder. Madrid, Spain: Morata, 1994. Reprinted 2010, Charlotte, NC: Information Age)

House, E. R. (1988). Jesse Jackson and the politics of charisma: The rise and fall of the PUSH/Excel program. Boulder, CO: Westview Press.

House, E. R. (1991). Realism in research. Educational Researcher, 20(5), 21–26.

House, E. R. (1993). Professional evaluation: Social impact and political consequences. Newbury Park, CA: Sage.

House, E. R. (1998). Schools for sale: Why free market policies won’t improve America’s schools and what will. New York, NY: Teachers College Press.

House, E. R. (2007). Regression to the mean: A novel of evaluation politics. Charlotte, NC: Information Age.

House, E. R., & Howe, K. R. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage.

House, E. R., & Lapan, S. G. (1978). Survival in the classroom. Boston, MA: Allyn & Bacon.

Keynes, J. M. (1997). The general theory of employment, interest, and money. Amherst, NY: Prometheus Books. (Original work published 1936)

Murray, C. (1984). Losing ground: American social policy 1950–1980. New York, NY: Basic Books.

Perelman, C., & Olbrechts-Tyteca, L. (1969). The new rhetoric: A treatise on argumentation. Notre Dame, IN: University of Notre Dame Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Roubini, N., & Mihm, S. (2010). Crisis economics. New York, NY: Penguin Books.

Scriven, M. (1976). Evaluation bias and its control. In G. V. Glass (Ed.), Evaluation studies review annual (Vol. 1, pp. 119–139). Beverly Hills, CA: Sage.

Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523–540.

Stiglitz, J. E. (2010). Freefall. New York, NY: W. W. Norton.

Williamson, O. E. (1985). The economic institutions of capitalism: Firms, markets, and relational contracting. New York, NY: Free Press.

Note: Thanks to Steve Lapan and Gene Glass for helpful comments on this chapter.
