July 15, 2013

Democratic Evaluation, Multiculturalism, and Realpolitik

Ernest R. House

Paper delivered to Aotearoa New Zealand Evaluation Association, Rotorua, NZ

It’s tough to do evaluations in settings in which there are strong conflicting perspectives, values, and interests. And perhaps nowhere are these problems more acute than where there are class, cultural, and ethnic differences. One way of dealing with these differences is to include them in the evaluation. I’ll discuss an evaluation in which such differences were pronounced.

In 1999, I received a phone call from a lawyer in Boston representing the Congress of Hispanic Educators. Years before, this group had sued the Denver Public Schools (DPS) for segregating minority students. The original lawsuit involved bussing students, but had evolved into a suit about language and culture. The federal court now required the Denver schools to provide native language instruction to students who did not understand English. The US Justice Department joined the suit as co-plaintiff. Each group--the school district, the Hispanic educators, and the Justice Department--was represented by lawyers.

The new court agreement specified educational services the district should provide. Judge Richard Matsch (who presided over the Oklahoma City bombing trial) needed someone to monitor whether the district was fulfilling its obligations. After consulting all parties, the judge appointed me court monitor. I envisioned a monitoring evaluation.

Language, Class, and Cultural Politics

Denver had a school population of seventy thousand students; fifteen thousand could not speak English. These were mostly Spanish speakers, immigrants from Mexico and Latin America. Many were illegal immigrants who had come to Denver during the boom economy of the 1990s, as the Colorado population increased thirty percent. Their parents built houses, cooked food, washed cars, and performed basic manual labor. It is the policy of most American schools to accept students who show up, not to question their citizenship.

The city itself was dominated by an Anglo business establishment, and Anglos displayed ambivalent attitudes towards these immigrants. Attempts had been made to declare English the state’s official language to curtail Spanish language use, but these efforts had failed so far. The Denver program was named the English Language Acquisition (ELA) program to signal that its purpose was to teach English, not maintain Spanish.

The old Latino part of town had become so crowded that Latinos were moving to other parts of the city. African-Americans, long established residents, were being pushed out of their neighborhoods. Tensions between Blacks and Latinos were high since some Blacks saw the Latinos taking affordable housing and undercutting them for jobs. Political power was shifting also as tens of thousands of Latinos moved in. When the project began, the governing school board was dominated by Anglos, but two Latinas had just been elected.

Furthermore, many teachers and administrators in the Denver schools were Latinos who had come from southern Colorado and northern New Mexico, descendants of the old Santa Fe culture. Santa Fe, founded in 1610, is the oldest capital city in the US. These people have a cultural identity predating Anglo settlement by centuries and consider themselves Spanish-Americans, not Mexican. Other teachers and administrators were Chicanos, US-born descendants of Mexicans whose ancestors had come from Mexico generations before, usually as migrants to pick crops.

These two Latino groups spoke both English and Spanish and staffed many educational positions. Although they identified with the new immigrants, they also saw them as different. The immigrants came from poor rural villages and were uneducated in any language. Ethnic, cultural, and class differences among the Latinos generated misunderstandings. For example, immigrants often took their children out of school for weeks to return to home villages in Mexico for fiestas, a practice that infuriated the professional educators, who saw loss of a month of school time as serious. Some immigrant parents wanted their children to go directly into English classes so they could learn English, quit school, find jobs, and help the family. Most, however, wanted their children in Spanish classes first, then English. According to the court agreement, parents had the choice as to what their children should do.

Over the years the lawsuit dragged on, school officials and plaintiffs had deepened their distrust of each other. Each side considered the other suspect. Some school personnel suspected the plaintiffs wanted to build a Latino political base in Denver; some plaintiffs thought the schools did not want to provide services for these students. In my early encounters, these hostile attitudes came through forcefully. I was told that the other side was untrustworthy. Personalities rubbed each other the wrong way: Such-and-such was “unprofessional,” “a snake in the grass.” Much stronger language was expressed. When the monitoring began, passions were inflamed.

The Evaluation Plan

My evaluation plan was to reduce distrust by involving stakeholders in the evaluation and making my actions transparent. I did not want any group to see me as siding with the others or being duplicitous. Circumstances were ripe for misunderstanding. When I announced I was trying to make the monitoring transparent, one administrator told me that was a mistake. Why didn’t I just act with the authority of the court? The other side had no choice but to accept the findings. In any case, they were not going to change.

I brought the representatives of the contending parties face-to-face twice a year to discuss the evaluation findings and allow the parties input into the process. Since many participants were lawyers, adversarial by occupation (and some would say nasty by disposition), the meetings had some contentious encounters. Although I set the agenda, I could not anticipate what would occur when they met. I structured interaction around information and issues we and they thought significant. In general, the sessions were cordial, whatever people said about each other privately.

In the beginning I intended to use data from the district’s new management information system to find schools that appeared deficient. However, the data system fell far behind schedule. Instead, from the court agreement, I constructed a checklist based on key elements of the program. By visiting schools, I could judge whether each school was in compliance. I submitted the checklist to all parties to confirm that these features were the most significant. People made useful suggestions, and I visited a few schools to see what collecting data would be like. With more than one hundred schools, there was no way I could collect the data myself. Sampling schools did not seem viable either since whether each school was in compliance was an issue.

I hired two retired school principals from Denver to visit and rate the schools using the checklist. It would have been easier to use graduate students, but grad students would have had little credibility with administrators and teachers, and school principals could easily have fooled them. By contrast, the former principals knew how the schools worked. When they were fed a line, they could sense it; they had been in similar positions and knew the program, personnel, and students. Since they were former principals, the central office trusted them. Since they were Latinas, spoke fluent Spanish, and had supported the ELA program from the beginning, the plaintiffs also trusted them.

The principals did lack research experience, and sometimes they offered advice to the schools, reverting to their former roles as helpers. To counter this, I held regular meetings to discuss their findings school by school and remind them they were evaluators. It is surprising how well people can assume a role once the expectations are clear. To help separate their opinions from the monitoring, I added a section where they provided their professional observations, with the understanding that these comments did not have to be based on the legal agreement. That helped them (and me) sort out their ratings from other (often invaluable) insights. They felt better because their insights did not go unrecorded or unrecognized.

Discussing what was going on with the former principals helped me construct an image of how the program was functioning. I had insights into what was happening and why. For example, we might discover that a principal was undercounting the number of eligible students deliberately. Why would the principal do that? My colleagues might suggest that the principal was concerned about losing veteran teachers who had been with her a long time. The legal agreement stipulated that when numbers of eligible students reached a certain level, Spanish language teachers must be introduced to provide instruction, which could mean that regular teachers would have to transfer. The principal was protecting her veteran teachers. Although we couldn’t solve such a problem, we could seek solutions with the school district.

Enlisting these two former principals was one of the best things I did. Not only could they communicate with immigrant parents, teachers, and administrators, they could also detect that things were awry when I had no idea. I could not have obtained this knowledge by traditional methods. As a check on our site visits, I encouraged ELA program staff to challenge our findings. They were forthcoming when they thought we were wrong, and we hashed out disagreements face-to-face. Eventually, ELA staff developed their own checklist so they could anticipate which schools had problems.

As the management information system improved, I also developed quantitative indicators of implementation based on district data. I discussed these indicators with all parties until everyone accepted them as measures of progress. The development of the information system was slow and tortuous, reflecting how difficult it is to obtain accurate information in such organizations. Data had to be collected at the school level, entered into the data system, and aggregated. Errors plagued the process every step of the way. It cost the district a huge effort to obtain reliable data, but they did manage it eventually.

When the data were reasonably accurate, our indicators showed gradual improvement year by year. Improvement was slower than anyone anticipated. The indicators also showed which schools were in trouble. By combining our on-site checklists with the school-by-school indicators, we had a cross check on where things stood. When schools looked bad, we revisited them to look again, and the district sent staff members to these schools to tackle problems.

Constant Change

Constant change in the school district was a complicating disturbance. School principals were retiring, resigning, and being replaced or promoted. New principals meant a new situation, and we revisited schools that had new principals. Students dropped out, moved midterm, went back to Mexico, or disappeared from the rolls. Some schools had more than 100% student turnover.

More surprising was the change in district superintendents. In the first six years of the monitoring, there were five different superintendents in overall charge of the Denver school district. Principals were accustomed to running their buildings without interference, and new superintendents disrupted long-standing patterns of behavior. Each superintendent had a different style and goals. Each reorganized the district. With each, I had to establish a professional relationship, give them time to understand my role, and figure out how their plans would affect the monitoring.

ELA program directors also changed. There were four. I admired these people and the difficult challenges they faced. They were responsible for implementing the agreement, yet had no line authority. They could persuade or refer problems up the chain of command, but they could not order principals or teachers to do anything. Yet, when things went wrong, they were held accountable. Some lasted a long time; some did not. Establishing a working relationship with them was critical.

Meanwhile, I met with interested groups in the community, including militant ones, both those opposed to the program and those wanting Spanish in all schools. I listened, responded to their concerns, and included some of their ideas in my investigations. I followed up on information these groups provided about individual schools. I turned down no one wanting to offer views, though I did not accept their information at face value. For example, the most militant Latino group wanted cultural maintenance of Spanish, as well as English. I met with the leader in a café that served as political headquarters and listened to her concerns. There was little I could do about cultural maintenance since the court agreement precluded it. However, I did investigate practices that reinforced her view that the district was insincere. I also considered holding open public hearings to allow anyone to offer opinions but decided against it. I was afraid such hearings would degenerate into shouting matches.

My periodic, written reports went to the court. As court documents, the reports were public information the media seized on. I asked the district and plaintiffs how they thought I should handle the media. Bilingual education was such a hot topic I knew the reporters would be after comments. All preferred that I not talk to the media. In their view, it would inflame the situation and make implementation more difficult. I took their advice, referred inquiries to the parties, and made no comments outside my reports. The media accepted this stance reluctantly and regularly quoted my reports in the newspapers.

After six years of monitoring, the program was almost fully implemented. The conflict seemed defused, at least for the time being. The opposing parties could meet in a room without casting insults at each other. I am not saying the groups loved each other, but they could manage their business together rationally. The strife and distrust were much less than when we started. The school district had established its own monitoring system. Even so, the plaintiffs were reluctant to let the district out of the court ruling.

District politics had shifted, with more Latinos elected to the school board. In fact, the daughter of the man who had sponsored the original lawsuit became chair of the governing board, and the last superintendent (a lawyer who had been the Denver mayor’s chief of staff) adopted a strong pro-Latino attitude. Under these circumstances, one plaintiff lawyer and the district lawyer thought the monitoring no longer necessary. The plaintiff lawyer had long disagreed with us about whether the district was forcing students into mainstream classes inappropriately. We could find no evidence for that, but he thought we didn’t look in the right places. It seemed advisable to end the monitoring since one side had gained the upper hand politically, and we had already been evaluating for three years longer than planned. I had a mixed reaction to ending the monitoring. On one hand, I wanted to see the lawsuit resolved in court, but that wasn’t my assignment, and the district had no motivation to take the case back to court. On the other hand, the district and plaintiffs had reached a new level of understanding and cooperation. They could work things out without a third party. Time to quit.

Deliberative Democratic Evaluation

The evaluation approach employed in Denver is called deliberative democratic evaluation. I think it’s best to regard the approach as exploratory rather than definitive. Its three principles are inclusion of all relevant stakeholder views, values, and interests; dialogue among evaluators and stakeholders so they understand one another thoroughly; and deliberation with and by all parties (House and Howe, 1999). The approach encourages the participation of major stakeholders, with stakeholder views tested against other views and the available evidence. The legitimacy of the approach rests on fair, inclusive, and open procedures for deliberation, where discussion is not intimidated or manipulated (Stein, 2001). (See checklist at end or on the website of the Evaluation Center at Western Michigan University: www.wmich.edu/evalctr/checklists.)

The first principle is inclusion of all relevant major interests. It would not be right for evaluators to provide evaluations only to the most powerful or the highest bidder. That would bias evaluations towards special interests. Nor would it be right to let sponsors revise findings and delete conclusions they don’t like to maximize their own interests. Inclusion of all major interests is mandatory. Otherwise, we have stakeholder bias, which usually means bias in favor of the most powerful. This principle does not mean evaluators must take all stakeholder views at face value. No doubt some views are better founded. Evaluation should contribute to public consideration on the basis of merit, not power.

The second principle is dialogue. Evaluators should not presume to know what others think without engaging them in dialogue. Too often, evaluators take the perspectives of sponsors as definitive or presume they know how things stand. Too often, evaluators do not know what stakeholders think even when they believe they do. One safeguard against such error is to engage in dialogue with all stakeholders. This admonition comes from minority and feminist spokespeople in particular, who have said repeatedly, “You only think you know what we think. You don’t!” Again, evaluators need not take all views at face value. But they should first hear and understand all views in order to assess them.

A second task of dialogue is to discover “real” interests. Evaluators should not assume what the interests of the parties are nor take those interests as set in stone. Stakeholders may change their minds about where their interests lie after they examine other views. There is a serious concern that engaging in extensive dialogue will cause evaluators to be biased towards some stakeholders, perhaps be too sympathetic to program developers or sponsors (Scriven, 1973). Certainly, that is a significant danger, but being ignorant of stakeholder views or misunderstanding their views are also dangers.

The third principle is deliberation. Deliberation is a cognitive process grounded in reasons, evidence, and valid arguments, including the methodological canons of evaluation. The special expertise of evaluators plays a critical role. Value claims (beliefs) are subject to rational analysis and justification. Perspectives, values, and interests are not taken as fixed or unquestioned, but are examined through rational processes. Deliberation requires that participants act in good faith and reason in ways others will find acceptable, while being open to revising their own views (Howe and MacGillivary).

If inclusion and dialogue are achieved, but deliberation is not, we might have authentic interests represented, but have the issues inadequately considered. If inclusion and deliberation are achieved but dialogue is inadequate, we might misrepresent participant interests and views, resulting in conclusions based on false interests. Finally, if dialogue and deliberation are achieved, but not all stakeholders are included, the evaluation may be biased towards special interests—stakeholder bias. The democratic aspiration is to arrive at unbiased conclusions by processing all relevant information from all parties.

No doubt, such an approach extends the evaluators’ role beyond the traditional. Since many views, values, and interests are considered, the expectation is that conclusions from the evaluation will be sounder and more acceptable. The approach employs traditional data collection techniques and analysis, as well as procedures for dealing with stakeholders. These procedures may be as familiar as focus groups or as unusual as involving stakeholders in constructing conclusions. No particular techniques are required. What works in one place to facilitate involvement, dialogue, and deliberation might not work in another.

The approach is derived from past experiences with politicized evaluations, evaluation theory, and philosophy and political science (e.g., Gutmann and Thompson, 1996; Elster, 1998; Dryzek, 2000). Compatible or similar ideas have been advanced by MacDonald (MacDonald and Kushner, 2004), Simons (1987), Kushner (2000), and Saunders in the UK, MacDonald being the first to develop a conception of democratic evaluation; by Karlsson (1996, 2003), Segerholm (2003), Hanberger (2001), Murray (2002), and Vedung in Sweden, Krogstrup (2003) in Denmark, and Monsen in Norway, the Scandinavians having carried democratic concepts farther than anywhere else. In Australia, Elsworth and Rogers have introduced such ideas into their work. In Canada, Cousins and Whitmore (1998) have stressed participatory evaluation, and in the US Greene (2003), Schwandt (2003), King (1998), Ryan (Ryan and DeStefano, 2000), Mark, Henry, and Julnes (2000), and Patton (2002) have expressed similar concerns. And Stafford Hood (Hood) has long advocated culturally relevant evaluations based on experience with African American communities. I will not attempt to summarize Hood’s approach; it would be better to have him address his approach in person. Certainly, I have learned something from all these people, though perhaps not enough.

Ten Quick Points

Cultural acceptability—There is no sense trying to do such evaluations in settings that are not democratic. It’s difficult enough without having the culture work against you. Democratic evaluation requires the underpinning of a democratic culture, and even within democratic societies there are significant differences. MacDonald’s original conception of democratic evaluation from the UK did not fit easily into American culture. Karlsson’s conception of democratic evaluation originated in Sweden, arguably the most democratic of all societies, where stakeholder involvement is more widely accepted than in the US or UK.

The deliberative democratic conception that Ken Howe and I developed was strongly influenced by MacDonald and Karlsson, but formulated in American circumstances. In the US politics and policies are driven increasingly by wealthy elites. Much of this elite influence is exercised through advertising, publicity, and control of agendas without consideration by the public. Resulting policies and programs favor special interests rather than the public interest. Scholars have addressed this problem by stressing deliberative democratic processes as ways of testing ideas. Howe and MacGillivary () have analyzed how deliberative democratic evaluation fits this political and philosophic framework.

When one steps outside democratic societies, cultures are too different to implement democratic approaches. For example, Karlsson (Karlsson and Segerholm) tried to employ deliberative democratic procedures in Russia and discovered that in Russian organizations information not coming from the top down has no legitimacy. Indeed, the very concept of professional evaluation is culturally bound.

Cultural diversity. Cultures are not internally uniform. The Latino community in Denver consisted of at least three separate groups. The descendants of the Santa Fe culture held professional positions and ambitions to send their children to universities. The new immigrants were trying to survive economically. Many immigrants wanted their children to learn English so they could quit school and get a job, an ambition antithetical to professional educators. The Latinos shared some views and values (family, language, and religion), but also held different values (educational, career, citizenship).

Faithful representation. Of course, such differences raise questions about who is representing whose views. In Denver Latino interests were advanced by lawyers, but the lawyers came from other social classes and ethnic groups. The lawyers had interests and views different from those they represented. As a practical matter one cannot involve all stakeholders directly. Faithful representation is a tough problem to sort out in democratic evaluation, as in democracy generally.

Authentic processes. There is a strong temptation for governments to pretend to want democratic involvement when they don’t. Too often, officials have already determined what their policies or programs are and only want to legitimate them. They hold public hearings adorned with the rhetoric of public involvement, but the process is for show and has little influence on what the government has determined. The ruse fools no one, partly because such attempts are, unfortunately, so common.

Structured interaction. DDE is directed at reaching sound evaluative conclusions. To accomplish this evaluators need structure. Discussions cannot let anyone express opinions at any time or proceed in an undisciplined fashion. Unfortunately, in trying to be fair, there is a temptation to abandon rules and structures, letting people vent their feelings and frustrations. Evaluation is not therapy, counseling, or “feel good” stuff. Unhampered emoting and rambling discursions result in loss of interest and withdrawal by participants who sense the process is going nowhere. For these reasons open public hearings are not usually productive either.

Issue focused. Keeping everyone focused on specific issues and bringing evidence, discussion, and deliberation to bear is a productive way of keeping things moving. It’s not necessary that everyone like each other or agree on all matters. In fact, that is unlikely. What is useful is for participants to agree on resolving specific issues. This process includes jointly determining what new evidence might shed light on contested issues. Focusing on issues, not feelings, is a better way to go.

Rules and principles. Evaluators need rules and principles for dealing with culturally different people. The rules should not be rigid or inflexible; hopefully, they are adjusted to the people and circumstances. On the other hand, one should not abandon all rules because different cultures are involved. Guiding principles are necessary. After all, deliberative democratic evaluation is democratic, not anarchic. Evaluators are operating within a democratic framework, not without a framework.

For example, in Denver I decided early that I would meet with any group that had a legitimate claim. That included the most militant, such as those who did not want Spanish language instruction at all and those who wanted bilingual schools. Listening to these groups was not popular with either the plaintiffs or the school officials. But I followed through with the principle of being informed about other views not represented in our discussions. Meetings with these groups also provided a chance to inform outsiders about what we were doing, perhaps reducing suspicions.

Collaboration. The evaluators’ role in deliberative evaluation is one of collaboration, not capitulation. In Denver I had procedures for processing data. Even though I wanted to involve major stakeholders, I could not cede these procedures to the contending parties without ruining the honesty of the evaluation, in my view. We reached a critical juncture when the plaintiffs wanted us evaluators not to make summary judgments but to give the plaintiffs the data and let them decide. I could see why they wanted to do this; they could decide whether schools were in compliance. But if I had ceded this point, the monitoring would have failed, in my view. The two parties would be unlikely to agree on something in the future they could not agree on now. I insisted we take the issue back to the judge and let him decide. I was ready to let the evaluation go at that point. I thought the judge would recognize the necessity of an impartial court monitor making these judgments, and I suspect the plaintiffs realized the judge would see it that way too. They desisted.

Balance of power. Power imbalances are a big threat to democratic dialogues. They disrupt and distort discussion. The powerful may dominate discussions as others are intimidated, silenced, or disengaged. There should be a rough balance of power among participants for reasoned discussions to occur. If one party has all the power, they can enforce their will. In Denver the situation changed during the evaluation. The district became controlled by those more favorable to Latino interests. This resulted from shifts in the governing board and district administration. Also, over time the evaluation itself helped reduce differences among the contesting parties.

Constraints on self-interest—Democratic processes work only if people do not act excessively in their own self-interest. In a sense, corruption undermines democracies: people grab what they can for themselves and manipulate democratic processes, and the public interest is lost. Frankly, I don’t know how to prevent this other than to promote an esprit that we are all in this together for our mutual advantage. If others do not see it that way and act selfishly or strategically, their behavior ruins the democratic processes. That’s true in democratic governments and in democratic evaluations. (See House/Care, Rawlsian constraints)

Limitations

Finally, it is worth mentioning a few limitations of deliberative democratic evaluation. It is no panacea, but rather an approach that may have merit under certain conditions, particularly where strong differences are operating. One weakness is that it underestimates opportunism. If participants have only their own self-interests in mind and act accordingly, there is not much that can be done. Of course, that is a weakness of other evaluation approaches as well. Pharmaceutical firms have used the rhetoric of randomized designs, even while manipulating drug evaluations to obtain favorable findings for their products.

Another limitation is that the approach can underplay the interest of the general public. In the way I conducted the Denver evaluation, I never found a way to involve the general public while engaging the major stakeholders. Although I informed the public through reports to the court, informing is not the same as involving. I suppose this is analogous to trade unions bargaining with corporations over wages. Sometimes the public interest is not served by the negotiations even if the participating parties are. Finally, the approach is modest in influence. I suppose that this is true for all evaluations. Sometimes evaluators forget in the midst of their studies that they are not the main actors in these dramas.

Deliberative Democratic Evaluation Checklist
Ernest R. House and Kenneth R. Howe

(Also see http://www.wmich.edu/evalctr for the checklist)

The purpose of this checklist is to guide evaluations from a deliberative democratic perspective. Such evaluation incorporates democratic processes within the evaluation to secure better conclusions. The aspiration is to construct valid conclusions where there are conflicting views. The approach extends impartiality by including relevant interests, values, and views so that conclusions can be unbiased in value as well as factual aspects. Relevant value positions are included, but are subject to criticism the way other findings are. Not all value claims are equally defensible. The evaluator is still responsible for unbiased data collection, analysis, and arriving at sound conclusions. The guiding principles are inclusion, dialogue, and deliberation, which work in tandem with the professional canons of research validity.

Principle 1: Inclusion
The evaluation study should consider the interests, values, and views of major stakeholders involved in the program or policy under review. This does not mean that every interest, value, or view need be given equal weight, only that all relevant ones should be considered in the design and conduct of the evaluation.

Principle 2: Dialogue
The evaluation study should encourage extensive dialogue with stakeholder groups and sometimes dialogue among stakeholders. The aspiration is to prevent misunderstanding of interests, values, and views. However, the evaluator is under no obligation to accept views at face value. Nor does understanding entail agreement. The evaluator is responsible for structuring the dialogue.

Principle 3: Deliberation
The evaluation study should provide for extensive deliberation in arriving at conclusions. The aspiration is to draw well-considered conclusions. Sometimes stakeholders might participate in the deliberations to discover their true interests. The evaluator is responsible for structuring the deliberation and for the validity of the conclusions.

These three principles might be implemented by addressing specific questions. The questions may overlap each other, as might dialogue and deliberation processes. For example, some procedures that encourage dialogue might also promote deliberation.

1. Inclusion

a. Whose interests are represented in the evaluation?

  • Specify the interests involved in the program and evaluation.
  • Identify relevant interests from the history of the program.
  • Consider important interests that emerge from the cultural context.
b. Are all major stakeholders represented?
  • Identify those interests not represented.
  • Seek ways of representing missing views.
  • Look for hidden commitments.
c. Should some stakeholders be excluded?
  • Review the reasons for excluding some stakeholders.
  • Consider if representatives represent their groups authentically.
  • Clarify the evaluator’s role in structuring the evaluation.
2. Dialogue

a. Do power imbalances distort or impede dialogue and deliberation?

  • Examine the situation from the participants’ point of view.
  • Consider whether participants will be forthcoming under the circumstances.
  • Consider whether some will exercise too much influence.
b. Are there procedures to control power imbalances?
  • Do not take sides with factions.
  • Partition vociferous factions, if necessary.
  • Balance excessive self-interests.
c. In what ways do stakeholders participate?
  • Secure commitments to rules and procedures in advance.
  • Structure the exchanges carefully around specific issues.
  • Structure forums suited to participant characteristics.
d. How authentic is the participation?
  • Do not organize merely symbolic interactions.
  • Address the concerns put forth.
  • Secure the views of all stakeholders.
e. How involved is the interaction?
  • Balance depth with breadth in participation.
  • Encourage receptivity to other views.
  • Insist on civil discourse.
3. Deliberation

a. Is there reflective deliberation?

  • Organize resources for deliberation.
  • Clarify the roles of participants.
  • Have expertise play critical roles where relevant.
b. How extensive is the deliberation?
  • Review the main criteria.
  • Account for all the information.
  • Introduce important issues neglected by stakeholders.
c. How well considered is the deliberation?
  • Fit all the data together coherently.
  • Consider likely possibilities and reduce to best.
  • Draw the best conclusions for this context.
