AEA Guiding Principles for Evaluators

AMERICAN EVALUATION ASSOCIATION
GUIDING PRINCIPLES FOR EVALUATORS
Revisions reflected herein ratified by the AEA membership, July 2004

Preface: Assumptions Concerning Development of Principles

A. Evaluation is a profession composed of persons with varying interests, potentially encompassing but not limited to the evaluation of programs, products, personnel, policy, performance, proposals, technology, research, theory, and even of evaluation itself. These principles are broadly intended to cover all kinds of evaluation. For external evaluations of public programs, they nearly always apply. However, it is impossible to write guiding principles that neatly fit every context in which evaluators work, and some evaluators will work in contexts in which a guideline cannot be followed. The Guiding Principles are not intended to constrain such evaluators in those cases. However, such exceptions should be made for good reason (e.g., legal prohibitions against releasing information to stakeholders), and evaluators who find themselves in such contexts should consult colleagues about how to proceed.

B. Based on differences in training, experience, and work settings, the profession of evaluation encompasses diverse perceptions about the primary purpose of evaluation. These include but are not limited to the following: bettering products, personnel, programs, organizations, governments, consumers and the public interest; contributing to informed decision making and more enlightened change; precipitating needed change; empowering all stakeholders by collecting data from them and engaging them in the evaluation process; and experiencing the excitement of new insights. Despite that diversity, the common ground is that evaluators aspire to construct and provide the best possible information that might bear on the value of whatever is being evaluated. The principles are intended to foster that primary aim.

C. The principles are intended to guide the professional practice of evaluators, and to inform evaluation clients and the general public about the principles they can expect to be upheld by professional evaluators. Of course, no statement of principles can anticipate all situations that arise in the practice of evaluation. However, principles are not just guidelines for reaction when something goes wrong or when a dilemma is found. Rather, principles should proactively guide the behaviors of professionals in everyday practice.

D. The purpose of documenting guiding principles is to foster continuing development of the profession of evaluation, and the socialization of its members. The principles are meant to stimulate discussion about the proper practice and use of evaluation among members of the profession, sponsors of evaluation, and others interested in evaluation.

E. The five principles proposed in this document are not independent, but overlap in many ways. Conversely, sometimes these principles will conflict, so that evaluators will have to choose among them. At such times evaluators must use their own values and knowledge of the setting to determine the appropriate response. Whenever a course of action is unclear, evaluators should solicit the advice of fellow evaluators about how to resolve the problem before deciding how to proceed.

F. These principles are intended to supersede any previous work on standards, principles, or ethics adopted by AEA or its two predecessor organizations, the Evaluation Research Society and the Evaluation Network. These principles are the official position of AEA on these matters.

G. These principles are not intended to replace standards supported by evaluators or by the other disciplines in which evaluators participate.

H. Each principle is illustrated by a number of statements to amplify the meaning of the overarching principle, and to provide guidance for its application. These illustrations are not meant to include all possible applications of that principle, nor to be viewed as rules that provide the basis for sanctioning violators.

I. These principles were developed in the context of Western cultures, particularly the United States, and so may reflect the experiences of that context. The relevance of these principles may vary across other cultures, and across subcultures within the United States.

J. These principles are part of an evolving process of self-examination by the profession, and should be revisited on a regular basis. Mechanisms might include officially-sponsored reviews of principles at annual meetings, and other forums for harvesting experience with the principles and their application. On a regular basis, but at least every five years, these principles ought to be examined for possible review and revision. In order to maintain association-wide awareness and relevance, all AEA members are encouraged to participate in this process.

The Principles 

A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries.

1.  To ensure the accuracy and credibility of the evaluative information they produce, evaluators should adhere to the highest technical standards appropriate to the methods they use. 

2.  Evaluators should explore with the client the shortcomings and strengths both of the various evaluation questions and the various approaches that might be used for answering those questions.

3.  Evaluators should communicate their methods and approaches accurately and in sufficient detail to allow others to understand, interpret and critique their work. They should make clear the limitations of an evaluation and its results. Evaluators should discuss in a contextually appropriate way those values, assumptions, theories, methods, results, and analyses significantly affecting the interpretation of the evaluative findings. These statements apply to all aspects of the evaluation, from its initial conceptualization to the eventual use of findings.

B. Competence: Evaluators provide competent performance to stakeholders.

1.  Evaluators should possess (or ensure that the evaluation team possesses) the education, abilities, skills and experience appropriate to undertake the tasks proposed in the evaluation.

2.  To ensure recognition, accurate interpretation and respect for diversity, evaluators should ensure that the members of the evaluation team collectively demonstrate cultural competence. Cultural competence would be reflected in evaluators seeking awareness of their own culturally based assumptions, their understanding of the worldviews of culturally different participants and stakeholders in the evaluation, and their use of appropriate evaluation strategies and skills in working with culturally different groups. Diversity may be in terms of race, ethnicity, gender, religion, socio-economic status, or other factors pertinent to the evaluation context.

3.  Evaluators should practice within the limits of their professional training and competence, and should decline to conduct evaluations that fall substantially outside those limits. When declining the commission or request is not feasible or appropriate, evaluators should make clear any significant limitations on the evaluation that might result. Evaluators should make every effort to gain the competence directly or through the assistance of others who possess the required expertise.

4.  Evaluators should continually seek to maintain and improve their competencies, in order to provide the highest level of performance in their evaluations. This continuing professional development might include formal coursework and workshops, self-study, evaluations of one’s own practice, and working with other evaluators to learn from their skills and expertise.

C. Integrity/Honesty: Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.

1.  Evaluators should negotiate honestly with clients and relevant stakeholders concerning the costs, tasks to be undertaken, limitations of methodology, scope of results likely to be obtained, and uses of data resulting from a specific evaluation. It is primarily the evaluator’s responsibility to initiate discussion and clarification of these matters, not the client’s.

2.  Before accepting an evaluation assignment, evaluators should disclose any roles or relationships they have that might pose a conflict of interest (or appearance of a conflict) with their role as an evaluator. If they proceed with the evaluation, the conflict(s) should be clearly articulated in reports of the evaluation results.

3.  Evaluators should record all changes made in the originally negotiated project plans, and the reasons why the changes were made. If those changes would significantly affect the scope and likely results of the evaluation, the evaluator should inform the client and other important stakeholders in a timely fashion (barring good reason to the contrary, before proceeding with further work) of the changes and their likely impact.

4.  Evaluators should be explicit about their own, their clients’, and other stakeholders’ interests and values concerning the conduct and outcomes of an evaluation.

5.  Evaluators should not misrepresent their procedures, data or findings. Within reasonable limits, they should attempt to prevent or correct misuse of their work by others.

6.  If evaluators determine that certain procedures or activities are likely to produce misleading evaluative information or conclusions, they have the responsibility to communicate their concerns and the reasons for them. If discussions with the client do not resolve these concerns, the evaluator should decline to conduct the evaluation. If declining the assignment is not feasible or appropriate, the evaluator should consult colleagues or relevant stakeholders about other proper ways to proceed. (Options might include discussions at a higher level, a dissenting cover letter or appendix, or refusal to sign the final document.)

7.  Evaluators should disclose all sources of financial support for an evaluation, and the source of the request for the evaluation.

D. Respect for People: Evaluators respect the security, dignity and self-worth of respondents, program participants, clients, and other evaluation stakeholders.

1.  Evaluators should seek a comprehensive understanding of the important contextual elements of the evaluation. Contextual factors that may influence the results of a study include geographic location, timing, political and social climate, economic conditions, and other relevant activities in progress at the same time.

2.  Evaluators should abide by current professional ethics, standards, and regulations regarding risks, harms, and burdens that might befall those participating in the evaluation; regarding informed consent for participation in evaluation; and regarding informing participants and clients about the scope and limits of confidentiality.

3.  Because justified negative or critical conclusions from an evaluation must be explicitly stated, evaluations sometimes produce results that harm client or stakeholder interests. Under this circumstance, evaluators should seek to maximize the benefits and reduce any unnecessary harms that might occur, provided this will not compromise the integrity of the evaluation findings. Evaluators should carefully judge when the benefits of doing the evaluation, or of performing certain evaluation procedures, should be forgone because of the risks or harms. To the extent possible, these issues should be anticipated during the negotiation of the evaluation.

4.  Knowing that evaluations may negatively affect the interests of some stakeholders, evaluators should conduct the evaluation and communicate its results in a way that clearly respects the stakeholders’ dignity and self-worth.

5.  Where feasible, evaluators should attempt to foster social equity in evaluation, so that those who give to the evaluation may benefit in return. For example, evaluators should seek to ensure that those who bear the burdens of contributing data and incurring any risks do so willingly, and that they have full knowledge of and opportunity to obtain any benefits of the evaluation. Program participants should be informed that their eligibility to receive services does not hinge on their participation in the evaluation.

6.  Evaluators have the responsibility to understand and respect differences among participants, such as differences in their culture, religion, gender, disability, age, sexual orientation and ethnicity, and to account for potential implications of these differences when planning, conducting, analyzing, and reporting evaluations.

E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

1.  When planning and reporting evaluations, evaluators should include relevant perspectives and interests of the full range of stakeholders.  

2.  Evaluators should consider not only the immediate operations and outcomes of whatever is being evaluated, but also its broad assumptions, implications and potential side effects.

3.  Freedom of information is essential in a democracy. Evaluators should allow all relevant stakeholders access to evaluative information in forms that respect people and honor promises of confidentiality.  Evaluators should actively disseminate information to stakeholders as resources allow. Communications that are tailored to a given stakeholder should include all results that may bear on interests of that stakeholder and refer to any other tailored communications to other stakeholders. In all cases, evaluators should strive to present results clearly and simply so that clients and other stakeholders can easily understand the evaluation process and results.

4.  Evaluators should maintain a balance between client needs and other needs. Evaluators necessarily have a special relationship with the client who funds or requests the evaluation. By virtue of that relationship, evaluators must strive to meet legitimate client needs whenever it is feasible and appropriate to do so. However, that relationship can also place evaluators in difficult dilemmas when client interests conflict with other interests, or when client interests conflict with the obligation of evaluators for systematic inquiry, competence, integrity, and respect for people. In these cases, evaluators should explicitly identify and discuss the conflicts with the client and relevant stakeholders, resolve them when possible, determine whether continued work on the evaluation is advisable if the conflicts cannot be resolved, and make clear any significant limitations on the evaluation that might result if the conflict is not resolved.

5.  Evaluators have obligations that encompass the public interest and good. These obligations are especially important when evaluators are supported by publicly-generated funds; but clear threats to the public good should never be ignored in any evaluation. Because the public interest and good are rarely the same as the interests of any particular group (including those of the client or funder), evaluators will usually have to go beyond analysis of particular stakeholder interests and consider the welfare of society as a whole.

Background

In 1986, the Evaluation Network (ENet) and the Evaluation Research Society (ERS) merged to create the American Evaluation Association. ERS had previously adopted a set of standards for program evaluation (published in New Directions for Program Evaluation in 1982); and both organizations had lent support to work of other organizations about evaluation guidelines. However, none of these standards or guidelines were officially adopted by AEA, nor were any other ethics, standards, or guiding principles put into place. Over the ensuing years, the need for such guiding principles was discussed by both the AEA Board and the AEA membership. Under the presidency of David Cordray in 1992, the AEA Board appointed a temporary committee chaired by Peter Rossi to examine whether AEA should address this matter in more detail. That committee issued a report to the AEA Board on November 4, 1992, recommending that AEA should pursue this matter further. The Board followed that recommendation, and on that date created a Task Force to develop a draft of guiding principles for evaluators. The Task Force members were:

William Shadish, Memphis State University (Chair)

Dianna Newman, University of Albany/SUNY

Mary Ann Scheirer, Private Practice

Chris Wye, National Academy of Public Administration

The AEA Board specifically instructed the Task Force to develop general guiding principles rather than specific standards of practice. The Task Force's report, issued in 1994, summarized its response to that charge.

Process of Development. Task Force members reviewed relevant documents from other professional societies, and then independently prepared and circulated drafts of material for use in this report. Initial and subsequent drafts (compiled by the Task Force chair) were discussed during conference calls, with revisions occurring after each call. Progress reports were presented at every AEA Board meeting during 1993. In addition, a draft of the guidelines was mailed to all AEA members in September 1993 with a request for feedback, and three symposia at the 1993 AEA annual conference were used to discuss and obtain further feedback. The Task Force considered all this feedback in a December 1993 conference call, and prepared a final draft in January 1994. This draft was presented and approved for membership vote at the January 1994 AEA Board meeting.

Resulting Principles. Given the diversity of interests and employment settings represented on the Task Force, it is noteworthy that Task Force members reached substantial agreement about the following five principles. The order of these principles does not imply priority among them; priority will vary by situation and evaluator role.

A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.

B. Competence: Evaluators provide competent performance to stakeholders.

C. Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.

D. Respect for People: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.

E. Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.

Recommendation for Continued Work. The Task Force also recommended that the AEA Board establish and support a mechanism for the continued development and dissemination of the Guiding Principles, to include formal reviews at least every five years.  The Principles were reviewed in 1999 through an EvalTalk survey, a panel review, and a comparison to the ethical principles of the Canadian and Australasian Evaluation Societies.  The 2000 Board affirmed this work and expanded dissemination of the Principles; however, the document was left unchanged. 

Process of the 2002-2003 Review and Revision. In January 2002 the AEA Board charged its standing Ethics Committee with developing and implementing a process for reviewing the Guiding Principles that would give AEA’s full membership multiple opportunities for comment. At its Spring 2002 meeting, the AEA Board approved the process, which was carried out during the ensuing months. It consisted of an online survey of the membership that drew 413 responses, a “Town Meeting” attended by approximately 40 members at the Evaluation 2002 Conference, and a compilation of stories about evaluators’ experiences with ethical concerns, told by AEA members and drawn from the American Journal of Evaluation. Detailed findings of all three sources of input were reported to the AEA Board in A Review of AEA’s Guiding Principles for Evaluators, submitted January 18, 2003.

In 2003 the Ethics Committee continued to welcome input and specifically solicited it from AEA’s Diversity Committee, Building Diversity Initiative, and Multi-Ethnic Issues Topical Interest Group. The first revision reflected the Committee’s consensus response to the sum of member input throughout 2002 and 2003. It was submitted to AEA’s past presidents, current board members, and the original framers of the Guiding Principles for comment. Twelve reviews were received and incorporated into a second revision, presented at the 2003 annual conference. Consensus opinions of approximately 25 members attending a Town Meeting are reflected in this, the third and final revision, which was approved by the Board in February 2004 for submission to the membership for ratification. The revisions were ratified by the membership in July of 2004.

The 2002 Ethics Committee members were:

Doris Redfield, Appalachia Educational Laboratory (Chair)

Deborah Bonnet, Lumina Foundation for Education

Katherine Ryan, University of Illinois at Urbana-Champaign

Anna Madison, University of Massachusetts, Boston

In 2003 the membership was expanded for the duration of the revision process:

Deborah Bonnet, Lumina Foundation for Education (Chair)

Doris Redfield, Appalachia Educational Laboratory

Katherine Ryan, University of Illinois at Urbana-Champaign

Gail Barrington, Barrington Research Group, Inc.

Elmima Johnson, National Science Foundation