
INFORMES E PARTICIPAÇÕES

Ethical issues of evaluation practice within the Brazilian political context*,1

Carlos Alberto Serpa (I); Thereza Penna Firme (II); Ana Carolina Letichevsky (III)

Cesgranrio Foundation:

(I) President

(II) Coordinator/Evaluation Center

(III) Statistician

Correspondência: gabinete@cesgranrio.org.br; therezapf@uol.com.br; anacarolina@cesgranrio.org.br

ABSTRACT

This paper analyzes a conflict frequently encountered by an evaluation professional working in the Brazilian context and its implications for the evaluation process. The challenge is to follow the ethical principles that guide a true evaluation, and yet untangle the interaction of all the actors within a complex political context, where: (a) the recognition and regulation of the evaluation profession leaves much to be desired; (b) a strong professional association of evaluators is yet to be formed; and (c) there is little empirical guidance that can enlighten the actors in the evaluation process. The conflict for the evaluator lies in implementing the principles and standards that guide the formal preparation of an evaluation professional, in the face of limited autonomy of decisions regarding the use of results and recommendations. We illustrate this conflict by describing three case examples of evaluations by the Cesgranrio Foundation that focused on social, educational and corporate programs.

Keywords: Ethical values. Evaluation professional. True evaluation.

RESUMO

Este trabalho avalia um conflito freqüentemente encontrado por um profissional de avaliação atuando no contexto brasileiro bem como suas respectivas implicações para o processo avaliativo. O desafio está em cumprir os princípios éticos que guiam uma verdadeira avaliação e que ainda qualificam as interações de todos os atores dentro de um complexo contexto político no qual: (a) o reconhecimento e a regularização da profissão de avaliador deixam muito a desejar; (b) uma sólida associação profissional de avaliadores está ainda por ser constituída e (c) temos precária orientação prática para prestar esclarecimento aos atores no processo avaliativo. O conflito para o avaliador está na implementação dos princípios e padrões que guiam a formação de um avaliador profissional em face da limitada autonomia de decisões em relação à utilização de resultados e recomendações. Ilustramos aqui este conflito descrevendo três exemplos de casos em avaliação realizados pela Fundação Cesgranrio, enfocando programas nas áreas social, educacional e empresarial.

Palavras-chave: Valores éticos. Profissional da avaliação. Verdadeira avaliação.

Introduction

Today's world is going through a crisis of moral and ethical values that affects the structure of society and its educational, social, corporate and other organizations. This crisis of values is a worldwide problem that appears more or less emphatically in different countries and in different sectors of society. In countries with socioeconomic inequalities, and therefore a lack of educational opportunities, one is likely to find people whose moral values and attitudes are not in accordance with the ethics expected from a true evaluation.

However, this is not a rule, since it is possible to find individuals with high moral principles at any social, economic and academic level. Seeking and finding strategies to meet this crisis is a task for all citizens; for this reason it is necessary to rely on scholars and professionals from different areas of knowledge who act in the different sectors of society. In Brazil this has stimulated efforts against corruption and tax evasion. The outlook is hopeful, considering that (a) evaluation is now strongly emerging as an official procedure for accountability in government, in international organizations, to investors, to academic communities and to society in general; (b) professionals in the area of evaluation are getting together in an attempt to provide organizational structures that allow the flow of information and thereby facilitate the creation of an evaluative culture in the country; and (c) there is a growing number of initiatives seeking to create mechanisms to train professionals in the area of evaluation. However, in a developing country with many social needs and demands, the first challenge in attaining an evaluative culture may be making people understand that a certain amount of resources must be used to evaluate actions. In general, the first objection is to ask: "Why evaluate? Wouldn't the resources be better used to implement actions?" Therefore the great challenge consists in explaining and convincing people of the importance of evaluation as a means to improve the quality of actions and minimize waste.

Evaluation Competence

Although the goal is ambitious, there is reason to believe that evaluation can help construct a new reality, as long as its process is present in the different sectors of society and counts on multidisciplinary teams of competent evaluators. By help we mean that the development of a truly evaluative process requires, among other aspects, (a) understanding and respecting the values involved in the process, (b) defining criteria of excellence, and (c) revealing strengths (which must be preserved and reinforced) and weaknesses (which must be overcome). Thus, evaluation has a fundamental role in understanding values, systematizing criteria of excellence, and identifying the aspects that are going well as well as those that need improvement. Evaluation may even recommend possible ways to solve problems, although it has no autonomy to implement the necessary changes, since these depend primarily on the persons directly responsible for the object of the evaluation.

The Brazilian evaluator faces diversified demands from a complex socio-political context and attempts to fully accomplish this role without the benefit of adequate preparation and working conditions. An analysis of this situation could contribute to reflection among peers from different cultures (PATTON, 1997), while at the same time deriving new inspiration from them. It is hoped that the echo of this clamor may consolidate the preparation of the evaluator as an ever more competent professional whose moral qualities are a major asset (JOINT COMMITTEE ON STANDARDS FOR EDUCATIONAL EVALUATION, 1994) in confronting the challenging diversity of today's world.

Although ethics is the crucial aspect of this professional's background, the evaluator must also acquire technical knowledge specific to the area of evaluation with respect to concepts, models, and methodologies. This professional must also know quantitative and qualitative methods of data collection, treatment and modeling. In this regard, it is necessary to stress the importance of ensuring that evaluators are adequately prepared to choose the most appropriate methods; moreover, they must be prepared to apply them, analyze the data, and communicate the results.

Evaluators often select the methods correctly, but fail in their application or analysis; at other times the choice, the application and the analysis are correct, but there are serious mistakes in communication: failure to indicate the scope and the limitations of the results, failure to point out sources of error, failure to state the level of credibility of the results, and failure to use language adequate to reach the intended audience. There are also those who fail to make the correct choice of method in the first place. There are many reasons for these mistakes, mainly: (a) lack of knowledge about the adequacy, the purposes and the limitations of each method, and (b) choosing the method a priori, before the formulation of the evaluative questions.

Before choosing the most appropriate technique for planning and analyzing the data, a clear statement of which questions are to be answered is important, since a technique adequate for answering one question may not be adequate for answering another (LETICHEVSKY, 2004). For example, if an evaluation of student performance is conducted with the purpose of determining which schools have the students with the best results, it is possible to work directly with the students' scores. On the other hand, if the purpose is to identify which schools are the most effective (MORTIMORE, 1991), it is necessary to consider that there are differences among the students, who come from different backgrounds with regard both to formal education and to general information.

The socioeconomic levels of the families vary and the pre-existing knowledge of the students is diversified. Thus, students from a higher socioeconomic level, or students who start school with a wider range of information, tend to have better performance. The student/school interactions also interfere in the students' performance and must be incorporated into the models. In this context it becomes necessary to separate the effects that do not depend on the school, that is, those that are not under the control of the school's administrators, teachers, and supporting staff, from those effects that depend entirely on those who conduct the teaching process. The objective, in general, is to isolate the effects of the socioeconomic level of the students and of the schools which somehow have an impact on their performance. In other words, the objective is to measure the school's added value (GOLDSTEIN et al., 1999; YANG et al., 1999).
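
As an illustration only (the notation is ours, not taken from the cited studies), the idea of separating school effects from student background can be written as a two-level random-intercept model, in which the proficiency of student i in school j is decomposed as

    y_{ij} = \gamma_{00} + \beta_1 \,\mathrm{SES}_{ij} + u_{0j} + e_{ij}, \qquad u_{0j} \sim N(0, \tau^2), \quad e_{ij} \sim N(0, \sigma^2).

Here \beta_1 \,\mathrm{SES}_{ij} captures the part of performance attributable to the student's socioeconomic background, the school-level deviation u_{0j} is what remains once that background is controlled for (an estimate of the school's added value), and e_{ij} is the student-level residual.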

The traditional methods for studying the cause/effect relationship involve regression models estimated at a single level, where one dependent variable is explained by a set of independent (explanatory) variables plus an error term. In this specific case, the dependent variable is the student's proficiency (estimated from the student's performance on content). When one analyzes multilevel questions by means of single-level models, mistakes may happen; therefore, it is necessary to use multilevel models (RASBASH et al., 1999), the recommended analysis2 for data with hierarchical structure and complex patterns of variability. Similarly to what occurs with schools, the same care must be taken in evaluating the performance of sectors or divisions within an enterprise, or the impact obtained on the beneficiaries of social programs with similar objectives (LETICHEVSKY, 2004).
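
A minimal sketch, assuming a random-intercept specification and hypothetical variable names (score, ses, school), of how such a two-level model can be fitted with the statsmodels library in Python; it illustrates the idea only and does not reproduce the analyses of the studies cited above.

    # Two-level model sketch: students (level 1) nested in schools (level 2).
    # Simulated data and variable names are hypothetical, for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n_schools, n_students = 30, 40

    # Simulate hierarchical data: each school adds its own effect beyond student SES.
    frames = []
    for school_id, school_effect in enumerate(rng.normal(0, 5, n_schools)):
        ses = rng.normal(0, 1, n_students)                    # socioeconomic index
        score = 50 + 4 * ses + school_effect + rng.normal(0, 8, n_students)
        frames.append(pd.DataFrame({"school": school_id, "ses": ses, "score": score}))
    df = pd.concat(frames, ignore_index=True)

    # Random-intercept model: student proficiency explained by SES, grouped by school.
    result = smf.mixedlm("score ~ ses", data=df, groups=df["school"]).fit()
    print(result.summary())

    # Intraclass correlation: share of the total variance that lies between schools.
    between = result.cov_re.iloc[0, 0]   # estimated school-level variance
    within = result.scale                # estimated student-level (residual) variance
    print("ICC =", between / (between + within))

The intraclass correlation computed at the end is the quantity mentioned in note 2: when it is negligible, a single-level regression may suffice; when it is substantial, the multilevel specification is the appropriate choice.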

It is our understanding that it will not be possible to prepare competent evaluators, in quantity and in quality, unless evaluation as a profession is recognized, regulated and supported by organizations of evaluators. In the near future (who knows?), evaluators may identify themselves as such and not as psychologists, engineers, statisticians, or educators who work as evaluators. Our hope is not utopian: it also derives from successful cases, among which we selected three that represent the universe of diversified and challenging demands that the Cesgranrio Foundation, a Brazilian non-profit evaluative organization, has bravely faced.

Practical Situations

The first one, of a social nature, consisted of an external evaluation of the impact of a television channel on society. The external evaluation team had to overcome a paradoxical situation in which the sponsor agreed with the theoretical aspects of the evaluation proposal but did not accept its methodological and practical implications. The sponsor resisted the democratization of the information generated by the evaluation and the use of evaluator-constructed indicators and instruments. The second case was the design of an evaluation methodology for a program concerning the development of small corporations. The challenge was to transfer the experience and know-how acquired in the educational sector to the corporate sector (WORTHEN; SANDERS; FITZPATRICK, 1997). The third case was the implementation of a metaevaluation process (STUFFLEBEAM, 2000) in a higher education institution while it carried out its own evaluation. In this experience it was necessary to strengthen the institution's internal evaluation team through capacity building (FETTERMAN, 2001).

CASE 1: External Evaluation of a TV Channel.

In its first year, the TV Channel in question, whose programs offer a wide range of information in support of a social project on education intended to help improve the Brazilian educational setting, accomplished its goals of ongoing development and of broadening the reach of its broadcasting. Thus, its evaluation proposal was not confined to a single point in time, but comprised a set of studies that, at different stages or at the same stage, investigated the most relevant issues with the necessary diversity of methodological approaches. To lead an assessment on four targeted foci (reception, utilization, appropriation, and expectations), it was necessary to gather information, through the design of different tools, on target-audience adequacy and on the utilization and interests of the target public to be prioritized by the Channel. The main goal of the assessment was to answer four questions:

Question 1: How was the Channel perceived in the role of knowledge-acquisition promoter/facilitator? (a question of merit)

Question 2: How was the Channel used? (a question of impact)

Question 3: How was the Channel appropriated? (a question of impact)

Question 4: What are the expectations for enhancement and accomplishment regarding the Channel? (a question of impact)

To conduct the evaluation, an overall methodology was designed to encompass all the stages, with the necessary flexibility for adjustments and redesign along the way. This methodology was specifically designed to meet this requirement, through a negotiating process with the stakeholders.

Due to the diversity of the Channel's target audience, it was decided to classify institutions according to a typology. The sorting of institutions by type was done jointly with the Channel's staff and was constructed throughout the evaluation process. The basic criteria were related to the use made of the Channel and to the profile of the target audience. Thus, we took into consideration whether the use was systematic or nonsystematic, and the particular features of the different audiences served by the institutions involved. The types were: day-care centers, elementary and high schools, health-care institutions, social programs, shelters, prisons, technical training schools, and universities, among others.

This evaluation posed a challenge for both the Channel and the external evaluation team. For the Channel, the challenge was that the evaluation took place in its first year of operation: it was a new Channel, with its goals yet to be outlined, its relationships still being established, and its work still experimental, with doubts and uncertainties remaining. For the evaluation team, the challenge was to design a specific methodology to evaluate a TV channel, one that appreciated the values of the different stakeholders and was able to swiftly identify the strengths that should be kept and enhanced, the weaknesses that should be corrected (offering timely suggestions and recommendations), and the aspects that jeopardized the project and had to be overcome. Moreover, the evaluation team had to be attentive not only to the expected uses, but also to unexpected ones, which arose from the appropriation of the Channel by society. Success came when subjects and evaluators recognized these challenges and openly discussed them, with no fear of exposing weaknesses at the same time that strengths were pointed out, in an ongoing negotiating process. The evaluation had to be swift and accurate, to ensure timely information in a formative approach.

CASE 2: The Design of an Evaluation Methodology for the Project: "Territorial Development of Unique Paths in the State of Rio de Janeiro"

The focus of the evaluative process for which the methodology was designed is the "Projeto Desenvolvimento Territorial de Caminhos Singulares do Estado do Rio de Janeiro" (Territorial Development of Unique Paths in the State of Rio de Janeiro Project), implemented by SEBRAE/RJ (the Micro and Small Companies Support Agency). The main goal of this Project is to act, in a matricial and integrated way, on specific tourist-appealing paths in Rio de Janeiro, whose configuration is closely related to the different stages of socio-environmental occupation and to the economic, historical, and cultural drive within the State.

In this sense, some cities of the State of Rio de Janeiro were selected to take part in the Project and sorted into "paths" according to their historical, cultural, environmental, physical and territorial, economic, and social identity. The paths were defined as follows:

1. Caminho do Ouro (The Gold Path): city of Paraty.

2. Caminho do Café (The Coffee Path): cities of Engenheiro Paulo de Frontin, Paracambi, Mendes, Vassouras, Barra do Piraí, Piraí, Valença, Rio das Flores, Paty do Alferes and Miguel Pereira.

3. Caminho das Matas (The Path of the Forest): Serra dos Órgãos National Park – cities of Petrópolis, Teresópolis, Guapimirim, Magé and Nova Friburgo; Itatiaia National Park (Agulhas Negras) – cities of Porto Real, Quatis, Resende and Itatiaia.

4. Caminho do Açúcar (The Sugar Path): cities of Campos, Quissamã and São João da Barra.

5. Caminho do Sal (The Salt Path): cities of Maricá, Araruama, Saquarema, Iguaba Grande, São Pedro D'Aldeia, Arraial do Cabo, Cabo Frio and Búzios.

6. Caminhos Urbanos (Urban Paths): cities of Rio de Janeiro (Santa Teresa neighbourhood and Downtown areas) and Niterói (Niemeyer Path, Forts and Fortresses).

Four types of businesses are integrated into these paths by SEBRAE: tourism, agribusiness, manual crafts, and culture. This time, the challenge was to design a methodology that supported the development of small businesses by adapting methodologies typically used for educational and program evaluation. Thus, in tune with the empowerment evaluation approach (FETTERMAN, 2001) and focused on utilization (PATTON, 1997), an evaluation methodology was developed, taking into account the program evaluation standards, the evaluators' guiding principles, and the intentions and particularities of this Project. This methodology basically consisted of 28 steps: (1) establishing a negotiating process between SEBRAE's team and the Cesgranrio Foundation team, present throughout the evaluation process, (2) understanding the request, (3) project evaluation, (4) identification of stakeholders, (5) outlining of interests, concerns and priorities of potential users, (6) evaluation issues, (7) immersion into the project and in the documentation on "Specific Indicators – SEBRAE System", (8) designing a preliminary indicators bank, (9) involvement of users in the process, (10) determining the intended use, (11) validation and restatement of the evaluation issues and preliminary indicators, (12) defining the criteria for excellence to be achieved, (13) identifying sources of information, (14) selection of information-collection techniques, (15) designing data-collection tools, (16) pre-testing, (17) validation of the tools, (18) forming and training the collection team, (19) data collection, (20) data tabulation, (21) triangulation, (22) interpreting the results (objective consideration of the information), (23) preliminary report, (24) validation, (25) final report, (26) ease of use, (27) results dissemination and use, and (28) conclusion of the meta-evaluation (formative and summative).
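
To make steps (6), (8), (13) and (14) more concrete, the sketch below shows, with entirely hypothetical indicators, sources and techniques, one way an evaluation question can be linked to a preliminary indicators bank; it does not reproduce the bank actually built for the Project.

    # Hypothetical sketch of a preliminary indicators bank (steps 6, 8, 13 and 14).
    # The indicators, sources and techniques below are illustrative, not the Project's.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Indicator:
        name: str
        sources: List[str]   # where the information comes from (step 13)
        technique: str       # how it will be collected (step 14)

    @dataclass
    class EvaluationQuestion:
        text: str
        focus: str           # e.g. "impact" or "merit"
        indicators: List[Indicator] = field(default_factory=list)

    bank = [
        EvaluationQuestion(
            text="To what extent does the Project contribute to the growth of micro and small companies?",
            focus="impact",
            indicators=[
                Indicator("number of businesses formalized per path",
                          sources=["SEBRAE/RJ records"], technique="document analysis"),
                Indicator("perceived change in revenue reported by owners",
                          sources=["business owners"], technique="interviews"),
            ],
        ),
    ]

    for question in bank:
        print(question.focus.upper(), "-", question.text)
        for ind in question.indicators:
            print("  *", ind.name, "| sources:", ", ".join(ind.sources), "| via", ind.technique)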

Considering the scope of the Project, five evaluation questions were drawn up:

Question 1: To what extent does the Project contribute to the establishment, regular operation and growth of micro and small companies? (a question of impact).

Question 2: To what extent does the Project promote local development of the target-cities? (a question of impact)

Question 3: To what extent does the Project promote social integration? (a question of impact)

Question 4: To what extent does the Project foster SEBRAE/RJ's visibility in the target-cities? (a question of impact)

Question 5: To what extent does the Project help establish favorable scenarios for the development of micro and small companies? (a question of merit)

This Project required the team of evaluators to: (a) understand the needs and values involved, (b) detach itself from the paradigms of other kinds of assessment, and (c) review previous methodologies, keeping what could be useful and having the courage to change whatever was necessary.

CASE 3: Implementation of a Metaevaluation Process in a Higher Education Institution

The focus of the metaevaluation lies on the self-evaluation of a higher education institution (the Newton Paiva University Center, Minas Gerais, Brazil). This task was based on the idea of metaevaluation as a decisive activity to ensure the integrity of an evaluation, from its conception to its results. In this regard, the metaevaluation verified the extent to which the self-evaluation in question met the four categories of program evaluation standards (JOINT COMMITTEE ON STANDARDS FOR EDUCATIONAL EVALUATION, 1994): utility (satisfying the needs of interested parties), feasibility (being realistic, prudent, diplomatic and simple), propriety (being legal and ethical), and accuracy (revealing, in an adequate and technical manner, information about judgments of merit and worth). These standards were verified throughout the evaluation (formative metaevaluation) and at its completion (summative metaevaluation) (SCRIVEN, 1967).

The methodological steps (STUFFLEBEAM, 2000) included the initial interaction of persons involved and interested in the mission, the choice of a qualified team to conduct the process, the definition of the metaevaluation questions, the agreement on standards for judging the evaluation of the Newton Paiva University Center, the drafting of the metaevaluation contract, the collection and review of the necessary and available information, the analysis and interpretation of qualitative and quantitative information according to the standards, the drafting of information papers, and the sharing of results with the client institution.

As important as the success in completing the evaluation of the Newton Paiva University Center was the increased evaluative ability among the participants, achieved by means of an innovative empowerment approach (FETTERMAN, 2001) maintained throughout the mission, a result that stands as one of the greatest challenges in evaluation in the 21st century. The Cesgranrio Foundation perceived that the capacity to face the challenge of building evaluation ability arose from the integration of metaevaluation and empowerment. In other words, this was achieved by implementing the metaevaluation process in an institution undergoing both an internal and an external evaluation, applying empowerment to the democratic practice of its principles and methodological procedures (PENNA FIRME; LETICHEVSKY, 2002).

This approach differs from the traditional one basically in that the focus shifts from an exclusively external assessment to a primarily internal one, where the image of an authoritarian expert is replaced by that of an expert partner acting as a friendly reviewer. This favors the shift from a relationship of dependence between subjects and evaluators to a stance of self-determination and a strengthening of assessment capability, towards less individual and more collaborative conceptions.

All these efforts would be of no use if the outcomes of the metaevaluation were not used to enhance the target evaluation and, more broadly, the institution and each of its members. From this perspective, the mission of the metaevaluators is not finished: they need to help the client institution in the process of utilizing the results of the metaevaluation, which is a collaborative endeavor. There is no question that the use of an empowerment approach in carrying out the metaevaluation process makes possible the training of participants in evaluation skills, since light is shed on the exchange of ideas and the development of theories that aid the necessary clarification and decision-making. Pathways are presented to promote individual and group improvement; defense elements are built to enhance individual and institutional self-esteem. Finally, disengaging from prejudices and myths fosters emancipation and self-determination.

Metaevaluation linked to empowerment not only overcame the challenge of building evaluation capacity, but also placed in the institution the hope for an ongoing dynamic of improvement and the gratifying confirmation of Patton's statement that "[...] better than leaving written reports is to leave transformed people".

Conclusion

These three cases, in spite of being different kinds of evaluation, had a common aspect: the team of evaluators had to detach itself from traditional evaluation paradigms, respecting the values involved and acting with firm belief that no priority justifies an ethical breach.

In the first case – the TV Channel evaluation – the sensitization of the institution was what actually allowed the Project to be successful, since there were different perspectives on the most suitable methodology for the evaluation. At first, the sponsor was convinced of the need for evaluation, but did not agree with the methodology proposed by the evaluation team. At this point, the evaluation team had to carry out an extensive negotiation that included a training process so that the sponsor could understand why his proposed methodology could not be used. The negotiation was successful and the quality of the evaluation process was preserved.

In the second case – the design of an evaluation methodology for SEBRAE/RJ – the challenge for the evaluators was to design a methodology that met the needs of those interested in evaluating company-development programs, drawing on their previous experience and theoretical background in methodologies for evaluating educational, social and health programs. The methodology developed had to be feasible and lead to useful and accurate outcomes, while at the same time complying with the values and the cultural background of those interested.

In the third case – a process of metaevaluation – one had to ensure that all weaknesses were recorded and conveyed through a formative process, which allowed for in loco training of the evaluating team without exposing the professionals involved or creating an atmosphere of punishment and demand. Above all, one had to emphasize all the successes achieved throughout the evaluation process in order to enhance the professionals' self-esteem.

In essence, all three cases were examples of true evaluation, offering testimony that it is possible to preserve ethics and faithfulness to theoretical-methodological principles within the political context.

  • FETTERMAN, D. M. Foundations of empowerment evaluation. Thousand Oaks, CA: Sage, 2001.
  • GOLDSTEIN, H. et al. A user's guide to multilevel models project. London: Institute of Education, University of London, 1999.
  • JOINT COMMITTEE ON STANDARDS FOR EDUCATIONAL EVALUATION. The program evaluation standards: how to assess evaluations of educational programs. 2nd ed. Thousand Oaks, CA: Sage, 1994.
  • LETICHEVSKY, A. C. La categoría precisión en la evaluación y en la meta evaluación: aspectos prácticos y teóricos. In: CONFERENCIA DE RELAC, 1., 2004, Lima. Trabajo presentado. Lima, Peru: UNESCO, 2004.
  • MORTIMORE, P. The nature and findings of school effectiveness research in the primary sector. In: RIDDELL, S.; BROWN, S. (Ed.). School effectiveness research: its messages for school improvement. London: HMSO, 1991.
  • PATTON, M. Q. Utilization-focused evaluation. 3rd ed. Thousand Oaks, CA: Sage, 1997.
  • PENNA FIRME, T.; LETICHEVSKY, A. C. O desenvolvimento da capacidade de avaliação no século XXI: enfrentando o desafio através da meta-avaliação. Ensaio: Avaliação e Políticas Públicas em Educação, Rio de Janeiro, v. 10, n. 36, jul./set. 2002.
  • PFEFFERMANN, D. et al. Weighting for unequal selection probabilities in multilevel models. Journal of the Royal Statistical Society, Series B (Statistical Methodology), London, n. 1, p. 23-40, 1998.
  • RASBASH, J. et al. MLwiN Beta version: multilevel models project. London: Institute of Education, University of London, 1999.
  • RAUDENBUSH, S.; BRYK, A. Hierarchical linear models: applications and data analysis. 2nd ed. Newbury Park, CA: Sage, 2002.
  • SCRIVEN, M. The methodology of evaluation. In: TYLER, R. W.; GAGNÉ, R. M.; SCRIVEN, M. (Ed.). Perspectives of curriculum evaluation. Chicago, IL: Rand McNally, 1967. p. 39-83.
  • STUFFLEBEAM, D. L. The CIPP model for evaluation. In: STUFFLEBEAM, D. L.; MADAUS, G. F.; KELLAGHAN, T. (Ed.). Evaluation models. 2nd ed. Boston: Kluwer Academic Publishers, 2000.
  • WORTHEN, B. R.; SANDERS, J. R.; FITZPATRICK, J. L. Program evaluation: alternative approaches and practical guidelines. 2nd ed. New York: Longman, 1997.
  • YANG, N. et al. The use of assessment data for school improvement purposes. Oxford Review of Education, Dorchester-on-Thames, n. 25, p. 469-483, 1999.
  • * Paper presented at the 18th Annual Conference of the American Evaluation Association, Atlanta, Georgia, November 3 – 6, 2004.
  • 1 The authors want to thank Dr. Vathsala I. Stone and Dr. John Stone for their useful suggestions to improve this paper.
  • 2 It should be noted that there are cases where, although the population has a hierarchical structure, it is not possible to apply multilevel models, either because the differences among the groups, which can be verified through the intraclass correlation coefficient (RAUDENBUSH; BRYK, 2002), are not substantial, or because of the sample design (if that is the case) adopted in the data collection (PFEFFERMANN et al., 1998).