Policy evaluation can be understood as the process of observation, measurement, analysis and interpretation aimed at understanding how a given public intervention works, with the goal of reaching a judgment about its social usefulness.[20] This public intervention is reflected in decisions that the respective authorities embody in plans, programs and projects which, when put into operation, employ political, legal, economic, social and even cultural means to fulfill the objectives that motivated the launch of the public policy in question.
Evaluation is therefore a specific activity with an identity of its own, distinct from others such as internal control, financial auditing,[21] performance auditing[22] or budgetary control, though it maintains a close, complementary relationship with them. The ultimate objective of policy evaluation is to obtain information for decision making.[20]
Evaluative designs are usually classified into three main groups:

- Experimental designs, with random assignment of individuals to the intervention group or the control group (equivalent groups).
- Quasi-experimental designs, which include community trials, pretest-posttest designs with a non-equivalent control group, and multiple time series.
- Non-experimental designs, which include time series and pretest-posttest designs without a control group.[20]
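The practical difference between these designs can be sketched with a small example. In a pretest-posttest design with a (possibly non-equivalent) control group, the program effect is commonly estimated as a difference-in-differences; without a control group, only the raw change is available, which confounds the program effect with background trends. A minimal illustration in Python, with entirely hypothetical scores:

```python
# Hypothetical pretest/posttest mean scores for a program evaluation.
# With a control group, the effect is often estimated as a
# difference-in-differences: the change in the intervention group
# minus the change in the control group.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Estimate the program effect as (treated change) - (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Intervention group improves from 50 to 62; control drifts from 51 to 54.
effect = diff_in_diff(50.0, 62.0, 51.0, 54.0)
print(effect)  # 9.0: the change attributable to the intervention

# In a non-experimental pretest-posttest design without a control group,
# the naive estimate is just the raw change, which also absorbs any
# secular trend affecting both groups:
naive = 62.0 - 50.0
print(naive)  # 12.0
```

The sketch assumes the control group captures the counterfactual trend, which is exactly the assumption that quasi-experimental designs must defend and randomized designs secure by construction.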
It is important to remember that the evaluation of a public policy is a much broader subject; only some elements have been outlined here that help establish criteria for starting an evaluation process, and the field will be enriched as other scholars contribute their experience and knowledge on the topic.
Perspectives and models in program evaluation
The evaluation of public policies and their social programs or interventions, as understood today, began in the 1940s and 1950s in the United States. Focused on evaluation through objectives in the field of education, Ralph Tyler[23] is undoubtedly the most prominent representative of these beginnings.
According to this perspective, a program or intervention had to have specific operational objectives; these measure the achievement/effect of the intervention, limiting evaluation to reliably verifying whether, and to what degree, the objectives are met. The evaluative procedure simply consisted of a pre-program measurement followed by a post-program measurement of the operational objectives, deducing whether they had been achieved and issuing the corresponding positive or negative value judgment.
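This pre/post verification of operational objectives can be sketched as a simple check: each objective has a target value, and the evaluation judges whether the post-program measurement reaches it and what share of the intended gain was realized. A minimal illustration (the objectives, names and scores below are invented for the example):

```python
# Sketch of objectives-based evaluation in the Tyler tradition:
# compare post-program measurements against each operational
# objective's target and report the degree of achievement.

def evaluate_objectives(targets, pre, post):
    """Return, per objective, whether it was achieved and the share
    of the intended gain that was actually realized."""
    report = {}
    for name, target in targets.items():
        intended_gain = target - pre[name]
        actual_gain = post[name] - pre[name]
        degree = actual_gain / intended_gain if intended_gain else 1.0
        report[name] = {"achieved": post[name] >= target,
                        "degree": round(degree, 2)}
    return report

# Hypothetical operational objectives of an education program.
targets = {"reading_score": 70, "attendance_rate": 0.90}
pre = {"reading_score": 50, "attendance_rate": 0.80}
post = {"reading_score": 65, "attendance_rate": 0.92}

print(evaluate_objectives(targets, pre, post))
# reading_score: target missed (75% of the intended gain);
# attendance_rate: target met (gain exceeded what was intended)
```

The narrow scope of this check, which says nothing about unintended effects, implementation or needs, is precisely what later models in this section set out to broaden.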
In the mid-1960s, D. Campbell and J. Stanley[24] introduced a new evaluative perspective centered on the experimental method, on the premise that there is no difference between evaluation and the scientific method. In the following years, the number of evaluations carried out from this perspective multiplied and it acquired the features of a dominant paradigm, although without completely displacing evaluation through objectives.
For his part, Edward Schuman[25] identifies evaluation with applied research on the effectiveness of a social intervention, thereby following a scientific model. However, he does not limit evaluation to analyzing the achievement of objectives, but adds further aims: analyzing the reasons for success and failure in achieving the objectives of programs/services, highlighting the underlying philosophy of the successful intervention, and redefining the means necessary to achieve the objectives. Furthermore, although the study of effects is a priority in any evaluation, according to Schuman its correct analysis requires collecting information on: the program's processes; the target population (its characteristics and size); the situational conditions under which the intervention develops; and its differential effects.
Another interesting contribution of Schuman's is his proposal of three possible types of evaluation: ultimate evaluation of results (the typical evaluation by objectives), prior evaluation (focused on delimiting needs, goals and objectives and on the implementation of the social intervention), and evaluation during the process (which identifies the most useful activities or procedures). Finally, Schuman's model specifies five assessment criteria that in practice give rise to five areas of evaluation, namely: 1) evaluation of the effort involved in the social intervention (quality and quantity of the intervention activities); 2) evaluation of the product or result; 3) evaluation of sufficiency; 4) evaluation of efficiency; and 5) evaluation of the process, that is, analysis of what leads to the results.
In the mid-1970s and early 1980s, the experimental perspective entered into crisis for essentially one reason: carrying out genuine social experiments is very difficult, in many cases impossible, and always demands great economic effort and a long time, so the results are often neither useful nor usable.
Scriven[26] proposes a goal-free model, which places the emphasis on the real and total consequences produced by the social intervention, taking the needs of users/consumers as the evaluation criteria. Scriven thus breaks with previous approaches by considering all the effects caused by the program under evaluation, not only those initially foreseen in its definition, and by adding the perspective of the user, the client.
Another notable proposal is that of R. Stake,[27] called responsive evaluation because of its emphasis on accommodating the needs of "clients"; that is, the model's first consideration is the use that those who commission the evaluation will make of its results. It proposes both descriptive procedures and the issuing of judgments/values, insisting that although an evaluation is not complete until the corresponding value judgments are issued, the descriptive part is already an evaluation or, in any case, a prior step to the corresponding evaluation or judgment of a program.
Cronbach[28] presents the UTO model and proposes an evaluation based on forward-looking, flexible planning of evaluation activities. Together with other authors such as Stake, Schuman, etc., he represents the approach known as pluralistic or naturalistic evaluation, which is based on recognizing the plurality of value systems that coexist within society. Although their methodological approaches differ in some respects, they all regard evaluation as applied research that can provide objective, unbiased conclusions about the achievement of program objectives, as well as useful contributions for redefining the means and resources necessary to achieve them. In short, this approach promotes the active participation of the agents involved in a program, giving less weight to the quantitative evaluation of results than to the qualitative evaluation of management and execution processes.
For Stufflebeam,[29] evaluation is a process that provides useful information for decision making. This information refers to: 1) what needs exist and to what extent the proposed objectives reflect the felt needs; 2) a description of the intervention program and of the alternative proposals considered, with a conceptual analysis of how well the chosen proposal fits the objectives; 3) the degree of completion of the proposed intervention plan and a description of its modifications; and 4) the observed outcomes/consequences of the intervention and the degree to which needs have been met. To obtain this information, Stufflebeam proposes the CIPP model, which is simply the enumeration of four different types of evaluation: context evaluation (C), input evaluation (I), process evaluation (P), and product evaluation (P).
With the incorporation of these later models, the objectives of evaluation are expanded, making it much more systematic and global. It is no longer only a matter of assessing whether objectives are achieved (evaluation of results or product), but also of identifying needs, whether they are fully addressed by the programs/services, how the programs/services are implemented, how they work, and so on. These new models open the path toward comprehensive, systematic evaluation.
Evidence Summaries
Systematic reviews are long, technical documents that can be difficult for policymakers to use when making decisions. For this reason, evidence summaries are prepared: brief documents describing the main findings of the systematic reviews. Research has sought both to analyze whether evidence summaries help policymakers use the evidence obtained from systematic reviews and to identify better ways of presenting summaries in order to increase their use.
A review of six randomized controlled studies was conducted, involving participants from Europe, America, Africa and Asia. Two of the studies found little or no difference in how policymakers used the summaries. In terms of usability, participants rated the summary format higher than the full systematic review and felt that all evidence-summary formats were easier to understand than full systematic reviews. However, there is still little evidence about how best to present evidence to policymakers.[30]
See also

- Public administration
- Governance
- Organizational policy
Bibliography

- Franco Corzo, J. (2013). Public Policy Design. Mexico: IEXE Editorial.
- Franco Corzo, J. What are Public Policies? EXPOST, IEXE University.
- Graglia, J. Emilio (2012). In the Search for the Common Good: Public Policy Manual. Buenos Aires: Popular Studies Civil Association (ACEP) / Konrad Adenauer Foundation (KAS) Argentina.
- Merino, Mauricio (2013). Public Policies: Essay on State Intervention in Solving Public Problems. Government and Public Policy Collection. Mexico: Center for Economic Research and Teaching (CIDE).
- Pinilla, J., & García-Altés, A. The evaluation of public policies. SESPAS 2010 Report.
- Méndez Martínez, J. (2020). Public Policies: Strategic Approach for Latin America. Mexico: Economic Culture Fund and El Colegio de México.
- Méndez, J., Dussauge Laguna, M., & Vericat, I. (2020). The Analysis of Public Policies in Mexico (First edition). International Library of Policy Analysis series.
- Méndez, J., & Dussauge-Laguna, M. (Eds.). (2017). Policy Analysis in Mexico. Bristol, UK; Chicago, IL: Bristol University Press. doi:10.2307/j.ctt1t895db. (English version.)
- State and Public Policies. Academic dissemination site on the field of public policies and editorial news.