Introduction to Evaluation

This section provides an introduction to evaluation including:

  • The principles of evaluation: rigor, relevance, transparency, independence, and ethics.
  • Drawing on existing State resources as you plan for evaluation.
  • Things to consider when determining your evaluation approach and how equity will be incorporated:
    • Stakeholder involvement
    • Evaluation principles and practices
    • High-level research questions
    • Building an evidence portfolio
    • Tiered evidence approach
    • Partnership and governance structure
    • Plan, document and disseminate
  • Identifying the most pressing questions you hope to answer through evaluation.

For additional resources pertaining to this section, see pages 8-27 of the DOL Framework or view the WIOA Wednesday Webinar, “Evaluation Under WIOA: An Introduction” presented by Cynthia Forland of Forland Consulting LLC. The PDF presentation and the recording are below.

Using evaluation to measure the effective and efficient use of funds for services is a core management function of successful government programs. Federal, state, and local policymakers are increasingly aware that rigorous evaluation results are needed to support evidence-based policymaking, that is, making well-informed decisions about program investments. In an era of limited public resources, information on program effectiveness is especially critical for state workforce administrators as they set the direction for state and local implementation of WIOA: which specific programs, services, or activities to prioritize within the context of formula grant, discretionary grant, and national programs; state economic development priorities; and state labor market dynamics.

Evaluation results, findings, or recommendations are also critical for garnering or maintaining support for a specific initiative. An evaluation explains whether a program produces positive outcomes (e.g., did participants find a job or increase their earnings?) and, by following an evaluation design plan with a defined methodology and data analysis, how the program achieved those results (e.g., which activities or actions produced them).

Evaluations can positively affect program planning and implementation, the individuals served, strategic policy planning, and funding, as well as coordination with the larger workforce community. Specifically, evaluations can help:

  • Improve specific programs, services, interventions, or activities. Equitable evaluations can reveal whether program service components produce positive outcomes or whether there is a need for program improvements. Learning that certain program components may not produce the intended results is just as valuable as learning that a program has positive results. The local area can then make changes that may result in improved outcomes for all individuals served.
  • Use tested or evaluated innovative, equitable interventions to increase or improve outcomes for program participants. Program improvements identified through equitable evaluation findings may produce more effective or efficient results, leading to better services for participants and the desired outcomes.
  • Determine which planning activities to continue and which funding priorities to consider as part of WIOA program administration and management.
  • Secure other funding needed to sustain and scale up a priority initiative. If an equitable evaluation produces positive or promising findings, how the local area disseminates those results may increase support.
  • Demonstrate long-term impacts on individuals and communities. Impact evaluations can provide causal evidence that attributes a change or changes in the existing service delivery area to the services provided.
  • Educate the larger workforce development community. State and local workforce administrators, evaluation managers, and policymakers may benefit from learning about and participating in program evaluations. A regularly updated evaluation framework serves as a useful tool for the workforce community to maintain focus on the most essential, or federally important, factors of program success.

There are essentially four major types or categories of program evaluation—implementation, outcome, impact, and cost studies—with various subtypes within each. Each type of program evaluation is defined and discussed in this section, followed by four tables that detail the differences among the subtypes of studies and their associated practical considerations, including cost and level of rigor.

1. Implementation Studies: An implementation study documents program operation or compares it against goals, across locations, or over time. It describes and analyzes “what happened and why” in the design, implementation, administration, and operation of programs and is generally used to determine whether a program is being carried out in a manner consistent with its goals, design, or other planned aspects. Implementation analyses can serve as stand-alone studies, especially to document new program processes not yet studied. As part of more comprehensive evaluations, implementation studies may also include outcome, impact, and/or cost studies. Implementation studies provide context for other or subsequent evaluation findings and results, making those findings interpretable and useful for the programs, services, or interventions studied.
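
As a simple illustration, the sketch below shows the kind of fidelity check an implementation study might perform, comparing the planned program design against what was actually observed. All program features and values are hypothetical, chosen only to make the comparison concrete.

```python
# Minimal sketch of an implementation-fidelity check (hypothetical data).
# An implementation study asks whether the program as operated matches
# the program as designed; deviations flag areas to examine further.
planned = {
    "case_management_ratio": "1:40",
    "training_hours_per_week": 20,
    "employer_engagement": True,
}
observed = {
    "case_management_ratio": "1:65",   # observed during site visits
    "training_hours_per_week": 20,
    "employer_engagement": False,
}

for feature, plan in planned.items():
    actual = observed[feature]
    status = "consistent with design" if actual == plan else "DEVIATION"
    print(f"{feature}: planned={plan}, observed={actual} -> {status}")
```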

2. Outcome Studies: An outcome study compares individual outcomes against goals, across programs or locations, or over time. Outcome studies differ from impact studies in one key respect: they do not measure outcomes against a counterfactual comparison group, so they can show whether desired results occurred but cannot attribute those results to the program itself. Essentially, outcome studies determine whether programs achieve the desired results or assess the effectiveness of programs in producing change. Nevertheless, outcomes are often thought (by program staff, not program evaluators) to indicate measurable change or “impact” when outcomes are compared over time or across comparable programs.
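
For illustration only, the sketch below uses hypothetical counts of program exiters to show the kind of over-time outcome comparison an outcome study might report. Note that nothing in it attributes the change to the program; that attribution is the province of an impact study.

```python
# Minimal sketch of an outcome comparison over time (hypothetical data).
# Maps program year -> (exiters employed at exit, total exiters).
employed_at_exit = {2021: (310, 500), 2022: (355, 520), 2023: (392, 540)}

for year, (employed, exiters) in sorted(employed_at_exit.items()):
    rate = employed / exiters
    print(f"{year}: {rate:.1%} of {exiters} program exiters employed")
```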

3. Impact Studies: An impact study estimates the difference in individual outcomes attributable to a specific program or policy. Impact studies determine whether programs or policies achieve the intended impacts—that is, whether the program causes the outcome differences it is designed to influence. If the purpose of an evaluation is to determine whether an occupational training program has the desired impacts on the employment and earnings of the individuals it serves, an impact study is the ideal type of evaluation to choose. Determining the best type of impact study to conduct depends on considerations such as the budget for the evaluation, the desired level of confidence in the evaluation results, and the practical constraints on conducting an evaluation of a given program. It is important to note that experimental studies (randomized controlled trials, or RCTs) are considered the most rigorous form of evaluation and are often called the gold standard, given that they provide the best scientific evidence of what works or does not. However, they are also the most intrusive type of impact study in that they intervene in program processes. Various types of implementation studies, such as site comparisons and fidelity studies, are usually part of impact studies, and of course, outcomes are measured.
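
As a hedged illustration of the basic arithmetic behind an RCT, the sketch below uses entirely hypothetical earnings data to estimate a program's impact as the difference in mean outcomes between randomly assigned treatment and control groups.

```python
# Minimal sketch of an RCT impact estimate (hypothetical data).
# Random assignment makes the control group a valid counterfactual, so
# the difference in mean outcomes is an unbiased estimate of impact.
import math
import statistics

# Hypothetical quarterly earnings ($) for individuals randomly assigned
# to occupational training (treatment) or business as usual (control).
treatment = [6200, 5800, 7100, 6600, 5900, 7300, 6400, 6800]
control = [5400, 5100, 6000, 5600, 5300, 5900, 5500, 5700]

impact = statistics.mean(treatment) - statistics.mean(control)

# Standard error of the difference in means (unequal-variance formula).
se = math.sqrt(
    statistics.variance(treatment) / len(treatment)
    + statistics.variance(control) / len(control)
)

print(f"Estimated impact on quarterly earnings: ${impact:,.0f}")
print(f"Approximate 95% confidence interval: "
      f"${impact - 1.96 * se:,.0f} to ${impact + 1.96 * se:,.0f}")
```

Real impact studies involve far more (covariates, attrition, multiple outcomes, and much larger samples), but the counterfactual logic is the same.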

4. Cost Studies: A cost study estimates program costs, makes cost comparisons, or weighs costs against outcomes or impacts. Cost studies involve analysis of the costs of a program, and some weigh program effectiveness against overall program cost. Cost documentation, estimation exercises, and simple cost calculations are common elements of all cost studies, but program evaluators do not consider them cost analyses on their own. A cost study draws conclusions about program costs based on systematic cost comparisons (particularly between programs and over time) or statistical analysis of cost differences or responses to changes in program features or inputs. The specific comparisons and statistical analyses depend on the program and on the quality and detail of the available cost data.
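
As a simple, hypothetical illustration of weighing costs against outcomes, the sketch below compares two made-up programs on cost per participant and cost per additional employed participant.

```python
# Minimal sketch of a cost-effectiveness comparison (hypothetical data).
# Maps program -> (total cost $, participants served,
#                  additional participants employed vs. baseline).
programs = {
    "Program A": (500_000, 400, 120),
    "Program B": (750_000, 500, 200),
}

for name, (cost, participants, employed_gain) in programs.items():
    cost_per_participant = cost / participants
    cost_per_outcome = cost / employed_gain
    print(f"{name}: ${cost_per_participant:,.0f} per participant, "
          f"${cost_per_outcome:,.0f} per additional employed participant")
```

In this made-up comparison, Program B costs more per participant but less per additional employment outcome, which is exactly the kind of distinction a cost-effectiveness analysis is designed to surface.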

Consider using existing resources to complete the evaluation process.

To learn more about the types of evaluation, see pages 17-23 of the DOL Framework or view the WIOA Wednesday Webinar, “Evaluation Under WIOA: An Introduction,” presented by Cynthia Forland of Forland Consulting LLC. The PDF presentation and the recording are below.

Evaluation under WIOA: An Introduction – April 22, 2020

Presentation

Principles of Evaluation

Department of Labor Evaluation Principles

  • Rigor: “Rigor is required for all types of evaluations, including impact and outcome evaluations, implementation and process evaluations, descriptive studies, and formative evaluations. Rigor requires ensuring that inferences about cause and effect are well-founded (internal validity); requires clarity about the populations, settings, or circumstances to which results can be generalized (external validity); and requires the use of measures that accurately capture the intended information (measurement reliability and validity).”
  • Relevance: “Evaluation priorities should take into account legislative requirements and the interests and needs of leadership, specific agencies, and programs; program office staff and leadership; and DOL partners such as states, territories, tribes, and grantees; the populations served; researchers; and other stakeholders.”
  • Transparency: “DOL will make information about evaluations and findings from evaluations broadly available and accessible, typically on the internet. DOL will release results of all evaluations that are not specifically focused on internal management, legal, or enforcement procedures or that are not otherwise prohibited from disclosure. Evaluation reports will present all results, including favorable, unfavorable, and null findings. DOL will release evaluation results timely…and will archive evaluation data for secondary use by interested researchers (e.g., public use files with appropriate data security).”
  • Independence: “Independence and objectivity are core principles of evaluation. Agency and program leadership, program staff, stakeholders, and others should participate in setting evaluation priorities, identifying evaluation questions, and assessing the implications of findings. However, it is important to insulate evaluation functions from undue influence and from both the appearance and the reality of bias.”
  • Ethics: “DOL-sponsored evaluations will be conducted in an ethical manner and safeguard the dignity, rights, safety, and privacy of participants. Evaluations will comply with both the spirit and the letter of relevant requirements such as regulations governing research involving human subjects.”