Implementing Evaluation

Evaluation implementation activities are the responsibility of the selected evaluator, whether that is in-house staff, a third-party evaluator, a university or other organizational partner, or the state agency in charge of the evaluation. Even so, the state's evaluation lead will need to understand key implementation elements to oversee the process and to make overall determinations concerning the study's timeline, the resources needed (staff and funding), and other factors that could affect scope and quality. This section describes the key evaluation implementation activities that state workforce administrators and evaluation managers need to understand to be successful:

  • Creating the evaluation design report to guide each phase of the evaluation;
  • Developing a data analysis plan for inclusion in the evaluation design report;
  • Addressing the protection of participants’ rights (for certain types of studies);
  • Linking or coordinating evaluation and program activities (for certain types of studies); and
  • Reporting by the evaluator on interim and final study findings.

The first task of the selected evaluator will be to develop a plan that follows the technical proposal to implement the evaluation. Typically called an Evaluation Design Report (EDR), this document serves as a guide for the evaluator and the state agency as they carry out the various evaluation activities. Depending on the type of evaluation conducted, it also serves as a guide to program staff on how various evaluation activities link to or coordinate with program operations. The EDR builds upon the preliminary evaluation plan and includes a data analysis plan developed by the selected evaluator. The evaluator expands and refines each element in the state’s preliminary evaluation plan to create a detailed and feasible evaluation implementation plan. In addition to expanding upon the elements of the preliminary evaluation plan, and depending on the type of evaluation conducted, the evaluator may propose variations and additions concerning the following items:

  • Appropriate and reliable outcomes that can be measured through available resources;
  • Evaluation method(s), including data collection processes and sources;
  • Data analysis plan/approach, including suitable controls for mitigating any threats or risks to successful interpretation of findings and for overcoming any limitations to the maximum extent possible;
  • Timeline and milestones for evaluation activities; and
  • Reporting details to convey evaluation progress, results, and findings.

See page 49 of the DOL Framework to view the components of the Evaluation Design Report.

The selected evaluator is responsible for developing a data analysis plan based on the preliminary plan. The evaluator will use the preliminary plan as the foundation, suggest expansions or modifications, and use the data analysis plan to connect the high-level purpose, scope, key research questions, and research design to the analysis and determination of findings. A data analysis plan outlines, before data collection begins, the key steps and processes that will be used to analyze the data collected. The data analysis plan is a roadmap that connects the research questions to the data, describes how the data are analyzed, separates the key research questions into “testable” hypotheses, and aligns each hypothesis with the data and analytical methods used. The plan also identifies the metrics for the outcomes measured, both process measures that describe program implementation activities and outcome measures that define the intended results, along with the variables examined.
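As an illustration only, the minimal sketch below shows what a single entry in a data analysis plan might look like when it links one research question to a testable hypothesis, an outcome metric, and an analytic method. The column names (such as "employed_q2" and "group"), the fabricated sample data, and the choice of a two-sample t-test are assumptions made for this example; they are not prescribed by the DOL Framework or by any particular state's plan.

```python
"""Hypothetical data analysis plan entry: research question -> testable
hypothesis -> outcome metric and analytic method. Illustrative sketch only."""
from dataclasses import dataclass

import pandas as pd
from scipy import stats


@dataclass
class AnalysisPlanEntry:
    research_question: str   # high-level question from the preliminary plan
    hypothesis: str          # "testable" restatement of the question
    outcome_metric: str      # column holding the outcome measure
    grouping_variable: str   # column distinguishing participant and comparison groups
    analytic_method: str     # named statistical procedure


entry = AnalysisPlanEntry(
    research_question="Does the training program improve employment outcomes?",
    hypothesis="Participants have a higher second-quarter employment rate "
               "than the comparison group.",
    outcome_metric="employed_q2",
    grouping_variable="group",
    analytic_method="two-sample t-test on the employment indicator",
)


def analyze(df: pd.DataFrame, plan: AnalysisPlanEntry) -> None:
    """Apply the analytic method named in the plan entry and report the result."""
    treated = df.loc[df[plan.grouping_variable] == "participant", plan.outcome_metric]
    comparison = df.loc[df[plan.grouping_variable] == "comparison", plan.outcome_metric]
    t_stat, p_value = stats.ttest_ind(treated, comparison, equal_var=False)
    print(f"Research question: {plan.research_question}")
    print(f"Hypothesis: {plan.hypothesis}")
    print(f"Participant rate: {treated.mean():.2%}  Comparison rate: {comparison.mean():.2%}")
    print(f"{plan.analytic_method}: t = {t_stat:.2f}, p = {p_value:.3f}")


if __name__ == "__main__":
    # Tiny fabricated dataset, used only to show the mechanics of the plan entry.
    sample = pd.DataFrame(
        {
            "group": ["participant"] * 6 + ["comparison"] * 6,
            "employed_q2": [1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0],
        }
    )
    analyze(sample, entry)
```

In an actual plan, each key research question would have one or more entries of this kind, with the outcome metrics, comparison groups, and analytic methods specified by the selected evaluator and reviewed by the state agency before data collection begins.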