Evaluation Planning

This section reviews the process for planning an evaluation. Below you will find a presentation on the planning process, a self-assessment tool to determine your readiness for evaluation, and factors to consider.

When beginning an evaluation, it is important to create SMART research questions (Specific, Measurable, Achievable, Realistic, Timely) and to review existing evaluations similar to the one you are conducting.

Next, you can start to create your evaluation design and consider your timeline and budget.  

There are links throughout this section to further information in the DOL Framework. For more detail, you can also view the WIOA Wednesday Webinar presentation (PDF) and recording, “Evaluation Under WIOA: Planning and Performing an Evaluation,” from May 6, 2020, below.

Evaluation Under WIOA: Planning and Performing an Evaluation - May 6, 2020

Presentation

1. Overview of Assessment

The purpose of the Evaluation Readiness Assessment (ERA) is to help agencies and other entities gauge their overall readiness to conduct rigorous evaluations. 

Agencies can use the assessment results to identify and explore factors inhibiting evaluation capacity and areas where additional resources and/or technical assistance (TA) are needed.

The ERA consists of the following five sections:

  1. Evaluation Culture and Awareness;
  2. Funding Strategies;
  3. Data Management;
  4. Staff Skills, Capacity and Knowledge; and
  5. Strategic Planning.

Each section contains an overarching question followed by a series of statements aligned to the evaluation topic of the section. Agencies should review each statement carefully and assign a rating on a scale of 1 to 5 to indicate the extent to which the statement is currently addressed by the agency. Each statement includes a reference to the appropriate section of the Evaluation Toolkit where more detailed information and guidance can be found.
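To make the rating exercise concrete, below is a minimal Python sketch of how an agency might tally its ratings across the five ERA sections and flag lower-scoring areas. The example scores, the number of statements per section, and the use of a simple average with a cutoff of 3 are illustrative assumptions, not part of the ERA itself.

```python
# Illustrative tally of Evaluation Readiness Assessment (ERA) ratings.
# Section names come from the ERA; the scores, statement counts, and the
# average-with-cutoff summary are assumptions made for this example only.
from statistics import mean

ratings = {
    "Evaluation Culture and Awareness": [4, 3, 5, 4],
    "Funding Strategies": [2, 3, 2],
    "Data Management": [3, 4, 3, 3],
    "Staff Skills, Capacity and Knowledge": [2, 2, 3],
    "Strategic Planning": [4, 4, 3],
}

for section, scores in ratings.items():
    avg = mean(scores)
    # Treat lower-scoring sections as opportunities for improvement.
    note = "opportunity for improvement" if avg < 3 else "relative strength"
    print(f"{section}: average {avg:.1f} ({note})")
```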

After completing the assessment, the agency should implement the following next steps:

  • Identify strengths and opportunities for improvement within each evaluation readiness topic area based on the assigned ratings;
  • Hold facilitated meetings to review and discuss the assessment results as a team (e.g., state or local workforce board, WIOA implementation workgroup, or partner coalition); and
  • Develop and implement an action plan to address any opportunities for improvement identified during the team discussions.

After completing the assessment, if you find you are not ready for evaluation, here are some steps you can take to prepare:

  1. Start to collect baseline data or find secondary data as a baseline for evaluation.
  2. Talk to others who have collected data for their programs, policies, and processes for tips.
  3. Consider a ‘secret shopper.’ Ask someone outside your agency to review your policy, program, or process to give feedback.
  4. Refer to Workforce Webinars for training and resources on data collection and evaluation.

You can find the Evaluation Readiness Assessment on page A17 (page 79 of the PDF) of the DOL Framework.

The key research questions that will guide an evaluation plan require input from stakeholders. Research questions identify distinct workforce systems or program areas to assess in a systematic and credible way. Key research questions share the following characteristics:

Specific and Measurable. The questions identify the specific elements or outcomes to examine and what the study seeks to learn about those elements. For example, a specific research question may ask: “Are participants who complete the program in its entirety more likely than non-completers to be placed in full-time unsubsidized jobs within three months after program exit?” Trends in the available employment data may support an outcome study using participants’ post-program data from employers to answer this question.
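As an illustration of how a question like this could be answered once post-program data are available, the sketch below compares placement rates for program completers and non-completers. The record layout and field names are hypothetical assumptions; an actual study would use participants’ post-program employment data from employers.

```python
# Hypothetical sketch: comparing full-time placement rates within three months
# of exit for completers vs. non-completers. Field names are assumed for the
# example and are not drawn from any particular data system.

participants = [
    {"completed_program": True,  "placed_full_time_within_3_months": True},
    {"completed_program": True,  "placed_full_time_within_3_months": False},
    {"completed_program": False, "placed_full_time_within_3_months": False},
    {"completed_program": False, "placed_full_time_within_3_months": True},
]

def placement_rate(records):
    """Share of a group placed in full-time unsubsidized jobs within three months of exit."""
    return sum(r["placed_full_time_within_3_months"] for r in records) / len(records)

completers = [p for p in participants if p["completed_program"]]
non_completers = [p for p in participants if not p["completed_program"]]

print(f"Completers placed:     {placement_rate(completers):.0%}")
print(f"Non-completers placed: {placement_rate(non_completers):.0%}")
```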

Answerable. Research questions must be answerable. Some research questions may not be answerable because data may not exist to address the question, or the outcome of interest may need further definition. For example, workforce program managers may have an interest in the impacts of services on participant self-sufficiency. However, self-sufficiency does not have a standard unit of measurement and may mean different things to different people. To answer this question, evaluation planners, with stakeholder input, may want to define the term self-sufficiency and identify observable measurement units. Research questions with outcomes that are not clearly measurable may also require additional consultation with the selected evaluator.

Discrete, High-Level, and Limited in Number. In general, key research questions should be discrete, meaning that they do not overlap one another. Typically, key research questions are written at a relatively high level and are few in number. A limited number of research questions helps all involved stay focused on the “what” and the “why” of a state agency-sponsored evaluation and helps clearly articulate the scope of the evaluation to stakeholders, customers, and other interested partners. The selected evaluator will examine the key research questions, explore their relevance to the study, and develop a more discrete set of questions tied to methodology.

Rooted in Firm Program Knowledge and Realistic Expectations. Strong research questions are rooted in firm program knowledge, based on an understanding of past similar efforts with demonstrated program results, and set with realistic expectations for conducting a study that addresses the research questions and for how the evaluation will be carried out.

Timely. Research questions should be answerable within the time available for the evaluation. Where appropriate, create research questions that include time frames.

Once the logic model(s), the purpose and scope, and draft key research questions are complete, a preliminary evaluation plan can be created. A key component of the preliminary evaluation plan is a literature scan or review of the existing research-based evidence related to the subject of the evaluation. The identified research-based evidence provides a foundation for the evaluation plan and design because it offers useful, timely information and justifies how the study will build upon the current knowledge base. The existing evidence will help guide the following activities when creating the plan:

  • Refining the evaluation purpose, scope, and key research questions by building off of and improving upon the existing evaluation work that has been done;
  • Determining what aspects of the program to evaluate using a relevant evaluation design, data sources, and methodology corresponding to how components of other similar programs, systems, strategies, services, activities, or interventions were evaluated;
  • Identifying appropriate outcomes and how best to measure or otherwise assess them;
  • Ensuring that the evaluation builds upon the existing evidence and contributes additional information to the current base of evidence (i.e., the evaluation goes beyond what has already been done and sheds new light on the issues/questions);
  • Considering how to best integrate evaluation-related activities into program operations; and
  • Looking ahead to how the agency may want to disseminate and inform others of eventual evaluation results.

The evidence review, also called a literature review or scan, includes references to scholarly studies of programs, systems, strategies, services, activities, or interventions similar to the proposed evaluation of a workforce system program. Evaluations of other job training programs, work-based learning programs, or statewide career pathway systems may be organized and summarized according to how the findings from each study relate to the proposed evaluation plans. In addition, the literature review or scan includes the:

  • Studies’ methods;
  • Overall design and level of rigor;
  • Types of data collected, data collection and analysis methods used;
  • Implementation processes observed; and
  • Research findings and recommendations of interest.

When this level of information is not available in a study’s public report, the literature review or scan can also document the missing elements of the evidence gathered. The evidence base collected for the subject of the evaluation may not be limited to exact replicas of the program or its elements. Evaluation planners may want to research subjects or topics related to an area of study that applies to the proposed evaluation plan. For example, research on programs that serve different populations, and with some similarity or variation in design or services, may be useful for the evidence base.

For workforce development evaluations, there are several primary sources of studies listed in the DOL Framework on page 37.

Several factors apply to the decision-making processes for research design and data collection approaches, such as the following:

Methods to Most Accurately Answer Key Research Questions: Some questions, such as who is participating in a program and the characteristics of their participation, may be best answered with an implementation or descriptive outcomes study, whereas other questions about the effectiveness of the program are likely to be best answered with a pre-post outcomes study, randomized controlled trial (RCT), or quasi-experimental study. You may want to conduct a study that includes several types of evaluation; for example, an outcome or impact study will often also include implementation and/or cost study components. The key guiding factor in making the final choice of study design is what the agency wants to learn and why, and how certain it wants to be about the findings.

Organizational Capacity to Participate in the Evaluation: Consider how the evaluation activities will blend into the implementation activities of the program, system, strategy, service, activity, or intervention included in the study. Discuss the feasibility and options to carry out and participate in the selected evaluation with organization or agency managers or operators that implement the subject or topic of the study. Include other key stakeholders and partners to identify their organizational capacity to participate. For example, RCT evaluations of service delivery interventions integrate a random assignment process into the participant intake and enrollment processes that may span multiple partners or service providers. Each partner engages in discussions and negotiates agreements to participate in the evaluation.
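For illustration only, the sketch below shows one simple way a random assignment step might be folded into a participant intake workflow. The function name, participant identifiers, and 50/50 assignment share are assumptions for this example; an actual RCT would agree on and document the assignment procedure with the selected evaluator and participating partners.

```python
# Minimal sketch of random assignment at intake for an RCT-style evaluation.
# All identifiers and the treatment share below are hypothetical.
import random

def assign_at_intake(treatment_share: float = 0.5) -> str:
    """Randomly assign a newly enrolled participant to the treatment or control group."""
    return "treatment" if random.random() < treatment_share else "control"

# Example: record an assignment for each participant as they enroll.
enrollment_log = {}
for participant_id in ["P-1001", "P-1002", "P-1003", "P-1004"]:
    enrollment_log[participant_id] = assign_at_intake()

print(enrollment_log)
```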

Organizational Capacity to Conduct the Evaluation: The selection process for an evaluator also depends on the agency’s capacity to conduct the evaluation. While funding may be a driver to building evaluation capacity, investments in evaluation management development, staff training, strategic and long-range planning, budgeting, and technical assistance are key elements. Each of these elements supports organizational evaluation capacity, whether the evaluators conduct in-house studies, partnerships are formed with research universities to manage administrative data, or third-party evaluators are procured to conduct independent evaluations. 

Data Availability for the Desired Type of Evaluation or Capacity to Collect It: In addition to a final determination of evaluation type(s) for the evaluation plan, the evaluation planner and the selected evaluator will finalize the details needed to carry out the research: the research design (i.e., methods) and the data collection approach. The type of evaluation and the key research questions will present different methods, data sources, and data collection options. Data availability, or the agency’s capacity to collect data, is a critical factor in deciding the type of evaluation to conduct. The selected evaluator will refine the study methodology and data collection approach; however, as part of the state evaluation planning process, data availability and the capacity to house, transmit, and secure the data must be addressed to put the evaluation on the right track. Identifying and resolving data needs within the context of the evaluation methodology allows the evaluation planner to work most effectively with the selected evaluator, whether an in-house unit, a partner university/organization, or a third-party entity. See page 42 of the DOL Framework for a table that explains the key elements of the evaluation plan.

Budget Needs: Upon going through the planning process, an evaluation design plan may ideally identify all research needs. However, pricing the plan depends on the relationship between the funds available and the length and depth of the study. The evaluation plan, in terms of the type of evaluation and research design (i.e., methods and data collection) identified, may need to be modified or incrementally funded to meet the proposed research needs.
Timeline Considerations: Timelines are critical to determining the feasibility of the planned evaluation project. Evaluators with little or no experience may underestimate the amount of time needed for the various phases of an evaluation. If planners are new to evaluation research, they may want to review samples of other research and evaluation timelines to help map out a tentative schedule (a simple illustrative sketch follows the list below). Consider the following when determining a schedule:
  • When will the evaluation start and finish?
  • Are there particular stages to the proposed evaluation?
    • Pilot or interim testing, then move to steady-state and full implementation?
    • Will focus groups and individual interviews be scheduled for observational input?
    • Is time built into each stage of the study to document progress, milestones, and deliverables?
  • What are the objectives of the study and are they addressed in the timeline?
  • Will any other internal or external constraints or deadlines influence the evaluation design plan?
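As a simple illustration of mapping out a tentative schedule, the sketch below lays out hypothetical evaluation phases run back-to-back from an assumed start date. The phase names, durations, and dates are placeholders, not a recommended timeline.

```python
# Hypothetical sketch of a tentative evaluation timeline; phases, durations,
# and the start date are illustrative placeholders only.
from datetime import date, timedelta

phases = [
    ("Planning and evaluator selection", 8),          # duration in weeks
    ("Pilot or interim testing", 6),
    ("Full implementation and data collection", 26),
    ("Focus groups and individual interviews", 8),
    ("Analysis, reporting, and dissemination", 12),
]

start = date(2021, 1, 4)  # assumed start date
for name, weeks in phases:
    end = start + timedelta(weeks=weeks)
    print(f"{name}: {start.isoformat()} to {end.isoformat()}")
    start = end  # in this simple sketch, phases run back-to-back
```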