Alcohol Education Guide to Reducing Harmful Drinking

Inclusion Criteria for Good Practice in Education Programs

 

The Programs section of the Guide highlights good practice approaches in the field of alcohol education. While the highest standard for programs in this field is the achievement of evidence-based reductions in risky behaviors, the program examples included here have demonstrated change either in behavior or in one or more of the intermediate steps toward sustained behavior change (knowledge, attitudes, or beliefs).

Each program listed in this section has been reviewed by the members of the Advisory Group and determined to meet the following criteria for inclusion in the Guide's list of good practice programs:

 

1. A longitudinal or repeated cross-sectional design using an experimental, quasi-experimental, or structured single-group approach, described below in order of decreasing rigor:
  • Experimental designs include pre-test and post-test assessments of an intervention group and a comparison or control group, with participants randomly assigned among groups. Participant groups are matched on key social and demographic factors.
  • Quasi-experimental designs are similar to experimental designs but do not include the random assignment of participants to intervention or control groups.
  • Structured single-group designs include pre-test and post-test assessments of participants within a single intervention group. There may or may not be randomization of participants to the intervention group, but there is no control group with which to compare results of the pre- and post-test assessments.

 

2. Completed outcome evaluation

  • An outcome evaluation has been performed to measure any effects on participants after program implementation and to establish evidence that changes in knowledge, attitudes, beliefs, intentions, and/or behaviors have occurred in response to the intervention being evaluated.
  • The evaluation assesses participant exposure to the program and controls for variation in key participant social and demographic characteristics.

 

3. Demonstrated measurable impact

  • The program demonstrably changes an outcome in the desired direction.
  • Statistically significant changes are distinguished from non-significant trends (see the illustrative sketch after this list).

 

4. Documentation

  • The program report and evaluation are accessible.
  • The program’s approach (procedures) and findings are well described, and the analysis is transparent.
  • Assessment methods are clearly described.
  • Funding sources are acknowledged.

 

5. Recency

  • The program has been implemented, replicated, or evaluated since the year 2000.
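
To give a concrete, hypothetical sense of what distinguishing a statistically significant change from a non-significant trend can look like (criterion 3), the sketch below compares pre-test/post-test change scores between an intervention group and a control group using Welch's t-test. It is illustrative only and not part of the Guide's criteria; the scores, group sizes, and effect sizes are all assumed, and a real evaluation would use the program's actual design and measures.

```python
# Illustrative sketch only (not part of the Guide): separating a statistically
# significant change from a non-significant trend in a pre-test/post-test
# design with a comparison group. All data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 120  # hypothetical number of participants per group

# Hypothetical knowledge scores (0-100 scale) before and after the program.
intervention_pre = rng.normal(55, 10, n)
intervention_post = intervention_pre + rng.normal(6, 8, n)  # assumed average gain
control_pre = rng.normal(55, 10, n)
control_post = control_pre + rng.normal(1, 8, n)            # little change assumed

# Compare change scores between the two groups with Welch's t-test.
change_intervention = intervention_post - intervention_pre
change_control = control_post - control_pre
t_stat, p_value = stats.ttest_ind(change_intervention, change_control,
                                  equal_var=False)

if p_value < 0.05:
    print(f"Statistically significant difference in change (p = {p_value:.3f})")
else:
    print(f"Non-significant trend only (p = {p_value:.3f})")
```

A full analysis would also control for participant social and demographic characteristics (for example, by including them as covariates in a regression model), in line with criterion 2 above.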