Without evaluating the success of training ventures, little can be done to measure an initiative’s effectiveness or identify future areas of developmental focus.
Donald Kirkpatrick created the definitive training evaluation model, one that has been adopted across most of the modern adult learning world. The model consists of four levels: Reaction, Learning, Behavior, and Results.
These levels have been the cornerstone of contemporary evaluation theory since they debuted in the late 1950s; they were later captured in Kirkpatrick’s seminal 1975 work, “Evaluating Training Programs”.
Kirkpatrick’s son, James, has continued his father’s work and uses what he calls ROE (Return on Expectations). ROE requires that clear success measures be identified with the executive sponsor so that something tangible is measured in terms of outcomes: “begin with the end in mind” and establish what is expected early on, so that trainers can ensure they deliver the goods.
Xponents relies heavily on the research and published work of Kirkpatrick, and has also discovered a few practical elements to assist in the process. What we’ve found: when evaluating a training program, stick to the F.A.C.T.S.:
Be very clear about what it is you plan to evaluate, and how you intend to gauge effectiveness. Keep your questions focused on the skills or behaviors the training is meant to develop. For example, “Leadership” is not a skill; it is the result of multiple skills and beliefs. When framing your questions, be sure not to drift into broad generalizations or connect invisible dots for the participant. Ask direct questions related to specific training focal points.
Can evaluation results be quickly applied to the system’s needs? Evaluation forms are great tools for proving ROI on a training initiative, but don’t let that be their only purpose. If the form is properly focused, areas of opportunity should be immediately evident. Have a plan to respond quickly and give participants the tools they need to apply their learning to the workplace.
If you only hand out an evaluation form at the end of a training session, you’re getting half of the picture. Collect participant data before and after the training to see the evolution of understanding, not just a final snapshot. A documented improvement in numbers helps to prove program effectiveness, highlight areas for participant improvement, and pinpoint specific curriculum strengths and opportunities. In addition to program pre- and post-assessments, it is also a good idea to offer pre- and post-evaluations of the system’s overall mood, ability to work together, and general effectiveness.
This should sound obvious, but it must be said: forms and analytics should be shared with all appropriate parties. This doesn’t mean that trainees should be required to sign their name to each assessment; in fact, anonymity yields the most honest feedback. But if aspects of an initiative fell short, that should be reflected in the numbers and communicated openly rather than buried. If evaluations are completed by hand, all original hard copies should be saved and documented.
The evaluation process and the forms themselves should be simple and systematic. The bulk of training efforts should go into program development, delivery, and determining action steps for the future; the delivery and assessment of evaluation results should be quick and efficient. Online forms yield the most immediate results, especially for online learning programs, but online questionnaires are often impractical in live, in-person training situations. Questions should be straightforward: easy to answer and easy to tabulate. Forms should be easily customizable to accommodate varying needs.