As we all know, you can’t improve what you can’t measure. That is why the evaluation phase is essential in every eLearning project, yet it is often skipped for a number of reasons: lack of time, no budget, or a small perceived ROI for the current project.
Moreover, in an age where our most basic actions on the internet generate hundreds of data points, the most commonly used evaluation method in eLearning, Kirkpatrick’s 4-Level Evaluation, is designed around learners manually providing feedback through surveys and interviews. With this technique, if you want to know whether learners liked your eLearning course, you have to ask them directly. Compare that with other industries: Facebook can infer how you feel simply by analysing your photos.
While Kirkpatrick’s 4-Level Evaluation provides a very solid framework, its traditional implementation is outdated, especially for digital learning. For this process to be performed consistently and efficiently, it needs to be automated.
This pattern is the first in a series of data points that can be gathered automatically in order to evaluate an eLearning course.
As evaluation is not an integral part of the learning process, it must not interfere with it. This is why the activity rating needs to have two characteristics:
Optional (learners rate an activity only if they wish to do so), and
Out of focus (the rating control should not interfere in any way with the actual learning content and should not distract attention from the learning process).
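The two requirements above can be sketched in code. The following is a minimal, illustrative sketch, not a reference implementation: the class and method names (`ActivityRatings`, `submit`, `summary`) are hypothetical, and the 1–5 star scale is an assumption. The key design point is that a rating is stored only when a learner explicitly opts in; an activity with no ratings is simply absent from the data, which is itself a valid (and automatically gathered) signal.

```python
# Hypothetical sketch of an optional activity-rating store.
# Assumptions: a 1-5 star scale; all names below are illustrative.

from collections import defaultdict
from statistics import mean


class ActivityRatings:
    """Collects optional star ratings per learning activity."""

    def __init__(self):
        # activity_id -> list of star values submitted by learners
        self._ratings = defaultdict(list)

    def submit(self, activity_id: str, stars: int) -> None:
        """Record a rating; called only when a learner chooses to rate."""
        if not 1 <= stars <= 5:
            raise ValueError("stars must be between 1 and 5")
        self._ratings[activity_id].append(stars)

    def summary(self, activity_id: str):
        """Return (count, average) for an activity, or None if unrated."""
        stars = self._ratings.get(activity_id)
        if not stars:
            return None  # nobody rated it: optionality preserved
        return len(stars), round(mean(stars), 2)


ratings = ActivityRatings()
ratings.submit("module-1/quiz", 4)
ratings.submit("module-1/quiz", 5)
print(ratings.summary("module-1/quiz"))   # -> (2, 4.5)
print(ratings.summary("module-2/video"))  # -> None (no one rated it)
```

The "out of focus" requirement lives on the presentation side: the widget that eventually calls something like `submit` should sit outside the main content area and never block the learner’s progress.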