Evaluation practice 4.7

Describe the evaluation criteria, questions and methods

The evaluation criteria, questions and methods are each described and illustrated below with reference to a ‘feedback literacy’ example (see evaluation practice 1.2 for an introduction to this example if needed).

Evaluation criteria (linked to theory of change)

  • What immediate results (products of activities) will you capture to examine immediate impact? (e.g. student satisfaction with the resources provided, and evidence of utilisation via VLE usage statistics/profiles; a brief sketch follows this list).
  • What outcomes (intermediate changes) are being captured to examine and report impact? (e.g. BSS assessment question scores improve).
  • What indicators or measures are being employed to judge longer-term impact? (e.g. NSS satisfaction with assessment and feedback at or above benchmark).
  • How will any unintended outcomes be treated?
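
As a brief sketch of how the ‘utilisation’ evidence in the first bullet might be summarised, the Python below (using pandas) computes a simple utilisation rate per feedback resource from a hypothetical VLE usage export. The file name, column names and layout are illustrative assumptions, not a real VLE export format.

```python
# Minimal sketch: summarising hypothetical VLE usage statistics as an
# immediate-results indicator. File name and columns are assumptions.
import pandas as pd

# Assumed layout: one row per student/resource pair,
# with columns student_id, resource_id, views.
usage = pd.read_csv("vle_usage_export.csv")

# Proportion of the cohort that opened each feedback resource at least once.
opened = (
    usage[usage["views"] > 0]
    .groupby("resource_id")["student_id"]
    .nunique()
)
cohort_size = usage["student_id"].nunique()
print("Utilisation rate per resource:")
print((opened / cohort_size).round(2))
```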

Evaluation questions

  • What questions will be asked, and can existing survey instruments (e.g. BSS, NSS) provide some or all of the necessary data, for example pre- and post-intervention? (See the illustration below.)
  • Are the right questions being asked, and are they available in the right format? How do you know? This is about understanding the emphasis and focus of the evaluation, and making sure the questions asked (e.g. those selected from a routine questionnaire such as the BSS) will help provide an understanding of students’ feedback literacy (see the example selections below).

Evaluation methods

  • What type of evaluation is being carried out? (See evaluation practice 1.1 for guidance, and check that the type aligns with the purpose.) Examples include appreciative inquiry, user-led evaluation, cost/benefit evaluation and formative evaluation. For the ‘feedback literacy’ example project, a formative evaluation may be suitable.
  • What are the main stages or steps of the evaluation approach (with tools and methods aligned to the type of evaluation)? In the case of the ‘feedback literacy’ project, questionnaires and perhaps focus groups with a participatory emphasis may work well in combination with the baseline data already available. NB: normal ethical procedures need to be followed (see also evaluation practice 3.5).

The example below illustrates the selection of suitable questions from the baseline data set available from the BSS for the ‘feedback literacy’ example intervention used throughout these evaluation practices.

Example: using existing (satisfaction) questions for summative evaluation of whether the desired (intermediate) outcomes have been achieved

Using the feedback literacy example above, the desired outcome is: enhanced student satisfaction and engagement with feedback.

To gain type 2 empirical evidence according to the OfS typology, i.e. evidence demonstrating that an intervention might be associated with potentially beneficial results (see evaluation practice 2.3 for further details), the following process might be planned and implemented using existing (BSS) survey data:

Pick the most relevant questions from the ‘marking and assessment’ section of the BSS survey, for example:

  • How clear were the marking criteria used to assess your work?
  • How fair has the marking and assessment been on your course?
  • How often does the feedback help you to improve your work?

Ensure you have this data for previous years’ cohorts on the same programme(s) of study. If not, you may need to consider an alternative approach (e.g. see this guidance on small-cohort evaluation, which also illustrates how you can explore the impact of interventions on target student groups). Always check and follow research ethics guidance and procedures for any evaluation.

After the intervention has taken place, look at the scores for these feedback questions: is there an upward trend? If so, this may suggest that the intended outcomes are being achieved (a simple trend check is sketched below). If not, this will need to be explored (see the example of how this might be done further below).
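
A minimal sketch of such a trend check is given below in Python (pandas), assuming a long-format extract of BSS results with columns year, question and mean_score; the file name, column names and intervention year are illustrative assumptions rather than a real BSS export.

```python
# Minimal sketch: comparing mean BSS feedback-question scores before and
# after the intervention. File name, columns and year are assumptions.
import pandas as pd

scores = pd.read_csv("bss_feedback_scores.csv")  # year, question, mean_score
INTERVENTION_YEAR = 2023  # assumed year the intervention was introduced

pre = scores[scores["year"] < INTERVENTION_YEAR]
post = scores[scores["year"] >= INTERVENTION_YEAR]

# Mean score per question before and after, and the difference between them.
comparison = pd.DataFrame({
    "pre_mean": pre.groupby("question")["mean_score"].mean(),
    "post_mean": post.groupby("question")["mean_score"].mean(),
})
comparison["change"] = comparison["post_mean"] - comparison["pre_mean"]
print(comparison.sort_values("change", ascending=False))
```

Note that an upward change here is only suggestive (type 2 evidence in the OfS typology): cohort differences, response rates and other concurrent changes can all move these scores, so any trend should be read alongside the qualitative evidence discussed below.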

Capturing unintended outcomes will require different types of evaluation questions; an illustration is provided below.

For example: “When I was completing my assessment on [module/course as appropriate]…”

Choose from the relevant questions below:

  1. The thing I found most valuable/helpful [select one] was…
  2. The thing that most changed/altered [select one] the way I learned was…
  3. What made learning most effective for me was…
  4. The thing I found most difficult was…
  5. To help me improve as a learner, I would like my tutor to:
    1. Stop…
    2. Start…
    3. Continue…

Open-ended questions and responses, captured via a survey, for example, or used within a focus group setting, can help to build a picture of the unintended outcomes of an intervention, and can also help to probe and provide insights when things don’t work as hoped or planned (a first-pass screening sketch follows).
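
As a first-pass screening sketch (before any proper qualitative coding), the Python below tallies recurring themes in open-ended responses using a hand-made keyword-to-theme mapping. The responses and the mapping are illustrative assumptions; this is a screening aid, not an analysis method in its own right.

```python
# Minimal sketch: first-pass tally of recurring themes in open-ended
# responses. Keywords, themes and responses are illustrative assumptions.
from collections import Counter

responses = [  # hypothetical open-text survey responses
    "The feedback helped my confidence when redrafting",
    "I would like my tutor to continue the feedback sessions",
    "The marking criteria were unclear at the start",
]

themes = {  # assumed keyword -> theme mapping, built by the evaluator
    "confidence": "improved confidence",
    "continue": "keep current practice",
    "unclear": "clarity problems",
}

counts = Counter()
for text in responses:
    lower = text.lower()
    for keyword, theme in themes.items():
        if keyword in lower:
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```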

Unintended outcomes can include benefits from the intervention that were not initially anticipated, such as (in the ‘feedback literacy’ example) evidence of improved confidence and engagement, which might also be reflected, for example, in learning-session attendance figures.

Of course, unintended consequences can also be unexpected negative effects. When things don’t achieve the desired (initial) outputs and outcomes, the open-ended evaluation questions may help illuminate potential reasons, which can then be investigated and the intervention refined accordingly.

Evaluation is, of course, ongoing and formative: focused on ‘what works’ while also probing what doesn’t when this arises. See also evaluation practices 2.3 and 2.4 for how the formative process relates to the principle of sustaining engagement with scholarship in all its forms.