There can be a tendency to try to evaluate everything about a project or intervention, losing sight of the central motivation for the evaluation. A theory of change can help mitigate this tendency by determining what is needed (and realistic) given the resources available, including people’s time (also see evaluation practice 4.8).
In defining the scope of the intended evaluation it is useful to ask:
- What types of things will the evaluation include within its focus? For example:
  - Student experience:
    - learner involvement (e.g. evaluated via relevant engagement or motivation indicators)
    - learner/learning experience (finding out ‘what works’, as well as what hasn’t, from the learner perspective; can include e.g. ‘standard’ survey instruments such as BSS)
    - learner outcomes (e.g. improvements in pass and continuation rates; see the sketch after these questions)
  - Staff experience:
    - teaching staff views and insights into their own role, how they teach and support learning, and any changes to their practice (e.g. in techniques or strategies used) as a result of engaging with evaluation
    - teacher/teaching experience, which might include perceived benefits or challenges
    - staff development or support needs that emerge from engaging with the innovation/initiative
- What is feasible to cover?
- Are there any specific inclusion criteria? (see example below)
And to be explicit about any exclusions:
- What will the evaluation not include and why?
- Are there any specific exclusion criteria? (see example below)
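Where learner outcomes such as pass and continuation rates are in scope, it can help to be precise about how those indicators would actually be computed. Below is a minimal Python sketch of this kind of pre/post comparison; the record fields, cohort labels and sample data are illustrative assumptions, not drawn from any real student records system:

```python
# Minimal sketch: pass and continuation rates as learner-outcome indicators,
# compared across a pre-intervention and post-intervention cohort.
# All field names and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class LearnerRecord:
    passed: bool     # passed the module/assessment
    continued: bool  # continued to the next stage of study
    cohort: str      # e.g. "pre" or "post" intervention

def rate(records, predicate):
    """Proportion of records satisfying a predicate (None if no records)."""
    if not records:
        return None
    return sum(1 for r in records if predicate(r)) / len(records)

def outcome_indicators(records):
    """Pass and continuation rates per cohort, for side-by-side comparison."""
    indicators = {}
    for cohort in sorted({r.cohort for r in records}):
        subset = [r for r in records if r.cohort == cohort]
        indicators[cohort] = {
            "pass_rate": rate(subset, lambda r: r.passed),
            "continuation_rate": rate(subset, lambda r: r.continued),
        }
    return indicators

# Illustrative usage with made-up records:
records = [
    LearnerRecord(passed=True, continued=True, cohort="pre"),
    LearnerRecord(passed=False, continued=True, cohort="pre"),
    LearnerRecord(passed=True, continued=True, cohort="post"),
    LearnerRecord(passed=True, continued=False, cohort="post"),
]
print(outcome_indicators(records))
```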
The RUFDATA tool (Saunders 2000) can be useful for answering the questions above and for planning your evaluation.
Below, the RUFDATA tool is applied to the ‘feedback literacy’ intervention example:
| RUFDATA tool | ‘Feedback literacy’ intervention example |
| --- | --- |
| Reasons and purposes | What are your Reasons and Purposes for evaluation? In the case of the ‘feedback literacy’ example, the main reason and purpose is to assess whether the planned intervention (to improve student feedback literacy) has had the desired outputs, outcomes and impact. |
| Uses | What will be the Uses of your evaluation? Providing and learning from good practice (or knowing that it isn’t working as an effective practice in this context) is probably the main intended use here, i.e. to use the evaluation to determine ‘what works’ (or not) in improving ‘feedback literacy’ in this context. If you intend to publish or disseminate externally, ethics approval will be required upfront and planned for accordingly (see evaluation practice 3.5). |
| Foci | What will be the Foci of your evaluation? The focus is on ‘feedback literacy’ and whether the planned intervention is creating the desired outputs, outcomes and impact. The initial focus will be on monitoring the outputs, e.g. student satisfaction with the resources provided and evidence of utilisation (such as via VLE statistics), looking for early indications of whether the outputs are working as anticipated (and remaining open-minded as to whether they are ‘working’ or not). The focus then shifts to whether outcomes are being achieved. This is when a student-led evaluation approach might be most beneficial, complemented by, e.g., BSS survey data acting as proxies for whether initial outcomes (and then impact) are being realised (see evaluation practice 4.7 and Agency below). The evaluation may also wish to prioritise student-led evaluations, e.g. focus groups, for exploring effective (or ineffective) practice in action. An explicit exploration of staff experience may be excluded if resource is limited or the scope does not warrant it. |
| Data and Evidence | What will be the Data and Evidence for your evaluation? In addition to the (largely quantitative) data already suggested (immediate, e.g. student satisfaction with resources provided and evidence of utilisation such as VLE statistics; medium-term, e.g. BSS assessment question scores; and longer-term, e.g. NSS data and whether it is at or above benchmark; see the sketch after this table), evidence gathered about the learner/learning experience, e.g. through open-ended surveys and/or focus groups led by (trained) student evaluators, is likely to be the most productive here (qualitative), perhaps supplemented by some observational data if the scope warrants it. All of this is, of course, subject to normal ethics procedures. |
| Audience | Who will be the Audience for your evaluations? This is likely to include the community of practice (i.e. the teacher practitioners involved in the implementation and the wider community), students, and the project team including student evaluator members; there may also be funders and perhaps the wider research community. |
| Timing | What will be the Timing for your evaluations? This will need to coincide with decision-making cycles and the life cycle of the project. For example, if initial evaluations indicate the intervention is producing the types of immediate output expected in terms of student engagement with resources, then the student-led evaluation of that experience can take place, followed by analysis of the BSS results. This will need to be planned, with resource allocated to do it, proportionate to the scale of the intervention. |
| Agency | Who should be the agency conducting the evaluations? As indicated above, in this example this is likely to include yourselves (the project lead and team) and, perhaps external to the project delivery team, student-led evaluations of student experiences. This needs to be carefully planned; see also the considerations highlighted in evaluation practices 3.5 and 4.8. |
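To make the ‘at or above benchmark’ check mentioned in the Data and Evidence row concrete, here is a minimal sketch. It assumes survey results arrive as per-question lists of scores; the question identifiers, the 0–100 scale and the benchmark value are illustrative assumptions, not real BSS or NSS fields:

```python
# Minimal sketch: flag whether mean survey scores sit at or above a benchmark.
# Question identifiers, scores and the benchmark value are all illustrative.

def at_or_above_benchmark(scores, benchmark):
    """Map each question ID to True if its mean score meets the benchmark."""
    return {
        question: sum(values) / len(values) >= benchmark
        for question, values in scores.items()
        if values  # skip questions with no responses
    }

# Hypothetical mean-agreement data (0-100 scale) for feedback-related questions:
survey_scores = {
    "feedback_timeliness": [72, 68, 75, 80],
    "feedback_usefulness": [65, 70, 62, 58],
}
print(at_or_above_benchmark(survey_scores, benchmark=70.0))
# -> {'feedback_timeliness': True, 'feedback_usefulness': False}
```

The same pattern extends from the medium-term proxies (e.g. BSS assessment question scores) to the longer-term benchmark comparison, with the benchmark value set to whatever reference point the evaluation has agreed.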