Teaching, Learning and Assessment (TL&A) or academic enhancement and innovation projects, initiatives and inquiries.
Evaluation practice 1.1
Notwithstanding the context of metric-informed provision and the need to report into TEF (Teaching Excellence Framework) and NSS (National Student Survey) processes, evaluation enables the educator to gather evidence concerning the impact of interventions (actions or activities undertaken) upon the student experience.
Evaluation may also aim to gather evidence concerning the impact on staff and their development in relation to their teaching and learning-support practice. Evaluation allows us to determine whether we should continue with an intervention/initiative, roll it out more widely, discontinue it, or change tack.
Chelimsky (1997) recognises three conceptual frameworks for evaluation, which correlate with three different purposes, and associated types:
| Purposes (based on Chelimsky, 1997) | Type of evaluation |
|---|---|
| Evaluation for accountability (measuring results or efficacy) | Summative, that is, measuring the overall impact or effectiveness of an intervention/initiative. Also often described as an impact evaluation. |
| Evaluation for development (providing information to help improve practice) | Formative, that is, gathering information during the process and using it to inform improvement. Also often described as a process evaluation, it is often conducted while an intervention is ongoing to determine whether it is achieving its objectives. It can also be used to establish what went well and what could have gone better. |
| Evaluation for knowledge (to acquire a deeper or richer understanding of a particular area of practice, such as student learning or staff teaching methods) | Evaluation for learning, which may contribute to an educational research agenda. This distinction can sometimes feel a little 'artificial': doesn't all evaluation produce knowledge? But it is the 'level of rigour in inquiry' (see e.g. Borrego, 2007 and the OfS typology under evaluation practice 2.3) that determines the quality of the new knowledge produced and the extent to which the outputs from the evaluation process might contribute to an educational research agenda. |
NB The above purposes are not mutually exclusive (e.g. many evaluations scrutinise both process and impact), but it is important to establish the purposes of the evaluation at the outset of a project.
Evaluation is carried out for the benefit of both the learner and the educator, and also for the organisation, to determine whether the investment is garnering a return. Educators and educational establishments undertake evaluation to help ascertain one or more of the following:
- the impact of the intervention or teaching practices in hand on the student learning experience, including the impact on different student groups (e.g. by gender, ethnicity, etc.)
- insights into how and what the students are learning
- information about how the educator/teacher might improve their own teaching practice, along with other benefits such as job enrichment, motivation and potential career progression opportunities
- an evidence base that may help the educator/evaluator turn their findings of good practice into a scholarly publication (see also evaluation practice 2.3 and practice 2.4).
Effective evaluation practice also provides feedback to students that offers them insights into themselves as learners and helps them discover how they might improve their own learning practices. For example, evaluation practice 1.2 describes a project to improve student 'feedback literacy'. In this case students should be given explanations of what has been determined and what practices they might adopt to improve their 'feedback literacy'.
At Oxford Brookes University, in accordance with effective, inclusive evaluation practice, student engagement with evaluation goes well beyond offering students feedback that they can action to improve their own skills and learning. At Oxford Brookes we advocate for students to be partners in enhancement; this relates to the principle under which this evaluation practice sits, namely that all projects/interventions involve students.
Thus, as partners in enhancement, students at Oxford Brookes are not (and should not be) simply targets for responding to questions or completing surveys. Of course, as part of an evaluation they may be invited to do so, in which case the normal ethical considerations will apply (such as informed consent; see evaluation practice 3.5). But in their capacity as partners in evaluation, students will be involved as empowered collaborators in evaluation activities and in sharing the lessons learned from the evaluation. For example, students can be collaborators in the design of the evaluation strategy and in the design of the intervention itself. Please see evaluation practice 4.6, practice 4.7 and practice 4.8 to assist you when making such design decisions related to the involvement of students and other stakeholders in the intervention and its evaluation.
Chelimsky, E. (1997). Thoughts for a New Evaluation Society. Evaluation, 3(1), 97-109. https://doi.org/10.1177/135638909700300107