GenAI has prompted the sector to evaluate the extent to which existing modes of assessment are valid and sustainable in a GenAI-augmented world. Initial reactions to GenAI often focused on academic practice and integrity (Jisc, 2023). Since then, the debates have become much broader and deeper. For example, the essay has been pronounced ‘dead’ (Marche, 2022); and concerns about student agency have been raised (Wallbank, 2023). Three core certainties have emerged.
- We must embrace and adapt to GenAI. ‘Digital and information literacy’ is a Brookes Graduate Attribute and future scholars, workers and entrepreneurs need to be able to use digital tools effectively.
- Writing is a core component of assessment.
- With artefacts now being generated from prompts and created predictively by algorithms, there is an increasing emphasis on placing human traits and thinking processes at the forefront of teaching and assessment.
If we rely on traditional models of assessment then we have to recognise that students may be able to use (and possibly misuse) GenAI at every stage of the process, as illustrated by the following example of a written essay or report on a tutor-generated topic.
Students should know which of these applications of GenAI are acceptable or not acceptable for every one of their summative assessments.
An example: stages in a simple written assignment which could involve GenAI
Assessment components
1. Assessment brief, students need to:
- Define key terms (ask GenAI for definitions/expectations)
- Generate/plan structure (ask GenAI for an outline)
- Check assessment criteria (ask GenAI to evaluate the completed assignment against the criteria)
- Plan schedule/time management (ask GenAI to propose a schedule)
2. Research and planning, students need to:
- Find relevant research papers (search using a relevant paper-analysing GenAI tool)
- Select the most important sources (search using a relevant GenAI tool)
- Read and understand key texts (use a GenAI Large Language Model (LLM) to summarise and clarify)
- Summarise key ideas (use GenAI to summarise and clarify)
3. Writing, revising and presentation, students need to:
- Use appropriate structure (ask an LLM to suggest an outline with appropriate sections)
- Prepare a draft (ask a relevant GenAI tool to propose a draft)
- Design and prepare visual aids (use an image-creating GenAI tool to prepare diagrams and images)
- Critique and revise (ask an LLM to evaluate the draft using the criteria)
- Final review and proof-read (ask an LLM to proofread and comment on style)
The following synthesises current thinking and pedagogic principles related to inclusive, authentic assessments within the framework of Brookes’ Strategy 2035, Brookes’ Graduate Attributes, and the IDEAS Inclusive Curriculum Model. The focus is on assessment that either:
- incorporates GenAI, or
- creates opportunities for assessing students and fostering learning independently of AI.
Ideally changes to modules or assessments should be discussed at the programme level. This will ensure that the approach to assessment in response to GenAI is strategic and that decisions are made with consideration of both programme and level outcomes.
One useful response to the problem of deciding which uses of GenAI are appropriate in a given assignment is the AI Assessment Scale proposed by Mike Perkins and colleagues; there is also a useful short introduction to the development of this scale from Leon Furze. They propose a five-point scale which ranges from ‘No AI’, where the assessment has to be “completed entirely without AI assistance”, through to ‘Full AI’, where students can use AI fully without declaring what they have used. The intermediate steps allow students to use AI at some points, and require different levels of reporting of what has been used and how.
Pedagogic Practice 18: Diversify assessment
Using assessment for learning principles, we can mix and match different assessment types to develop a broader range of skills and prepare students for the increasingly hybrid, GenAI-enabled world. A useful recent resource comes from Jisc (2023): Assessment ideas for an AI-enabled world.
A rich diet of assessment modes enables different learning outcomes to be assessed from multiple angles, enriching the learning experience and promoting deeper, more inclusive engagement.
Pedagogic Practice 19: Create authentic assessment tasks
In an authentic assessment activity, the task is realistic and meaningful, leading to an artefact, product or performance which is – or could be – useful in its own right in work, study or social life.
Authentic assessment can meaningfully contribute to our students’ developing a sustainability mindset. It can usefully support inclusion and diversity, and help students prepare for many other ‘real world’ challenges. This aligns with the aims of the ‘assessment as learning’ and ‘sustainable mindset’ elements of the Brookes IDEAS model.
An authentic assessment task should be relevant to students. For example:
- Does it meaningfully connect with the rest of the discipline and / or their lived experiences?
- Will it be the kind of task they might have to perform in their future lives?
Such activities are likely to involve students in working with ‘abstract concepts, facts, and formulae inside a realistic—and highly social—context’, in ways that replicate the activities of a professional or disciplinary community (Lombardi, 2007).
Realistic contexts for assessment also allow students to choose their own paths through the task and reach their own, contextually-informed, conclusions, rather than mechanistically applying procedures to arrive at ‘the correct’ solution. Diversity of approach and response is encouraged and can be used in contexts that either embed and embrace GenAI, or try to find ways to minimise its use so as to facilitate the development of other skills.
Authentic assessment tasks can be designed in different ways, with different types and degrees of GenAI use, to suit the required learning outcomes. For example:
- Incorporating GenAI to help prepare students for a GenAI-enabled world whilst building GenAI competencies e.g. critiquing AI-generated material (Pedagogic Practice 7), debating with GenAI, road-testing GenAI (Pedagogic Practice 4), real or fake exercises (Pedagogic Practice 8), or producing a piece of hybrid writing (Pedagogic Practice 10).
- Creating specific, reflective activities based on what happened within the students’ immediate, real contexts or experiences. Because GenAI is predictive and prompt-driven, it cannot respond effectively to such tasks (for example, reflecting on the specifics of a seminar debate, a work placement, or an interdisciplinary task).
- Not using GenAI when it deprives students of opportunities to develop cognition and human-centric skills (e.g. students’ personal understanding and experiences, their ability to make connections between disparate fields of knowledge, and their thought processes, reasoning and self-efficacy).
- Incorporating assessments that emphasise ethical considerations, self-management, social intelligence, and innovation. This approach can enhance the personalisation of assessments, making them less susceptible to predictable GenAI-generated outcomes.
Pedagogic Practice 20: Ensure clarity in the expected use (or not) of GenAI in assessment
Clear expectations and parameters are essential both to enable students to decipher and understand assignment tasks and to communicate what is expected and acceptable in terms of using GenAI in assessment tasks.
There are a number of ways of increasing the clarity of assessment tasks, including:
- Clearly signposting students to the relevant Brookes Policies and Guidelines on the Use of GenAI.
- Ensuring alignment of activity keywords in assessment questions or guidance with Bloom’s Taxonomy and/or key concepts. We recommend Skills Development Scotland’s Skills 4.0 metaskills framework, which can help retain clarity and assist with sense-making. Assessments that focus on ethical issues, self-management, social intelligence and innovation may appear vague to some students and need ‘unpacking’.
- Providing context. If students can see what, why and how they will be assessed (the three pillars of Universal Design for Learning – CAST, 2023), it can help them link what they need to do with the affective, recognition and strategic networks of learning. This renders the task more accessible, easily assimilated, inclusive and scaffolded. It can also help them see assessment as an integral part of learning (assessment as learning), rather than something that is ‘done to them’.
Pedagogic Practice 21: Provide integrated support for assessment
Clarity in outlining expectations and providing instructions can only go so far.
It is the nature of authentic assessments to mirror the real world in all of its complexity. Embedding GenAI in assessment involves supporting the development of enhanced critical thinking skills so that students can distinguish truth from fiction and recognise ‘hallucinations’, where fabricated information is presented as fact with confidence and authority. GenAI stands accused of flooding the internet with misinformation or ‘deeper deep fakes’, so students’ ability to critically evaluate information and distinguish between truth and fiction is increasingly important. Truth and rationality are crucial to civil society and are integral to both Brookes’ Strategy 2035 (Education and Enterprise Pillar) and the Graduate Attributes. This work may be challenging for students who have come from educational environments that value certainty, and they will require careful support from staff in overcoming these worries.
Several practical strategies can help here. Staff can:
- scaffold academic and assignment support by offering clear instructions, assignment briefings within classes, forums or drop-in hours.
- encourage students to self-evaluate and prepare for/understand assignments through peer activities, or even co-create assignments with students.
- enlist the support of the Centre for Academic Development, which can run workshops on developing academic literacies.
Research undertaken at Brookes and five other institutions strongly supports the idea that this kind of support translates into higher achievement, better outcomes and increased inclusivity.
Pedagogic Practice 22: Promote self-evaluation
Authentic assessment tasks should explicitly build in opportunities and support for students to critically reflect and evaluate their ongoing work – building their metacognitive skills.
This kind of metacognition is linked with encouraging deep learning and with the principles of assessment for and as learning articulated through the Brookes IDEAS model. It assists students in making connections between different areas of their learning, thus supporting the development of transferable skills (Ashford-Rowe, 2014). It also encourages students to become reflective, autonomous, empowered, lifelong learners (key to the Education and Enterprise Pillar of the Oxford Brookes Strategy 2035). In our increasingly GenAI-enabled world, fostering these skills is essential for individuals to navigate and contribute meaningfully to evolving technological landscapes.
Several practical strategies can help here. Staff can:
- encourage students to actively engage in evaluating their work by openly discussing grading criteria or rubrics. It's vital for students to know exactly how they'll be evaluated. However, don't stop at just providing explicit criteria: take it a step further and involve students in conversations about the unspoken expectations within their academic community (Villarroel et al., 2018). Prompt them to share their perspectives on these implicit criteria, helping them develop their own professional judgement in the process. This practical approach will empower students to better understand and navigate the assessment process.
- encourage students to practise evaluating and reflecting on their own work individually, in pairs, or in small groups, through a variety of written and oral means according to the type of task. This might include offering feedback on each other’s work, for example.
- encourage the ethical use of GenAI as a means of self-scaffolding formative work through writing prompts, analysis of styles of writing, reviewing drafts, as a reading/summarising tool, or even as a means of providing initial feedback. However, as we suggested above, always bear in mind that utilising GenAI effectively may require some training/support in terms of developing proficiency in prompt engineering as an academic skill.
Pedagogic Practice 23: Promote inclusivity through authentic assessment
Authentic assessments promote the use of higher-order cognitive skills such as problem-solving, creativity, analysis and decision-making, and promote evaluation, both of information and of the self, encouraging self-regulation and inclusion.
There is often an assumption that educational technologies are inclusive, enabling or ‘assistive’ by default. GenAI clearly has the potential to make education more inclusive for neurodivergent students, whether as ‘an accommodation’ and/or as a form of intervention to reinforce learning - especially in the field of assessment. On the other hand, as UNESCO has pointed out, we have yet to fully understand the true implications of GenAI with respect to thinking processes and the extent to which GenAI tools may even ‘exacerbate existing disparities’ in respect of access and equity (Miao and Holmes, 2023). Indeed, it has been suggested that educational technologies can have an adverse effect on learning and attention spans, and can even be detrimental to the formation and reinforcement of the very mental schemas and architectures essential to learning - issues which pertain to all students, but which can be exacerbated in the case of neurodivergent students.
In addition to the above, key enablers of inclusive assessment, especially with respect to incorporating GenAI, include:
- appropriate scaffolding of assignment tasks, with clarity in terms of instructions and assignment briefs.
- very careful retrofitting of existing assessment types to incorporate GenAI components. Sometimes shoehorning in GenAI can skew the original learning outcomes being assessed, embed new ones that are not explicitly stated, disrupt the three interrelated parts of ‘constructive alignment’ (Biggs and Tang, 2011), or create student confusion through laboured or ambiguous instructions or components of the task that no longer align.
- ensuring you adhere to the principles of assessment as, of and for learning.
Remember, assessment is not simply a means of testing knowledge and providing a grade or feedback: assessment ought to facilitate learning in and of itself. Designing assessments not just as methods for testing knowledge, but with a focus on encouraging multidimensional learning and the development of higher-order cognitive skills, can foster a more comprehensive educational experience.