This guidance offers principles and practical suggestions for the ethical and effective use of Generative Artificial Intelligence (GenAI) within teaching, learning and assessment (TLA). It builds upon the Oxford Brookes Strategy 2035, the IDEAS inclusive curriculum model and Brookes’ guidance for digitally enabled programmes, and reflects the university’s position on the use of GenAI.
Guidance for Schools, Programmes and Modules
AI models and software tools offer exciting time-saving affordances for academic practice and professional service. However, they might store, use or distribute data uploaded to them, which means they may not be safe, secure or GDPR compliant.
When using AI tools, avoid uploading any sensitive, confidential or protected data.
Ask yourself these guiding questions:
- Do I fully understand the data protection and privacy settings on this AI tool?
- What data am I giving it, both in my prompts and in what I upload?
- Do I have the right to share this data? Is it my information rather than someone else's?
- Am I happy for it to store, use and share this data with others?
- Could sharing this data lead to harm, or impact on my or someone else's freedoms and rights?
If you are unsure about the data security of any AI tool you would like to use for Oxford Brookes academic practice or professional service, contact info.sec@brookes.ac.uk
Oxford Brookes recommends Microsoft Copilot
Oxford Brookes students and staff have access to the data-secure Microsoft Copilot AI chatbot, available through a Microsoft academic institutional licence.
When using Microsoft Copilot, signing in with your Oxford Brookes log-in ensures your data is protected in accordance with Microsoft’s privacy notice. However, we do not recommend you upload any confidential or protected data or information, because Copilot does not comply with Brookes’ expectations of good practice with regard to information security.
Among other things, you can use Copilot to create images, learn about new topics, compare and contrast text in documents, summarise content, generate ideas and solve problems.
The guidance below explains how to access Microsoft Copilot with your Oxford Brookes account. You must first register with Microsoft using your Oxford Brookes email address; you can do this by visiting Office 365 Education. Then, to access Copilot:
- Go to https://copilot.microsoft.com/
- Click 'login' in the top left of the page.
- Choose the 'login with a work/school account' option.
- Log in using your Office 365 username and password.
Google Gemini
Google Gemini is a useful tool for information searching, content production and reasoning, which you can sign into using your Oxford Brookes log-in.
However, Google’s own privacy notice tells users not to share anything personal, as Gemini uses human reviewers and retains your data for three years. Google’s approach may therefore not comply with the General Data Protection Regulation (GDPR). Do not upload any confidential or protected data or information to Gemini.
Other available AI models and software tools
With so many AI tools emerging, it is impossible for Oxford Brookes to review them all. We must all develop AI literacy, i.e. knowing which AI tools to use and how and when to use them. This will help us all stay safe and secure in this evolving AI landscape.
Staying safe and secure
We recommend only using AI models or software tools that are considered safe and secure in how they handle your information.
Your privacy and the security of our Oxford Brookes network matter to us all. If you are not confident that an AI tool is safe and secure, then stop, choose a recommended application or do some research of your own: ask yourself the guiding questions above, check the tool’s privacy statement, or search online. If you are still unsure, contact info.sec@brookes.ac.uk
With all GenAI tools, if you are not fully confident that they are safe, secure and GDPR compliant, do not enter anything you would not want a human reviewer to see, or the AI model or software tool to store, use or distribute.
The use of generative and other AI is emerging as an essential graduate skill (QAA, 2023), and we must ‘embrace and adopt’ these fast-evolving technologies, developing the critical digital literacies necessary to use them responsibly, ethically and with integrity.
While there is deep anxiety about the threat AI poses to teaching, learning, assessment and academic integrity in higher education, it is also acknowledged that generative AI has the potential for deep impact on teaching and learning experiences, enabling efficiencies and a personalised experience that can drive engagement. This highlights the importance of fair and equal access to AI (Illingsworth, 2023) and of digital security. It is an issue relevant to all disciplines, levels of study, and to taught and research programmes.
The following four principles can help ensure effective and ethical use of GenAI in Teaching, Learning and Assessment. Each principle is explained and underpinned by supporting pedagogic practices and further resources to inspire future-fit teaching, learning and assessment practices.
These principles can be applied in every subject discipline and programme area. Their specific application should be discussed and agreed at programme/course and module level.
This guidance will be regularly updated to include and highlight further developments as the technology advances.
In addition to the examples given under the four principles, a growing number of examples demonstrate how and where teaching staff have modified their curriculum, teaching and assessment processes to take advantage of GenAI.
You may find the following examples especially useful:
The Oxford Brookes-produced Talking Teaching Across the Globe webinar series is focusing on "Approaches to the Use of Generative Artificial Intelligence" for 2023/24. Upcoming episodes will be publicised throughout the year, and videos of previous events are available in the archive.
The University of Kent’s Digitally Enhanced Education Webinars channel on YouTube includes several playlists relating to various aspects of GenAI and education.
The recent JISC collection of ‘Assessment Ideas for an AI-Enabled World’ contains 40 examples of innovative, AI-augmented assessment types, 29 of which explicitly moot the production of a ‘written document’ (e.g. as a blog, traditional essay, reflective account or portfolio) or highlight the development of academic writing skills as a key learning outcome.
You may also find the following reference list useful.
References
Acar, O.A. (2023) Are your students ready for AI? Harvard Business Publishing Education. At: Are Your Students Ready for AI? | Harvard Business Publishing Education (Accessed 4 January 2024)
Armstrong, P. (2010) ‘Bloom’s Taxonomy’. Vanderbilt University Center for Teaching. Available at: https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/ (Accessed 4 January 2024)
Ashford-Rowe, K., Herrington, J., and Brown, C. (2014). ‘Establishing the Critical Elements that Determine Authentic Assessment’. Assessment & Evaluation in Higher Education 39: 2. Available at http://dx.doi.org/10.1080/02602938.2013.819566 (Accessed 4 January 2024)
Attwell, G. (2023) The Prepare Framework. AI Pioneers Blog Post. At: https://aipioneers.org/the-prepare-framework/ (Accessed 4 January 2024)
Beckingham, S. and Hartley, P. (2023) Reshaping Higher Education Learning, Teaching and Assessment through Artificial Intelligence: What do we need to know, do, and be concerned about? Keynote to UHI Learning and Teaching Conference, December 2023.
Bendor-Samuel, P. (2023) Key Issues Affecting The Effectiveness Of Generative AI. Forbes, at: https://www.forbes.com/sites/peterbendorsamuel/2023/12/05/key-issues-affecting-the-effectiveness-of-generative-ai/ (Accessed 4 January 2024)
Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University. London: McGraw-Hill Education.
Boud, D. (2023) Positioning assessment differently in a world of gen AI. Webinar at: Positioning assessment differently in a world of gen AI (youtube.com). (Accessed 4 January 2024)
Bri Does Ai (2023) 5 SECRET Ways to Become a Speed Learner With ChatGPT. At: https://www.youtube.com/watch?v=VeXKByjBMXw (Accessed 4 January 2024)
CAST (2023). ‘About Universal Design for Learning’. Available at: https://www.cast.org/impact/universal-design-for-learning-udl (Accessed 4 January 2024)
CLA (Copyright Licensing Agency) (undated) Principles for copyright and Generative AI. At: https://cla.co.uk/about-us/copy-right/principles-for-copyright-and-generative-ai/ (Accessed 4 January 2024)
Farrell, H. (2023) AI in the classroom: enhancing student engagement. Webinar at: AI in the Classroom: Enhancing Student Engagement (youtube.com) (Accessed 4 January 2024)
Fitzpatrick, D., Fox, A., and Weinstein, B. (2023) The AI Classroom: The ultimate guide to artificial intelligence in education. Beech Grove, IN: TeacherGoals Publishing.
Francis, N. and Smith, D. (2023). Generative AI in assessment. National Teaching Repository. Educational resource. https://doi.org/10.25416/NTR.24121182.v2 (Accessed 4 January 2024)
Furze, L (2023) The AI Assessment Scale: Version 2. Blog post at https://leonfurze.com/2023/12/18/the-ai-assessment-scale-version-2/ (Accessed 4 January 2024)
Greenfield, Susan, (2014). Mind Change: How Digital Technologies are Leaving their Mark on our Brains. London: Random House.
Grynbaum, M.M. and Mac, R. (2023) The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work. New York Times, 27/12/23. At: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html (Accessed 4 January 2024)
Hao, K. (2023) The Chaos inside OpenAI. Interview by Big Think. At: https://www.bilibili.com/video/BV1vN4y1e71B/?spm_id_from=888.80997.embed_other.whitelist&t=8 (Accessed 4 January 2024)
Hart-Davis, G. (2023) Killer ChatGPT Prompts: Harness the power of AI for success and profit. Hoboken, NJ: Wiley.
JISC (2023) Generative AI - A Primer. At: https://beta.jisc.ac.uk/reports/generative-ai-a-primer (Accessed 4 January 2024)
Keary, T. (2023) AI Hallucination. [online] Techopedia. At: https://www.techopedia.com/definition/ai-hallucination (Accessed 4 January 2024)
Lombardi, M.M. (2007) ‘Authentic Learning for the 21st Century: An overview. ELI paper 1’, Educause Learning Initiative. At: https://www.researchgate.net/publication/220040581_Authentic_Learning_for_the_21st_Century_An_Overview (Accessed 4 January 2024)
Marche, Stephen (2022) ‘The College Essay is Dead: Nobody is Prepared for How AI will Transform Academia’, The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/ (Accessed 4 January 2024)
Mark, Gloria, (2023). Attention Span, Toronto, Ontario: Hanover Square Press.
McArthur, J. (2021) ‘QAA Annual Conference Blog – Rethinking Authentic Assessment in a Post-Covid World: Is it Right to Hope for Change?’ QAA blog. Available at: https://www.qaa.ac.uk/news-events/blog/qaa-annual-conference-blog-rethinking-authentic-assessment-in-a-post-covid-world (Accessed 31 October 2022)
Miao, Fengchun & Holmes, Wayne (2023) ‘Guidance for Generative AI in Education and Research’, UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (Accessed 4 January 2024)
OECD (2023) OECD.AI Expert Group on AI Futures: Future AI scenarios exercise. At: https://www.youtube.com/watch?v=b9ymA_OjDWo (Accessed 4 January 2024)
Ofcom (2023) Gen Z driving early adoption of Gen AI, our latest research shows. At: https://www.ofcom.org.uk/news-centre/2023/gen-z-driving-early-adoption-of-gen-ai
Patel, V. (2023) Elon Musk-Led xAI Adopts A For-Profit Benefit Corporate Structure With Aim To Do Some Societal Good. At: https://www.ibtimes.co.uk/elon-musk-led-xai-adopts-profit-benefit-corporate-structure-aim-do-some-societal-good-1722412
Perkins, M., Furze, L., Roe, J. and MacVaugh, J. (2023) Navigating the generative AI era: Introducing the AI assessment scale for ethical GenAI assessment. arXiv preprint. At: https://arxiv.org/abs/2312.07086 (Accessed 4 January 2024)
Ra, Chaelin K., Cho, Junhan, & Stone, Matthew D. (2018) ‘Association of Digital Media Use with Subsequent Symptoms of Attention Deficit/Hyperactivity Disorder Among Adolescents’, JAMA, 320 (3), pp. 255-263. Available at: https://jamanetwork.com/journals/jama/fullarticle/2687861 (Accessed 4 January 2024)
Riedel, J. (2023) Understanding the Environmental Impact of AI and GenAI. LinkedIn. At: https://www.linkedin.com/pulse/understanding-environmental-impact-ai-genai-jürgen-riedel-nnfhe/ (Accessed 4 January 2024)
Rogers, R. (2023) What’s AGI, and Why Are AI Experts Skeptical? Wired. At: https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained/ (Accessed 4 January 2024)
Rust, C. (2022) ‘Meaningful assessment: What is it and why does it matter?’ Teaching Insights. Available at: https://teachinginsights.ocsld.org/meaningful-assessment-what-is-it-and-why-does-it-matter/ (Accessed 4 January 2024)
Shah, S. (2023) Sam Altman on OpenAI, Future Risks and Rewards, and Artificial General Intelligence. Time.com. At: https://time.com/6344160/a-year-in-time-ceo-interview-sam-altman/ (Accessed 4 January 2024)
Altman, S. (2023) Sam Altman on OpenAI, Future Risks and Rewards, and Artificial General Intelligence. Interview for Time Magazine. At: https://www.youtube.com/watch?v=e1cf58VWzt8 (Accessed 4 January 2024)
Thor Jensen, K. (2023) Yes, Machines Make Mistakes: The 10 Biggest Flaws In Generative AI. PC Magazine. At: Yes, Machines Make Mistakes: The 10 Biggest Flaws In Generative AI (pcmag.com) (Accessed 4 January 2024)
Trucano, M. (2023) AI and the next digital divide in education. Brookings Institution. At: https://www.brookings.edu/articles/ai-and-the-next-digital-divide-in-education/ (Accessed 4 January 2024)
Trustible (2023) How Does China’s Approach To AI Regulation Differ From The US And EU? Forbes. At: https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/ (Accessed 4 January 2024)
UCL (2023) Designing assessments for an AI-enabled world. University website at: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/designing-assessments-ai-enabled-world (Accessed 4 January 2024)
University Alliance, (2023). ‘Supporting Student Progression and Attainment Through Sustainable Inclusive Assessment Practices: What Works?’ Available at: https://www.unialliance.ac.uk/our-work-2/university-alliance-inclusive-assessment-research-project/ (Accessed 13 November, 2023).
Villarroel, V., Bloxham, S., Bruna, D., Bruna, C. & Herrera-Seda, C. (2018) ‘Authentic Assessment: Creating a Blueprint for Course Design’, Assessment & Evaluation in Higher Education, 43:5, 840-854. At: https://www.tandfonline.com/doi/abs/10.1080/02602938.2017.1412396 (Accessed 4 January 2024)
Wallbank, Adrian J. (2023a) ‘ChatGPT and AI Writers: A Threat to Student Agency and Free Will?’ Times Higher Education. Available at: https://www.timeshighereducation.com/campus/chatgpt-and-ai-writers-threat-student-agency-and-free-will (Accessed 4 January 2024)
Wallbank, Adrian J. (2023b) ‘Prompt Engineering as Academic Skill: A Model for Effective ChatGPT Interactions’. Times Higher Education. Available at: https://www.timeshighereducation.com/campus/prompt-engineering-academic-skill-model-effective-chatgpt-interactions (Accessed 4 January 2024)
Wallbank, Adrian J. (2023c) ‘ADHD in Higher Education: Is Digital Learning Making it Worse?’ Times Higher Education. Available at: https://www.timeshighereducation.com/campus/adhd-higher-education-digital-learning-making-it-worse (Accessed 4 January 2024)
Wasserman, N. (2023) OpenAI’s Failed Experiment in Governance. Harvard Business Review. At: https://hbr.org/2023/11/openais-failed-experiment-in-governance (Accessed 4 January 2024)
Wolfram, S. (2023) What is ChatGPT doing and why does it work? Wolfram Media Inc.
World Economic Forum (2018) The Future of Jobs Report 2018. WEF. At: https://www.weforum.org/reports/the-future-of-jobs-report-2018