The Office for Standards in Education, Children's Services and Skills (Ofsted) and Office of Qualifications and Examinations Regulation (Ofqual) have recently set out their approaches for regulating AI in the education sector in England. In this blog we summarise the key principles being adopted by the regulators and how this might impact on use of AI by schools, universities and other education providers.
Background
In February 2024, the Department for Science, Innovation and Technology (DSIT) and the Department for Education (DfE) asked a number of key regulators to publish their strategic approach to using artificial intelligence (AI). Following this request, on 24 April 2024, Ofsted and Ofqual both published their approaches to regulating AI in the education sector.
Both regulators are committed to the principles set out in the UK Government's AI regulation white paper (UK AI Regulation White Paper) (which we discuss in more detail in our blog here). This may in turn encourage other education regulators to do the same and promote a more uniform approach to AI regulation in the UK's education sector.
While Ofsted and Ofqual are English regulatory bodies, and there are separate inspecting bodies for Wales, Scotland and Northern Ireland, it is possible that Ofsted's and Ofqual's approaches will influence education regulators across the UK.
Ofsted's approach to regulating AI
In its approach to AI in education, Ofsted specifies that it supports the use of AI by education providers where it improves the care and education of children and learners. Ofsted will not, however, actively inspect the quality of AI tools used by education providers, meaning providers are responsible for ensuring the tools they use are of adequate quality and capable of producing compliant output.
Ofsted also identifies the UK AI Regulation White Paper as the standard against which it will review not only its own use of AI, but the use of AI by education providers falling under Ofsted's regulation.
This may help education providers assess whether their use of AI is compliant. In particular, Ofsted's approach expects compliance with the five key principles set out in the UK AI Regulation White Paper as follows:
- Safety, security and robustness. Both Ofsted and education providers are expected to continually test AI solutions to identify and rectify bias and error, and to ensure AI solutions are secure, safe and protective of user data.
- Appropriate transparency and explainability. Both Ofsted and education providers are expected to be transparent about their use of AI, and ensure they understand the suggestions made by the AI.
- Fairness. Both Ofsted and education providers are expected to use AI solutions that are ethically appropriate, consider any bias relating to small groups and protected characteristics at the development stage and monitor and correct where appropriate.
- Accountability and governance. Ofsted is expected to provide clear guidance and rules for AI users and developers falling under Ofsted's regulation, and education providers are expected to ensure their staff have clear roles and responsibilities in relation to the monitoring, evaluation, maintenance and use of AI.
- Contestability and redress. Both Ofsted and education providers are expected to empower staff to correct and overrule AI suggestions, with the aim of ensuring decisions are ultimately made by the user and not the technology. Concerns and complaints are also expected to be managed continually.
Furthermore, Ofsted has expressed that it intends to work with other regulators and the DfE and DSIT moving forward to ensure that its approach remains consistent with updated guidance.
Ofqual's approach to regulating AI
Much like Ofsted, Ofqual's approach to regulating the use of AI in the qualifications sector also relies on the five key principles set out in the UK AI Regulation White Paper.
This is presented in the form of five key objectives, which connect with the principles of the UK AI Regulation White Paper and aim to support the design, development and delivery of high-quality assessment and identification and assessment of risks in using AI in non-exam assessments. These five key objectives are: (1) ensuring fairness for students, (2) maintaining the validity of qualifications, (3) protecting security, (4) maintaining public confidence, and (5) enabling innovation.
As mentioned, each of these objectives expressly connects with one of the five key principles of the UK AI Regulation White Paper:
- Safety, security and robustness. Alertness to malpractice, including assessing and addressing vulnerabilities to assessment security arising from AI, and protection of student data and of question and paper security.
- Appropriate transparency and explainability. Recognising and managing potential threats to validity through varying applications of AI, and identifying and acting on activities more susceptible to being adversely affected by the use of AI.
- Fairness. Ensuring use of AI does not lead to unfair outcomes for students, loss of currency of their achievements, and/or lack of clarity over what constitutes malpractice.
- Accountability and governance. Maintaining public confidence around the use of AI, alongside ensuring its use does not lead to unfair outcomes for students, loss of currency of their achievements, and/or lack of clarity over what constitutes malpractice.
- Contestability and redress. Identifying activities which are more susceptible to being adversely affected by the use of AI, alongside ensuring its use does not lead to unfair outcomes for students, loss of currency of their achievements, and/or lack of clarity over what constitutes malpractice.
Moreover, Ofqual is concerned with ensuring that awarding organisations use AI in a manner that is safe and appropriate and does not threaten the fairness and standards of, or public confidence in, qualifications. In particular, it is focused on how awarding bodies manage malpractice risks, use AI to mark students' work, and use AI in remote invigilation.
Scotland's approach to regulating AI
The Scottish Qualifications Authority (SQA) has also issued guidance on its approach to using generative AI, which primarily focusses on students' use of AI during assessments and the review and authentication of that work. Specifically, avoiding plagiarism and encouraging verification are at the heart of the SQA's guidance: learners cannot submit AI outputs as their own work, and AI cannot be referenced as a source.
Although the SQA has published material on AI and commented on the benefits of AI in improving learning, teaching and assessments, there are currently no firm policy decisions on regulating AI in Scotland's education sector from the Scottish Government, Education Scotland or The General Teaching Council for Scotland.
Comment
Ofsted and Ofqual's approaches provide certainty that the principles of the UK AI Regulation White Paper will underlie regulation of the English education sector moving forward.
While some guidance material on AI in education has been produced previously, what we have seen so far has largely been independent and non-uniform. However, with commitments from two of England's largest education regulators to implement the key principles of the UK AI Regulation White Paper, we may see other education regulators also commit to adopting these principles and, in turn, a shift to a more unified approach to AI regulation in other parts of the UK's education sector.
This in turn provides helpful guidance to education providers when considering whether and how to utilise AI. In particular, education providers should ensure that they carry out appropriate diligence on any proposed AI tool and an AI risk assessment on the proposed use case.
An AI risk assessment should consider issues such as transparency, fairness and accountability, together with other legal risks such as data protection compliance and intellectual property infringement.
When carrying out diligence on an AI tool, education providers should ensure that they understand how the tool works, how it has been trained and tested, how training data is used and how data is kept secure.
Should you wish to discuss the regulation of AI tools in the education sector, please contact Christine O'Neill, Martin Sloan or your usual Brodies contact.