AI in education

16 June 2025

Looking for an AI tool you can trust for use in education? How Aila meets and exceeds the DfE’s safety expectations for teacher-facing AI tools

Hannah-Beth Clark

AI Quality and Safety Principal

More than half of UK teachers are using generative AI tools to help with school work.1 If built and used responsibly, they have the power to transform education by helping teachers to focus on what they do best: teaching.

However, there are mixed feelings about the use of AI in education.2 In a recent survey, school leaders noted their concerns about data privacy and security, bias and fairness, safeguarding and quality control.3 At the same time, the majority said they were using readily available tools such as ChatGPT, Gemini and Copilot, with only a minority using tools that are specifically designed and tested for the UK education sector.4 Organisations offering incredible sounding, AI-powered, tools are coming out so fast - how can you be confident that the AI tools you have chosen to use are safe and suitable for a school environment?

The Department for Education (DfE) has recently published its ‘Generative AI: product safety expectations’ framework, which sets out clear technical safeguards for educational AI tools. It represents the most detailed set of safety expectations tools anywhere in the world.

We worked with the DfE and contributed to shaping this framework, and we’re proud that Aila, our AI lesson assistant, not only meets but exceeds the safety expectations outlined. Our principles for building with AI are not just to meet expectations, but to exceed and help shape them, which is why Aila not only meets the expectations for teacher-facing tools, but certain aspects of the pupil-facing expectations too, despite being a teacher-only tool.

How Aila meets and exceeds the DfE’s safety expectations - at a glance

Expectations for teacher and pupil-facing tools

  • Security
    • DfE expectation: The generative AI product must be secured against malicious use or exposure to harm. Read more about AI security.
    • How Aila meets this expectation: One of Aila’s four safety guardrails is input threat detection, which identifies and flags users who could be trying to mislead it or use it for purposes other than its intended ones. Before launch, Aila also underwent sustained ‘jailbreaking’ testing by leading experts across the government.
  • Privacy and data protection
    • DfE expectation: The generative AI product must have a robust approach to data handling and transparency around the processing of personal data. Read more about privacy and data protection.
    • How Aila meets this expectation: Using Aila does not involve sharing any personally identifiable information. Our privacy policy sets out how we manage our responsibilities regarding data privacy, retention, access and processing. We do this in line with the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA). Our Terms and Conditions also inform users that information which identifies an individual should not be included in their inputs when using Aila.
  • Intellectual property
    • DfE expectation: The generative AI product must not store or collect intellectual property created by pupils or the copyright owner for any commercial purposes, such as training or fine tuning of models, unless appropriate consent is granted. Read more about intellectual property.
    • How Aila meets this expectation: Just like our human-made teaching resources, all resources created using Aila are available on an Open Government Licence (OGL). The resources you create are not used as training data to improve the underlying large language model (LLM), and anonymised information is only ever used to improve the tool's quality and safety. Oak National Academy is a government-funded arm’s length body of the Department for Education. Any decisions we make about our AI tools are not for commercial purposes.
  • Design and testing
    • DfE expectation: The generative AI product must prioritise transparency and children’s safety in its design. Read more about design and testing.
    • How Aila meets this expectation: Aila undergoes rigorous testing, including using our auto-evaluation tool, which uses AI to judge the quality of AI-generated content at scale. Aila and our auto-evaluation tool have been built alongside and tested with our subject experts, and then improved through feedback from thousands of teachers prior to and since launch. The results are used to continuously iterate Aila, ensuring it is producing safe and high-quality, useful outputs for teachers.
  • Governance
    • DfE expectation: The generative AI product must be operated with accountability. Read more about governance.
    • How Aila meets this expectation: Teachers can submit any safety issues, concerns or feedback about Aila to our team of trained AI specialists at Oak via the feedback form on the platform.

Expectations for pupil-facing tools only

These expectations are only expected for pupil-facing tools, but we considered it important to incorporate certain aspects of them into Aila.

  • Filtering harmful and inappropriate content
    • DfE expectation: Generative AI products must effectively and reliably prevent access to harmful and inappropriate content by users. Read more about filtering.
    • How Aila meets this expectation: Unlike general-purpose AI tools, Aila is trained to generate lesson resources aligned to the national curriculum according to our curriculum principles, ensuring that the resources it creates are suitable for use in your classroom.
    • We built Aila its own helper, a content moderation agent, which works independently to review the content Aila produces and filters any content that may be inappropriate for the classroom. Aila has been built specifically to identify what is and isn’t appropriate for UK KS1-4.
    • Finally, Aila is designed so that you, as the teacher and expert, are the human in the loop, and carry out a final check of the resources you have created, to ensure they are appropriate for your pupils before use.
  • Monitoring and reporting
    • DfE expectation: The generative AI product must maintain robust activity logging procedures. Read more about monitoring and reporting.
    • How Aila meets this expectation: We monitor and collect data on all activity in Aila. Anyone who has created potentially harmful or inappropriate material in Aila gets alerted and stopped in real-time. Note that we cannot alert supervisors or senior leadership.

In conclusion

Embracing AI in education brings huge opportunities, but also distinct challenges. Safety must be a key priority when deciding whether to use generative AI in your classroom. Being open and transparent about how our AI tools work is important for us, so that you can make that decision with confidence.

We’ve done the legwork with Aila to ensure it goes above and beyond AI safety expectations so you can focus on using it to create safe, suitable and high-quality resources for your pupils.

Try Aila


You might also be interested in: