AI in education

25 February 2026

Choosing AI for your school? How our AI tools exceed the DfE’s latest safety standards

Rachel Urquhart

Interim AI Quality & Safety Principal

Over the past year, many schools have moved from asking:

“Should we allow AI?”

to a more practical question:

“If our teachers want to use AI to support lesson planning, what do we need to consider?”

At Oak, we’ve been working closely with teachers as they explore that shift - developing and refining our AI tools and AI lesson assistant, Aila, in response to real classroom feedback. Along the way, one thing’s been clear: schools don’t just want useful AI tools. They want safe, transparent and education-specific ones.

In January 2026, the Department for Education (DfE) formalised its Generative AI safety expectations into Generative AI product safety standards for the edtech sector. You may have seen the announcement and wondered what it means for you.

Busy teachers, school leaders and IT teams can’t reasonably be expected to track every development in AI regulation. But the new standards do offer something helpful: a structured way to think about what responsible AI looks like, and the questions worth asking before approving these tools.

So, when your teachers want to use AI to support planning, how can the DfE’s standards help you see whether a tool is right for your school? And how has Aila been built with those same questions at its core - not only meeting, but exceeding, the standards?

1. Filtering and monitoring

The DfE standards place strong weight on filtering harmful or inappropriate outputs, particularly where content is delivered directly to pupils.

If you’re approving an AI lesson planning tool, you might ask:

  • Even if this is teacher-facing, what content does it filter?
  • How are sensitive topics handled?
  • Are there safeguards in place, or does the tool simply display whatever the AI model produces?

Aila creates lesson materials for teachers, with professional judgement sitting firmly with the teacher. We believe safeguarding begins at the planning stage.

If a lesson includes sensitive content, such as discrimination, Aila may display a short safety message encouraging reflection on how the topic could affect particular pupils, or whether additional context and care are needed.

In some cases, content will be blocked altogether. Topics that carry significant safeguarding risk, such as self-harm, are considered inappropriate for AI lesson generation. In these cases, nuance and specialist expertise are essential. Aila also blocks clearly harmful or dangerous content.

2. Accuracy, design and professional judgement

In our regular discussions with teachers, accuracy consistently emerges as their top concern. Particularly for non-specialists or early career teachers, the risk is not just that AI might be wrong - but that errors might go unnoticed.

So key questions to ask are:

  • Has this tool been designed and tested specifically for classroom planning?
  • How does it support curriculum alignment?
  • Does it reinforce professional judgement, or risk bypassing it?

Aila has been developed by teachers, for teachers, and is aligned to our curriculum resources across all national curriculum subjects at key stages 1-4. Its outputs are structured to support planning decisions, not replace them. Rather than producing a fully finished lesson to deliver unquestioned, it provides a scaffold that keeps the teacher in control.

The DfE standards highlight the risk of reduced independent thinking, particularly for pupils. We think it’s worth asking a parallel question for teacher-facing tools: does this tool strengthen your professional expertise, or shortcut it?

Several PGCE trainees we spoke to were concerned about becoming overly reliant on AI before mastering the lesson planning process themselves. Aila is designed to support that process thoughtfully, embedding evidence-informed pedagogy. It’s designed to support and strengthen teachers’ thinking - prompting reflection where it matters, not automating professional judgement away.

3. Data protection and GDPR

Teachers tell us they are careful not to enter information that can identify pupils when using AI tools. They trust that their school’s approval processes mean the right checks have already taken place.

That’s sensible practice. GDPR and data protection are well established in schools, and your approach to AI should be no different. If your school is considering approving a lesson-planning tool, it’s worth checking:

  • Is the tool GDPR compliant?
  • Is there a clear privacy policy explaining how data is handled?
  • Has a Data Protection Impact Assessment (DPIA) been completed?
  • What information is required to set up and use the tool?

To use Aila, you create an account with a few basic details. The tool does not require you to enter information that can identify pupils when planning lessons.

Teachers’ use of Aila is not used to retrain the AI system that powers it. From time to time, Oak staff may review examples of how the tool is being used in order to strengthen safety features and improve content guidance. This is done carefully and responsibly, with data protection in mind.

Oak is free and always will be, so you can be confident that decisions about Aila’s design and development are made to support schools, not for commercial data purposes. You can read full details about how we handle data, retention and access in our Privacy Policy here.

4. Governance and accountability

When approving any AI tool, it’s reasonable for you to ask:

  • If something goes wrong, who’s responsible?
  • How are safety decisions made - and who reviews them?
  • Is this tool being actively monitored and improved, or was compliance a one-off exercise?

AI tools are not static. Models evolve, standards develop and new risks can emerge over time. Schools deserve to know that the organisations behind the tools they approve are taking long-term responsibility for how those tools operate.

Aila’s governance, oversight and compliance sit within our wider institutional framework. Teachers can submit any safety issues, concerns or feedback about Aila to our team of trained AI specialists via [email protected].

We continually review and refine our safety and moderation approaches as guidance and policies develop. For schools, that means accountability isn’t hidden in a black box. It sits with us: a transparent organisation dedicated entirely to supporting education.

5. Emotional influence, manipulation and anthropomorphism

The DfE standards caution against AI systems that encourage emotional attachment, dependency or manipulative engagement - particularly where children are concerned.

While Aila is teacher-facing, we believe these principles still matter.

Aila is described as a lesson “assistant,” but it’s not designed to simulate companionship or offer flattery. It doesn’t affirm teachers emotionally or encourage dependency. Its interaction style is intentionally neutral and task-focused. The teacher remains the decision-maker throughout.

This reflects a simple principle: AI tools in education should support professional judgement, not shape it through emotional influence.

Why this matters now

AI in education is moving fast - and standards around safety are evolving with it. The DfE’s shift from expectations to standards shows that safe, responsible AI is becoming embedded practice across the sector. While frameworks will continue to develop, schools need clarity and confidence today.

Oak exists to support schools and give teachers more time to focus on achieving great outcomes for pupils. We’re committed to meeting and exceeding national standards, strengthening safeguards as guidance evolves, being transparent about how our tools are designed and governed, and continuing to listen to teachers and leaders.

If your school is considering AI for lesson planning, we hope these questions are a helpful starting point - whether you’re evaluating Aila or any other product.