Lesson 4 of 8
  • Year 10

Getting the most out of an LLM

I can describe how to improve LLM output and use prompt engineering to do so.

These resources will be removed by end of Summer Term 2025.

These resources were created for remote use during the pandemic and are not designed for classroom teaching.

Lesson details

Key learning points

  1. The performance of LLMs is highly dependent on how users craft their prompts.
  2. Prompts should be specific and clear to enable the model to interpret what you want.
  3. Prompts should provide relevant context or examples so the model can generate more accurate and tailored responses.
  4. LLMs can reflect biases from their training data, so responses should be critically evaluated and cross-checked.
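Not part of the lesson materials, but the key learning points above can be illustrated with a minimal Python sketch: combining a specific task with relevant context and a requested output format produces a much better-structured prompt than a vague one. The function name and example text are illustrative, not from the lesson.

```python
# Illustrative sketch: a well-structured prompt combines a specific task,
# relevant context, and a requested output format.

def build_prompt(task, context="", output_format=""):
    """Combine a task with optional context and a requested output format."""
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Output format: {output_format}")
    return "\n".join(parts)

# A vague prompt gives the model little to work with.
vague = build_prompt("Tell me about energy.")

# An engineered prompt is specific, contextualised, and states the format.
engineered = build_prompt(
    "Explain the difference between renewable and non-renewable energy.",
    context="For a Year 10 science student revising for an exam.",
    output_format="A bulleted list of 5 key points.",
)
print(engineered)
```

Pasting the two resulting prompts into any chatbot shows how much the added detail shapes the response.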

Keywords

  • Prompt - the input question, instruction, or message you give to a large language model (LLM)

  • Vague - when something is unclear or too general to be useful

  • Context - the extra information you give to help the LLM interpret your prompt better

  • Engineering - using knowledge and skills to design and create things that solve problems or make life easier

Common misconception

You can just type anything into an LLM and it will always give the best answer.

LLMs work best when you give them clear, specific and well-structured prompts. The quality of the input directly affects the quality of the output; this process of designing and refining inputs to get better results is called prompt engineering.


To help you plan your year 10 computing lesson on: Getting the most out of an LLM, download all teaching resources for free and adapt to suit your pupils' needs...

Teacher tip

Ask learners to use a vague prompt with a chatbot. Then have them engineer and improve their prompt by adding specific details, relevant context and stating the format they would like the output in. Finally, compare the before and after outputs to see how the changes improved the response.
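The before/after activity in the teacher tip can be sketched as code. This is a hedged, offline mock: `ask_llm` is a hypothetical stand-in for whichever chatbot the class uses, and no real API is called.

```python
# Hypothetical stand-in for the class chatbot (no real API is called here).
def ask_llm(prompt):
    # Placeholder reply so the before/after comparison can be demonstrated.
    return f"[chatbot reply to: {prompt!r}]"

# Before: a vague prompt.
before = ask_llm("Write about the Romans.")

# After: the same request, engineered with detail, context, and format.
after = ask_llm(
    "Write a 100-word summary of Roman roads in Britain "   # specific detail
    "for a Year 10 history student, "                       # relevant context
    "as three short paragraphs."                            # requested format
)

print("Before:", before)
print("After:", after)
```

With a real chatbot in place of `ask_llm`, learners can compare the two replies directly.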

Equipment

It would be useful for learners to have access to an AI chatbot application for this lesson.

Licence

This content is © Oak National Academy Limited (2025), licensed on Open Government Licence version 3.0 except where otherwise stated. See Oak's terms & conditions (Collection 2).


Prior knowledge starter quiz

6 Questions

Q1.
What do the initials LLM stand for in artificial intelligence?

Correct Answer: large language model

Q2.
Match the example to the correct keyword:

Correct answers:

  • trust → system gives results you can rely on
  • language model → AI system produces text based on patterns in training data
  • prediction → AI system generates a sentence based on previous words
  • bias → model’s responses unfairly favour certain topics or groups

Q3.
Which of these is an example of bias in an AI system?

the model answers every question
Correct answer: the model always selects the same type of response for similar prompts
the model responds in multiple languages
the model has a large vocabulary

Q4.
What is the role of training data in an LLM?

Correct answer: it teaches the model how to predict text
it stores pictures
it increases the speed of the computer
it deletes old files

Q5.
Put these steps in order for how a chatbot generates a reply:

1 - receives user input
2 - analyses input using the language model
3 - predicts the next word or phrase
4 - outputs the response

Q6.
Which of these is NOT a reason to be cautious about trusting LLM outputs?

training data may be biased
LLMs can make mistakes
LLMs may not always be accurate
Correct answer: LLMs always use perfect data

Assessment exit quiz

6 Questions

Q1.
What is the most important factor in getting a useful response from an LLM?

typing slowly
typing quickly
using as few words as possible
Correct answer: crafting a clear and specific prompt

Q2.
What is the process called where you design and refine inputs to improve the responses from a large language model?

Correct Answer: prompt engineering

Q3.
Match the keyword to its definition:

Correct answers:

  • prompt → the input or instruction given to an LLM
  • vague → unclear or too general
  • context → extra information to help interpret a prompt
  • engineering → using knowledge to design solutions

Q4.
Put these steps in order for improving an LLM’s output:

1 - write an initial prompt
2 - add specific details and context
3 - check the LLM’s response
4 - refine the prompt if needed

Q5.
Which statement about using LLMs is correct?

you can type anything and always get the best answer
Correct answer: clear, specific prompts lead to better results
the LLM always ignores your instructions
prompts are not important

Q6.
What should you do if an LLM’s answer seems biased or incorrect?

accept it without question
assume all outputs are perfect
Correct answer: critically evaluate and cross-check the response
ignore the answer