Hello, my name is Mrs. Holborow, and welcome to computing.
I'm so pleased you decided to join me for the lesson today.
In today's lesson, you'll be exploring how you can improve output produced by an LLM, and what prompt engineering is.
Welcome to today's lesson from the unit Using data science and AI tools effectively and safely.
This lesson is called Getting the most out of an LLM, and by the end of today's lesson, you'll be able to describe how to improve LLM output and use prompt engineering to improve LLM output.
Shall we make a start? We will be exploring these keywords throughout today's lesson.
Prompt.
Prompt: the input question, instruction, or message you give to a large language model, or LLM.
Vague.
Vague: when something is unclear or too general to be useful.
Context.
Context: the extra information you give to help the LLM interpret your prompt better.
Engineering.
Engineering: using knowledge and skills to design and create things that solve problems or make life easier.
Look out for these keywords throughout today's lesson.
Today's lesson is split into two parts.
We'll start by describing how to improve LLM output.
We'll then move on to use prompt engineering to improve LLM output.
Let's make a start by describing how to improve LLM output.
Sofia says, "What is an LLM, Alex?" Alex says, "Large language models, LLMs, are a specific type of AI model designed to generate a text output by using patterns found in large amounts of training data.
Chatbot applications use LLMs to help do their work." A chatbot application is a software programme that interacts with users.
Chatbot applications process user input and generate responses to assist with tasks or answer questions.
Sofia says, "I've used chatbot applications on a website before.
I'll see if I can get it to write a poem for fun." Sofia's going to use the prompt: "Write me a poem." Here is the response from the chatbot application.
"Certainly.
Here is a poem about a scrapyard.
Old cars rest in the sunshine bright, dreaming of roads in the quiet night, till new adventures take to flight." Sofia says, "I didn't want a poem about scrapyards, Alex.
I wanted a poem about having to stay indoors on a rainy day." Alex says, "You might have to provide clearer instructions to the chatbot application so that it can give you the useful output that you want, Sofia!" LLMs and chatbot applications work best when given clear and specific instructions.
The more focused your input prompt is, the more useful the output is likely to be.
Alex says, "It's best to avoid vague or open-ended prompts to help you get useful information back." Sofia has a couple of questions.
"What's a prompt and what does vague mean?" Maybe pause the video whilst you think about Sofia's questions.
A prompt is the input information humans give to LLMs and chatbot applications.
It's the instructions that tell the LLM what you want it to generate.
Vague means when something is unclear, too general, or not detailed enough to be useful.
Sofia says, "Okay, I'll try again with a clearer, less vague prompt." Now Sofia's prompt is: "Write a poem about staying inside on a rainy day." And here's the chatbot application response.
"Certainly.
Here is a poem about staying inside on a rainy day.
Rain taps softly on the pane, warm blankets wrap away the chill, inside, the world feels calm and still." I think Sofia's prompt has improved the output from the chatbot application.
It's now more what she wanted.
Okay, I have a true or false statement for you.
To improve LLM output, it's best to avoid vague or open-ended prompts.
Is this true or false? Pause the video whilst you have a think.
That's right, it's true.
You need to be as specific as possible with your prompts.
It can take a while to get used to working with LLMs, but there are a few simple ways to improve what they output, such as: use clear and specific prompts to avoid vague or confusing responses, provide relevant context or background information to guide the model's answers, state the desired format or style you would like, such as lists or step-by-step instructions.
Context is about giving LLMs extra information that will help the LLM interpret your prompt better.
By adding context, the LLM can reference the extra information you provide to generate more useful, accurate, and relevant output.
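The three tips above (a clear instruction, extra context, and a stated format) can be sketched as a small helper that assembles a prompt from those parts. This is a minimal illustration only; the `build_prompt` function and its structure are assumptions for the example, not part of any real chatbot application.

```python
def build_prompt(instruction, context=None, output_format=None):
    """Assemble a clear, specific prompt from an instruction,
    optional context, and an optional desired output format.
    Illustrative helper only, not a real chatbot API."""
    parts = [instruction]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format the answer as: {output_format}")
    return "\n".join(parts)

# Example: Sofia's improved poem request
prompt = build_prompt(
    "Write a poem about staying inside on a rainy day.",
    context="The poem is just for fun, for a secondary school student.",
    output_format="three short lines",
)
print(prompt)
```

Each extra piece of information makes the prompt less vague, which is exactly what gives the LLM a better chance of producing the output you actually want.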
Which prompt do you think would give a chatbot application clearer and more specific instructions? Is it Jacob's prompt, "Tell me about science," or Sofia's prompt, which is, "Explain three important discoveries in physics and why they matter for everyday life"? Pause the video whilst you have a think.
That's right.
Sofia's prompt is much more specific, so it's likely to give a better response.
Which prompt do you think provides relevant context to guide the LLM's output? Is it Jacob's prompt, "Write a summary of the causes and effects of climate change for a 15-year-old student studying geography," or Sofia's prompt, "Write a summary"? Pause the video whilst you have a think. That's right, it's Jacob's.
We've got some context here.
We know that the output should be suitable for a 15-year-old student who's studying geography.
Which prompt do you think provides information about what format the LLM output should be in? Jacob's prompt, "Explain photosynthesis in five bullet points for a classroom presentation," or Sofia's prompt, "Explain photosynthesis for a classroom presentation"? That's right, Jacob's prompt specifies the format of the output.
It asks for five bullet points.
Okay, time to check your understanding with a question.
Which word means unclear or too general? Is it A, prompt; B, context; or C, vague? Pause the video whilst you have a think.
That's right.
Did you select vague? Well done.
Next question.
What is a prompt? A, software that stores AI data; B, a type of chatbot application; or C, the input given to an LLM or chatbot application? Pause the video whilst you think carefully about your answer.
That's right.
A prompt is the input given to an LLM or chatbot application.
Another question for you.
Which of the following is a good way to improve the output of LLMs? Is it A, using clear, specific input prompts with context; B, giving vague and open-ended input prompts; or C, allowing the LLM to predict the context of the prompt? Pause the video whilst you think about your answer.
Did you select A? Well done.
Using clear, specific input prompts with context is a good way to improve the output of an LLM.
Okay, we're moving on to our first task of today's lesson, Task A.
In your own words, describe how the output of LLMs can be improved and made more useful.
Pause the video whilst you complete the task.
How did you get on? Did you manage to describe how you can improve the output of LLMs? Well done.
Let's have a look at a sample answer together.
There are many ways to improve the output of LLMs. A good start is to carefully give instructions through prompts.
Prompts should be clear and specific and avoid vague information.
A good prompt should also include enough context and background detail for the LLM output to be relevant to the user.
The output can be further improved by stating what format you want the information output as, for example, a table or bullet points.
Remember, if you'd like to pause your video here to add any extra detail to your answer, you can do that now.
Okay, we're now moving on to the second part of today's lesson, where we're going to use prompt engineering to improve LLM output.
Sofia says, "I'm starting to get much more useful output from chatbot applications when I think carefully to create better prompts." That's really good, Sofia.
Well done.
Jacob says, "This is called prompt engineering, Sofia!" Prompt engineering is about creating better instructions for LLMs so that you get output that is more useful for your needs.
There are many advantages to prompt engineering.
Some examples include: getting more accurate and useful information, receiving output in the format that you need, and saving time and reducing errors in the output.
There are many ways to use prompt engineering to improve LLM output.
A simple four-step process can help you get good results: decide exactly what you need, write clear and specific instructions, add context and format, and test and improve.
You might start prompt engineering by identifying your goal and deciding exactly what you need.
Be clear about exactly what you want to achieve by using the chatbot application or LLM. Think about the topic and the level of detail that you need.
Then, write a clear prompt that describes what you want from the LLM using specific words.
Sofia says, "Remember to avoid vague language and be direct with your instructions." That's a really good tip from Sofia.
Add clear formatting cues to the prompt to shape the output of the LLM.
Jacob says, "You can instruct LLMs to output information in a table, bullet points, or numbered steps, for example." Alex says, "You could also instruct the LLM to organise output by adding headings and subheadings or to summarise longer information." Adding context to the prompt will guide the LLM to generate output that is more accurate, relevant, and suited to your needs.
Without context, the model may produce vague or unhelpful output responses.
Jacob asks a question.
"What things could I include to add context?" Do you know the answer to this question? Maybe pause the video whilst you have a think.
You can add context to a prompt by: including background information about the topic's purpose or situation; stating the audience the information is aimed at, for example, explain for a year 10 student; giving examples to show the style, tone, or structure; and explaining constraints such as a word count or what information to exclude from the output.
Once you've created your prompt, you can test it out.
Sofia says, "If the LLM's output isn't quite right or what you need, you can tweak your prompt and try again." Prompt engineering is iterative.
This means you can try again and again.
You may be able to get better results by adjusting and improving your prompt based on the LLM output.
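The iterative process just described (test the prompt, check the output, tweak, and try again) could be sketched like this. Note that `ask_llm` and `looks_good` are hypothetical stand-ins for a real chatbot call and the user's own judgement of the output; this is a sketch of the loop, not a working chatbot integration.

```python
def refine_prompt(initial_prompt, ask_llm, looks_good, max_attempts=3):
    """Iteratively test and improve a prompt.
    `ask_llm` and `looks_good` are hypothetical stand-ins for a
    chatbot call and the user's judgement of its output."""
    prompt = initial_prompt
    for attempt in range(max_attempts):
        output = ask_llm(prompt)
        if looks_good(output):
            return prompt, output
        # Tweak the prompt before trying again. Here we just append a
        # clarifying instruction; a real user would reword the prompt
        # based on what was wrong with the output.
        prompt += "\nBe more specific and use the requested format."
    return prompt, output
```

In practice the "tweak" step is where your judgement matters most: each new attempt should address whatever was vague, missing, or wrongly formatted in the previous output.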
Let's have a look at some examples.
So on the left-hand side I have a vague prompt which says, "Tell me about football." How would you engineer this prompt to give more useful information about the rules of football? Pause the video whilst you have a think.
How did you get on? Let's have a look at an example.
So we've improved the prompt by saying, "Explain the rules of football in five short bullet points, so someone who has never watched a match can understand." Did you add some context to your prompt? Developing prompt engineering skills helps you get the most out of LLMs. Sofia says, "I'll be able to do so many more useful things with LLMs now I know how to engineer better prompts."
Remember that LLM output may be inaccurate or biased. Don't assume that the output is always right! Always double-check important facts with trusted sources.
Think critically and always ask yourself whether the answer makes sense, is fair, and fits the question you asked.
Okay, I have a question for you.
Prompt engineering is about, A, writing software to control AI systems; B, fixing errors in AI code; or C, designing and refining prompts to improve LLM output.
Pause the video whilst you have a think.
Did you select C? Well done.
Prompt engineering is about designing and refining prompts to improve LLM output.
Another question for you.
Adding context to LLM prompts, A, makes the LLM respond faster; B, generates output that is more relevant to your needs; or C, completely stops the LLM from making mistakes.
Pause the video whilst you have a think.
Did you select B? Well done.
Adding context to the LLM prompts generates output that is more relevant to your needs.
Okay, I have a true or false statement for you now.
Prompt engineering is iterative and you may be able to get better results by adjusting and improving your prompt based on the LLM output.
Is this true or false? Pause the video whilst you have a think.
That's right, it's true.
You can improve responses by refining your prompts.
Another true or false statement.
There is no need to fact-check and think critically about LLM output because the information is always right.
Is this true or false? Pause the video whilst you have a think.
Did you select false? Well done.
LLM output may be inaccurate or biased.
Don't assume that the output is always right.
Always double-check important facts with trusted sources.
Think critically and always ask yourself whether the answer makes sense, is fair, and fits the question that you asked.
Okay, we're moving on to our second task of today's lesson, Task B.
For part one, use prompt engineering to improve the vague prompt above.
So the prompt is, "Tell me about music." As a tip, you can use the four stages below to help you: decide exactly what you would like to achieve, write a clear, specific instruction that describes what you want from the LLM, add context by giving background information, add format details of how you would like the LLM output presented.
Pause the video whilst you have a go at the task.
Did you manage to improve your prompt with some prompt engineering? Well done.
Let's have a look at a sample answer.
So our engineered prompt is, "Generate a list of five famous rock bands from the 1990s and one of their most popular songs, presented in a table." So we now have some more context, and we've given the LLM some guidance about the format in which we want the information presented.
For part two, if you can, enter the engineered prompt you created into a chatbot application to see what output you receive.
Did you manage to test out your prompt? Great work.
Let's have a look at an example output.
So we have the table presented as we asked.
We have the bands and we have popular songs.
So we have Nirvana, "Smells Like Teen Spirit"; Oasis, "Wonderwall"; Radiohead, "Creep"; Red Hot Chili Peppers, "Give It Away"; and Green Day, "Good Riddance (Time of Your Life)." Did you have similar responses? Okay, we've come to the end of today's lesson, Getting the most out of an LLM, and you've done a fantastic job, so well done.
Let's summarise what we've learnt together in this lesson.
The output and usefulness of LLMs and chatbot applications is dependent on the quality of the user's prompt.
Prompts should be clear and specific to avoid vague or confusing output responses.
Prompt engineering is about creating better instructions for LLMs so that you get output that is more useful to your needs.
LLMs can reflect biases from their training data, so responses should be critically evaluated and cross-checked.
I hope you've enjoyed this lesson and I hope you'll join me again soon.
Bye.