Hello, my name is Mrs. Holborow, and welcome to Computing.
I'm so pleased you've decided to join me for the lesson today.
In today's lesson, we're going to be exploring the risks of AI.
What are the risks of using AI systems, and how can these risks be mitigated? Welcome to today's lesson from the unit Using Data Science and AI Tools Effectively and Safely.
This lesson is called Risk Mitigation of AI, and by the end of today's lesson, you'll be able to describe the main risks of artificial intelligence and explain how these risks can be reduced.
Shall we make a start? We will be exploring these keywords throughout today's lesson.
Let's take a look at them together now.
Transparency.
Transparency, being open and clear about how a system works.
Mitigation.
Mitigation, reducing the impact of a problem or risk.
Regulation.
Regulation, creating and enforcing rules to guide how something is used.
Human oversight.
Human oversight, keeping people responsible for checking and controlling how AI systems are used.
Look out for these keywords throughout today's lesson.
Today's lesson is broken down into two parts.
We'll start by defining risks of AI, and then we'll move on to explain risk mitigation.
Let's make a start by defining risks of AI.
Sofia says, "I've been reading how AI helps doctors spot illnesses more quickly.
That sounds amazing." Sam says, "That's true, but the same AI technology can also create risks." AI technology has brought many benefits across different industries, such as faster medical analysis in healthcare, safer and more efficient route planning in transport, improved efficiency and cost reduction in business.
Alongside these benefits, though, there are also risks that need to be understood and managed.
As with other forms of technology, AI technology can pose risks for people and society, including biased information and outputs, job displacement, deliberate misuse to cause harm, and a lack of transparency.
Jacob says, "One risk is bias in AI systems." If training data contains unfair patterns, the outputs may also be unfair.
This could result in certain groups of people being treated less favourably, leading to discrimination or unequal opportunities.
Jacob says, "AI technology can contribute to job displacement." AI technology can be used to automate certain tasks previously carried out by people.
This can mean that some employment opportunities are reduced or people may need to retrain in a different job.
Jacob says, "AI technology can be misused for harmful purposes." AI tools can be misused by criminals to automate scams or cyber attacks.
This results in greater risks to the data, finances, and digital security of individuals or organisations.
Jacob says, "Another harmful misuse of AI technology is generating and spreading fake information." Fake news articles, images, or videos can be created quickly and shared widely.
This can mislead people, influence opinions unfairly, and damage trust in reliable sources.
Jacob says, "AI technology can be used without transparency." A lack of transparency makes it difficult to check whether the system is fair or accurate.
AI technology used without transparency can lead to reduced trust in services or organisations that use AI.
Sofia's got a really good question here though.
"What does transparency mean?" Maybe pause the video whilst you think about Sophia's question.
Transparency means being open and clear about how a system works.
Being transparent about AI technology means being honest about how data is used and how outputs are produced.
Transparency helps people to understand, question, and trust the results.
AI technology has many benefits, but it also has risks that can have real impacts on people, including unfair treatment, loss of opportunities, or reduced trust.
These risks can affect individuals and groups of people in different ways, but there are ways to reduce them through risk mitigation.
Okay, time to check your understanding.
I have a question for you.
Which of the following is a risk of AI technology? Is it A, reduced cost to businesses, B, faster medical diagnosis, or C, job displacement? Pause the video whilst you think about your answer.
Did you select C? Well done.
Job displacement is one of the risks of AI technology.
What can a lack of transparency in AI systems lead to? Is it A, poor screen quality, B, reduced trust in services or organisations, or C, expensive development costs? Pause your video whilst you have a think.
Did you spot it? Well done.
B is the correct answer.
A lack of transparency in AI systems can lead to reduced trust in services or organisations.
I have a true or false statement for you now.
AI technology has many benefits, but also has risks that can have real impacts on people.
Is this statement true or false? Pause the video whilst you have a think.
Did you say true? Well done.
Okay, we're moving on to our first task of today's lesson now, task A, where we're going to define risks of AI.
For part one, state and describe three potential risks of AI technology.
Pause the video whilst you have a go at the task.
How did you get on? Did you manage to define some risks of AI? Well done.
Let's have a look at a sample answer together.
So the first risk in this sample answer is bias.
If training data contains unfair patterns, AI outputs can also be unfair, leading to discrimination or unequal treatment.
Risk two, job displacement.
Tasks previously carried out by people can be automated, reducing certain employment opportunities.
Risk three, misuse.
AI tools can be used for harmful purposes, such as spreading fake information or enabling intrusive surveillance.
Remember, if you've identified some different risks, that's absolutely fine, but if you'd also like to revisit your answer, you can pause the video here.
Okay, so we've defined some risks of AI.
Let's move on to explain risk mitigation.
Sofia says, "What does mitigation mean?" Do you know what the word mitigation means? Maybe pause the video whilst you have a think.
Sam says, "Mitigation is about reducing the impact of a problem or a risk." Well done, Sam.
In AI, mitigation involves making the AI systems safer and fairer for everyone.
Mitigating the risks of AI involves a few strategies, such as ethical design, regulation, testing systems, and human oversight.
Jacob says, "Ethical design considers fairness, inclusivity, and safety from the beginning." It involves checking data for bias, considering affected people, and setting clear goals for responsible development.
This reduces the chance of biased or harmful outputs from AI systems. Jacob says, "Regulation is about setting rules for how AI systems can be developed and used." These rules can set standards for things like safety, fairness, privacy, and accountability.
Governments can enforce these standards to protect people, but organisations developing AI systems may also have their own regulations and standards.
Jacob says, "Testing means checking AI systems carefully before they are widely released and used." Testing can reveal problems such as bias, errors, or weaknesses in performance.
Testing helps mitigate risks by improving reliability and fairness of AI systems. Jacob says, "Human oversight ensures people remain responsible for important decisions." Important decisions are reviewed by humans instead of being left only to automated processes.
Human oversight helps catch mistakes, reduces risks, and keeps accountability with people.
Reducing AI risks is a shared responsibility between governments, companies, and users.
Governments can create laws and standards to protect people from harm.
Governments can monitor and enforce rules to prevent harmful or irresponsible development and use of AI technology.
Companies should be transparent about how their AI tools work and what data they use.
Companies should follow laws and regulations to design, test, and deploy systems responsibly.
Companies also have a duty to respond quickly if their systems are known to cause harm or problems. Users have a responsibility to use AI tools in a safe and fair way.
Users should think critically about AI system outputs and avoid harmful uses, such as spreading false information.
Responsible use helps reduce risks and supports trust in technology.
Risk mitigation is an ongoing process.
It's not a one-time fix.
AI systems can and should be continuously monitored, tested, and improved to reduce risks.
Okay, time to check your understanding.
I have a question for you.
What is the purpose of ethical design in AI? Is it A, to increase sales of AI technology products, B, to replace human oversight, or C, to reduce harm and make AI systems fairer? Pause the video whilst you think about your answer.
Did you select C? Well done.
The purpose of ethical design in AI is to reduce harm and make AI systems fairer.
Why is human oversight important in AI systems? Is it A, it makes AI systems run faster, B, it ensures accountability remains with people, or C, it removes the need for testing AI systems? Pause the video whilst you think about your answer.
Did you select B? Well done.
Human oversight ensures accountability remains with people.
What is the purpose of regulation in AI? Is it A, to set rules for things like safety, fairness, and privacy, B, to reduce the profits of companies that develop AI technology, or C, to eliminate the need for human oversight? Pause the video whilst you think about your answer.
Did you select A? Well done.
Regulation sets rules for things like safety, fairness, and privacy in AI systems. Okay, a true or false statement for you now.
Only companies are responsible for reducing AI risks.
Is this true or false? Think carefully about your answer.
Did you say false? Well done.
Reducing AI risks is a shared responsibility between governments, companies, and users.
Okay, we're moving on to our second task of today's lesson, task B, where we're going to explain risk mitigation.
Alex says, "I've heard there are many benefits to AI technology, but also risks.
How can the risks be mitigated?" In your own words, for part A, explain two to three strategies that can be used to mitigate risks of AI technology, and then for Part B, explain how reducing AI risk is a shared responsibility between governments, companies, and users.
Remember, if you need to revisit any slides in the slide deck, you can pause the video and go back through.
How did you get on? Did you manage to explain risk mitigation? Well done.
Let's have a look at some sample answers together.
So for part A, you were asked to explain, in your own words, two to three strategies that can be used to mitigate risks of AI technology.
Ethical design can reduce risks by building AI systems with fairness, safety, and inclusivity in mind from the start.
This reduces bias and makes systems more responsible.
Regulations can reduce risks when governments create rules and standards to guide how AI can be used.
This helps prevent misuse and protects people's rights and privacy.
Human oversight is an important strategy for reducing risk by keeping people involved in reviewing and controlling AI outputs.
This ensures people remain accountable and reduces harm from errors and mistakes.
Remember, you may have described some different strategies, but that's absolutely fine.
If you want to pause the video here and add any detail to your answer, you can do that now.
For part B, you were asked to explain how reducing AI risk is a shared responsibility between governments, companies, and users.
Reducing AI risks is a shared responsibility.
Governments set laws, standards, and regulations to keep AI systems safe and fair.
Companies must design and test AI systems responsibly, as well as being transparent about their AI tools.
Users should use AI responsibly, think critically about outputs, and avoid harmful use.
By working together, these groups can reduce risks and make AI safer for society.
Again, if you need to pause the video whilst you add any detail to your answer, you can do that now.
Okay, we've come to the end of today's lesson, Risk Mitigation of AI, and you've done a fantastic job, so well done.
Let's summarise what we've learned together in this lesson.
AI systems can pose risks, such as bias, job displacement, misuse, and lack of transparency.
AI risk mitigation involves strategies like ethical design, regulation, testing, and human oversight.
Governments, companies, and individuals all play a role in reducing the potential harms of AI technologies.
I hope you've enjoyed today's lesson, and I hope you'll join me again soon.
Bye.