
Hello, my name is Mrs. Holbrook, and welcome to computing.

I'm so pleased you've been able to join me for the lesson today.

In today's lesson, you'll be exploring some of the impacts of AI systems. What do we mean by ethical concerns, and what are the social impacts of AI systems? Welcome to today's lesson from the unit: "Data Science, AI, and Machine Learning." This lesson is called "Social and Ethical Issues of AI Systems," and by the end of today's lesson, you'll be able to analyse the social and ethical implications of AI technologies.

Shall we make a start? We will be exploring these keywords throughout today's lesson.

Let's take a look at them together now.

Bias.

Bias: An unfair preference for or against a person, group, or thing.

Privacy.

Privacy: The right to keep personal data safe and control who can see or use it.

Transparency.

Transparency: Being clear about how AI works and how it makes its decisions.

Look out for these keywords throughout today's lesson.

Today's lesson is split into two parts.

We'll start by describing ethical concerns of AI, and then we'll move on to discuss social impacts of AI.

Let's make a start by describing ethical concerns of AI.

Ethics is deciding what is right or fair.

In computing, ethics focuses on whether technology is used responsibly.

AI raises questions about fairness, responsibility, and impact on society.

Jun has a question: "What does biased mean? Can you remember what biased means?" Maybe pause the video and have a think.

Izzy's got an answer.

"Biassed means showing unfair preference for one person or group over another." Well done, Izzy.

Jun says, "AI systems can be biassed, unfair, and unethical if the system is trained on biassed data." This can negatively impact people.

For example, if facial recognition systems are not trained on a diverse set of images, they could produce biased responses.

Similarly, if a hiring system is trained on a biased dataset, it may recommend some candidates over others, which would be unfair.
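To make this concrete, here is a minimal Python sketch of how this can happen. Everything in it is hypothetical: the hiring records are made up, and a real hiring system would be far more complex, but the principle is the same: a model that learns from unfair historical decisions simply reproduces that unfairness.

```python
# A toy "hiring model" trained on made-up, biased historical records.
from collections import defaultdict

# Hypothetical past decisions: (group, was_hired).
# Group A was hired 80% of the time, group B only 20% of the time.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

counts = defaultdict(lambda: [0, 0])  # group -> [hires, applications]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def recommend(group):
    """Recommend a candidate if their group's past hire rate is >= 50%."""
    hires, total = counts[group]
    return hires / total >= 0.5

print(recommend("A"))  # True  - candidates from group A are favoured
print(recommend("B"))  # False - candidates from group B are rejected
```

Notice that the model never looks at a candidate's skills at all; it simply repeats the unfair pattern hidden in its training data.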

Time to check your understanding.

I have a question for you.

What does it mean for an AI system to be biased? Is it A, the system is too slow to process data? B, the system is unable to understand human language? Or C, the system is showing an unfair preference for one group over another? Pause the video whilst you think about your answer.

Did you select C? Well done.

An AI system is biased if it shows an unfair preference for one group over another.

Jun says, "AI often uses personal data to make decisions.

But is this data secure?" There is a risk that personal information may be collected, stored, or shared without permission, and this raises privacy concerns.

For example, facial recognition could be trained on images gathered from people in public places.

These people may not have given permission for images of their faces to be collected and stored.

Who is responsible when AI makes a mistake? Is it the programmer, the company, or the AI system itself? Who do you think should be responsible? Maybe pause the video and have a think, or discuss it with somebody if you can.

Without accountability, it's unclear who should fix problems or pay damages.

For example, it may be unclear who is at fault in an accident involving a self-driving car.

It's important that AI systems are transparent in terms of responsibility and accountability.

Inequality of access to AI tools and systems is an ethical concern.

Not everyone has equal access to AI tools.

Wealthy companies or individuals may benefit more from having access to AI tools.

The term "digital divide" refers to inequality of access to digital technology.

Some people have access; others do not.

For example, individuals who pay for the latest AI tools may benefit more than those using older, free, and often less effective AI systems.

Further ethical concerns of AI systems include security and autonomy.

Security: AI systems may be hacked or misused for malicious purposes.

Autonomy: Should AI systems be allowed to make important decisions without human input?

Okay, time to check your understanding.

I have a question for you.

Which of these is an example of an ethical concern related to security in AI systems? Is it A, an AI system that can be hacked and used for malicious purposes? B, an AI system that shows an unfair preference for one group over another? Or C, an AI system that is not available to everyone? Pause the video whilst you think about your answer.

Did you select A? Well done.

Remember, we're focusing on security here, so an ethical concern about security with AI systems is that they can be hacked and used for malicious purposes.

Okay, we are moving on to our first task of today's lesson, task A, and you're doing a fantastic job so far, so well done.

For part one, describe the ethical concerns of bias and privacy in AI systems, using an example for each to support your explanation.

For part two, explain the ethical concern of accountability when using AI technologies.

And then for part three, why is inequality of access to AI tools considered an ethical concern? Pause the video here whilst you have a go at the task.

How did you get on with the task? Did you manage to describe some ethical concerns of AI? Well done.

Let's have a look at some sample answers together.

So for part one, you were asked to describe the ethical concerns of bias and privacy in AI using an example for each to support your explanation.

AI systems can be unfair if they're trained on data that favours certain groups over others.

For example, a facial recognition system might be less accurate for some groups of people than others.

AI systems often use personal data to make decisions, which creates a risk that this information could be collected or shared without permission.

For example, facial recognition software using data collected from CCTV footage without permission.

For part two, you were asked to explain the ethical concerns of accountability when using AI technologies.

Accountability is an ethical concern that asks: Who is responsible when AI systems make a mistake? Without clear accountability, it is difficult to know who should fix the problem or pay for damages.

A good example of this is a self-driving car accident where it is unclear whether the programmer, the company that made the car, or the AI technology itself should be held responsible for the mistake.

For part three, you were asked why inequality of access to AI tools is considered an ethical concern.

Inequality of access is an ethical concern because not everyone has the same chance to use and benefit from AI tools and systems. This can create a digital divide where some people who can afford the latest AI tools may benefit more than those who can only access older, free, and often less effective systems.

Did you have some similar responses? Remember, if you need to pause the video here and add any extra detail to your answer, you can do that now.

Okay, we are now moving on to the second part of today's lesson, where we are going to discuss the social impacts of AI.

Lucas says, "AI technology must have made lots of positive impacts, Izzy." Izzy says, "You are right, Lucas, but it can also have negative impacts." Can you think of any positive or negative impacts of AI technology? Pause the video here whilst you have a think.

Lucas says, "There are many positive impacts of using AI technology." Some examples include improving efficiency and productivity, helping to solve complex challenges, and making services more accessible.

Did you have these as some positive impacts, or did you have anything else? Lucas says, "AI technology can improve efficiency and productivity in manufacturing by automating repetitive tasks.

This can lead to more affordable products for customers and higher profits for businesses."

Lucas says, "Governments and scientists can use AI systems to help them make data-informed decisions about climate change.

This can lead to more accurate weather predictions and an improved understanding of how to respond to a changing climate."

Lucas says, "AI technology can make services more accessible.

AI tools such as translators, speech recognition, and assistive technologies can support people with additional needs and help improve communication globally."

Okay, time to check your understanding.

I have a true or false statement for you.

AI technology can be used to improve efficiency and productivity in manufacturing by automating repetitive tasks.

Is this statement true or false? Pause the video whilst you have a think.

Did you spot that it was true? Well done.

Jun says, "There are a number of ways AI technology can have negative impacts." Some examples of these include reduced or changed employment opportunities, biassed systems that favour certain groups, and harmful use of AI technology.

Did you have similar disadvantages of AI systems, or did you have anything different? Jun says, "AI technology can have social implications for people and society.

The social implications of AI technology can affect individuals, communities, and the way we live together."

Jun says, "AI systems may reduce or change employment opportunities in many different industries.

Some workers might lose their jobs or need to learn new skills so that they can work with AI technology.

This can lead to financial stress and reduced opportunities within communities."

Jun says, "Misusing AI technology can lead to harmful consequences and negative impacts on society.

For example, deepfakes and misinformation can influence public opinion, manipulate people, or cause harm to people's reputations."

Okay, time to check your understanding.

I have a question for you.

What is the most likely outcome when an AI system is trained using biased data? Is it A, the AI will not be able to perform its task at all? B, the AI will automatically correct the bias and produce fair results? Or C, the AI will learn the patterns in the biased data and produce biased outcomes? Pause the video whilst you think carefully about your answer.

Did you select C? Well done.

Remember the phrase: "bias in, bias out." As with all other forms of technology, care must be taken to ensure that AI systems are used responsibly and safely.

As AI technology continues to develop, it's important to hold developers responsible for making AI tools as safe and fair as possible.

Knowing how AI systems arrive at decisions enhances transparency.

Explainable AI, or XAI, is a growing area of research that focuses on creating AI systems where the predictions and decisions can be easily explained to humans.

Such transparency is essential for holding those who develop and deploy AI systems accountable for the system's output, especially in critical areas like healthcare, finance, and criminal justice.
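As an illustration, here is a minimal Python sketch of one simple form of explainability: a transparent linear scoring model whose decision can be broken down feature by feature and shown to a human. The model, feature names, and weights are all hypothetical, and real XAI techniques are more sophisticated, but the goal is the same.

```python
# A transparent scoring model: every feature's contribution is visible.
# The features and weights below are entirely hypothetical.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}

def explain_decision(applicant):
    """Print the decision and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    print(f"Decision: {decision} (score {score:+.2f})")
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")

explain_decision({"income": 3.0, "debt": 4.0, "years_employed": 2.0})
# Decision: decline (score -0.60)
#   debt: -2.40
#   income: +1.20
#   years_employed: +0.60
```

Because every contribution is visible, a person affected by the decision can see exactly why it was made, which is what makes it possible to hold the system's developers accountable.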

Okay, we are moving on to our second task of today's lesson, task B.

For part one, describe two positive and two negative social impacts of AI technology.

For part two, explain what explainable AI, or XAI, is and why it is essential for holding those who develop and deploy AI systems accountable.

Pause the video whilst you complete the task.

How did you get on with the task? Did you manage to discuss some social impacts of AI? Well done.

Let's have a look at some sample answers together.

For part one, you were asked to describe two positive and two negative social impacts of AI technology.

Improving efficiency and productivity: AI systems can automate repetitive tasks in manufacturing, which can lead to more affordable products for customers and higher profits for businesses.

Making services more accessible: AI tools like translators and speech recognition can support people with additional needs and help improve communication globally.

Let's now have a look at the negative social impacts.

Reduced or changed employment opportunities: Some workers might lose their jobs or need to learn new skills to work with AI technology, which could lead to financial stress.

Harmful use of AI technology: Misusing AI to create deepfakes and misinformation can influence public opinion, manipulate people, or cause harm to reputations.

For part two, you were asked to explain what explainable AI, or XAI, is and why it is essential for holding those who develop and deploy AI systems accountable.

Explainable AI is a growing area of research that focuses on creating AI systems where the predictions and decisions can be easily explained to humans.

It is essential for holding developers and others accountable because it provides transparency.

By knowing how an AI system arrived at a decision, we can hold those who developed it responsible for its output.

Did you have some similar responses? Remember, if you need to pause the video here and go back and revisit your answers, you can do that now.

Okay, we've come to the end of today's lesson, "Social and Ethical Issues of AI Systems," and you've done a fantastic job, so well done.

Let's summarise what we've learned together in this lesson.

The introduction of AI systems has brought benefits, but it also raises important social and ethical issues that need to be considered.

Social implications of AI include job displacement, inequality of access, and privacy concerns over how data is used.

AI can unintentionally amplify existing bias if it is trained on biased data.

The transparency of AI systems is crucial so that users can understand how decisions are made.

I hope you've enjoyed today's lesson, and I hope you'll join me again soon.

Bye.