
Hello, my name's Mrs. Rawbone and I'm really looking forward to working with you today on our topic of personhood: what might be the people of the future?

In today's lesson, we're going to be looking at the issues surrounding AI and asking the question, AI, is it a person?

In today's lesson, you will be considering whether AI can be a person and how Searle's Chinese Room thought experiment can be applied to this question.

We're going to be using some key words in the lesson.

Artificial intelligence or AI, Chinese Room, person, and understanding.

Artificial intelligence or AI refers to computer systems that perform tasks needing human-like intelligence.

The Chinese Room is Searle's thought experiment about understanding.

A person is often described as a being with a mind capable of conscious experience, understanding, and meaningful action.

Understanding is grasping meaning, not just following rules.

Today's lesson will take two parts.

We'll be looking at being a person and at the Chinese Room thought experiment.

So let's get started on being a person.

Andeep is asking Aisha and Lucas, what makes someone a person?

Aisha says, "I think a person is someone who has a mind. This means they have thoughts, memories, beliefs, desires, and consciousness."

Lucas responds, "I agree that having a mind makes someone a person. A mind allows us to understand and interpret the world. We are aware of our own thoughts, we feel emotions, and we can connect with others."

So how well do these ideas match what you think?

If you're able to take a moment to pause the video and turn and talk to someone nearby, or alternatively you can pause and have a think or turn and talk to me, come back when you're ready to move on.

There are many different philosophical ideas about what the mind is.

From ancient times, many people have been dualists.

This means they view the physical and non-physical as two distinct types of things or substances.

The mind is non-physical and it's separate from the rest of the body.

In the 1920s, with the development of modern science, we get the view called physicalism: the view that everything is physical, including the mind.

Now, this idea had been around before, Aristotle, in fact, argued for this in ancient times, but it had sort of gone out of favor for a long period of history.

In the 1960s, we have a view that's often considered physicalist, called functionalism.

Now, functionalism says that the mind is defined by what it does and not what it is made of.

Your views about what makes someone a person might depend on which theory of the mind you agree with.

Most dualists would say that only a human being can be a person, and that's because only humans have non-physical minds.

You can see in our diagram here that it suggests that two kinds of things exist.

There's the non-physical on the left and the physical on the right.

Human beings have a non-physical mind, dualists would argue, where they experience emotions, awareness and understanding.

The body, including the brain is physical and these two things are distinct and different.

Physicalists would say that any being with the right kind of physical brain activity counts as a person.

This is because modern discoveries about how the brain works known as neuroscience have led many philosophers to argue that separating the mind from the body is a mistake.

So we have a diagram here showing you what physicalism is.

It argues that whatever we might call the mind is simply the same thing as something physical, such as being the same thing as the brain.

Everything is physical, including the mind.

So let's check your understanding.

Is this statement true or false?

Physicalism says a person has both a physical body and a non-physical mind.

Take a moment to think about your answer.

Jot it down.

Have a think also about why you have come to that conclusion.

Come back when you're ready.

So well done, if you put false, but why is this false?

Well, it's false because physicalism doesn't include anything non-physical.

So saying a person has a non-physical mind or soul is the dualist view, not the physicalist one.

Well done if you've got to that conclusion.

Artificial intelligence or AI is a type of computer program.

You might have come across it in various forms: digital assistants that respond to questions or help with tasks; content recommendation systems that suggest videos, articles or music; image and speech recognition used in devices and apps; game opponents that adapt to player behavior; and creative tools that help generate text, images or ideas.

AI learns and processes information.

It is given large collections of data and it looks for patterns within them.

It notices connections, such as subtle features in images or patterns in language.

Once trained, it can answer questions, classify images, generate text, or suggest likely next steps.

When its output is inaccurate, the system can be refined so it learns from its mistakes and it performs better next time.
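To make this concrete, here is a toy sketch in Python. It is not how real AI systems are built; the example sentences and the helper name `suggest_next` are invented purely for illustration. It shows the basic idea described above: counting patterns in training data, then using those patterns to produce an output.

```python
from collections import Counter, defaultdict

# Toy "training data": a few example sentences (made up for illustration).
training_data = [
    "plants use sunlight to make food",
    "plants use water to make food",
    "animals use food to make energy",
]

# "Training": count how often each word follows another word.
follows = defaultdict(Counter)
for sentence in training_data:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def suggest_next(word):
    """Suggest the next word most often seen after this one in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(suggest_next("make"))  # "food": the most common pattern in the data
```

Nothing in this sketch understands what "food" means; it only reproduces the most frequent pattern it counted, which is exactly the distinction the lesson goes on to draw.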

AI acts intelligently and in this way we might say that it behaves like a person.

Acting intelligently: AI spots patterns in information, uses these patterns to decide what to do next, and produces responses based on those patterns, and these are the sorts of behaviors we might see a human engaging in.

Some people would argue, however, that there is a difference between acting intelligently and actually understanding.

And so in this way, AI, although it acts intelligently, is not actually like a person.

So if we think about understanding: a person notices what a situation means, they're guided by reasons, and they choose a response because its purpose makes sense. That is different, some people would argue, from the way that AI acts intelligently.

Andeep is using AI on his phone and he asks it a question.

He types, "Are you a person?"

The AI responds, "That depends on what we mean by person. I can talk, answer questions and appear intelligent, but I'm not a person in the human sense. I don't have consciousness, feelings or intentions. I can produce intelligent answers, but I don't understand or experience anything. I act like a conversational partner, but that's different from being a person. So the short answer is no. I'm an AI system and not a person."

Andeep interviews Zainab, who is a software engineer; she builds AI features into apps and systems.

He asks her, "Do you think AI is a person?"

Zainab replies, "No, AI isn't a person. The systems I help build don't have consciousness, feelings or intentions. They can process information and behave intelligently, but they don't understand what they're doing or experience the world. AI can act like a conversational partner, but being a person involves awareness and moral responsibility, which AI doesn't have."

So let's check your understanding.

Which of the following is incorrect?

A, AI can follow rules without understanding.

B, AI processes data and produces answers.

C, AI that gives correct answers always understands what it is saying, or D, AI learns from its mistakes.

So take a moment, think about what you've learned about AI and choose the one that you think is incorrect.

Pause if you need to and come back when you're ready to check your answer.

Well done if you put C.

AI that gives correct answers does not always understand what it is saying.

For task A, I'd like you to consider the statement: AI is a person.

Find yourself a partner and discuss with them whether the points in the table could be used to agree or disagree with this statement and then explain why.

So let's have a look at the points.

AI can learn from what it has done before.

AI does not understand meaning.

AI has no beliefs, intentions or emotions.

AI can use a language in complex ways.

AI does not have conscious awareness.

So remember, this is a debate: there are people who would argue that AI is a person, or close to being one, as well as many people who would disagree. Have a think about which points each side could use to support their view.

Take your time.

Pause the video and come back when you're ready to see what you could have said.

You should have said something like this.

For "AI can learn from what it has done before", this could be used to agree, because it suggests that learning is a sign of intelligence.

For "AI does not understand meaning", this could be used to disagree, because understanding is often considered necessary for genuine thought, and so for being a person.

For "AI has no beliefs, intentions or emotions", this could be used to disagree, because those are the things that people often suggest mean we have a mind and hence are persons.

For "AI can use language in complex ways", this could be used to agree, because we sometimes say that being able to communicate is part of what it means to be a person.

And finally, for "AI does not have conscious awareness", this could be used to disagree, because many definitions of a person include the idea that we are conscious or aware of our own existence.

Great work if you managed to work out how each of those points could become arguments in this debate about whether AI is a person.

Moving on to the second part of our lesson, we are going to be looking at the Chinese Room thought experiment.

Functionalism is a theory which states that a mind is defined by what it does, and as we saw in the last learning cycle, it's quite a modern or recent theory beginning in the 1960s and it is a theory that many philosophers support today.

So here is an explanation of what functionalism is in a diagram, what we have over on the left is some inputs that come into a system.

Here we have the input of seeing a spider and also another input which is a belief that a person has that spiders are dangerous.

What happens then is that these inputs trigger a process.

We could have lots and lots of inputs.

We've got a very simple example here of just two inputs that trigger a process.

In our example, the output that is produced is that the person runs away, and that is triggered by their fear of spiders, or their belief that spiders are dangerous, and by the fact that they have seen a spider.

Now functionalism says that when we talk about things like fear, sadness or unhappiness (things that we might term mental states, or things that we hold in our mind), what we are really doing is describing how the things we experience, the inputs, affect us and make us behave or act in certain ways.
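The spider example above can be pictured as a tiny input-process-output function. This is only an illustrative sketch of the functionalist picture, and the input names (`sees_spider`, `believes_dangerous`) are invented for the example; they are not a claim about how minds actually work.

```python
# The functionalist idea: a mental state like "fear" is characterised by
# how inputs map to behavioural outputs, not by what it is made of.

def respond(sees_spider: bool, believes_dangerous: bool) -> str:
    """Process the two inputs and produce a behavioural output."""
    if sees_spider and believes_dangerous:
        return "runs away"  # the 'fear' state, described by what it does
    return "carries on"

print(respond(True, True))   # "runs away"
print(respond(True, False))  # "carries on"
```

On the functionalist view, anything that realised this same input-output mapping, whether brain or computer, would count as being in the same mental state; that is exactly the claim Searle's thought experiment goes on to challenge.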

This is a little bit like how a computer system works.

So computers have inputs and they process those inputs to produce outputs.

Now, computers could be said to have a mind if a mind is defined by what it does.

And this is because a computer is a system that follows rules when it processes inputs and produces outputs.

This diagram here represents how we might argue computers think.

So we have a computer system, a diagram here representing one.

Over on the left we might have inputs such as clicking the print button on a document, the computer having already stored some printer settings.

What happens is the central processing unit in the computer processes those inputs and then we have an example output, in our case, because we've clicked print and the computer has already stored printer settings, the printer prints the document that we would like to come out of the machine.

This is an example of how computers think and some people argue, functionalists argue, that this is actually quite similar to how humans think.

We process inputs and we produce outputs.

This is what makes us have a mind.

What do you think?

Is using rules to process inputs evidence of having a mind?

Take a moment to think about that question.

Turn and talk to someone nearby if you can, pause the video, and come back when you're ready to move on to think about this topic a little bit further.

Now, AI is a type of software that runs on computers.

It follows rules and spots patterns to process inputs and produce outputs.

So it's within the computer, it's a system within the computer.

Now if functionalism is true, just like the computer itself, the software of AI, the system of AI could have a mind.

Let's have a look at this more closely.

So let's think of some example inputs.

You might be using a chat kind of AI system.

You might type something like explain photosynthesis in one simple sentence.

You're struggling with your biology homework, for example.

Now what AI does, the chat system recalls any stored learning patterns about biology.

It might also search the internet for some information.

It recalls what it knows about sentence structure and also what counts as simple and as an explanation.

It processes all of those inputs, so not just what you've typed, but all of the other things already kind of there in the system, and then what happens is AI produces an output such as photosynthesis is how plants use sunlight to make their own food.

Now, some people would argue that this is very similar to how a human mind works.

We take on board just like we talked about with computers, all of those different inputs, we process them and we then have an output.

So what do you think?

Do you agree that the way AI processes inputs shows that it has a mind?

Take a moment and pause the video.

Have a chat with someone nearby if you can, and then come back when you are ready to move on.

Let's check your understanding again.

Functionalism implies AI could have a mind because A, AI is made of silicon.

B, AI behaves like it thinks, C, AI is faster than humans or D, AI cannot feel emotions.

So take a moment to think about what functionalism suggests about why AI might have a mind.

Pause if you need to and come back when you're ready to check.

Excellent work if you spotted that it is B.

The theory of functionalism is all about the mind being essentially how things work, how information is processed.

The philosopher John Searle, who lived from 1932 to 2025, developed a thought experiment which challenged functionalism.

So he didn't agree that your mind was about processing inputs and producing outputs, and one of the reasons he didn't agree was that he felt that you could do this without real understanding.

Now, this is known as the Chinese Room thought experiment.

Look closely at the diagram.

In John Searle's thought experiment, a human being sits inside a closed room with no way to communicate except through written messages.

Sheets of paper covered in Chinese characters are passed into the room.

The individual inside cannot read Chinese at all.

The symbols have no meaning to them.

Inside the room is a very large instruction book written in English and it contains precise rules for how to respond to any Chinese message.

The rules explain which symbols to copy, which patterns to rearrange, and how to produce sequences that match whatever has been sent in.

The individual inside follows these rules step-by-step, carefully matching shapes and producing new ones even though none of the symbols make sense to them.

With this thought experiment, Searle was trying to show that having a mind is more than just following rules to process inputs and produce outputs.

When the completed response is passed outside, a fluent Chinese speaker reads it and sees a perfectly sensible reply.

From the outside, it appears as if the room contains someone who genuinely understands Chinese.

The answers are coherent, appropriate, and indistinguishable from those of a real speaker inside the room.

However, there's no understanding at all.

There is only the mechanical following of rules.

The person is manipulating symbols without knowing what any of them mean.
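The rule book can be pictured as a simple lookup table in code. This toy sketch (the symbol pairs are made up for illustration) produces sensible-looking replies purely by matching shapes to shapes, with no understanding anywhere in the system:

```python
# A toy "Chinese Room": the rule book is just a lookup table that maps
# incoming symbol patterns to outgoing ones. The program never knows
# what any of the characters mean.
rule_book = {
    "你好": "你好!",           # "if these symbols come in, send these out"
    "你是谁?": "我是一个程序。",
}

def room(message: str) -> str:
    """Follow the rules: match the incoming symbols, copy out the reply."""
    return rule_book.get(message, "???")

print(room("你好"))  # looks fluent from outside: "你好!"
```

From the outside, the replies may look like those of a speaker who understands; inside, there is only rule following, which is Searle's point.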

Searle uses this scenario to question a particular idea about programmed systems.

If a system can produce convincing answers simply by following formal rules for manipulating symbols, does that mean it genuinely understands anything?

This challenges the functionalist view, which holds that having a mind depends on what a system does and not on what it is made of.

Searle's thought experiment invites us to ask whether outward behavior alone is enough for real understanding or whether something important is missing inside this room and therefore in the explanation of what makes something a mind.

Andeep and Aisha are discussing the Chinese Room thought experiment.

Andeep says, "The person in the room only follows rules and spots patterns, so they only seem to understand Chinese. AI works the same way. It looks like it understands, but it's really just matching patterns. To me that means AI doesn't have a mind, because minds do more than process inputs."

Aisha says, "I can see what the thought experiment is trying to show, but I don't think understanding is required for something to have a mind. In the end, what matters most is how it responds and behaves. Maybe AI could have a mind because it takes inputs, processes them and produces meaningful outputs in the same way that we do."

So whose view do you align with more?

Take some time to turn and talk to someone nearby.

Discuss Andeep's view and discuss Aisha's view.

Come back when you're ready to move on.

Searle thought that functionalism could not describe minds correctly, because if it did, this would mean computers could be said to have minds.

This could be applied to AI.

Searle would agree with Andeep when he says minds do more than process inputs.

So how is AI like the person in the Chinese Room?

Take a moment, pause the video, and turn and talk to someone nearby about this question.

Now you might have said that AI processes inputs and produces outputs, which is similar to the person in the room, but at the same time it doesn't actually really understand what it's doing, just like the person in the room did not understand Chinese.

Now if AI lacks understanding, does this mean it cannot be said to have a mind, even though it might appear to or it looks like it has?

Again, pause the video, turn and talk to someone nearby.

Have a think and come back when you are ready to move on.

Let's check your understanding.

What is the missing word?

Searle's Chinese Room shows that ________ alone does not prove understanding.

Pause if you need to and come back when you're ready to see the correct answer.

So well done if you put rule following.

For task B, I'd like you to consider the statement the Chinese Room thought experiment shows AI cannot be a person.

For part one, think about the following questions and discuss them with a partner.

A, why do people outside the room think the person inside understands Chinese?

B, what is Searle trying to show by describing this situation?

C, how does the Chinese Room relate to the idea of a computer following rules?

And D, how could this be applied to AI?

So take your time to have a really good conversation and discussion and then come back when you are ready to see what you could have said.

You might have said for A, why do people outside the room think the person inside understands Chinese?

Outsiders see fluent Chinese answers coming out of the room, so it looks as if the person inside genuinely understands the language.

For B, what is Searle trying to show by describing this situation?

You might have said Searle aims to show that producing correct answers does not guarantee real understanding of meaning.

For C, how does the Chinese Room relate to the idea of a computer following rules?

You might have said, the person in the room follows a rule book step-by-step, just as a computer follows programmed instructions without grasping meaning.

And for D, how could this be applied to AI?

The thought experiment suggests that AI might generate answers while still lacking genuine understanding, raising questions about whether it could count as a person.

Excellent work if you manage to show your understanding of AI and how it might relate to the Chinese Room thought experiment.

For part two of our task, I'd like you to choose one question from the list below.

Discuss it and be ready to report back on your answer.

A, is understanding the kind of thing a person must have, or could something count as a person even without it?

What does this mean for AI?

Or B, what would need to be happening inside a system for us to say it genuinely understands something?

What does this mean for AI?

Or C, could understanding be nothing more than doing the right job or playing the right role, as functionalists suggest? What does this mean for AI?

So take your time, pause the video, have a discussion, and then come back when you're ready to see the sort of thing that maybe you might have come up with.

So let's have a think about what you might have said if you chose B.

What would need to be happening inside a system for us to say it genuinely understands something, and what does this mean for AI?

Perhaps you said, "A system would need more than the ability to give the right answers. It would need some kind of inner awareness of what the symbols mean, not just a set of rules for matching one symbol to another. This raises the question of whether AI has anything happening inside that counts as understanding, or whether it is just processing patterns. If AI only manipulates symbols without any awareness or meaning, then Searle would say it doesn't genuinely understand. But if understanding could be something different from human awareness, then the argument might not rule out AI as a potential person."

Now of course, your discussion might have been quite different from this and you might have chosen a different question, but that's just an example of how you might have developed your ideas.

Well done if you managed to engage with the question.

In today's lesson, we have looked at how what we count as a person might depend on what we think a mind is.

The dualists usually say only humans can be persons because only humans have a non-physical mind.

Physicalists and functionalists might allow that AI could count as a person if it has the right kind of physical or functional mind.

AI can act intelligently, spotting patterns, answering questions and talking like a partner, without understanding.

And finally, Searle's Chinese Room challenges the idea that AI is a person, arguing that rule following and correct answers do not prove real understanding.

Thank you so much for all of your hard work today on this lesson.

It's been really great working with you.