Lesson video


Hello, my name is Mrs. Holborow, and welcome to Computing.

I'm so pleased you've decided to join me for the lesson today.

In today's lesson, we're going to be looking at client-side and server-side processing, and investigating how search engines return results.

Welcome to today's lesson from the unit The Internet and Web Technologies.

This lesson is called Web Technologies, and by the end of today's lesson, you'll be able to explain the difference between client-side and server-side processing, and describe how search engines return relevant results.

Shall we make a start? We will be exploring these keywords in today's lesson.

Let's take a look at them together now.

Client-side processing.

Client-side processing, when code runs on the user's device or the client.

Server-side processing.

Server-side processing, when code runs on a web server, and then the content is sent to the user's browser.

Web crawler.

Web crawler, a programme used by search engines to browse and collect information from websites.

Search engine optimization.

Search engine optimization, or SEO, the process of improving how a website appears in search engine results.

Look out for these keywords throughout today's lesson.

Today's lesson is split into two sections.

We'll start by explaining client-side and server-side processing, and then we'll move on to describe how search engines return results.

Let's make a start by explaining client-side and server-side processing.

There are two main roles in the client-server model: the client and the server.

Clients make requests.

These could be for all sorts of tasks, like requesting files, accessing emails, or printing documents.

Servers fulfil the requests from the client, so you can see here in the diagram we have a web server with three client devices, which in this case are laptops which are connected to the server.

When using the internet, clients connect to a web server.

The web server stores web pages, which are made using code.

This code can run on the client's device or it can run on the web server before it is sent to the client.

This means processing can happen on the user's device, which is called client-side, or on a web server, which is called server-side.

This allows websites to respond to users, adapt to their needs and show changing content.

When the code is run on the client, it is known as client-side processing or sometimes front end processing.

Code is run on the user's device in their browser.

The code is usually written in JavaScript and is used to make pages interactive, for example, things like animations and popups.

Code is sent with the webpage and runs after the page loads.

It does not interact directly with the server or databases.

Time to check your understanding.

I have a true or false statement here.

Code for client-side processing runs on the web server before it is sent to the user's device.

Is this true or false? Maybe pause the video whilst you have a think.

Did you select false? Well done.

But why is it false? Client-side code is sent with the webpage and runs on the user's device in their browser after the page loads.

Client-side processing can be used when a webpage changes page content without refreshing, or validates form inputs before submission.
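The lesson names JavaScript as the usual language for client-side work, so here is a minimal sketch of validating a form input before submission. This code is not from the lesson: the function name, the regular expression and the form wiring in the comment are all invented for illustration.

```javascript
// A minimal sketch of client-side validation, assuming a hypothetical
// sign-up form. This runs in the user's browser, before anything is
// sent to the server.
function isValidEmail(value) {
  // Very simple check: something before and after a single "@",
  // with a dot somewhere in the domain part.
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// On a real page this would run on the form's "submit" event, e.g.:
// form.addEventListener("submit", (event) => {
//   if (!isValidEmail(emailInput.value)) {
//     event.preventDefault(); // stop the submission and show a message
//   }
// });
```

Because the check runs on the client, the page can respond instantly, with no round trip to the server.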

When the code is run on the web server, it is known as server-side processing or sometimes backend processing.

When server-side processing is used, the server sends the processed content to the browser.

It's often used to create dynamic web pages, and it commonly interacts with databases.

It helps to ensure that the content is always up to date.

Server-side processing can be used when a webpage displays live product stock or prices, creates each page from stored content, processes form submissions and stores data, or performs extra validation and calculations.
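As a sketch of that idea (this code is not part of the lesson), here is how a server might build a page from stored data before sending it to the browser. The product data and all the names here are invented, standing in for a real database query.

```javascript
// A minimal sketch of server-side processing: the server builds the
// finished HTML from stored data, then sends only that HTML to the
// browser. The object below stands in for a database.
const stockDatabase = {
  "raspberry-pi-5": { name: "Raspberry Pi 5", price: 59.99, inStock: 12 },
};

function renderProductPage(productId) {
  const product = stockDatabase[productId];
  if (!product) {
    return "<h1>Product not found</h1>";
  }
  // The browser never sees the database or this code, only the result.
  return `<h1>${product.name}</h1>` +
         `<p>Price: £${product.price} (${product.inStock} in stock)</p>`;
}
```

Because the page is rebuilt from the database on each request, the stock level shown is always the current one.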

On a static page, the content stays the same each time it loads.

On a dynamic page, the content changes based on processing or user input.

Server-side processing often works together with a database.

Data is pulled from the database to keep webpages up to date.

Online shops use this to show the latest prices and stock levels.

Many websites use content management systems, or CMS. A CMS stores page content in a database so that it can be updated easily.

This helps to make sure that webpages always show the most current information.

You would be pretty frustrated if you wanted to order something online and it showed as in stock, but when you actually submitted your order, it turned out there wasn't any stock available.

Online gaming sites can often use a client-server model to provide shared access to an online environment.

All the players access the gaming environment via the same server.

This allows them to play together on the same game in real time.

Time to check your understanding.

I have a question for you.

Which of the following tasks is handled by server-side processing on a website? Is it, A, validating form inputs before submission, B, displaying live product stock or prices, or C, creating simple animations and popups? Pause the video whilst you have a think about your answer.

Did you select B? Well done.

Remember, the benefit of server-side processing is that the data can be kept up to date.

Okay, we're moving on to our first task of today's lesson, task A.

Webpages use different types of processing to become interactive and dynamic.

Explain the key characteristics of both client-side and server-side processing.

Pause the video whilst you complete the task.

How did you get on with the task? Were you able to explain the difference between client-side and server-side processing? Well done.

Let's have a look at a sample answer together.

Client-side code runs on the user's web browser, typically making pages interactive with features like animation or form validation.

Server-side code runs on the web server, creating dynamic pages often by interacting with databases to display up-to-date content, like live stock levels.

Did you have that in your answer? If you need to make any corrections, remember you can always pause the video and go back and have a look at some previous slides.

Okay, so we've explained client-side and server-side processing.

Let's now move on to describe how search engines return results.

So here, we have a screen mockup of a search engine with some search results.

Let's have a look at some features of this search engine results page.

So at the top, we have the search bar, which has the text Raspberry Pi Foundation written in it.

This is the search term.

Underneath the search bar, we have the categories, so this will return content of certain types, so for example, images, videos, et cetera.

At the top, we have an unvisited hyperlink.

Okay, so this is normally presented in blue, but may not necessarily be blue.

Underneath, we have the page which has the title About Us.

This is a visited hyperlink.

These normally go a purple colour.

And then About Us, Projects and The Computing Curriculum are known as child pages.

Okay, so they're subpages within a webpage.

Search engines use keywords to categorise the webpages that they find.

When a user wants to find a useful webpage, they enter these keywords and the search engine provides hyperlinks so that the user can access them.

Search engines use programmes known as web crawlers to find content on the World Wide Web.

These web crawlers visit links from one page to another, recording common keywords that they find.

By travelling along these links, the web crawlers can eventually find newly created content.

Web crawlers, which are sometimes also called spiders or bots, are automated programmes used by search engines to explore and catalogue the web.

The web crawler visits the webpage and checks the HTML source code for meta tags.

These provide extra information about the page, such as keywords or a description to help web crawlers understand what the page is about.

The web crawler records keywords on the page and how often they appear.

Common words like "the" and "and" are usually ignored.

More importance is given to keywords in titles or near the top of the page, as these usually reflect the page's topic.
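To make those steps concrete, here is a small sketch (invented for illustration, not from the lesson) of a crawler recording keyword counts: common stop words are skipped, and words in the title are weighted more heavily. The stop-word list and the weight of 3 are arbitrary choices for the example.

```javascript
// A sketch of how a crawler might record keywords, assuming a simplified
// page model with just a title and body text.
const STOP_WORDS = new Set(["the", "and", "a", "of", "to", "in"]);

function recordKeywords(page) {
  const counts = {};
  const tally = (text, weight) => {
    for (const word of text.toLowerCase().match(/[a-z]+/g) || []) {
      if (STOP_WORDS.has(word)) continue; // ignore common words
      counts[word] = (counts[word] || 0) + weight;
    }
  };
  tally(page.title, 3); // title keywords count for more
  tally(page.body, 1);
  return counts;
}
```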

The web crawler also records hyperlinks to other pages.

These links are added to a list of pages to be visited next.

As the web crawler visits pages, it builds a search engine index.

This is a large database that stores: frequently used keywords on each page; the type of content, for example text, images or video; the date the page was last updated; and any other useful information to help decide how relevant a page is to a user's search.
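A toy version of that index (invented for illustration, not from the lesson) is an inverted index: a map from each keyword to the pages that contain it. Querying it is then a fast lookup, with no need to revisit the live web. Real indexes also store content types, update dates and much more.

```javascript
// Build a toy inverted index: keyword -> set of page URLs.
function buildIndex(pages) {
  const index = {};
  for (const page of pages) {
    for (const word of page.text.toLowerCase().match(/[a-z]+/g) || []) {
      (index[word] = index[word] || new Set()).add(page.url);
    }
  }
  return index;
}

// Answer a one-keyword search straight from the index.
function search(index, keyword) {
  return [...(index[keyword.toLowerCase()] || [])];
}
```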

Time to check your understanding.

I have a true or false statement now.

A search engine index is a database that stores frequently used keywords on each page.

Is this true or false? Pause the video whilst you have a think.

Did you select true? Well done.

But why is it true? The search engine index acts as a large database, storing keywords and other information to help determine a page's relevance to users' searches.

There are potentially millions of web pages in a search engine index corresponding to a single keyword.

Searches query the index database to find pages with those keywords in them.

Web designers can use their understanding of how search engines work to improve how their page appears in search results.

Ideally, they want their page to return towards the top of the search engine list.

This is called search engine optimization or SEO.

Several factors can influence how webpages are ranked.

Overusing keywords to appear more relevant is often detected by search engines and may actually reduce the page's ranking.

Regularly updated pages are more likely to rank higher, as search engines tend to favour fresh content.

Pages that receive many inbound links from other websites rank higher.

This means the webpage was reached by clicking a link on another webpage.

This is because they're often seen as more popular and trustworthy.

Linking to high quality relevant websites can improve a page's ranking by providing useful context.

If users spend a long time on a page, it suggests the content is engaging.

Pages that users leave quickly may be ranked lower.
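The ranking factors above can be sketched as a toy scoring function. This is purely illustrative: the weights are invented for the example, and real search engines combine far more signals in far more sophisticated ways.

```javascript
// A toy ranking score combining the factors described above.
function rankingScore(page) {
  let score = 0;
  score += page.inboundLinks * 2;              // links from other sites help
  score += page.daysSinceUpdate < 30 ? 10 : 0; // fresh content is favoured
  score += page.avgSecondsOnPage > 60 ? 5 : 0; // engaged visitors help
  if (page.keywordStuffing) score -= 20;       // overusing keywords is penalised
  return score;
}
```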

Time to check your understanding with a question.

Which of the following actions can help a webpage rank higher in search results? Is it, A, overusing keywords in the page's content, B, rarely updating the page's content, or C, receiving many links from other high quality websites? Pause the video whilst you think about your answer.

Did you select C? Well done.

Receiving many links from other high quality websites can make a webpage rank higher in search results.

Okay, we're moving on to our next task of today's lesson.

What are the programmes called that search engines use to find content on the World Wide Web? And how do these programmes travel across the internet to find web pages? Pause the video here whilst you answer the question.

How did you get on with the question? Well done.

Let's have a look at a sample answer together.

You were asked, what are the programmes called that search engines use to find content on the World Wide Web? And how do these programmes travel across the internet to find web pages? The programmes search engines use to find content on the World Wide Web are called web crawlers, also known as spiders or bots.

These programmes travel across the internet by visiting hyperlinks from one webpage to another, adding these links to a list of pages to be visited next, which helps them discover new and updated web pages.

What kind of information do web crawlers record about each webpage they visit? And where is this information stored? Pause the video whilst you answer the question.

How did you get on with this question? Well done.

Let's have a look at a sample answer together.

You were asked what kind of information do web crawlers record about each webpage they visit, and where is this information stored? Web crawlers record several types of information about each webpage they visit.

This includes: keywords found on the page, especially those in titles or near the top; information from meta tags, which provide extra details about the page; and hyperlinks to other pages to add to their list for future visits.

All of this information is stored in a large database called the search engine index database.

Did you have these details in your answer? Remember, if you need to pause the video and make any corrections to your answer, you can do that now.

Okay, our final question of today's lesson.

When a user types keywords into a search engine, how does the search engine use its index database to find and return relevant web pages? Pause the video whilst you answer the question.

Did you manage to answer the question? Well done.

Let's have a look at a sample answer together.

You were asked when a user types keywords into a search engine, how does the search engine use its index database to find and return relevant web pages? The search engine directly queries its vast index database.

This database contains a huge amount of information about millions of web pages, including the keywords found on each page, the type of content and how recently the page was updated.

The search engine quickly searches through this index to find all the web pages that contain those specific keywords.

After finding relevant pages, it then uses various ranking factors, like how often keywords appear, how recent the page is, or how many other trustworthy websites link to it, to decide the order in which to present the hyperlinks to the user in the search results.

Okay, we've come to the end of today's lesson Web Technologies, and you've done a fantastic job, so well done.

Let's summarise what we've learned in this lesson.

Web applications rely on both client-side processing, which occurs in your web browser, and server-side processing, which is handled by the web server, to deliver interactive content and functionality.

To return relevant results, search engines use a web crawler to index web pages.

Websites can use search engine optimization or SEO techniques to improve their visibility in these results.

I hope you've enjoyed today's lesson, and I hope you'll join me again soon.

Bye.