
Hello, my name's Mr. Davidson.

And you've made a great choice to learn about representing bitmap images with me today.

The title for today's lesson is Representing Bitmap Images from the unit Representing Text, Images, and Sound.

By the end of today's lesson, you are going to be able to describe how computers represent bitmap images.

Our keywords for today's lesson are pixel, which is the smallest element of an image, color depth, which is the number of bits used to represent the color of each pixel, and resolution, the total number of pixels an image contains based on its width and height, expressed in pixels.

We have three learning cycles today.

Let's start with the first, describe bitmap image representation.

Computers have to represent lots of different images.

It's not uncommon for a situation like Laura's, where she's thinking about how her teacher has just uploaded a photo to a page on the school's website, all about their visit to the Houses of Parliament.

They've taken a photo, uploaded it to a server somewhere, and it's available for everyone to view on the school's website.

But how do things we see with our eyes end up as photos on websites? We're gonna learn about part of that process where we use devices to capture real-life images.

Now, as humans, we can see things because our eyes pick up the reflections of light off objects.

A camera works in the same way: light reflects off objects and is captured through the lens of a device.

The color of the light detected at each part of the image is stored as a single point, and we refer to those points in a bitmap image as pixels, which is short for picture elements.

We can see on our image, if we zoom in, each of those individual pixels and the colors they've been turned into.

That process repeats across the entire image, for all the light that's been reflected through the lens onto the camera sensor.

The image is converted into something called a map of pixels, and that's what we refer to as a bitmap image: it's a map of bits.

Now, we know that, as with any data in a computer, it's got to be represented as binary sequences, sequences of bits.

Therefore each pixel that we have on our map of pixels needs to be represented by its own binary sequence.

There are some important terms that we've got to remember there.

Have a look at the sentences.

And can you fill in the correct words that will make the sentences make sense? So the correct answers are: a bitmap image is a grid of pixels.

Each of those single points of the image need to be represented by a binary sequence.

The shortest possible binary sequence for a pixel is just one bit.

And we know that one bit can have two different states, zero and one.

If we take each of those two states, we could assign a color value to that sequence.

We could then use those sequences put together to represent each pixel on the image as a whole.

And because we've only got two different states for that one bit, that means each of the two states will be given its own color.

In the case of our image that we've got there, that's only black and white, making it a very simplistic image.

Putting all of those sequences for each pixel together, we could map out the entire image as a combination of sequences.

And you can see there that the values match up with the colors.
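If you're curious how that joining-up could look as code, here's a minimal Python sketch. It's not part of the lesson materials, and the tiny black-and-white image is made-up example data:

```python
# A made-up 4 x 4 black-and-white image: "W" = white, "B" = black.
image = [
    ["W", "B", "B", "W"],
    ["B", "W", "W", "B"],
    ["B", "W", "W", "B"],
    ["W", "B", "B", "W"],
]

# With a color depth of 1 bit, each of the two states gets its own color.
color_to_bit = {"W": "0", "B": "1"}

# Join the bit for every pixel, row by row, into one overall binary sequence.
bit_sequence = "".join(color_to_bit[pixel] for row in image for pixel in row)
print(bit_sequence)  # 0110100110010110
```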

As the length of the binary sequence for a pixel is increased, we can represent more colors.

So in this case, I've added in green and red, which means I've got two extra colors along with the black and the white.

That means for each of those four colors, I'm going to need two bits per pixel, because, as you can see there, there are four different states for a sequence of two bits. The same process would apply: for each pixel that we have on the image, we'd represent it using one of our four sequences, and then the map of pixels could be those pixel values joined together to create an overall binary sequence.

What's useful is to think about the number of bits per pixel that we're using to represent different colors.

The number of sequences, and therefore the number of colors, can be worked out as two to the power of N, where N is the number of bits.

If we apply it to this example, each pixel has two bits, therefore two to the power of two gives us four, which means four possible colors can be represented per pixel.
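If it helps to see that calculation written out, here's a short Python sketch of two to the power of N. The function name is just something made up for the example:

```python
def colors_for_depth(n_bits: int) -> int:
    """Number of colors that can be represented with n_bits per pixel: 2 to the power of N."""
    return 2 ** n_bits

print(colors_for_depth(1))  # 2 colors, e.g. black and white
print(colors_for_depth(2))  # 4 colors
print(colors_for_depth(3))  # 8 colors
```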

Let's try a slightly different example to that.

How many colors can be represented by a sequence of three bits? Well done.

That's the right answer, eight.

Two to the power of three gives us eight.

That means there are eight possible sequences per pixel, and therefore eight possible colors.

This number of bits per pixel in a bitmap image is referred to as the color depth.

A larger color depth allows an image to display more colors.

However, a larger color depth means more bits per pixel, and that ends up resulting in an image that requires a larger number of bits for its representation.
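As a rough sketch of why that is, we could estimate the number of bits an image needs from its pixel count and its color depth. The example values here are made up, not from the lesson:

```python
def image_size_in_bits(width_px: int, height_px: int, color_depth_bits: int) -> int:
    """Rough size of the raw pixel data: one sequence of color_depth_bits per pixel."""
    return width_px * height_px * color_depth_bits

# Made-up example: the same 100 x 100 pixel image at two different color depths.
print(image_size_in_bits(100, 100, 1))   # 10,000 bits at 1 bit per pixel
print(image_size_in_bits(100, 100, 24))  # 240,000 bits at 24 bits per pixel
```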

Let's put some of that into practice now.

I'm gonna give you a task with some questions for you to fill in based on what you've just seen.

For the first question, Izzy needs to explain to Andeep what bitmap images are and how they end up as data on a computer.

Using the prompts, discuss what Izzy should say to Andeep.

Once you've done that, complete the table to determine the number of colors that can be represented by different color depths.

Remember, we can find out the number of sequences by using two to the power of N, where N is the number of bits being used.

In this case, N is also going to represent our color depth.

Well done.

You worked really well.

Let's check what answers you've got.

For the first question, we could put an answer something like this: bitmap images are grids of pixels.

A pixel is a single point of an image that represents different colors that make up each image.

Each pixel is represented by a binary sequence of a number of bits, and each sequence therefore represents a chosen color.

We can then say that the number of bits that a sequence can be made from is known as its color depth.

In question two, all we had to do is take the color depth and use it as our N value, and raise two to the power of N.

So for a color depth of one bit, we have two possible colors.

For a color depth of three bits, eight.

Color depth of five gives us 32 colors.

A color depth of 16 gives us 65,536 different colors.

And a color depth of 24 bits gives us 16,777,216 different colors.

Let's carry on to the next learning cycle, which is describe image resolution.

Image resolution is the size of an image measured in pixels.

It's measured by the number of pixels across the width and the height of the image.

If we take our same photo as before, we'd look at the width and the height, and then count how many pixels make up those two measurements.

We then express that image resolution as the pixel dimensions.

In our case of our photo, we'd have a width of 180 pixels and a height of 300 pixels.

We'd therefore say that the image resolution is equal to the width times the height, and we'd leave that written as dimensions, without working out the multiplication.

So we would say the image resolution could be 180 by 300.

Let's see if you can apply it to an example that we've got here.

Of those three images, one of them has a resolution of 300 by 600.

Remember which way round the dimensions go, and then pick your correct answer.

Well done.

It's B.

We have to eliminate the other two.

And if we look at each one in turn, A is a square image, so the dimensions would be the same, but our resolution in the question is 300 by 600, which are different.

And it couldn't be image C because the width looks longer than the height, therefore that would mean the resolution would have the larger number first, followed by the smaller.

However, in our case, 300 by 600 means the width is smaller than the height, and B is the correct answer.

We can express image resolution in a different way.

We could express it by the total number of pixels in the image, not only just by its dimensions.

So image resolution would still be width times height.

However, this time, when we have our values, we actually work out the multiplication.

So in this example, where the dimensions are 180 pixels by 300, that means our overall image resolution is 54,000 pixels.
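Here's a small sketch of both ways of expressing that resolution, using the same example numbers:

```python
width_px, height_px = 180, 300  # the example photo's dimensions, in pixels

# Expressed as dimensions: width by height.
print(f"{width_px} by {height_px}")  # 180 by 300

# Expressed as a total: work out the multiplication.
total_pixels = width_px * height_px
print(total_pixels)  # 54000 pixels
```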

So let's think about that again.

Which of the following is not a valid image resolution? Well done.

It's C.

A is expressed correctly, 300 by 300 pixels.

B is also expressed correctly, 90,000 pixels.

But we don't measure resolution in bytes.

A camera on a digital device is often specified by the total number of pixels it uses to represent images.

A digital camera may be able to take images of 50 million pixels.

But as Alex is pointing out, "50 million pixels is difficult to understand.

1 million pixels," he suggests, "could be considered as one megapixel.

So 50 million pixels could be referred to as 50 megapixels." Instead of saying 50 million pixels, we could call our camera a 50 megapixel camera.
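As a rough sketch of that conversion, here's a little Python example. The pixel dimensions are made up so that they multiply to 50 million:

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count expressed in megapixels, where 1 megapixel is 1 million pixels."""
    return (width_px * height_px) / 1_000_000

# Made-up dimensions for a 50 million pixel camera.
print(megapixels(10_000, 5_000))  # 50.0, i.e. a 50 megapixel camera
```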

Let's try and put some of that into practice in task B.

For the first question, Jun is saying, "The resolution of this image is 120 by 300," and he's drawn on the measurements of that image.

The width is 300, and the height is 120.

Jun, though, is wrong.

Can you explain why he's wrong, and what the correct resolution of the image actually is? Once you've done that, have a go at the second question, where we're thinking about a camera on a smartphone.

It's been specified on a website as having a pixel count of 12 megapixels.

Also on the website, an image is shown that's been taken on the phone, with pixel dimensions of a width of 4,000 pixels and a height of 3,000 pixels.

I want you to describe how these values represent the image resolution of the photos taken on the phone.

Pause the video and have a go at that now. How did you get on? Firstly, we have to explain why Jun is wrong and what the correct resolution of the image is.

Well, he's made the mistake because he's used the width and the height measurements in the wrong order.

When expressing image resolution as dimensions, it should be expressed as width by height.

Therefore, the correct resolution of the image is 300 by 120.

And for the second question, you have to describe how the values represent the image resolution of the photos taken on the phone.

We would say the pixel count of 12 megapixels represents the total number of pixels that will be used to represent an image, and the dimensions of 4,000 by 3,000 pixels represent the pixel width and height of an image.

The two measurements are related, in that 4,000 by 3,000 gives us 12 million pixels, which we can work out is the equivalent of 12 megapixels.

Let's get onto our last learning cycle for today, comparing different resolutions and color depths.

If we take a bitmap image, and we enlarge that image in terms of its physical size, the number of pixels that have been used to create the image still remains the same.

As a result, the pixels become larger and fill the size the image needs to be.

What you'd notice is that the pixels would become more noticeable and the quality of the image would seem to get worse; it'd start to look more blocky.

Imagine we want to enlarge this image to try and read the writing that is quite small physically on our screen.

As we enlarge it, what would happen is the pixels would look more noticeable, and the image would become blurrier.

So our physical size there is 10 centimeters by 8 centimeters.

And as we enlarge the image and stretch it from one of the corners, you'll notice that the image looks blockier.

The pixels would become more noticeable as the physical size increases.

What has happened is the resolution of the image has stayed the same, but the pixels have become larger to fill that physical space.

So we can see there, the resolution before is 50 by 30 pixels, but when we stretch it, it's still 50 by 30 pixels.

We would say that the more the image is stretched, the larger the pixels become.
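As a rough sketch of why the pixels get bigger, we could work out how wide each pixel appears for a given physical width. The 40 centimeter stretched width is a made-up value:

```python
def pixel_width_cm(physical_width_cm: float, width_px: int) -> float:
    """How wide each pixel appears when the image fills a given physical width."""
    return physical_width_cm / width_px

# The resolution stays 50 pixels wide whether the image is small or stretched.
print(pixel_width_cm(10, 50))  # 0.2 cm per pixel at the original 10 cm width
print(pixel_width_cm(40, 50))  # 0.8 cm per pixel stretched to a made-up 40 cm width
```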

So let's check you've understood that.

When a bitmap image is stretched, the size of the pixels will either get bigger, get smaller, or stay the same? The correct answer is get bigger.

When a bitmap image is stretched, the size of the pixels will get bigger.

If we want more detail in the picture, we're going to need a higher resolution.

So the part of the image that you can see here is from an overall larger image of 1,665 pixels by 2,245 pixels.

Because the resolution of that image is high, as we zoom in and stretch to make parts of the image bigger, it still seems clear to us, even when it takes up more physical space.

If the original resolution of an image is high, then it will appear more clearly when the physical size is increased.

Different color depths also affect the displayed quality of an image.

A higher color depth means that we can represent more colors, and that means we can add more detail to our pictures.

It makes the changes in color smoother, and therefore gives us more detail that we can use on the picture.

And we can see that if we change the color depth of one image: as the color depth of the original image is reduced, the detail and quality also reduce as a result of using fewer colors.

The image on the left is the original at a 24-bit color depth.

It's then reduced to eight bits, where you can see some of the detail is gone, because it can't represent the different shades of red and green.

Eventually, we reduce that down to two bits, where only four colors can be represented, and we can't have any difference between similar colors.

Having fewer colors to use means that the image has to use the closest color it can from what is available.
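Here's a rough sketch of that "closest available color" idea. The four-color palette and the pixel values are made up for illustration:

```python
# A made-up 4-color palette (2-bit color depth), each color as (red, green, blue) values 0-255.
palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 255, 0)]  # black, white, red, green

def closest_color(pixel):
    """Pick the palette color nearest to the pixel's original color."""
    def distance(color):
        return sum((a - b) ** 2 for a, b in zip(pixel, color))
    return min(palette, key=distance)

# A dark red pixel has to become plain red, because no closer shade is available.
print(closest_color((180, 30, 30)))  # (255, 0, 0)
```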

If we actually work out how many colors we can have, two to the power of 24 gives us over 16 million different colors, two to the power of eight gives us 256 different colors, and two to the power of two only gives us four possible colors.

What word could we use to complete that sentence? Well done.

Reducing the color depth of an image will lower the number of available colors the image can use.

For our last task today, I've given you just one question, but I want you to think really carefully about bringing all those pieces of knowledge together to answer this question.

A photo of a new cake shop has been taken and used on a billboard to advertise its new opening times.

When it's printed, though, it doesn't appear realistic in color, and it's very blurry.

Explain, with reference to both the resolution and the color depth, what the issues are likely to be with the photo.

Pause the video and have a go now.

Let's have a look at an answer I've created.

So I'm referring to the resolution and the color depth, and the issues that are likely to exist with the photo.

What you should have put is something about how the resolution of the photo is not large enough to be clear when we enlarge it to the size of a billboard.

It might look okay as a photo on a screen, but when we expand it up to a big physical size, the pixels have to become larger, making the image look blurred.

And when the color depth used for the image is kept too low, there won't be enough colors that can be used on the image, and that means that the changes between the colors will look more obvious and less realistic.

Well done.

You did really well today.

Let's just summarize what we've learned.

We found out that bitmap images are made from pixels, which we refer to as the smallest element of an image.

Each pixel identifies the color it should display.

The color depth of a bitmap image is the number of bits used per pixel to display a color.

And the resolution of an image identifies the dimensions in pixels of the image, and therefore the total number of pixels used.