What Is… Bit Depth?

We get to grips with bit depth in our “What is… ?” video series


Bit depth refers to the amount of tonal information that can be recorded in a single image, and images with a higher bit depth can reproduce more accurate colours and smoother tonal transitions than those with a lower one.

Digital images are made up of binary information, essentially a series of noughts and ones. One bit yields two possible values, a nought or a one, which correspond to black and white. A one-bit image can therefore only show a black or white value at each pixel.

A two-bit image increases this to four possible combinations, as you can have 00, 01, 10 and 11. Four values equate to black, white and two grey values in between. Three bits double this to eight possible values, once again white and black plus six grey values in between, while four bits double it yet again to sixteen.

Once you get to eight bits you end up with 256 possible values, and the human visual system needs at least this many for areas of continuously changing tone to appear smooth and even, rather than as defined blocks.
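The doubling described above is simply powers of two. A few lines of Python (an illustrative sketch, not part of the original article) make the progression explicit:

```python
# Number of distinct tonal values a pixel can record at a given bit depth
def tonal_values(bits):
    return 2 ** bits

for bits in (1, 2, 3, 4, 8):
    print(f"{bits}-bit: {tonal_values(bits)} possible values")
```

Running this prints 2, 4, 8, 16 and 256 values respectively, matching the progression above.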

8-bit colour images have 8 bits per red, green and blue colour channel, which translates to over 16.7 million possible colour combinations. When you capture JPEGs in camera, this is the standard format in which they are saved, although when you capture Raw images you'll typically be saving them in a higher-quality 12-bit or even 14-bit format, which gives you scope for capturing considerably more information.
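Where the 16.7 million figure comes from can be checked with a quick calculation: the number of values per channel, raised to the number of channels. This sketch (an illustration, not anything from the article) treats each channel as fully independent:

```python
# Total colour combinations = (values per channel) ** (number of channels)
def colour_combinations(bits_per_channel, channels=3):
    return (2 ** bits_per_channel) ** channels

print(colour_combinations(8))   # 16,777,216 – the "16.7 million" of 8-bit
print(colour_combinations(12))  # roughly 68.7 billion for 12-bit
print(colour_combinations(14))  # roughly 4.4 trillion for 14-bit
```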

This might seem like overkill, particularly when you consider that most scenes do not themselves contain 16 million different colours. Not only that, but it's estimated that we can only see around 10 million different colours – so is there any point in capturing so much additional information?

One reason is that when it comes to editing your images, it helps to have as much information as possible to begin with, so that the file is more malleable to any changes you make. If you capture an image in Raw and JPEG versions and try to edit them in the same way, you'll be able to appreciate the extent to which this is the case. Whereas the Raw file will be able to maintain smooth gradations as you edit, the JPEG may show defined steps between darker and lighter areas.

You can see the extent to which this is the case when you look at the histograms of these images. An image that's been processed appropriately should show a full and continuous histogram, whereas an image that's been edited badly will show spikes and gaps as a result of information that has been lost through processing.
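The comb-like histogram can be simulated in a few lines of plain Python. This is an illustrative sketch (the gradient and stretch factor are made up): stretching the contrast of a narrow band of 8-bit tones leaves regular gaps between the values that remain, which is what the spikes in a badly edited image's histogram represent.

```python
# Simulate posterisation: stretch a narrow 8-bit gradient and inspect the gaps
gradient = list(range(96, 160))                         # 64 mid-grey input levels
stretched = [min(255, (v - 96) * 4) for v in gradient]  # aggressive contrast boost

used = sorted(set(stretched))
gaps = {b - a for a, b in zip(used, used[1:])}
print(f"{len(used)} output levels, spaced {gaps} apart")  # only every 4th value is used
```

The 64 input levels now sit four apart across the 0–252 range, so the histogram shows isolated spikes with empty gaps between them; a 12- or 14-bit file would have had intermediate values to fill those gaps.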

When it comes to shooting, you'll find that Raw images captured in 14-bit format will not only fill up the card faster on account of there being more information, but will also slow things down when it comes to continuous shooting for the same reason. For this reason, it's a good idea to keep Raw shooting to 12 bits for general day-to-day images and those captured in a burst, and switch to 14 bits when you know you may benefit from the increased information, such as when shooting landscapes or any other scene with a wide dynamic range that you may want to edit later.
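The storage difference is easy to estimate with back-of-the-envelope arithmetic. This sketch assumes a hypothetical 24-megapixel camera and uncompressed data – real Raw files are packed and often compressed, so treat the numbers as illustrative only:

```python
# Rough uncompressed Raw size: one sample per photosite on a Bayer sensor
# (hypothetical 24MP camera; real files are compressed, so this is an upper bound)
def raw_megabytes(megapixels, bits_per_sample):
    total_bits = megapixels * 1_000_000 * bits_per_sample
    return total_bits / 8 / 1_000_000  # bits -> bytes -> megabytes

print(raw_megabytes(24, 12))  # 36.0 MB per frame at 12 bits
print(raw_megabytes(24, 14))  # 42.0 MB per frame at 14 bits, about 17% more
```

That extra data per frame is what fills cards sooner and throttles burst shooting at 14 bits.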
