Along with HDR (High Dynamic Range), a lot of technological innovation is finding its way into your television. HDR images, for example, offer a higher dynamic range (more shadow detail and higher white values) and a larger color range. But you also often see references to very technical-sounding things such as 10-bit color and color depth. We explain exactly what color depth means and why it is important.
TV background color depth – The basis of color rendering
How do we actually display color on a screen? As you may know, every pixel on a screen consists of three sub-pixels: a red, a green and a blue one. You can show pure red, green or blue by activating only the corresponding sub-pixel.
All the colors that the television can display lie within the triangle that is formed on the chromaticity diagram by that pure red, green and blue. All colors within this triangle form the color range of the television. If you would like a refresher on color gamut, read our article about the color gamut of TVs.
The television displays all colors of the color range by mixing the basic red, green and blue in a certain ratio. We can describe a pixel with a certain color with a triplet (x, y, z) where x, y, and z indicate the relative amount of red, green and blue.
TV background color depth: more bits, more shades
Which values can we use for (x, y, z)? The triplet (0, 0, 0) represents black (the pixel is off). If we define the maximum value of the sub-pixels as 1, then (1, 1, 1) is the brightest white the screen can show (red, green and blue mix together to form white). Intermediate values such as (0.314, 0.272, 0.141) are perfectly fine in the mathematical framework of color theory, but a screen cannot be driven with arbitrary fractions: a video signal is digital, so each value must be encoded in a fixed number of bits. We call that number the color depth (or sometimes bit depth).
If we assign 1 bit to each sub-pixel, then each sub-pixel can only be on or off. That gives 8 possible colors (2 x 2 x 2): black, red, green, blue, cyan (green + blue), magenta (blue + red), yellow (red + green) and white. As the number of bits used to encode the value of a sub-pixel increases, the number of possible shades grows. The number of shades per base color is 2^n, where n is the number of bits. The total number of colors a screen can display is then 2^n x 2^n x 2^n, or all possible combinations of the three base colors. With 8 bits per color you can set each base color in 256 steps (2^8 = 256). The total number of colors then comes to more than 16.7 million (256 x 256 x 256).
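The mapping from a continuous sub-pixel value to a discrete code can be sketched in a few lines. This is a minimal illustration (the `quantize` helper is our own, not part of any video standard), using the intermediate triplet from the text:

```python
def quantize(value, bits):
    """Map a relative sub-pixel value in [0, 1] to an n-bit integer code."""
    levels = 2 ** bits              # number of discrete steps per sub-pixel
    return round(value * (levels - 1))

# The intermediate triplet from the text, encoded at 8 bits per sub-pixel:
triplet = (0.314, 0.272, 0.141)
codes = tuple(quantize(v, 8) for v in triplet)
print(codes)  # (80, 69, 36)
```

The fractions are rounded to the nearest available step; whatever precision lies between two steps is lost, which is exactly where the later banding discussion comes from.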
| Bits | Number of values per main color | Total number of colors |
|------|---------------------------------|------------------------|
| 1    | 2^1 = 2                         | 2 x 2 x 2 = 8          |
| 2    | 2^2 = 4                         | 4 x 4 x 4 = 64         |
| 8    | 2^8 = 256                       | 256 x 256 x 256 = 16,777,216 |
| 10   | 2^10 = 1,024                    | 1,024 x 1,024 x 1,024 = 1,073,741,824 |
| 12   | 2^12 = 4,096                    | 4,096 x 4,096 x 4,096 = 68,719,476,736 |
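The numbers above follow directly from the 2^n rule, as this short sketch (our own helper names) shows:

```python
def color_counts(bits):
    """Shades per primary and total colors for a given bit depth."""
    per_primary = 2 ** bits
    total = per_primary ** 3        # all combinations of red, green and blue
    return per_primary, total

for bits in (1, 2, 8, 10, 12):
    per, total = color_counts(bits)
    print(f"{bits:>2} bits: {per:>5} values per primary, {total:,} colors")
```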
Please note: 8-bit color is in many cases referred to as 24-bit color. In that case, reference is made to the combined number of bits for red, green and blue: 8 + 8 + 8 = 24, which of course better describes the total color depth of the system. So 8-bit color and 24-bit color can refer to exactly the same thing. Confusing, that's true, but now you know why.
Why is 8 bit color depth no longer sufficient?
Why are we now being bombarded with 10- and 12-bit color? Our televisions and computer monitors have been using 8-bit color for many years. DVD, Blu-ray and TV broadcasts are encoded at 8 bits per color. Are those 16.7 million colors not enough? For many images they are, but our eye is very sensitive to gradual changes in brightness and color. In very soft transitions from one color to another, the display may not have enough steps to make that transition without visible color bands (which is why we refer to the problem as 'banding'). In the past, on our 'small' 32-inch televisions, that was rarely a problem. But now a 55-inch television is no longer an exception. And the gradient that you perceived as perfect on your old 32-inch TV, because you sat relatively far from it, may well show visible banding on a much larger screen.
But there are other factors at play. With the introduction of HDR and the expanded color gamut, the problem becomes even greater. HDR (High Dynamic Range) requires a higher peak luminance. Films for Blu-ray have so far been mastered in the studio on a screen with a maximum of 100 nits peak luminance. For HDR, the same film studios now use screens that can go much brighter: up to 1,000 nits or even 4,000 nits, and possibly even 10,000 nits in the future. (To be clear, these are very expensive professional monitors.) So a larger range must be divided into a discrete number of steps. In addition, our eye is very sensitive to very slight variations in dark tones, so many more steps are needed in the dark part of the gray scale than in the bright part.
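A back-of-the-envelope calculation makes the problem concrete. This sketch simply divides the brightness range into equal steps (real HDR signals use a perceptual curve that spends more codes on dark tones, as noted above, so these are illustrative averages only):

```python
def avg_step_nits(peak_nits, bits):
    """Average brightness step when a range is split into 2^n equal codes."""
    return peak_nits / (2 ** bits)

print(avg_step_nits(100, 8))    # SDR: 100 nits over 256 steps ~ 0.39 nits
print(avg_step_nits(1000, 8))   # 1,000-nit HDR over 256 steps ~ 3.9 nits
print(avg_step_nits(1000, 10))  # 1,000-nit HDR over 1,024 steps ~ 0.98 nits
```

Keeping 8 bits while raising the peak from 100 to 1,000 nits makes each step roughly ten times coarser; going to 10 bits brings the step size back close to the SDR value.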
A similar thing occurs with the color range. The new Rec.2020 color gamut (see image) is significantly larger than the Rec.709 gamut we use today for all our content. To avoid banding, more and smaller steps are needed to divide that whole range in sufficient detail.
That is why our screens and our images are switching to 10 bits. With 10-bit color, the number of steps available per primary color (and therefore also for the gray scale) increases by a factor of four, and the number of available shades by a factor of 64.
Anyone who wants a simple way to visualize the concept can imagine the following. Suppose you divide a 10 cm long bar into 100 steps. Those steps (1 mm each) are small enough for our purpose (in this case, to measure accurately). The introduction of HDR, a larger color range and larger screens can then be compared to stretching that bar to 1 m. If we keep using 100 steps for the subdivision, each step is 1 cm long. But the accuracy we demand has not changed: it is still 1 mm. So unless we make more subdivisions, our measurements with that 1 m bar will invariably show a certain deviation.
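The measuring-bar analogy can be written out with the numbers from the text (the helper name is ours):

```python
def step_size_mm(bar_length_mm, steps):
    """Length of one subdivision when a bar is split into equal steps."""
    return bar_length_mm / steps

print(step_size_mm(100, 100))    # 10 cm bar, 100 steps  -> 1 mm per step
print(step_size_mm(1000, 100))   # 1 m bar, same 100 steps -> 10 mm per step
print(step_size_mm(1000, 1000))  # 1 m bar, 1,000 steps  -> back to 1 mm
```

Stretching the range without adding steps makes each step ten times coarser; only adding steps (more bits) restores the original precision.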
How many bits is enough?
There is a lot of science behind the calculations of how many bits are sufficient to make a step change in luminance or color just invisible. We won't bombard you with all those formulas, but we will give you the result. The consensus is that 12 bits would suffice once screen technology truly approaches the limits of the human eye. Since we are not there yet, 10 bits will suffice for the current and near future.
In post production, while editing the video, or during internal processing in a television, the signal can be converted to an even higher color depth. This keeps rounding errors during those operations as small as possible. The final result is then reduced back to 8 or 10 bits.
Is my TV a 10-bit TV?
What do you need to enjoy 10-bit color depth? To start with, of course, a TV with a 10-bit panel. In other words, the screen must be capable of being driven with 10-bit signals. Since the HDR standards use a 10-bit signal, you might assume that every HDR-compatible TV uses a 10-bit panel. Unfortunately, that is not the case, and manufacturers do not always indicate this clearly. The top models invariably use 10-bit panels, but in the middle bracket, where we should rather speak of HDR-compatible than of real HDR display, 8-bit panels are still regularly used. The 10-bit signal is then converted to an 8-bit equivalent. Of course that image is not as good as on a real 10-bit panel, but there is still some improvement.
Of course, the content you view must also be encoded in 10-bit. That is certainly the case for all HDR content. Content in HDR10 (the standard that is mandatory on Ultra HD Blu-ray) uses a 10-bit signal. Dolby Vision (another HDR standard) can even deliver 12-bit signals. But all your existing content, such as DVD, Blu-ray or TV broadcasts, will still be an 8-bit signal.