“Gamma Correction” — most of you have probably heard this strange-sounding term. In this blog, we will see what it means and why it matters to you.
The general form of the power-law (gamma) transformation function is

s = c * r^γ

where ‘s’ and ‘r’ are the output and input pixel values, respectively, and ‘c’ and γ are positive constants. Like the log transformation, power-law curves with γ < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. For γ > 1, we get the opposite behavior, as shown in the figure below.
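A quick numeric sketch of the formula makes the effect concrete. Here r is already normalized to [0, 1], so c = 1; the gamma values are illustrative choices, not fixed constants:

```python
# Worked example of s = c * r**gamma on a normalized pixel value.
c = 1.0
r = 0.2  # a dark input pixel

s_low = c * r ** 0.4   # gamma < 1 lifts dark values toward the bright end
s_high = c * r ** 2.2  # gamma > 1 pushes them further down

print(round(s_low, 3))   # noticeably brighter than 0.2
print(round(s_high, 3))  # noticeably darker than 0.2
```

The same dark pixel lands far apart under the two exponents, which is exactly what the curves in the figure show.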
This is also known as gamma correction, gamma encoding or gamma compression, so don’t get confused by the different names.
The curves below are generated for r values normalized to the range 0 to 1, which are then multiplied by the scaling constant c corresponding to the bit depth used.
But the main question is: why do we need this transformation, and what is the benefit of doing so?
To understand this, we first need to know how our eyes perceive light. Human perception of brightness follows an approximate power function (shown below), according to Stevens’ power law for brightness perception.
As the figure above shows, changing the input from 0 to 10 changes the output from 0 to about 50, but changing the input from 240 to 255 barely changes the output at all. This means we are more sensitive to changes in dark tones than in bright ones. You may have noticed this yourself!
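We can sketch this sensitivity difference with a toy perception function. The exponent 0.33 is only an approximate value for Stevens’ brightness exponent, and the scaling constant k is chosen here just so that an input of 255 maps back to 255:

```python
# Toy model of Stevens' power law: perceived = k * intensity**exponent.
# The exponent and scaling are illustrative assumptions, not exact values.
def perceived(intensity, exponent=0.33):
    k = 255 / 255 ** exponent
    return k * intensity ** exponent

dark_step = perceived(10) - perceived(0)      # big perceptual jump
bright_step = perceived(255) - perceived(240)  # barely noticeable jump

print(dark_step > bright_step)
```

Even with this rough model, the same 10–15 unit input step produces a far larger perceived change in the shadows than in the highlights.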
But our camera does not work like this. Unlike human perception, a camera sensor responds to light almost linearly: if twice as many photons hit the sensor, it records roughly twice the value.
So, where and what is the actual problem?
The actual problem arises when we display the image.
You might be surprised to know that display devices such as your computer screen have an intensity-to-voltage response curve that is a power function, with exponents (gamma) typically varying from 1.8 to 2.5.
This means that any input signal (say, from a camera) will be transformed by this gamma (also known as the display gamma) because of the non-linear intensity-to-voltage relationship of the screen. The result is images that are darker than intended.
To correct this, we apply gamma correction to the input signal: since we know the display’s power-law response, we simply apply its inverse, raising values to the power 1/γ. This is known as the image gamma. It is applied automatically by encoding standards such as JPEG, so the image looks normal to us.

This encoded input cancels out the distortion introduced by the display, and we see the image as intended. The whole procedure can be summed up by the following figure.
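The round trip described above can be verified numerically. A display gamma of 2.2 is assumed here as a typical value, and the pixel is kept in normalized [0, 1] form:

```python
# Camera-side gamma encoding (1/2.2) cancels the display gamma (2.2).
linear = 0.2                   # scene intensity, normalized to [0, 1]
encoded = linear ** (1 / 2.2)  # image gamma, applied at encode time
displayed = encoded ** 2.2     # display gamma, applied by the screen

print(abs(displayed - linear) < 1e-9)  # prints True: the two cancel
```

Because (r^(1/γ))^γ = r, the value reaching your eye matches the original scene intensity, which is the whole point of the encode/decode pair.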
If images are not gamma-encoded, they allocate too many bits to the bright tones that humans cannot differentiate and too few bits to the dark tones. By gamma encoding, we avoid this artifact.
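The bit-allocation point can be demonstrated by quantizing the same linear intensities to 8 bits with and without gamma encoding. The encoding exponent 1/2.2 and the 10% “dark region” cutoff are illustrative choices:

```python
import numpy as np

# Count how many distinct 8-bit codes fall in the darkest 10% of the
# linear intensity range, with and without gamma encoding.
linear = np.linspace(0, 1, 10000)

codes_linear = np.round(linear * 255).astype(np.uint8)
codes_gamma = np.round((linear ** (1 / 2.2)) * 255).astype(np.uint8)

dark = linear < 0.1
print(len(np.unique(codes_linear[dark])))  # few codes for the dark tones
print(len(np.unique(codes_gamma[dark])))   # many more after encoding
```

Gamma encoding stretches the dark end of the range before quantization, so far more of the 256 available codes end up describing shadows, where our eyes are most sensitive.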
Images that are not properly corrected can look either bleached out or too dark.
Let’s verify with code that γ < 1 produces images that are brighter, while γ > 1 results in images that are darker than intended.
import cv2
import numpy as np

img = cv2.imread('D:/downloads/forest.jpg')

# Apply gamma = 2.2 to the normalized image, then multiply by the
# scaling constant (c = 255 for 8-bit images)
gamma_two_point_two = np.array(255 * (img / 255) ** 2.2, dtype='uint8')

# Similarly, apply gamma = 0.4
gamma_point_four = np.array(255 * (img / 255) ** 0.4, dtype='uint8')

# Concatenate the two results side by side and display them
img3 = cv2.hconcat([gamma_two_point_two, gamma_point_four])
cv2.imshow('Gamma 2.2 (left) vs Gamma 0.4 (right)', img3)
cv2.waitKey(0)
cv2.destroyAllWindows()
The output looks like this
I hope you now understand gamma encoding. In the next blog, we will discuss contrast stretching, a piecewise-linear transformation function, in detail. Hope you enjoy reading!
If you have any doubt/suggestion please feel free to ask and I will do my best to help or improve myself. Good-bye until next time.