14-bit RGB to YCbCr - matlab

I have a 14-bit image that I would like to convert to the YCbCr color space. As far as I know, the conversions are written for 8-bit images. For instance, when I use the MATLAB function rgb2ycbcr and then convert the result back to RGB, the image comes out all white. It is very important for me not to lose any information. What I want to do is separate luminance from chroma, do some processing, and convert back to RGB.

The YCbCr standard for converting quantities from the RGB colour space was specifically designed for 8-bit colour channels. The scaling factors and constants are tailored to a 24-bit RGB input image, i.e. 8 bits per channel. (BTW, your notation is confusing: usually "xx-bit RGB" states how many bits in total are required to represent the image.)
One suggestion I could make is to rescale your channels independently so that they all lie in [0, 1]. rgb2ycbcr can accept floating-point input so long as it is in the range [0, 1]. Judging from your context, you have 14 bits per colour channel. Therefore, you can simply do this, given that your image is stored in A and the output will be stored in B:
B = rgb2ycbcr(double(A) / (2^14 - 1));
You can then process your chroma and luminance components using the output of rgb2ycbcr. Bear in mind that the components will also be normalized to [0, 1]. Do your processing, then convert back with ycbcr2rgb, and rescale the output by 2^14 - 1 to bring the image back to 14 bits per channel. Assuming Bout is your output image after processing in the YCbCr colour space, do:
C = round((2^14 - 1)*ycbcr2rgb(Bout));
We round because ycbcr2rgb will most likely produce floating-point values, and your image colour planes need to be unsigned integers.
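The same round trip can be sketched in Python/NumPy for readers outside MATLAB. Note the matrices below use a simple full-range BT.601 transform, not MATLAB's exact video-range scaling, so this is an illustration of the normalize/convert/rescale idea rather than a drop-in replacement for rgb2ycbcr:

```python
import numpy as np

# Full-range BT.601 RGB<->YCbCr matrices (a simplification: MATLAB's
# rgb2ycbcr additionally applies video-range offsets and scaling).
RGB2YCC = np.array([[ 0.299,     0.587,     0.114],
                    [-0.168736, -0.331264,  0.5],
                    [ 0.5,      -0.418688, -0.081312]])
YCC2RGB = np.linalg.inv(RGB2YCC)

def to_ycbcr_14bit(rgb14):
    """Normalize 14-bit-per-channel RGB to [0, 1], then convert."""
    rgb = rgb14.astype(np.float64) / (2**14 - 1)
    ycc = rgb @ RGB2YCC.T
    ycc[..., 1:] += 0.5          # centre the chroma planes in [0, 1]
    return ycc

def to_rgb_14bit(ycc):
    """Convert back and rescale to 14-bit unsigned integers."""
    ycc = ycc.copy()
    ycc[..., 1:] -= 0.5
    rgb = ycc @ YCC2RGB.T
    return np.round(np.clip(rgb, 0, 1) * (2**14 - 1)).astype(np.uint16)
```

Because everything stays in double precision until the final rounding, the round trip is lossless for 14-bit integer input.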

Related

Cairo convert to monochrome?

Is there a straightforward way of converting a Cairo RGB surface to 1 bit monochrome (FORMAT_A1) using operators, or does it require iterating through all the pixels?
There are numerous examples of converting to greyscale, where the hue and saturation are discarded and only the luminosity of the RGB channels is kept. The problem is that in A1 surfaces the 1 bit data comes from the alpha channel, and I can't find any method for translating luminosity (in the RGB channels) to alpha.
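As far as I know there is no built-in operator for this, so a per-pixel pass over the data seems unavoidable. The core step can be sketched in Python/NumPy, assuming BT.709 luma weights and a fixed threshold; note that the bit order within each packed byte is an assumption here, since Cairo's FORMAT_A1 packing depends on platform endianness:

```python
import numpy as np

def rgb_to_a1_rows(rgb, threshold=128):
    """Map RGB luminosity to 1-bit alpha, one packed byte row per image row.

    rgb: (H, W, 3) uint8 array. Returns (H, ceil(W/8)) uint8.
    Uses BT.709 luma weights; 'little' bit order is an assumption --
    verify against your platform's FORMAT_A1 layout.
    """
    lum = (0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
           + 0.0722 * rgb[..., 2])
    bits = (lum >= threshold).astype(np.uint8)   # 1 = opaque
    return np.packbits(bits, axis=1, bitorder='little')
```

The packed rows would then be copied into the A1 surface's data buffer, padded to its stride.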

Saving Kinect depth frame (uint16) using MATLAB, but why is it too dark?

Recently I have been working with the Kinect in MATLAB. I capture a depth frame, which is in uint16 format, but when I display or save it using MATLAB commands like imshow and imwrite respectively, the image comes out too dark. When I set the display range or convert it to uint8 it becomes brighter. I want to save it in a brighter form without converting to uint8, e.g. by scaling the range between 0 and 4500.
vid = videoinput('kinect',1);
vid2 = videoinput('kinect',2);
vid.FramesPerTrigger = 1;
vid2.FramesPerTrigger = 1;
% Set the trigger repeat for both devices to 200, in order to acquire 201 frames from both the color sensor and the depth sensor.
vid.TriggerRepeat = 200;
vid2.TriggerRepeat = 200;
% Configure the camera for manual triggering for both sensors.
triggerconfig([vid vid2],'manual');
% Start both video objects.
start([vid vid2]);
trigger([vid vid2])
[imgDepth, ts_depth, metaData_Depth] = getdata(vid2);
f=imgDepth;
figure,imshow(f);
figure,imshow(f,[0 4500]);
imwrite(f,'C:\Users\sufi\Desktop\matlab_kinect\Data_image\output\depth\fo.tiff');
stop([vid vid2]);
When I set the display range: (screenshot omitted)
Without setting the display range: (screenshot omitted)
The values in a 16-bit image range from 0 to 65535.
If we take a look at the histogram of your image, we see that the max value is 7995. But that's just a few outliers; most of the information is somewhere between 700 and 4300.
So all our values sit in 5-10% of the value range, which makes the image look very dark.
In order to make it look better for humans we have to normalize it (some image viewers do this automatically).
So in order to get a nicer image into your PowerPoint presentation you have two options:
a) display it in an image viewer that can display it nicely and take a screenshot
b) normalize the image in MATLAB and save it to a file.
You can further improve the image by removing those outliers before normalization.
One simple way is to scale the image based on the following formula:
Pixel_value = Pixel_value / 4500 * 65535
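That clip-and-stretch step can be sketched as follows (in Python/NumPy rather than MATLAB; 4500 is the upper bound suggested in the question):

```python
import numpy as np

def normalize_depth(depth, max_depth=4500):
    """Clip outliers above max_depth, then stretch to the full uint16 range."""
    d = np.clip(depth.astype(np.float64), 0, max_depth)
    return np.round(d / max_depth * 65535).astype(np.uint16)
```

Anything at or above 4500 saturates to 65535, and the useful 0-4500 band fills the whole brightness range.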
If you want to see the exact image that you get from uint8, I guess the following steps will work for you.
Probably, while casting the image to uint8, MATLAB first clips the values above some threshold, let's say 4095 = 2^12 - 1 (I'm not sure about the value), and then right-shifts (4 shifts in our case) to bring them into the range 0-255.
So I guess multiplying the uint8 value by 256 and casting the result as uint16 will give you back the same image:
Pixel_uint16_value = uint16(Pixel_uint8_value) * 256;  % or bitshift(uint16(Pixel_uint8_value), 8)
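The widening shift itself is trivial to demonstrate (a Python/NumPy sketch; the clipping behaviour guessed at above is separate from this step):

```python
import numpy as np

def uint8_to_uint16_scale(img8):
    """Widen uint8 to uint16 and shift left 8 bits (i.e. multiply by 256)."""
    return img8.astype(np.uint16) << 8
```

The cast must happen before the shift, otherwise the high bits are lost in the 8-bit type.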

How to know the depth value of 24-bit depth images in MATLAB

I am trying to deal with 24-bit depth images from the NYU Hand dataset in MATLAB.
When I read an image in MATLAB as below:
img = imread('synthdepth_1_0006969.png');
the variable img has the form 480x640x3 uint8.
My question is: in this case, how do I find the depth value?
When I read 8-bit or 16-bit images in MATLAB, each pixel directly gives the depth value, but
in the 24-bit case I don't know how to deal with it.
Thank you for reading my question.
Notice that the image data is 3 dimensional, with the third dimension having a size of 3. That third dimension encodes the red, green, and blue color planes in a Truecolor image. Three uint8 (i.e. unsigned 8-bit integer) color values equates to 24 bits of total color information per pixel.
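If the dataset packs the depth value across the colour planes, you can reassemble it from them. A Python/NumPy sketch, assuming a packing with the top 8 bits in the green plane and the low 8 bits in the blue plane — this particular layout is an assumption, so check the dataset's own documentation:

```python
import numpy as np

def decode_depth(img):
    """Recombine a depth value packed across colour channels.

    img: (H, W, 3) uint8. Assumes top 8 bits in the green plane and
    low 8 bits in the blue plane (verify against the dataset docs).
    """
    g = img[..., 1].astype(np.uint16)
    b = img[..., 2].astype(np.uint16)
    return (g << 8) | b
```

The result is an (H, W) uint16 array of per-pixel depth values.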

Image processing - YUV to RGB

Why does one convert from YUV to RGB, and what is the advantage of doing such a conversion in image processing with MATLAB? I partially know the answer: Y is the light (luminance) component, which is not separated out in the RGB format. What is the basis of such conversions?
I'll tell you what you could have easily found on the internet:
YUV was introduced when colour TVs came up. There had to be minimal interference with existing monochrome TVs, so the colour information UV was added to the luminance signal Y.
Due to the way digital colour images are captured (using red-, green- or blue-pass-filtered pixels), the native colour space for digital images is RGB.
Modern displays also use red, green and blue pixels.
For printing you will find the CMYK colour space.
Nowadays RGB is the default colour space in digital image processing, as we usually process the raw image information. You won't find many algorithms that can handle YUV images directly.
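For reference, the conversion itself is just a linear transform. A minimal Python/NumPy sketch using the analog BT.601 coefficients (one common YUV definition among several; broadcast standards differ in their constants):

```python
import numpy as np

def yuv_to_rgb(yuv):
    """BT.601 YUV -> RGB for float values with Y in [0, 1], U/V centred at 0."""
    M = np.array([[1.0,  0.0,      1.13983],
                  [1.0, -0.39465, -0.58060],
                  [1.0,  2.03211,  0.0]])
    return yuv @ M.T
```

With zero chroma, every grey level maps straight through to equal R, G and B.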

Direct conversion from YCbCr to CIE L* a* b*

I would like to convert a pixel value in YUV (YCbCr) to the CIE L* a* b* color space. Do I have to go through the RGB and CIE XYZ color spaces, or does anyone know a formula for direct conversion?
You need to go through each step. YCbCr is often encoded over video range (16-235 for luma, 16-240 for chroma at 8 bits), and that needs to be converted to XYZ using a particular video RGB space definition (i.e. Rec. 709 for high definition), which involves undoing the per-channel nonlinearity of the RGB and then multiplying by the RGB-to-XYZ primary matrix. You then need to supply a white point (typically D65, the one in the RGB space definition), apply a different nonlinearity, and then another matrix to produce L*a*b*. I doubt there is much efficiency to be gained by combining all of these into one transform.
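The chain above can be sketched in Python for a single 8-bit video-range pixel. Every constant here is a choice: BT.709 luma coefficients, the sRGB transfer curve and primary matrix, and D65 — substitute the ones your actual video standard specifies:

```python
D65 = (0.95047, 1.0, 1.08883)   # reference white

def ycbcr_to_lab(y, cb, cr):
    """8-bit video-range Y'CbCr -> CIE L*a*b* via R'G'B' and XYZ.

    Assumes BT.709 luma coefficients, the sRGB transfer curve and
    primaries, and a D65 white point; match these to your standard.
    """
    # 1) Undo video-range scaling (16-235 luma, 16-240 chroma).
    yp = (y - 16.0) / 219.0
    pb = (cb - 128.0) / 224.0
    pr = (cr - 128.0) / 224.0
    # 2) Y'PbPr -> nonlinear R'G'B' (BT.709: Kr=0.2126, Kb=0.0722).
    rp = yp + 1.5748 * pr
    gp = yp - 0.18733 * pb - 0.46813 * pr
    bp = yp + 1.8556 * pb
    # 3) Undo the per-channel nonlinearity (sRGB EOTF used here).
    def lin(c):
        c = min(max(c, 0.0), 1.0)
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(rp), lin(gp), lin(bp)
    # 4) Linear RGB -> XYZ (sRGB/709 primaries, D65).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    yy = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # 5) XYZ -> L*a*b* relative to the D65 white point.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, yy, z), D65))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Video white (235, 128, 128) lands at L* ≈ 100 with a* and b* near zero, and video black (16, 128, 128) at L* = 0, which is a quick sanity check on the constants.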