Direct conversion from YCbCr to CIE L* a* b*

I would like to convert a pixel value in YUV (YCbCr) to the CIE L* a* b* color space. Do I have to go through RGB and CIE XYZ, or does anyone know a formula for direct conversion?

You need to go through each step. YCbCr is often encoded over video range (16-235 for luma and 16-240 for chroma at 8 bits), and that needs to be converted to XYZ using a particular video RGB space definition (i.e. Rec. 709 for high definition), which involves undoing the per-channel non-linearity of the RGB and then multiplying by the RGB-to-XYZ primary matrix. You then need to supply a white point (typically D65, the one in the RGB space definition), apply a different non-linearity, and then another matrix to produce L*a*b*. I doubt there is much efficiency to be gained by combining all these into one transform.
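A rough MATLAB sketch of that chain, assuming an 8-bit video-range YCbCr frame already loaded in a variable ycbcr (note that MATLAB's ycbcr2rgb uses the BT.601 matrix, so Rec. 709 material strictly needs a different matrix):
rgb = ycbcr2rgb(ycbcr);                 % video-range YCbCr -> gamma-encoded RGB
lab = rgb2lab(rgb);                     % undoes the non-linearity, applies the
                                        % RGB->XYZ matrix, then XYZ -> L*a*b*
                                        % (sRGB primaries, D65 white by default)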

Related

How to implement grayscale morphology to detect round objects in a grayscale image in MATLAB?

There are many ways to implement mathematical morphology on binary images, such as imerode and imdilate. These simple operations are also used to detect different objects/shapes in binary images, but the problem I am facing right now is applying these operations, i.e. erode, dilate and the others, to a grayscale image without converting it to binary.
selement = strel('disk',5); % disk-shaped structuring element used in morphology
erodeimage = imerode(image,selement); % as far as I know, this only works on binary images
The above code is for binary morphology; how do I apply the same concept to a grayscale image?
Note: if you have any resources regarding grayscale mathematical morphology, kindly provide them or a useful link.
There is a mathematical morphology (MM) library in MATLAB (the Image Processing Toolbox). MM operations are usually shown on binary images as examples/illustrations, but most of the time they are performed at gray level; imerode and imdilate accept grayscale images directly.
I think that the fastest C++ library is SMIL, and you can call it from MATLAB. Another fast one, in C, is that one (optimized opening/closing in a single pass).
But if you want to understand dilation at gray level, here is how it works: for a given pixel p, you examine the values of all the pixels in its neighborhood (defined by the structuring element), and you assign to p the highest value in that neighborhood. You do this for each pixel in your image. The formula is (f ⊕ B)(p) = max { f(q) : q ∈ B(p) }.
That is in fact a rank filter, like the median, but instead of taking the median value you take the max (or the min for erosion). Obviously that's the basic definition, and faster algorithms exist, like the one developed in the library I pointed to.
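As a minimal illustration of the rank-filter view (this sketch assumes the Image Processing Toolbox; the demo image name is just an example):
img = imread('cameraman.tif');          % built-in grayscale demo image
se  = strel('disk', 5);                 % disk structuring element
dil = imdilate(img, se);                % max over each disk neighborhood
ero = imerode(img, se);                 % min over each disk neighborhood
dil2 = ordfilt2(img, 25, true(5));      % same idea as a rank filter: the 25th
                                        % of 25 sorted values in a 5x5 window = max
figure; subplot(1,3,1); imshow(img); subplot(1,3,2); imshow(dil); subplot(1,3,3); imshow(ero);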

14-Bit RGB to YCbCr

I have a 14-bit image that I'd like to convert to the YCbCr color space. As far as I know, the conversions are written for 8-bit images. For instance, when I use the MATLAB function rgb2ycbcr and convert the result back to RGB, the output is all white. It is very important for me not to lose any information. What I want to do is to separate luminance from chroma, do some processing, and convert back to RGB.
The YCbCr conversion from the RGB colour space was specifically designed for images with 8-bit colour channels. The scaling factors and constants are tailored so that the input is a 24-bit RGB image (8 bits per channel; by the way, your notation is confusing, since xx-bit RGB usually states the total number of bits required to represent the image).
One suggestion I could make is to rescale your channels independently so that they all lie in [0,1]. rgb2ycbcr can accept floating-point inputs so long as they're in the range [0,1]. Judging from your context, you have 14 bits per colour channel. Therefore, you can simply do this, given that your image is stored in A and the output will be stored in B:
B = rgb2ycbcr(double(A) / (2^14 - 1));
You can then process your chroma and luminance components using the output of rgb2ycbcr. Bear in mind that the components will also be normalized to [0,1]. Do your processing, then convert back using ycbcr2rgb, and rescale the output by 2^14 - 1 to bring your image back to 14-bit RGB per channel. Assuming Bout is your output image after processing in the YCbCr colour space, do:
C = round((2^14 - 1)*ycbcr2rgb(Bout));
We round because this will most likely produce floating-point values, and your image colour planes need to be unsigned integers.
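Putting it together, a minimal round-trip sketch (assuming A is a uint16 array holding your 14-bit data; the processing step is just a placeholder):
scale = 2^14 - 1;                            % full scale for 14 bits per channel
B = rgb2ycbcr(double(A) / scale);            % normalized [0,1] YCbCr
Y = B(:, :, 1);                              % luma; B(:,:,2) and B(:,:,3) are chroma
% ... process Y and/or the chroma planes here ...
Bout = B; Bout(:, :, 1) = Y;
C = uint16(round(scale * ycbcr2rgb(Bout)));  % back to 14-bit RGB as integers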

Grayscale image and L*a*b space in MATLAB

I have a bunch of images, the vast majority of which are color (RGB) images. I need to apply some spatial features in the three different channels of their Lab color space. The conversion from the RGB color space to the Lab color space is straightforward through rgb2gray. However, this naturally fails when the image is grayscale (it consists of one channel only, with the numerical representation being double, uint8, anything really).
I am familiar with the fact that the "luminance" (L) channel of the Lab color space is essentially the grayscaled original RGB image. This question, however, is of a different nature; what I'm asking is: given an image that is already grayscale, I trivially get the L channel in Lab color space. What should the a and b channels be? Should they be zero? The following example, using the built-in "peppers" image, shows the visual effect of doing so:
I = imread('peppers.png');
figure; imshow(I, []);
Lab = rgb2gray(I);
Lab(:, :, 2) = 0;
Lab(:, :, 3) = 0;
figure; imshow(Lab, []);
If you run this code, you will note that the second imshow outputs a reddish version of the first image, resembling an old darkroom photo. I admit to not being knowledgeable about what the a and b color channels represent, so I don't understand how I should deal with them in grayscale images, and I was looking for some assistance.
An XY Type of Question.
In CIELAB, a* and b* at 0 mean no chroma: i.e. it's greyscale.
BUT WAIT THERE'S MORE! If you're having problems it's because that's not the actual question you need answered:
Incorrect Assumptions
First off, no: the L* of L*a*b* is NOT luminance.
Luminance (L or Y) is a linear measure of light.
L* (Lstar) is perceptual lightness: it follows human perception (more or less, depending on a bazillion contextual things).
rgb2gray
rgb2gray does not convert to L*a*b*
Also, unfortunately some of the MATLAB functions for colorspaces have errors in implementation.
If you want to convert an sRGB color/image/pixel to LAB, then you need to follow this flowchart:
sRGB -> linear RGB -> CIEXYZ -> CIELAB
If you only want the greyscale or lightness information, with no color you can:
sRGB -> linear RGB -> CIE Y (spectral weighting) -> L* (perceptual lightness)
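A minimal sketch of that greyscale path, assuming a [0,1] sRGB image (the piecewise sRGB transfer curve and the Rec. 709 weights below are standard; the file name is just an example):
srgb = im2double(imread('peppers.png'));
% 1) sRGB -> linear RGB: undo the piecewise sRGB transfer curve
lin = (srgb <= 0.04045) .* (srgb ./ 12.92) + ...
      (srgb >  0.04045) .* (((srgb + 0.055) ./ 1.055) .^ 2.4);
% 2) linear RGB -> CIE Y: spectral weighting with the sRGB/Rec. 709 primaries
Y = 0.2126 * lin(:,:,1) + 0.7152 * lin(:,:,2) + 0.0722 * lin(:,:,3);
% 3) CIE Y -> L*: the CIE lightness function, D65 white with Yn = 1
Lstar = zeros(size(Y));
big = Y > (6/29)^3;
Lstar(big)  = 116 * Y(big) .^ (1/3) - 16;
Lstar(~big) = (29/3)^3 * Y(~big);       % linear segment near black
figure; imshow(Lstar / 100);            % scale [0,100] down to [0,1] for display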
I discuss this simple method of sRGB to L* in this post
And if you want to see more in depth details, I suggest Bruce Lindbloom.

What range are L*a*b* values restricted to for a given reference white point?

I've got some images I want to do things in CIE L*a*b* with. What range can I expect the values to be in, given the initial sRGB values are in the range [0,1]?
I get my images like the following:
im_rgb = im2double(imread('/my/file/path/image.jpg'));
% ...do some muddling about with im_rgb, keeping range [0,1]
xform = makecform('srgb2lab');
im_lab = applycform(im_rgb, xform);
For starters, I'm reasonably sure that L* will be 0-100. However, I found this thread, which notes that "... a* and b* are not restricted to lie in the range [-100,100]."
Edit:
MATLAB's default whitepoint is evaluated by whitepoint('ICC'), which returns 0.9642, 1, 0.8249. I'm using this value, as I'm not sure what else to use.
As I'm always using the same (default) transformation, and the input colors are always real colors (with each of R, G, and B in [0,1]), their equivalent L*a*b* representations are also real colors. Will these L*a*b* values also be bounded? If so, what are they bounded by, or how can I determine the boundaries?
You are basically asking how to find the boundary of the sRGB space in LAB, right?
So starting with L*: yes, it will be bound between 0 and 100. This is by definition. If you look at the formula for conversion from XYZ to LAB, you'll see that L = 116*(Y/Ywhitepoint)^(1/3) - 16 (for ratios above a small threshold). When you are at sRGB white, that Y ratio becomes 1, which makes the equation 116 - 16 = 100. A similar thing happens at black, where the formula basically collapses to (4/29) * 116 - 16 = 0.
Things are a little more complicated with a* and b*. Since the XYZ -> LAB conversion is not linear, the conversion doesn't make an easily described shape. But the outer surface of the sRGB cube becomes the outer boundary of the sRGB gamut in LAB. What this means is you can take the extremes, such as the blue primary sRGB [0, 0, 1], convert to LAB, and find what should be the furthest extent on the b* axis: approximately -108. When you do that for all the corners of the sRGB cube, you'll have a good idea about the extent of the sRGB space in LAB.
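For instance, a quick way to tabulate the corners in MATLAB (this sketch assumes rgb2lab, available in R2014b and later, with its default sRGB/D65 settings):
corners = [0 0 0; 1 0 0; 0 1 0; 0 0 1; 1 1 0; 1 0 1; 0 1 1; 1 1 1];
lab = rgb2lab(corners);                 % one row per corner: [L* a* b*]
disp([corners lab]);                    % e.g. blue gives a b* of about -108
fprintf('a* range: [%.1f, %.1f]\n', min(lab(:,2)), max(lab(:,2)));
fprintf('b* range: [%.1f, %.1f]\n', min(lab(:,3)), max(lab(:,3)));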
Most applications (notably Photoshop) clamp the encoding of the a* and b* channels to [-128, 127]. This works fine in practice, but some large RGB spaces like ProPhoto RGB actually extend beyond this. Generally this doesn't have much practical consequence, because most of those colors are imaginary, i.e. they sit outside the spectral locus.

How to detect colour from an image in MATLAB?

We are doing a MATLAB-based robotics project which sorts objects based on their colour, so we need an algorithm to detect a specific colour in the image captured from a camera using MATLAB.
It would be a great help if someone could help me with it. Here is the video of the project.
In response to Amro's answer:
The five squares above all have the same Hue value in HSV space. Selecting by Hue is helpful, but you'll want to impose some constraints on Saturation and Value as well.
HSV allows you to describe color in a more human-meaningful way, but you still need to look at all three values.
As a starting point, I would use the RGB space and the Euclidean norm to detect whether a pixel has a given color. Typically, you have 3 values for a pixel: [red green blue]. You also have 3 values defining a target color: [255 0 0] for red. Compute the Euclidean distance between those two vectors, and apply a decision threshold to classify the color of your pixel.
Eventually, you may want to get rid of the luminance factor (i.e. is it a bright red or a dark red?). You can switch to HSV space and use the same norm on the H value, or you can use [red/green blue/green] vectors. Before that, apply a low-pass filter to the images, because divisions (also present in the rgb2hsv transform) tend to increase noise.
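A minimal sketch of that distance-threshold idea (the image, target colour, and threshold here are all assumptions to tune; the subtraction relies on implicit expansion, R2016b+):
img = im2double(imread('peppers.png')); % any RGB image, normalized to [0,1]
target = reshape([1 0 0], 1, 1, 3);     % red as the target colour
dist = sqrt(sum((img - target).^2, 3)); % per-pixel Euclidean distance
mask = dist < 0.5;                      % decision threshold (an assumption)
figure; imshow(mask);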
You probably want to convert to the HSV colorspace, and detect colors based on the Hue values. MATLAB offers the RGB2HSV function.
Here is an example submission on File Exchange that illustrates color detection based on hue.
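For example, a minimal hue-based mask for reddish pixels (all thresholds below are assumptions; red needs a wrap-around test because it sits at both ends of the hue circle):
hsv = rgb2hsv(im2double(imread('peppers.png')));
h = hsv(:,:,1); s = hsv(:,:,2); v = hsv(:,:,3);
mask = (h < 0.05 | h > 0.95) & s > 0.4 & v > 0.3;
figure; imshow(mask);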
For obtaining a single-color mask, first of all convert the RGB image to gray using rgb2gray. Also extract the desired color plane from the RGB image (e.g. for obtaining the red plane, take rgb_img(:,:,1)). Subtract the gray image from that plane and threshold the result, as sketched below.
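A minimal sketch of that difference approach for a red mask (the threshold and clean-up size are assumptions; rgb_img is assumed to be a uint8 RGB image):
gray = rgb2gray(rgb_img);               % grayscale version
red  = rgb_img(:, :, 1);                % red color plane
diffmap = imsubtract(red, gray);        % large where red dominates
mask = imbinarize(diffmap, 0.18);       % threshold, relative to the uint8 range
mask = bwareaopen(mask, 50);            % remove small specks
figure; imshow(mask);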