I implemented the SLIC algorithm and obtained the labels. I would now like to compute a color feature vector that contains the average of the color features for each region. For each pair of neighboring regions, if the Euclidean distance between their feature vectors is less than a threshold, I will merge the two regions, and I will do this for all pairs of neighboring regions, repeating these steps until no pair of regions can be merged. However, I don't know how to implement those steps.
There are a few choices for your color features, and they really depend on your colorspace and what information you are looking for.
If your objective is to find objects that are the same color (invariant to lighting), I would strongly suggest the HSV colorspace. You convert your regular RGB image using rgb2hsv. The HSV colorspace has three channels (just like RGB): channel 1 = H = Hue = the color, channel 2 = S = Saturation = how vivid the color is, channel 3 = V = brightness Value = how bright a color is. All values are between 0 and 1. Again, if you want to find colors invariant of lighting, your feature would simply be the Hue channel. One thing to note about the Hue channel is that it is actually cyclic, so 0 and 1 are the same color (red), and your distance has to wrap around. For instance, pixel A has H=.7, pixel B has H=.3, and pixel C has H=.01. Which is closer to pixel A? You would immediately guess pixel B, since delta_H=.4, but once you wrap around, delta_H for A and C is actually only 0.31.
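A one-line sketch of that wrap-around distance (h1 and h2 are hue values in [0,1]):
hueDist = @(h1, h2) min(abs(h1 - h2), 1 - abs(h1 - h2));
hueDist(0.7, 0.3)    % 0.40
hueDist(0.7, 0.01)   % 0.31, so pixel C is actually closer to pixel A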
If you are interested in more than just the simplistic color model by hue, other choices are YCbCr, YUV (most people just use YCbCr since there is no true YUV in MATLAB), and CIE (also not completely native to MATLAB, but it is supported, as in this example). Each of these represents the image brightness in the first channel, and the colors are represented by the last two channels. Using the last two channels, you could easily plot colors on a 2D Cartesian plane, with one axis being channel 2 and the other being channel 3 (this example is specifically the YCbCr colorspace).
The similarity measure could then be the Euclidean distance between two colors.
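For instance, a minimal sketch of that chroma-only comparison, with two made-up mean region colors given as RGB triplets in [0,1]:
colorA = [0.80 0.20 0.20];    % hypothetical mean color of region A
colorB = [0.70 0.25 0.20];    % hypothetical mean color of region B
ycbcrA = rgb2ycbcr(colorA);   % rgb2ycbcr accepts 1-by-3 colormap-style input
ycbcrB = rgb2ycbcr(colorB);
d = norm(ycbcrA(2:3) - ycbcrB(2:3));   % distance over Cb and Cr only, ignoring luma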
I'm guessing your overall goal is some kind of compression. So what I would do is simply replace pixel values: if pixel A and pixel B are similar, make the value of pixel B equal to pixel A. This means that every iteration reduces the total number of distinct colors in the image, whereas with averages you are still maintaining lots of different colors. Think of it this way:
Replace:
1. Iteration 1: pixel A=x, B=x+delta; they are close enough, so you say A=B=x
2. Iteration 2: pixel B=x, C=x-delta; they are close, so you say B=C=x
3. At this point you have A=B=C=x, so the number of colors drops from 3 to 1
Average:
1. Iteration 1: pixel A=x, B=x+delta; they are close, so now A=B=x+0.5*delta
2. Iteration 2: pixel B=x+0.5*delta, C=x-delta; they are close, so now B=C=x-0.25*delta
3. At this point A=x+0.5*delta while B=C=x-0.25*delta, so you still have two colors, and A no longer matches B
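As for implementing the merge loop itself, here is a rough sketch. It assumes labels is your SLIC label matrix (positive integer labels) and im is the RGB image as double in [0,1]; the threshold T and the dilation-based neighbor test are placeholder choices to tune or replace.
T = 0.05;   % similarity threshold (assumption; tune it)
merged = true;
while merged
    merged = false;
    ids = unique(labels)';
    % mean RGB color of every region
    means = zeros(max(ids), 3);
    for k = ids
        mask = (labels == k);
        for c = 1:3
            ch = im(:,:,c);
            means(k, c) = mean(ch(mask));
        end
    end
    % look for a mergeable pair of neighboring regions
    for k = ids
        % neighbors = labels touched when region k is dilated by one pixel
        nb = unique(labels(imdilate(labels == k, ones(3)) & labels ~= k))';
        for j = nb
            if norm(means(k, :) - means(j, :)) < T
                labels(labels == j) = k;   % merge j into k (replace, not average)
                merged = true;
                break                      % recompute the means before merging more
            end
        end
        if merged, break, end
    end
end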
I want to display an image (e.g. with imshow) and use a colormap to represent the values of my data points.
However, colormap only gives the option to be dependent on a single variable, but I want a "2D colormap" which depends on two variables.
For example, I have a simple image of 2x2 pixels:
img = [
1 1 5 6;
1 2 8 7;
2 1 4 3;
2 2 15 3]
Here the first two values of each row are the coordinates, the other two are the values describing the pixel (call them x and y).
When displaying the image I want to use a 2D colormap, for example something like this, which picks a colour depending on both variables (x and y).
Is there an option in MATLAB to do this, possibly in one of the extra toolboxes?
If not, can this be done manually? I was thinking that by overlaying a greyscale image given by the first value over a colormap image given by the second value, a similar effect could be achieved.
In your 2D colormap you are actually using the HSV color space.
Basically, your x axis is Hue and your y axis is Saturation. You can convert any value into this space if it's properly scaled. If you make sure that you scale your 3rd and 4th columns to the [0,1] interval, you can easily do
colorRGB = hsv2rgb([val3, val4, 0.5]);
If you perform this operation for each pixel, you'll get the image you want.
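As a sketch of that per-pixel loop over the example matrix from the question (assuming the two value columns are first scaled to [0,1], and picking a fixed V of 0.5 as above):
img = [1 1 5 6; 1 2 8 7; 2 1 4 3; 2 2 15 3];
r = img(:,1); c = img(:,2);             % pixel coordinates
x = img(:,3); y = img(:,4);             % the two values per pixel
x = (x - min(x)) / (max(x) - min(x));   % scale to [0,1]
y = (y - min(y)) / (max(y) - min(y));
hsvImg = zeros(max(r), max(c), 3);
for k = 1:numel(r)
    hsvImg(r(k), c(k), :) = [x(k), y(k), 0.5];   % H from x, S from y, fixed V
end
rgbImg = hsv2rgb(hsvImg);
imshow(rgbImg, 'InitialMagnification', 'fit')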
I gave an extended explanation of how HSV works here.
I want to compute the extent of haze in an image for each block. This is done by finding the dark channel value, which is used to reflect the extent of haze. The concept comes from Kaiming He's paper on Single Image Haze Removal Using Dark Channel Prior.
The dark channel value for each block is defined as follows:
I_dark(x,y) = min_{c in {r,g,b}} ( min_{(x',y') in omega(x,y)} I^c(x',y') )
where I^c(x',y') denotes the intensity at a pixel location (x',y') in color channel c (one of the Red, Green, or Blue color channels), and omega(x,y) denotes the neighborhood of the pixel location (x,y).
I'm not sure how to translate this equation into MATLAB.
If I correctly understand what this equation is asking for, you essentially extract a pixel block centered at each (x,y) in the image and determine the minimum value within this block for each of the red, green, and blue channels. This results in 3 values, where each value is the minimum within the pixel block for one channel. From these 3 values, you choose the minimum, and that is the final result for location (x,y) in the image.
We can do this very easily with ordfilt2, which applies an order-statistics filter to your image. You specify a mask of which pixels in the neighbourhood need to be analyzed; it gathers those pixels in the neighbourhood that are deemed valid and sorts their intensities. You then choose the rank of the pixel you want in the end. A lower rank means a smaller value, while a larger rank denotes a larger value. In our case, the mask would be set to all logical true and be the size of the neighbourhood you want to analyze.
Because you want a minimum, you would choose rank 1 of the result.
You would apply this to each of the red, green and blue channels, then for each spatial location, choose the minimum out of the three. Therefore, supposing your image is stored in im and you want to apply an m x n neighbourhood to the image, do something like this:
%// Find minimum intensity for each location for each channel
out_red = ordfilt2(im(:,:,1), 1, true(m, n));
out_green = ordfilt2(im(:,:,2), 1, true(m, n));
out_blue = ordfilt2(im(:,:,3), 1, true(m, n));
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
out_dark will contain the dark channel image you desire. The key to calculating what you want is in the last two lines of code. out contains the minimum values for each spatial location in the red, green and blue channels, all concatenated along the third dimension to produce a 3D matrix. Afterwards, I apply the min operation along the third dimension to choose, for each pixel location, which of the red, green and blue values gives the output value.
With an example, if I use onion.png, which ships with MATLAB, and specify a 5 x 5 neighbourhood (m = 5, n = 5), you can compare the original image against the dark channel result.
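Putting it together, here is a runnable version of the above under those settings (the figure calls are just for inspection):
im = imread('onion.png');
m = 5; n = 5;
out_red   = ordfilt2(im(:,:,1), 1, true(m, n));
out_green = ordfilt2(im(:,:,2), 1, true(m, n));
out_blue  = ordfilt2(im(:,:,3), 1, true(m, n));
out_dark  = min(cat(3, out_red, out_green, out_blue), [], 3);
figure; imshow(im);         %// original
figure; imshow(out_dark);   %// dark channel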
Sidenote
If you're an image processing purist, finding the minimum value for pixel neighbourhoods in a grayscale image is the same as finding the grayscale morphological erosion. You can consider each red, green or blue channel to be its own grayscale image. As such, we could simply replace ordfilt2 with imerode and use a rectangle structuring element to generate the pixel neighbourhood you want to use to apply to your image. You can do this through strel in MATLAB and specify the 'rectangle' flag.
As such, the equivalent code using morphology would be:
%// Find minimum intensity for each location for each channel
se = strel('rectangle', [m n]);
out_red = imerode(im(:,:,1), se);
out_green = imerode(im(:,:,2), se);
out_blue = imerode(im(:,:,3), se);
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
You should get the same results as with ordfilt2. I haven't done any tests, but I highly suspect that imerode is faster than ordfilt2, at least on higher resolution images. MATLAB's morphological routines are highly optimized and designed specifically for images, whereas ordfilt2 is meant for more general 2D signals.
Alternatively, you can use the Visibility Metric to see how hazy an image is. It turns out someone wrote a beautiful implementation of it as well. The lower the metric, the heavier the haze in the image.
This metric can also be used as a pre-processing step to automatically adjust dehazing parameters.
I've got some images I want to work with in CIE L*a*b*. What range can I expect the values to be in, given that the initial sRGB values are in the range [0,1]?
I get my images like the following:
im_rgb = im2double(imread('/my/file/path/image.jpg'));
% ...do some muddling about with im_rgb, keeping range [0,1]
xform = makecform('srgb2lab');
im_lab = applycform(im_rgb, xform);
For starters, I'm reasonably sure that L* will be 0-100. However, I found this thread, which notes that "... a* and b* are not restricted to lie in the range [-100,100]."
Edit:
MATLAB's default whitepoint is evaluated by whitepoint('ICC'), which returns [0.9642, 1, 0.8249]. I'm using this value, as I'm not sure what else to use.
As I'm always using the same (default) transformation, and the input colors are always real colors (in [0,1] for each of R, G, and B), their equivalent L*a*b* representations are also real colors. Will these L*a*b* values also be bounded? If so, what are they bounded by, and how can I determine the boundaries?
You are basically asking how to find the boundary of the sRGB space in LAB, right?
So starting with L*: yes, it will be bounded between 0 and 100, by definition. If you look at the formula for conversion from XYZ to LAB, you'll see that L = 116*f(Y/Yn) - 16, where f is essentially a cube root. When you are at sRGB white, the ratio Y/Yn is 1, which makes the equation 116 - 16 = 100. A similar thing happens at black, where f collapses to 4/29, and 116*(4/29) - 16 = 0.
Things are a little more complicated with a* and b*. Since the XYZ -> LAB conversion is not linear, the converted space doesn't make an easily described shape. But the outer surface of the sRGB cube will become the outer boundary of the sRGB space in LAB. What this means is you can take the extremes, such as the blue primary sRGB [0, 0, 1], convert it to LAB, and find what should be the furthest extent on the b axis: approximately -108. When you do that for all the corners of the sRGB cube, you'll have a good idea about the extent of the sRGB space in LAB.
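A quick sketch of that check, pushing all eight corners of the sRGB cube through the same cform used in the question:
corners = [0 0 0; 1 0 0; 0 1 0; 0 0 1; ...
           1 1 0; 1 0 1; 0 1 1; 1 1 1];
xform = makecform('srgb2lab');
lab = applycform(corners, xform);   % one L*a*b* triplet per corner
[min(lab); max(lab)]                % per-column extremes of L*, a*, b*
Keep in mind the corners only give a rough bounding box; because the transform is nonlinear, the true extremes along the cube's edges can differ slightly.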
Most applications (notably Photoshop) clamp the encoding of the a* and b* channels between -128 and 127. This works fine in practice, but some large RGB spaces like ProPhoto RGB actually extend beyond this. Generally this doesn't have much practical consequence, because most of those colors are imaginary, i.e. they sit outside the spectral locus.
Is there an efficient way to fill zero-valued pixels that lie between non-zero pixels with the nearest non-zero value, while leaving the rest of the zero pixels untouched?
To clarify, I am looking to inpaint those pixels whose closest distance to a non-zero pixel is lower than a given value (e.g. 4 pixels).
The image is initially represented as a matrix of uint32 integers.
In the example above, all the thin cracks between the colored regions should be filled with the surrounding color, while large black regions should remain the same (i.e. the routine should inpaint the pixels between the colored regions).
I imagine there is a way to do this via interpolation. In any case, I am looking for a relatively efficient solution.
Given an input matrix A:
b = imopen(A == 0, ones(3,3));    % keep only the big zero regions (thin cracks get opened away)
c = imdilate(A, ones(3,3));       % candidate fill values taken from neighbouring pixels
d = A;                            % start from the original so non-zero pixels stay untouched
d(A == 0 & ~b) = c(A == 0 & ~b);  % inpaint only the thin cracks
I haven't fully tested it, so there may still be problems with the code (if you change the code to make it work, please edit my answer).
We are doing a MATLAB-based robotics project which sorts objects based on their color, so we need an algorithm to detect a specific color in the image captured from a camera using MATLAB.
It would be a great help if someone could help me with this; here is the video of the project.
In response to Amro's answer:
The five squares above all have the same Hue value in HSV space. Selecting by Hue is helpful, but you'll want to impose some constraints on Saturation and Value as well.
HSV allows you to describe color in a more human-meaningful way, but you still need to look at all three values.
As a starting point, I would use the RGB space and the Euclidean norm to detect whether a pixel has a given color. Typically, you have 3 values for a pixel: [red green blue]. You also have 3 values defining a target color: [255 0 0] for red. Compute the Euclidean norm between those two vectors, and apply a decision threshold to classify the color of your pixel.
Eventually, you may want to get rid of the luminance factor (i.e. is it a bright red or a dark red?). You can switch to HSV space and use the same kind of distance on the H value, or you can use [red/green blue/green] vectors. Before that, apply a low-pass filter to the images, because divisions (also present in the rgb2hsv transform) tend to amplify noise.
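As a sketch of that RGB-distance classification (the test image, target color, and threshold here are placeholders to adapt to your setup):
im = im2double(imread('peppers.png'));               % any RGB test image
target = reshape([1 0 0], 1, 1, 3);                  % target color: pure red, in [0,1]
dist = sqrt(sum(bsxfun(@minus, im, target).^2, 3));  % Euclidean distance to the target per pixel
mask = dist < 0.4;                                   % decision threshold; tune for your camera
imshow(mask)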
You probably want to convert to the HSV colorspace and detect colors based on the Hue values. MATLAB offers the rgb2hsv function.
Here is an example submission on File Exchange that illustrates color detection based on hue.
To obtain a mask for a single color, first convert the RGB image to grayscale using rgb2gray. Also extract the desired color plane from the RGB image (e.g. rgb_img(:,:,1) for the red plane). Then subtract the grayscale image from that plane and threshold the difference to get the mask.
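A short sketch of that subtraction trick for red (the image name and the threshold are placeholders):
rgb_img = imread('peppers.png');
gray_img = rgb2gray(rgb_img);
red_plane = rgb_img(:,:,1);
% red areas stand out where the red channel exceeds the overall brightness
red_mask = imsubtract(red_plane, gray_img) > 50;
imshow(red_mask)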