I want to compute the extent of haze in an image for each block. This is done by finding the dark channel value, which is used to reflect the extent of haze. This concept comes from Kaiming He's paper, Single Image Haze Removal Using Dark Channel Prior.
The dark channel value for each block is defined as follows:

I_dark(x,y) = min_{c ∈ {R,G,B}} ( min_{(x',y') ∈ Ω(x,y)} I^c(x',y') )

where I^c(x',y') denotes the intensity at a pixel location (x',y') in color channel c (one of the Red, Green, or Blue color channels), and Ω(x,y) denotes the neighborhood centered at the pixel location (x,y).
I'm not sure how to translate this equation into MATLAB. How would I do this?
If I correctly understand what this equation is asking for, you are essentially extracting a pixel block centered at each (x,y) in the image and determining the minimum value within this block separately for the red, green, and blue channels. This results in 3 values, where each value is the minimum within the pixel block for one channel. From these 3 values, you choose the smallest, and that is the final result for location (x,y) in the image.
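To make the correspondence with the equation explicit, here's a literal (and slow) brute-force sketch. It assumes onion.png as a stand-in image and a 5 x 5 neighbourhood; the ordfilt2 approach below is what you'd actually use in practice:
%// Brute-force translation of the equation (for clarity only)
im = im2double(imread('onion.png')); %// stand-in image
[rows, cols, ~] = size(im);
half = 2; %// 5 x 5 neighbourhood -> half-width of 2
out_dark = zeros(rows, cols);
for y = 1 : rows
    for x = 1 : cols
        %// Clamp the neighbourhood Omega(x,y) to the image borders
        r1 = max(y - half, 1); r2 = min(y + half, rows);
        c1 = max(x - half, 1); c2 = min(x + half, cols);
        block = im(r1:r2, c1:c2, :); %// neighbourhood over all three channels
        out_dark(y, x) = min(block(:)); %// min over the block and over R, G, B
    end
end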
We can do this very easily with ordfilt2. What ordfilt2 does is apply an order-statistics filter to your image. You specify a mask of which pixels need to be analyzed in your neighbourhood; it gathers those pixels in the neighbourhood that are deemed valid and sorts their intensities. You then choose the rank of the pixel you want in the end. A lower rank means a smaller value, while a larger rank denotes a larger value. In our case, the mask would be set to all logical true and be the size of the neighbourhood you want to analyze.
Because you want a minimum, you would choose rank 1 of the result.
You would apply this to each of the red, green and blue channels, then for each spatial location choose the minimum of the three. Therefore, supposing your image was stored in im and you wanted to apply an m x n neighbourhood to the image, do something like this:
%// Find minimum intensity for each location for each channel
out_red = ordfilt2(im(:,:,1), 1, true(m, n));
out_green = ordfilt2(im(:,:,2), 1, true(m, n));
out_blue = ordfilt2(im(:,:,3), 1, true(m, n));
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
out_dark will contain the dark channel image you desire. The key to calculating what you want is in the last two lines of code. out contains the minimum values for each spatial location in the red, green and blue channels, all concatenated in the third dimension to produce a 3D matrix. After that, I apply the min operation along the third dimension to finally choose which of the red, green and blue values at each pixel location gives the output value.
With an example, if I use onion.png, which is one of MATLAB's built-in example images, and specify a 5 x 5 neighbourhood (or m = 5, n = 5), this is what the original image looks like, as well as the dark channel result:
Sidenote
If you're an image processing purist, finding the minimum value for pixel neighbourhoods in a grayscale image is the same as finding the grayscale morphological erosion. You can consider each red, green or blue channel to be its own grayscale image. As such, we could simply replace ordfilt2 with imerode and use a rectangle structuring element to generate the pixel neighbourhood you want to use to apply to your image. You can do this through strel in MATLAB and specify the 'rectangle' flag.
As such, the equivalent code using morphology would be:
%// Find minimum intensity for each location for each channel
se = strel('rectangle', [m n]);
out_red = imerode(im(:,:,1), se);
out_green = imerode(im(:,:,2), se);
out_blue = imerode(im(:,:,3), se);
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
You should get the same results as with ordfilt2. I haven't done any tests, but I highly suspect that imerode is faster than ordfilt2... at least on higher resolution images. MATLAB has highly optimized morphological routines that are designed specifically for images, whereas ordfilt2 handles more general 2D order-statistic filtering.
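If you want to check that on your own data, a quick and unscientific comparison with timeit might look like the sketch below; actual timings will depend on the image, the neighbourhood size and your MATLAB version:
%// Rough timing sketch: ordfilt2 vs. imerode on a single channel
im = imread('onion.png'); %// stand-in image
chan = im(:,:,1);
m = 5; n = 5;
se = strel('rectangle', [m n]);
t_ord = timeit(@() ordfilt2(chan, 1, true(m, n)));
t_ero = timeit(@() imerode(chan, se));
fprintf('ordfilt2: %.5f s, imerode: %.5f s\n', t_ord, t_ero);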
Or you can use the Visibility Metric to see how hazy an image is. It turns out someone has written a nice implementation of it as well. The lower the metric, the hazier the image.
This metric can also be used as a pre-processing step to automatically adjust dehazing parameters.
I want to use the function activecontour in MATLAB to segment a color image, but I don't know how to create the mask.
The documentation says:
For color and multi-channel images, mask must be a 2-D logical array where the first two dimensions match the first two dimensions of the image A.
But I don't understand what has to be done. Any suggestions?
Let's say that the size of your image is NxM pixels, where N is the number of rows and M the number of columns.
If it is a color image, each pixel is probably composed of 3 values: one for the intensity of red (R), one for green (G) and one for blue (B). These are called the color channels. So the real shape of the matrix representing your image is NxMx3.
What the documentation says is that the mask should be 2-D, and its dimensions should match the first two dimensions of your image. That means the mask should have the same number of rows and columns as your image, but each pixel of the mask is no longer composed of 3 values; it is composed of a single logical value (0 or 1).
So what you need to do is give the function an NxM matrix with only 0 and 1 as possible values. The doc says that the mask is the:
Initial contour at which the evolution of the segmentation begins, specified as a binary image the same size as A.
So the mask needs to represent an initial guess of the contour. If you already know that what you want to see is in the upper left corner of the image, you can set the initial contour as a square located in the upper left corner for example.
Now, to represent that initial guess by a matrix of logicals, you simply set all the elements of the matrix to 0 and set the elements covering your initial region (for example, that square in the upper left corner) to 1.
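Putting that together, a minimal sketch might look like this (peppers.png and the 300 iterations are just placeholders; substitute your own image and settings):
img = imread('peppers.png');              % example color image, NxMx3
[N, M, ~] = size(img);
mask = false(N, M);                       % 2-D logical mask, same rows/cols as the image
mask(1:floor(N/4), 1:floor(M/4)) = true;  % initial guess: square in the upper left corner
bw = activecontour(img, mask, 300);       % let the contour evolve for 300 iterations
figure; imshow(bw);                       % resulting segmentation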
Let me know if there is something you don't understand; I'd be glad to help.
I implemented the SLIC algorithm to find labels and I obtained the labels. I would like to compute a color feature vector that contains the average of the color features for each region. For each pair of neighboring regions, if the Euclidean distance between their feature vectors is less than a threshold, I will merge the two regions. I will do this for all pairs of neighboring regions, then repeat these steps until no pair of regions can be merged. However, I don't know how to implement those steps.
There are a few choices for your color features, and they really depend on your colorspace and what information you are looking for.
If your objective is to find objects that are the same color (invariant to lighting), I would strongly suggest the HSV colorspace. You convert your regular RGB image using rgb2hsv. The HSV colorspace has three channels (just like RGB): channel 1 = H = Hue = the color, channel 2 = S = Saturation = how vivid the color is, and channel 3 = V = brightness Value = how bright the color is. All values are between 0 and 1. Again, if you want to find colors invariant to lighting, your feature would simply be the Hue channel. One thing to note about the hue channel is that it is actually cyclic, so 0 and 1 are the same color (red), and your distance has to wrap around. For instance, say pixel A has H=.7, pixel B has H=.3 and pixel C has H=.01. Which is closer to pixel A? You would immediately guess pixel B, since delta_H=.4, but with the wrap-around the delta_H between A and C is only 0.31, so C is actually closer.
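Here is a small sketch of that wrap-around hue distance (peppers.png is just a stand-in image):
%// Circular (wrap-around) distance between two hue values in [0,1]
hueDist = @(h1, h2) min(abs(h1 - h2), 1 - abs(h1 - h2));
hsv = rgb2hsv(imread('peppers.png')); %// stand-in image
H = hsv(:,:,1);                       %// hue channel in [0,1] -- this would be your per-pixel feature
hueDist(0.7, 0.3)   %// 0.40
hueDist(0.7, 0.01)  %// 0.31 -- closer, despite the larger raw difference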
If you are interested in more than just the simplistic color model by hue, other choices are YCbCr, YUV (most people just use YCbCr since there is no true YUV in MATLAB) and CIE (also not completely native to MATLAB, but it is supported, as in this example). Each of these represents the image brightness in the first channel, and the color is represented by the last 2 channels. Using the last two channels, you could easily plot colors on a 2D Cartesian plane, with one axis being channel 2 and the other being channel 3, something like this (this example is specifically the YCbCr colorspace),
and the similarity measure could be the Euclidean distance between two colors.
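For instance, a rough sketch of building a per-region color feature from your SLIC output and comparing two regions could look like this (labels is assumed to be your SLIC label image and im your RGB image):
%// Mean chroma (Cb, Cr) feature per SLIC region
ycbcr = double(rgb2ycbcr(im));
Cb = ycbcr(:,:,2); Cr = ycbcr(:,:,3);
nRegions = max(labels(:));
feat = zeros(nRegions, 2);
for k = 1 : nRegions
    mask = (labels == k);
    feat(k,:) = [mean(Cb(mask)), mean(Cr(mask))]; %// average color feature for region k
end
%// Similarity between two regions = Euclidean distance between their features
d12 = norm(feat(1,:) - feat(2,:));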
I'm guessing your overall goal is some kind of compression, so what I would do is simply replace pixel values. If pixel A and pixel B are similar, make the value of pixel B equal to pixel A. This means that every iteration reduces the total number of different colors in the image, whereas with averages you still maintain lots of different colors. Think of it this way:
Replace
1. Iteration 1: pixels A=x and B=x+delta are close enough, so you say A=B=x.
2. Iteration 2: pixels B=x and C=x-delta are close, so you say B=C=x.
3. At this point you have A=B=C=x, so the number of colors has gone from 3 to 1.
Average
1. Iteration 1: pixels A=x and B=x+delta are close, so now A=B=x+.5delta.
2. Iteration 2: pixels B=x+.5delta and C=x-delta are close, so now B=C=x-.25delta.
3. At this point you have A=x+.5delta but B=C=x-.25delta, so you still have 2 different colors instead of 1.
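A rough sketch of that replace-style merging, reusing the labels and feat variables from the sketch above (the threshold is arbitrary, and the neighboring-region / adjacency check from your question is omitted for brevity):
thresh = 10; %// similarity threshold in Cb/Cr units -- tune for your data
merged = true;
while merged %// repeat until no pair of regions can be merged
    merged = false;
    for i = 1 : nRegions
        for j = i+1 : nRegions
            %// Only consider regions that still exist, and merge if similar
            if any(labels(:) == i) && any(labels(:) == j) && ...
                    norm(feat(i,:) - feat(j,:)) < thresh
                labels(labels == j) = i; %// "replace": region j adopts region i's label
                merged = true;
            end
        end
    end
end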
I have this image (8 bit, pseudo-colored, gray-scale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other mathematical operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut part of the border.
My code so far looks like:
clear all
clc
x=imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y = imerode(x, strel('disk', 2));   % erode slightly
y1 = imerode(y, strel('disk', 7));  % erode further to get the interior
z = y - y1;                         % difference should give the border band
figure
z(z < 30) = 0;                      % suppress low-intensity pixels
imshow(z)
The main problem I am encountering using this is that it somewhat changes the intensity of the original images as follows:
So my question is, how do I create such a band across image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary where false represents the background and true represents the foreground. When you're done, you then erode this image using a good structuring element that preserves the roundness of the contours of your objects (disk in your example).
The output of this would be the interior of the large object that is in the image. What you can do is use this mask and set these locations in the image to black so that you can preserve the outer band. As such, try doing something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map, and read both of these into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we take the grayscale image and threshold it so that we capture all of the object pixels. I threshold the image with a value of 10 to avoid some quantization noise that is seen in the image. This binary image is what we will operate on to determine the pixels we want to set to 0 in order to keep the outer border.
Next, we declare a structuring element that is a disk of a radius of 7, then erode the mask. Once I'm done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the locations of the mask to set the values that are internal to the object to 0. The result would be the original image, but having the outer contours of all of the objects remain.
The result I get is:
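As an aside, if you prefer to isolate the band explicitly rather than blacking out the interior, an equivalent sketch (reusing im_b, erode_b and out from above) would be the following; because the background of this image is essentially black already, the result looks the same:
%// Band = object pixels minus the eroded interior
band = im_b & ~erode_b;
band_3D = cat(3, band, band, band);
final2 = out;
final2(~band_3D) = 0; %// keep only the band, black out everything else
figure;
imshow(final2);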
I want to access the red channel of each pixel in my image. I don't want to change it; I just want to identify the pixels with a range of red values. I'm looking for pixels that have colors like RGB(15,0,0), RGB(120,0,0), RGB(200,0,0) and so on. My image is mostly gray; I want to identify the red boxes on it.
I tried:
image = imread('myimage.jpg');
figure; imshow(image);
redPlane = image(:,:,1);
figure; imshow(redPlane);
The second figure displayed is all gray. It took off the red.
You are visualizing the red channel as a grayscale image. Think about it. The image is essentially a 3D matrix. By doing image(:,:,1);, you are accessing the first slice of that image, which is a 2D matrix, and this corresponds to the red components of each pixel. imshow works such that if the input is a 2D matrix, the output is automatically visualized as grayscale. If the input to imshow is a 3D matrix, the output is automatically visualized in colour, where the first, second and third slices of the matrix correspond to the red, green and blue components respectively.
Therefore, by doing imshow on this 2D matrix, the result is obviously grayscale. You're just interpreting the results incorrectly. Here, the whiter the pixel, the more red there is at that location of the image. For example, assuming your image is uint8 (unsigned 8-bit integer), a value of 255 at a particular location means that the pixel has a full red component, whereas a value of 0 means there is no red component at all. This is why it is visualized in black and white.
If you want to display how red a pixel is, then put this into a 3D matrix where the second (green) and third (blue) channels are all zero, while you set the red channel to be from the first slice of your original image. In other words, try this:
imageRed = uint8(zeros(size(image))); %// Create blank image
imageRed(:,:,1) = redPlane; %// Set red channel accordingly
imshow(imageRed); %// Show this image
However, if you just want to process the red channel, then there's no need to visualize it. Just use it straight out of the matrix itself. You said you wanted to look for specific red channel values in your image. Ignoring the green and blue components, you can do something like the following. Let's say we want to create an output Boolean map locationMap such that any location that is true / 1 means it is a location that has a red value you're looking for, and false / 0 means that it isn't. As such, do something like:
redPlane = image(:,:,1);
%// Place values of red you want to check here
redValuesToCheck = [15 20 100];
%// Initialize a boolean map where true
%// means this is a red value we're looking for and
%// false otherwise
locationMap = false(size(redPlane));
%// For each red value we want to check...
for val = redValuesToCheck
%// Find those locations that share this
%// value, and set to true on the boolean map
locationMap(redPlane == val) = true;
end
%// Show the map
imshow(locationMap);
One small subtlety here that you may or may not notice, but I'll bring it up anyway. locationMap is a Boolean variable, and when you use imshow on this, true gets visualized to white while false gets visualized to black.
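As a side note, the loop above can be collapsed into a single vectorized call if you prefer; this sketch does the same check with ismember:
locationMap = ismember(redPlane, redValuesToCheck); %// true where redPlane matches any listed value
imshow(locationMap);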
Minor note
Using image as a variable name is a very bad idea. image is a pre-defined function already included in MATLAB that takes in a matrix of numbers and visualizes it in a figure. You should use something else instead; otherwise, other functions that rely on the image function may fail to run, because they expect the function image but you have shadowed it with a variable instead.
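If you have already shadowed it in your current session, clearing the variable brings the built-in function back; for example:
clear image                  %// remove the shadowing variable from the workspace
which image                  %// should now point back to MATLAB's built-in image function
img = imread('myimage.jpg'); %// and use a safer variable name going forward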
I have 10 grayscale images (2559 x 3105). These images are taken from an X-ray reflectivity measurement. Each image shows the intensity of X-rays and has two spots, except the first image, which has a single, highest-intensity spot. From the second to the tenth image, each has two spots: the first one is the same as in the first image, but the second one differs in location and intensity value. I want to search for and crop these spots. The problem is that when I apply a condition that uses find() to locate the maximum intensity point in the image, it always points to the spot which is common to all images.
Here's some basic image processing code that allows you to select the indices of the spots:
%# read the image
im=rgb2gray(imread('a.jpg'));
%# select only relevant area
d=im(5:545,5:660);
%# set a threshold and filter
thres = (max([min(max(d,[],1)) min(max(d,[],2))])) ;
filt=fspecial('gaussian', 7,1);
%# reduce noise, threshold and smooth the image
d=medfilt2(d);
d=d.*uint8(d>thres);
d=conv2(double(d),filt,'same') ;
d=d.*(d>thres);
%# find connected objects
L = bwlabel(d);
%# or also
CC = bwconncomp(d);
Both L and CC contain information about the indices of the 2 blobs, so you can now select only those parts of the image using them. For example:
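One possible sketch for cropping each spot out of the filtered image d using its bounding box (regionprops accepts the CC structure from bwconncomp directly):
stats = regionprops(CC, 'BoundingBox');
for k = 1 : numel(stats)
    spot = imcrop(d, stats(k).BoundingBox); %# crop the k-th blob
    figure; imshow(spot, []);               %# display with automatic intensity scaling
end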