Exchange phase of 2 images' FFT and reconstruct [duplicate] - matlab

I'm using MATLAB for image processing and I came across some code with the instruction:
imshow(pixel_labels,[]);
When executed, it gives a binary image.
I have checked the function's documentation on Mathworks.com; the closest usage listed is
imshow(I,[low,high]);
but it doesn't say anything about the case where that array is empty ([]).
I tried removing it:
imshow(pixel_labels);
but all I see is a white board. I would like to know what is happening in the first use case (imshow(pixel_labels,[])); I hope that from there I will understand why I get a white board in the second case.

If I type help imshow in MATLAB, the first paragraph reads:
IMSHOW(I,[LOW HIGH]) displays the grayscale image I, specifying the display range for I in [LOW HIGH]. The value LOW (and any value less than LOW) displays as black, the value HIGH (and any value greater than HIGH) displays as white. Values in between are displayed as intermediate shades of gray, using the default number of gray levels. If you use an empty matrix ([]) for [LOW HIGH], IMSHOW uses [min(I(:)) max(I(:))]; that is, the minimum value in I is displayed as black, and the maximum value is displayed as white.
So [] is simply shorthand for [min(pixel_labels(:)) max(pixel_labels(:))].
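As a quick sanity check, the two calls below should produce the same display (a minimal sketch; pixel_labels stands in for whatever label matrix the original code produced):

lo = min(pixel_labels(:));
hi = max(pixel_labels(:));

figure; imshow(pixel_labels, []);        % imshow picks the display range itself
figure; imshow(pixel_labels, [lo hi]);   % the explicit equivalent

As for the white board: if pixel_labels is of class double, plain imshow(pixel_labels) uses the default display range [0 1], so every label value of 1 or more is saturated to white.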

Related

Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, once the background color value of the image is selected, I have to segment all the objects in the image, and the result should be a labeled matrix, of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it in MATLAB. For each pixel (matrix position) I should mark it as visited and then, if its value corresponds to the background, assign 0; if not, assign another value. The objects can be formed by different colors, so in the end I need to segment groups of adjacent pixels, whatever their color is. I also have to use 8-connectivity, so that the green object in the example image is counted as only one object and not four different ones. The objects should also be counted from top to bottom, and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color;
Once you have a binary mask, you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
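A minimal sketch of that approach (rgbImage is a hypothetical name for the input photo; img and bg_color are as above):

% [img, cmap] = rgb2ind(rgbImage, 256);   % optional: RGB -> indexed image, as noted above
bw = img ~= bg_color;                      % true for every non-background pixel
[labels, numObjects] = bwlabel(bw, 8);     % 8-connected labelling; labels run from 1 to numObjects

Note that bwlabel numbers the objects in MATLAB's column-major scan order (down the first column, then the next), which may or may not match the ordering you described; if not, you can renumber the labels afterwards based on each object's first pixel position.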

How to extract LBP features from facial images in MATLAB?

I'm not familiar with Local Binary Patterns (LBP); could anyone help me understand how to extract LBP features from facial images (I need a simple code example)?
While searching, I found this code, but I didn't understand it.
So first of all you need to split the face into a certain number of sections.
For each of these sections you then have to loop through all of the pixels contained within that section and get their values (grayscale or colour values).
For each pixel, check the values of the pixels that border it (the diagonals plus up, down, left and right) and save them.
For each of these directions, compare the neighbour's value with the original pixel's value: if it is greater, assign that direction a 1, and if it is less, assign it a 0.
You should get a list of 1s and 0s from the previous steps. Put these digits together and you get a binary number; convert it to decimal and you have a number assigned to that pixel. Save this number per pixel.
After you have a decimal number for each pixel within a section, you can average all of the values to get a single number for that section.
This may not be the best description of how this works, so here is a useful picture which might help you.
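A rough MATLAB sketch of the per-pixel part of those steps (the file name and variable names are mine, not from the code you found):

I = double(rgb2gray(imread('face.jpg')));   % hypothetical input; any grayscale face image will do
[rows, cols] = size(I);
lbp = zeros(rows, cols);

% The 8 neighbours in a fixed clockwise order, starting at the top-left.
offs = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];

for r = 2:rows-1
    for c = 2:cols-1
        code = 0;
        for k = 1:8
            neighbour = I(r + offs(k,1), c + offs(k,2));
            % The description says "greater than"; >= is the usual LBP convention.
            code = bitshift(code, 1) + (neighbour >= I(r, c));
        end
        lbp(r, c) = code;   % decimal LBP code for this pixel
    end
end

The per-section descriptor then comes from the lbp values inside each section (the description above averages them; a histogram of the codes per section is the more common choice).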
There is an extractLBPFeatures function in the R2015b release of the Computer Vision System Toolbox for MATLAB.
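If that toolbox is available, usage is essentially a one-liner; a sketch (the 'CellSize' parameter is from the toolbox documentation, so check your release):

I = rgb2gray(imread('face.jpg'));                        % hypothetical face image
features = extractLBPFeatures(I, 'CellSize', [32 32]);   % one LBP histogram per 32x32 cell, concatenated into a row vector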

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they come from the same source file.
I assume that one of them is performing some sort of conversion of the grey levels when reading or before displaying. But which one?
And in that case, how can I calibrate one so that they both display the same image?
The following image shows a comparison of the image read.
In the case of the imageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify my code so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
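Displaying imgScaled with the default [0 255] range should then match what ImageJ shows initially; for example (a quick comparison sketch):

figure;
subplot(1,2,1); imshow(img, []);     % full data range, what the original script displayed
title('Full range');
subplot(1,2,2); imshow(imgScaled);   % uint8, so the default [0 255] display range applies
title('Window/level from the DICOM header');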
A normalized CT image (e.g. after the modality LUT transformation) will have intensity values ranging from -1024 to over +2000 Hounsfield units (HU), so an image processing filter should work within this data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast for the RGB display (mapping the image data of interest to 256 or fewer shades of gray). One of the ways to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level value pairs, and each pair is basically one view of the anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.
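If the goal is processing (e.g. thresholding on specific gray levels) rather than display, it is usually safer to work in Hounsfield units via the modality LUT tags instead of a windowed view; a sketch, assuming the Rescale tags are present in the header:

info = dicominfo('I1400001');
raw  = double(dicomread('I1400001'));

% Modality LUT: stored pixel values -> Hounsfield units,
% using Rescale Intercept (0028,1052) and Rescale Slope (0028,1053) when present.
if isfield(info, 'RescaleSlope') && isfield(info, 'RescaleIntercept')
    hu = raw * double(info.RescaleSlope) + double(info.RescaleIntercept);
else
    hu = raw;    % no rescale information in this header; use stored values as-is
end

% Processing thresholds can now be defined in HU, independent of any window/level view.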

Count black spots in an image - iPhone - Objective C

I need to count the number of black spots in an image (not the percentage of black spots, but the count). Can anyone suggest a step-wise procedure used in image manipulation to count the spots?
Objective : Count black spots in an image
What I've done till now:
1. Converted image to grayscale
2. Read the pixels for their intensity values
3. I have set a threshold to find darker areas
Other implementations:
1. Gaussian blur
2. Histogram equalisation
What I have browsed:
Flood fill algorithms, watershed algorithms
Thanks a lot..
You should first "label" the image, then count the number of labels you have found.
The label operation is the first step in a blob analysis: it groups similar adjacent pixels into a single object and assigns a value to this object. The condition for grouping is generally a background/foreground distinction: the label operation will group adjacent pixels that are part of the foreground, where the background is defined as pure black or pure white, and the foreground is any pixel whose color is not the background color.
The label operation is pretty easy to implement and does not require many resources.
(See the Wikipedia article, or this page, for more information on labelling. A good paper on the implementation of the label operation is "Two Strategies to Speed up Connected Component Labeling Algorithms" by Kesheng Wu, Ekow Otoo and Kenji Suzuki.)
After labelling, count the number of labels (you can even count them while labelling), and you have the number of "black spots".
The next step is defining what a black spot is: converting your input image into a grayscale image (by converting it to HSL and using the luminance plane, for example) and then applying a threshold should do it. If the illumination of your input image is not even, you may need a better thresholding algorithm (a form of adaptive threshold)...
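To make the threshold + label + count pipeline concrete (sketched in MATLAB only because that is the language used elsewhere on this page; on the iPhone you would reimplement the same logic in Objective-C over a grayscale pixel buffer):

gray = rgb2gray(imread('spots.jpg'));    % hypothetical input photo
bw   = gray < 100;                       % threshold: "dark enough" pixels become foreground
bw   = bwareaopen(bw, 10);               % optional: drop tiny specks of noise
[~, numSpots] = bwlabel(bw, 8);          % label 8-connected dark regions and count them
fprintf('Found %d black spots\n', numSpots);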
It sounds like you want to label the black spots (Blobs) using a binary image labelling algorithm. This should give you a place to start

Histogram of image

I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities only in the lowest and highest bins. Why would this be? And wouldn't the second image then appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
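The flat-gray versus checkerboard example is easy to reproduce (a small sketch, not your actual images):

flat = uint8(128 * ones(256, 256));    % every pixel is gray level 128

cb = zeros(256, 256, 'uint8');         % checkerboard of pure black and pure white
cb(1:2:end, 2:2:end) = 255;
cb(2:2:end, 1:2:end) = 255;

mean(double(flat(:)))   % 128
mean(double(cb(:)))     % 127.5 -- nearly the same average brightness...
figure; imhist(flat);   % ...but this histogram is a single spike at 128
figure; imhist(cb);     % ...and this one has spikes only at 0 and 255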
Never trust your eyes! They will always lie to you.
Consider this silly example, which can be illustrative here. An X-ray 'photo' is nothing more than black and white dots, but because they are small and mixed throughout the image, your eyes see different shades of gray.
The same can happen in a digital image: although the pixels all have the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.