How to search a grayscale image? - matlab

I have 10 grayscale images (2559×3105). These images are taken from an X-ray reflectivity measurement. Each image except the first has two spots showing the intensity of the X-rays; the first image has a single spot of highest intensity. From the second to the tenth image, each has two spots: the first is the same as in the first image, but the second differs in location and intensity. I want to search for and crop these spots. The problem is that when I apply a condition using find() to locate the maximum-intensity point in an image, it always points to the spot that is common to all the images.

Here's some basic image-processing code that lets you select the indices of the spots:
%# read the image
im=rgb2gray(imread('a.jpg'));
%# select only relevant area
d=im(5:545,5:660);
%# set a threshold and filter
thres = (max([min(max(d,[],1)) min(max(d,[],2))])) ;
filt=fspecial('gaussian', 7,1);
%# reduce noise, threshold, and smooth the image
d=medfilt2(d);
d=d.*uint8(d>thres);
d=conv2(double(d),filt,'same') ;
d=d.*(d>thres);
%# find connected objects, option 1
L = bwlabel(d);
%# or also
CC = bwconncomp(d);
Both L and CC contain information about the indices of the two blobs, so you can now select only those parts of the image using them.
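For example, a quick (hypothetical) follow-up that crops each detected spot via its bounding box might look like this:
%# hypothetical follow-up: crop each detected spot
stats = regionprops(CC, 'BoundingBox');      %# also works with the label matrix L
for k = 1:CC.NumObjects
    spot = imcrop(d, stats(k).BoundingBox);  %# crop the k-th spot from the filtered image
    figure, imshow(spot, []);                %# [] rescales the double image for display
end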

Related

Creating intensity band across image border using matlab

I have this image (8-bit, pseudo-colored, grayscale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other mathematical morphology operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut away part of the border.
My code so far looks like:
clear all
clc
x=imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y = imerode(x,strel('disk',2));
y1=imerode(y,strel('disk',7));
z=y-y1;
figure
z(z<30)=0;
imshow(z)
The main problem I am encountering using this is that it somewhat changes the intensity of the original images as follows:
So my question is, how do I create such a band across image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary where false represents the background and true represents the foreground. When you're done, you then erode this image using a good structuring element that preserves the roundness of the contours of your objects (disk in your example).
The output of this would be the interior of the large object that is in the image. What you can do is use this mask and set these locations in the image to black so that you can preserve the outer band. As such, try doing something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map and we read both of these into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we use the grayscale image and threshold the image so that we capture all of the object pixels. I threshold the image with a value of 10 to escape some quantization noise that is seen in the image. This binary image is what we will operate on to determine those pixels we want to set to 0 to get the outer border.
Next, we declare a structuring element that is a disk of a radius of 7, then erode the mask. Once I'm done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the locations of the mask to set the values that are internal to the object to 0. The result would be the original image, but having the outer contours of all of the objects remain.
The result I get is:

How to find the haze of an image in MATLAB?

I want to compute the extent of haze of an image for each block. This is done by finding the dark channel value that is used to reflect the extent of haze. This concept is from Kaiming He's paper on a Single Image Haze Removal using Dark Channel Prior.
The dark channel value for each block is defined as follows:
I_dark(x, y) = min over c in {R, G, B} of ( min over (x', y') in omega(x, y) of I^c(x', y') )
where I^c(x', y') denotes the intensity at a pixel location (x', y') in color channel c (one of the red, green, or blue color channels), and omega(x, y) denotes the neighborhood of the pixel location (x, y).
I'm not sure how to translate this equation into MATLAB.
If I correctly understand what this equation is asking for, you essentially extract a pixel block centered at each (x,y) in the image and determine the minimum value within this block separately for the red, green, and blue channels. This gives 3 values, one minimum per channel. From these 3 values, you choose the smallest, and that is the final result for location (x,y) in the image.
We can do this very easily with ordfilt2. What ordfilt2 does is apply an order-statistics filter to your image. You specify a mask of which pixels need to be analyzed in your neighbourhood; it gathers those pixels in the neighbourhood that are deemed valid and sorts their intensities, and you then choose the rank of the pixel you want in the end. A lower rank means a smaller value, while a larger rank denotes a larger value. In our case, the mask would be set to all logical true and be the size of the neighbourhood you want to analyze.
Because you want a minimum, you would choose rank 1 of the result.
You would apply this to each red, green and blue channel, then for each spatial location, choose the minimum out of the three. Therefore, supposing your image was stored in im, and you wanted to apply a m x n neighbourhood to the image, do something like this:
%// Find minimum intensity for each location for each channel
out_red = ordfilt2(im(:,:,1), 1, true(m, n));
out_green = ordfilt2(im(:,:,2), 1, true(m, n));
out_blue = ordfilt2(im(:,:,3), 1, true(m, n));
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
out_dark will contain the dark channel image you desire. The key to calculating what you want is in the last two lines of code. out contains the minimum values for each spatial location in the red, green and blue channels and they are all concatenated in the third dimension to produce a 3D matrix. After, I apply the min operation and look at the third dimension to finally choose which out of the red, green and blue channels for each pixel location will give the output value.
With an example, if I use onion.png which is part of MATLAB's system path, and specify a 5 x 5 neighbourhood (or m = 5, n = 5), this is what the original image looks like, as well as the dark channel result:
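If you want to reproduce that test yourself, a minimal, self-contained sketch (assuming the Image Processing Toolbox is installed) would be:
%// Minimal sketch of the onion.png example above
im = imread('onion.png');   %// sample image shipped with MATLAB
m = 5; n = 5;               %// 5 x 5 neighbourhood
out_red   = ordfilt2(im(:,:,1), 1, true(m, n));
out_green = ordfilt2(im(:,:,2), 1, true(m, n));
out_blue  = ordfilt2(im(:,:,3), 1, true(m, n));
out_dark  = min(cat(3, out_red, out_green, out_blue), [], 3);
figure; imshowpair(im, out_dark, 'montage');  %// original beside the dark channel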
Sidenote
If you're an image processing purist, finding the minimum value over pixel neighbourhoods in a grayscale image is the same as grayscale morphological erosion. You can consider each red, green or blue channel to be its own grayscale image. As such, we could simply replace ordfilt2 with imerode and use a rectangular structuring element to generate the pixel neighbourhood to apply to your image. You can do this through strel in MATLAB with the 'rectangle' flag.
As such, the equivalent code using morphology would be:
%// Find minimum intensity for each location for each channel
se = strel('rectangle', [m n]);
out_red = imerode(im(:,:,1), se);
out_green = imerode(im(:,:,2), se);
out_blue = imerode(im(:,:,3), se);
%// Create a new colour image that has these all stacked
out = cat(3, out_red, out_green, out_blue);
%// Find dark channel image
out_dark = min(out, [], 3);
You should get the same results as with ordfilt2. I haven't done any tests, but I highly suspect that imerode is faster than ordfilt2... at least on higher-resolution images. MATLAB's morphological routines are highly optimized and designed specifically for images, whereas ordfilt2 is for more general 2D signals.
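If you want to check that suspicion, a rough (illustrative) timing comparison on a single channel could look like this:
%// Illustrative benchmark: ordfilt2 vs. imerode on one channel
im = imread('onion.png');
chan = im(:,:,1);
m = 5; n = 5;
se = strel('rectangle', [m n]);
t_ord   = timeit(@() ordfilt2(chan, 1, true(m, n)));
t_erode = timeit(@() imerode(chan, se));
fprintf('ordfilt2: %.5f s, imerode: %.5f s\n', t_ord, t_erode);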
Or you can use the Visibility Metric to see how hazy an image is. It turns out someone wrote beautiful code for it as well. The lower the metric, the heavier the haze in the image.
This metric can also be used as a pre-processing step to automatically adjust dehazing parameters.

How to Crop Multiple Objects in an Image [MATLAB]

I'm a freshman to MATLAB and am developing a "Rice Quality Identification" application using MATLAB and a neural network. For guidance, I'm following this research paper.
This application comprises five phases:
Image Acquisition
Image Pre-processing
Image Segmentation and Identifying Region of Interest
Feature Extraction
Training and Testing
I'm now in the 3rd phase and have already developed the initial steps of this application.
Step 1: Browse Image from Computer and Show it
% Get the original image and show it, Figure 1
[fileName, pathName] = uigetfile('*.jpg;*.tif;*.png;*.gif','Select the Picture file');
I = fullfile(pathName, fileName);
I = imread(I);
imshow(I)
Step 2: Background subtraction
% selected rice image Background subtraction , Figure 2
% Use Morphological Opening to Estimate the Background
background = imopen(I,strel('disk',7));
I2 = I - background;
figure, imshow(I2);
Step 3:
% get the Black and white Image , Figure 3
% In the output image BW, all pixels in the input image with luminance greater than the 0.17 level become white (1), the rest black (0)
BW = im2bw(I2,0.17);
figure, imshow(BW)
Step 4:
% Remove objects containing fewer than 30 pixels from the binary image
pure = bwareaopen(BW,30);
figure, imshow(pure)
Step 5: Labeling
% Label the black-and-white image and get a bounding box around each object
L=bwlabel(pure,8);
bb=regionprops(L,'BoundingBox');
I've been stuck at Step 6 for two days. Step 6 is to crop multiple objects from the original image using the labeled binary image.
The output should look exactly like the image below.
If I can get this, I can easily calculate morphological features and color features for each object in the original image, to use in Phase 4.
Morphological features:
1. Area of each object
2. X and Y extents of each object in the picture above
3. Aspect ratio, calculated from the X and Y extents
Color features:
1. Red mean
2. Green mean
3. Blue mean
Can you please explain how to crop multiple objects from the original image using the labeled binary image (Step 6)?
If I am interpreting Step #6 right, I believe it's saying that they want you to segment out the final objects after Step #5 using the binary map you have produced. Given your comments, you also want to extract the bounding boxes delineated in Step #5. If that's the case, then all you have to do is use the regionprops structure defined in bb. As a bit of review, the BoundingBox field of a regionprops structure for each object extracted from the image is an array of 4 numbers like so:
[x y w h]
x denotes the column / horizontal co-ordinate, y denotes the row / vertical co-ordinate, and w,h denote the width and height of the bounding box.
All you need to do is create a binary map, and cycle through each bounding box to delineate where we need to cut out of the image. When you're done, use this binary map to extract out your pixels. In other words:
%//Initialize map to zero
bMap = false(size(pure));
%//Go through each bounding box
for i = 1 : numel(bb)
%//Get the i'th bounding box
bbox = bb(i).BoundingBox;
%//Set this entire rectangle to true.
%//BoundingBox is [x y w h], so convert the
%//width and height into end co-ordinates,
%//and round the fractional co-ordinates
%//returned by regionprops to integer
%//pixel locations
bbox = ceil(bbox);
rows = bbox(2) : min(bbox(2) + bbox(4) - 1, size(bMap, 1));
cols = bbox(1) : min(bbox(1) + bbox(3) - 1, size(bMap, 2));
bMap(rows, cols) = true;
end
%//Now extract our regions
out = zeros(size(I));
out = cast(out, class(I)); %//Ensures compatible types
%//Extract cropped out regions for each channel
for i = 1 : size(out,3)
chanOut = out(:,:,i);
chanIm = I(:,:,i);
chanOut(bMap) = chanIm(bMap);
out(:,:,i) = chanOut;
end
This creates an output image stored in out and copies over, for each channel, only those pixels where the map is true, based on each bounding box given by Step #5.
I believe this is what Step #6 is talking about. Let me know if I have interpreted this properly.
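As a follow-up, if what you ultimately want for Phase 4 is each grain as its own cropped image, plus the features you listed, a sketch like the following should get you started. Note that the colour means here are taken over the whole bounding box (including background pixels); mask with pure if you need object-only means:
%//Sketch: crop each object and compute the listed features.
%//L and bb come from Step 5; I is the original image.
s = regionprops(L, 'BoundingBox', 'Area');
numObj = numel(s);
crops = cell(1, numObj);
feats = zeros(numObj, 5);   %//[area, aspect, Rmean, Gmean, Bmean]
for i = 1 : numObj
    box = s(i).BoundingBox;               %//[x y w h]
    crops{i} = imcrop(I, box);            %//i'th grain cropped from the original
    feats(i,1) = s(i).Area;               %//morphological feature: area
    feats(i,2) = box(3) / box(4);         %//aspect ratio from X and Y extents
    feats(i,3) = mean2(crops{i}(:,:,1));  %//red mean
    feats(i,4) = mean2(crops{i}(:,:,2));  %//green mean
    feats(i,5) = mean2(crops{i}(:,:,3));  %//blue mean
end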

How to extract and recognize the vehicle plate number with MATLAB?

I want to develop a MATLAB program that can extract and recognize the plate number of a vehicle using the template matching method.
Here is my code:
function letters = PengenalanPlatMobil(citra)
%load NewTemplates
%global NewTemplates
citra=imresize(citra,[400 NaN]); % Resizing the image keeping aspect ratio same.
citra_bw=rgb2gray(citra); % Converting the RGB (color) image to gray (intensity).
citra_filt=medfilt2(citra_bw,[3 3]); % Median filtering to remove noise.
se=strel('disk',1);
citra_dilasi=imdilate(citra_filt,se); % Dilating the gray image with the structural element.
citra_eroding=imerode(citra_filt,se); % Eroding the gray image with structural element.
citra_edge_enhacement=imsubtract(citra_dilasi,citra_eroding); % Morphological Gradient for edges enhancement.
imshow(citra_edge_enhacement);
citra_edge_enhacement_double=mat2gray(double(citra_edge_enhacement)); % Converting the class to double.
citra_double_konv=conv2(citra_edge_enhacement_double,[1 1;1 1]); % Convolution of the double image with a 2x2 summing kernel.
citra_intens=imadjust(citra_double_konv,[0.5 0.7],[0 1],0.1); % Intensity scaling between the range 0 to 1.
citra_logic=logical(citra_intens); % Conversion of the class from double to binary.
% Eliminating the possible horizontal lines from the output image of regiongrow
% that could be edges of license plate.
citra_line_delete=imsubtract(citra_logic, (imerode(citra_logic,strel('line',50,0))));
% Filling all the regions of the image.
citra_fill=imfill(citra_line_delete,'holes');
% Thinning the image to ensure character isolation.
citra_thinning_eroding=imerode((bwmorph(citra_fill,'thin',1)),(strel('line',3,90)));
%Selecting all the regions that are of pixel area more than 125.
citra_final=bwareaopen(citra_thinning_eroding,125);
[labelled jml] = bwlabel(citra_final);
% Uncomment to make compatible with the previous versions of MATLAB®
% Two properties 'BoundingBox' and binary 'Image' corresponding to these
% Bounding boxes are acquired.
Iprops=regionprops(labelled,'BoundingBox','Image');
%%% OCR STEP
[letter{1:jml}]=deal([]);
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar= Iprops(ii).Image;
letter{ii}=readLetter(gambar);
% imshow(gambar);
%
end
end
but the recognized number is always wrong, and too many or sometimes too few characters are detected.
How can I fix it?
Here are the images and this one.
For number plate extraction, you have to follow this algorithm (I used this in my project):
1. Find the histogram variation horizontally (by using imhist).
2. Find the part of the histogram where you get the maximum variation, and get the x1 and x2 values.
3. Crop the image horizontally using the values of x1 and x2.
4. Repeat the same process for vertical cropping.
Explanation:
To remove unnecessary information from the image, the method works only on the edges of the image. For edge detection we make use of a built-in MATLAB function, but first we convert the original image to grayscale.
This grayscale image is converted to a binary image by determining a threshold for the different intensities in the image. Only after binarization can an edge detection algorithm be used. Here we have used 'ROBERTS'; after extensive testing, it seemed to suit our application best. To determine the region of the license plate, we then perform horizontal and vertical edge processing. First, the horizontal histogram is calculated by traversing each column of the image. The algorithm starts traversing with the second pixel from the top of each column of the image matrix, and the difference between the second and first pixels is calculated. If the difference exceeds a certain threshold, it is added to a running total of differences. The traversal continues to the end of the column, accumulating the total sum of differences between neighbouring pixels. At the end, a matrix of the column-wise sums is created. The same process is carried out for the vertical histogram; in this case, rows are processed instead of columns.
After calculating the horizontal and vertical histograms, we compute a threshold value equal to 0.434 times the maximum horizontal histogram value. The next step of the extraction is cropping the area of interest, i.e. the number plate area. We first crop the original image horizontally and then vertically. In horizontal cropping, we process the image matrix column-wise and compare each horizontal histogram value with the predefined threshold. When a value in the horizontal histogram exceeds the threshold, we mark it as a starting point for cropping and continue until the value falls below the threshold again, which gives an end point. This process yields many candidate regions above the threshold, so we store all starting and end points in a matrix and compare the width of each region (the difference between its starting and end points), keeping the pair of points that spans the largest width. We then crop the image horizontally using that pair. The horizontally cropped image is then processed for vertical cropping: we use the same threshold-comparison method, except that this time we process the image matrix row-wise and compare the threshold against the vertical histogram values. Again we get several sets of vertical start and end points, and again we keep the set spanning the largest height and crop the image with it. After horizontal and vertical cropping, we get the exact area of the number plate from the original image in RGB format.
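A rough sketch of that cropping procedure might look like the following. 'car.jpg' is a placeholder file name, and for brevity this takes the overall span of above-threshold columns and rows rather than searching for the widest contiguous run as described above:
% Sketch of the histogram-variation cropping described above
gray = rgb2gray(imread('car.jpg'));               % placeholder input image
edges = edge(gray, 'roberts');                    % ROBERTS edge detection
colSum = sum(abs(diff(double(edges), 1, 1)), 1);  % horizontal histogram (per column)
rowSum = sum(abs(diff(double(edges), 1, 2)), 2);  % vertical histogram (per row)
cols = find(colSum > 0.434 * max(colSum));        % columns above the threshold
rows = find(rowSum > 0.434 * max(rowSum));        % rows above the threshold
plate = gray(min(rows):max(rows), min(cols):max(cols));  % cropped plate region
imshow(plate);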
For recognition, use template matching with correlation (using corr2() in MATLAB).
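A minimal sketch of that correlation step, assuming you already have a cell array templates of letter images and a matching char array labels (both hypothetical), could be:
% Sketch: pick the template with the highest 2-D correlation
function letter = matchLetter(glyph, templates, labels)
best = -Inf;
letter = '?';
for k = 1:numel(templates)
    g = imresize(double(glyph), size(templates{k}));  % make the sizes match
    c = corr2(g, double(templates{k}));               % corr2 needs equal sizes
    if c > best
        best = c;
        letter = labels(k);
    end
end
end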
I would change the loop following character detection to
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar{ii}= Iprops(ii).Image;
%letter{ii}=readLetter(gambar);
imshow(gambar{ii});
end
I think what you want to do at this point is either:
(1) pick the ROI in advance, before applying character extraction and OCR, or
(2) apply OCR to all of the characters from the entire image, and then use proximity rules or other rules to identify the license plate number.
Edit:
If you run the following loop after character extraction you can get an idea what I mean by "proximity":
[xn, yn, ~] = size(citra); % <-- citra is the original image matrix; xn is the row count
figure, hold on
[gambar{1:jml}]=deal([]);
for ii=1:jml
gambar{ii}= double(Iprops(ii).Image)*255;
bb=Iprops(ii).BoundingBox;
image([bb(1) bb(1)+bb(3)],[xn-bb(2) xn-bb(2)-bb(4)],gambar{ii}); % flip vertically using the row count xn
end
Here is the image after edge detection:
and after character extraction (after running the loop above):

Matlab - Replace successive pixel with pixel from the left

So, I've quantized a grayscale image with four quantized values. I'm trying to maintain the first pixel of each row of the quantized image and replace each successive pixel with the difference from the pixel to its left.
How would you code this in MATLAB, and can someone explain it to me conceptually?
Also, my concern is that, because the image is relatively uniform after quantizing the dynamic range, most of the difference image would appear black, no? It seems to me that only the transition areas and the edges will show a difference in quantized values.
To create the difference from the pixel on the left, all you have to do is subtract the pixels in columns 1,2,3,... from those in columns 2,3,4,...
%# create a random image with four values
randomImage = randi(4,[100,90]); %# use different numbers of rows and cols so we know which is which
%# catenate the first column of the image with the difference from the pixel to the left
%# for all pairs of columns in the image
differenceImage = [randomImage(:,1),randomImage(:,2:end)-randomImage(:,1:end-1)];
Yes, you'd expect quite a few uniform patches; they will appear gray rather than black, since some of the differences are negative unless you plot their absolute values.
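If you want to inspect the result without the negative values confusing the display, something like this (purely for visualization) helps:
%# visualize the signed differences: [] maps min..max to black..white
figure, imshow(differenceImage, []);
%# or look only at the transition strength
figure, imshow(abs(differenceImage), []);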