Identifying White Cars in an Image using Matlab

I am currently writing a program in Matlab for image processing. I am using an image (below) to attempt to count the number of white cars in the image. I have used filtering commands and strel('disk', 2), and have managed to detect the two white cars in the image, but due to the way the binary image (below) displays a car, it counts one car as two.
Are there any solutions to overcome this problem or are there any particular methods I should be using as an alternative to the code below?
a = imread('Cars2.jpg'); %Read the image Cars2.jpg
subplot(3,3,1), imshow(a); %Display the RGB image
b = rgb2gray(a); %Convert from RGB to greyscale
subplot(3,3,2), imshow(b); %Display the greyscale image
c = graythresh(b); %Otsu's method threshold for foreground & background (computed but unused; 0.8 is hard-coded below)
d = im2bw(b,0.8); %Convert from greyscale to binary image
subplot(3,3,3), imshow(d); %Display the binary image
subplot(3,3,4), imhist(b,256); %Display histogram of greyscale values (image, bins)
SE = strel('disk',2); %Disk-shaped structuring element for filtering unnecessary pixels
e = imopen(d,SE); %Erode then dilate the image with the disk element
subplot(3,3,5), imshow(e); %Display the opened/filtered image
B = bwboundaries(e); %Trace the boundaries of connected objects
imshow(e)
text(10,10,strcat('\color{red}Objects Found:',num2str(length(B))))
hold on
EDIT: As I have under 10 reputation I can't post the image displayed by the code, but the theory is pretty generic so I hope you understand what I'm getting across. The images are similar to http://www.mathworks.co.uk/help/images/examples/detecting-cars-in-a-video-of-traffic.html

Instead of using bwboundaries I would use regionprops(e). You can then use some additional logic by looking at the area of the object and the shape of the bounding box to infer if the object is one or two cars.
If you are only interested in detecting white cars, your overall algorithm could be improved by converting the image into HSV colour space and thresholding on the saturation and value channels instead of using im2bw. If you have a video sequence I would segment using vision.ForegroundDetector or another Gaussian mixture model segmentation technique.
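For illustration, here is a minimal sketch of the regionprops idea; the area and aspect-ratio thresholds below are hypothetical values you would need to tune to your own image:
stats = regionprops(e, 'Area', 'BoundingBox'); %Measure each connected blob
carCount = 0;
for k = 1:numel(stats)
    bb = stats(k).BoundingBox; %[x y width height]
    aspect = bb(3) / bb(4); %Width-to-height ratio of the blob
    if stats(k).Area > 200 && aspect > 2 %Hypothetical thresholds - tune for your image
        carCount = carCount + 2; %A long, large blob is probably two merged cars
    elseif stats(k).Area > 100
        carCount = carCount + 1; %A compact blob is probably a single car
    end
end
text(10,10,strcat('\color{red}Cars Found:',num2str(carCount)))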


How to make a car park image show different colors for each dot using Matlab?

I converted my car park image to a binary image and cleared the unwanted white dots/regions to get this image:
This is my code:
sceneImage = imread('nocars10green.jpg'); % Read the car park image
figure;
imshow(sceneImage);
hsvscene = rgb2hsv(sceneImage); % Convert from RGB to HSV colour space
figure;
imshow(hsvscene);
grayscene = rgb2gray(hsvscene); % Collapse the three HSV planes to a single grey plane
figure;
imshow(grayscene);
bwScene = im2bw(grayscene); % Threshold to a binary image
figure;
imshow(bwScene);
str = strel('disk',4); % Disk-shaped structuring element
bw = imerode(bwScene,str); % Erode to remove small white regions
figure;
imshow(bw);
How do I convert the binary image after eroding so that I can show a different color for each dot?
I read this journal paper:
Al-Kharusi, Hilal, and Ibrahim Al-Bahadly. "Intelligent parking management system based on image processing." World Journal of Engineering and Technology 2014 (2014).
where it is mentioned:
if (newmatrix(y,x) > 0) % an object is there, if (e(newmatrix(y,x)) = 0) this object has not been seen
(newmatrix(y,x)) = x; make the value and index 3 equal to the current X coordinate.
and this is their output image:
But I don't understand how it works. Can anyone explain how it works and how to write the commands to convert my binary image so that I get the same as their output image, with a different color for each dot?
Or is there any other way to do it?
The term you have to search the web for is connected component labeling or just labeling.
Given the provided image with 10 white dots on a black background, you can do the following:
Find the blobs:
https://en.wikipedia.org/wiki/Connected-component_labeling
https://de.mathworks.com/help/images/ref/bwlabel.html
Then color them. For example by using label2rgb:
You can display the output matrix as a pseudocolor indexed image. Each
object appears in a different color, so the objects are easier to
distinguish than in the original image. For more information, see
label2rgb.
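A minimal sketch of this, assuming bw is the eroded binary image from the question:
[L, num] = bwlabel(bw); %Label each connected white dot
coloredLabels = label2rgb(L, 'jet', 'k', 'shuffle'); %Give each label its own colour on a black background
figure;
imshow(coloredLabels);
title(sprintf('%d dots found', num));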

How can I display a grayscale raster in color in Matlab?

I have a .tif file of a landmass that denotes elevation. I want to display this raster with a color ramp as opposed to a grayscale ramp. How would I do this in Matlab?
I looked at the information associated with the tiff using:
[Z, R] = geotiffread('Landmass.tif')
which shows the heading 'ColourType' as 'grayscale'. I tried to change this to 'winter' (one of MATLAB's built-in color schemes) but it made no difference.
At the moment I am using the following commands to display the tiff:
[Z, R] = geotiffread('Landmass.tif');
e=uint8(Z);
mapshow(e,R);
All the higher areas are white and everything else is black, even around the landmass (which I think I may have to mask out to get rid of the surrounding black).
All the black colour is making it too difficult for me to display other shapefiles on top of the tiff, so I want to change the color scheme from grayscale to something lighter.
How do I do this?
The reason colormap winter is not working is that the output of mapshow(e,R) is in RGB image format.
Even when the displayed image looks gray, it is actually RGB, with r=g=b for each pixel.
I took the MATLAB mapshow example, converted the boston image to grayscale, and used mapshow.
To use colormap winter, I grabbed the displayed image using getimage, converted it to grayscale using rgb2gray, and then colormap winter worked when showing the image.
Check the following example:
[boston, R] = geotiffread('boston.tif');
boston = rgb2gray(boston); %Convert to Grayscale for testing.
figure
mapshow(boston, R);
axis image off
%Get image data, note: size of I is 2881x4481x3 (I is not in Grayscale format).
I = getimage(gca);
%Convert I from RGB (R=G=B) format to Grayscale format, note: size of J is
%2881x4481 (J is Grayscale format).
%%%%%%%Avoid image being rotated%%%%%%%%%%%%%
%Close old figure and open a new one
close(1)
figure
J = rgb2gray(I);
imshow(J);
colormap winter %Now it's working...
Boston with winter colormap:
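For the original elevation raster, a simpler route (just a sketch, assuming Landmass.tif is a single-band elevation grid) is to map the elevations through a colormap yourself and hand mapshow an RGB image:
[Z, R] = geotiffread('Landmass.tif');
Zs = mat2gray(double(Z)); %Scale elevations to the range [0,1]
idx = gray2ind(Zs, 256); %Convert to a 256-level indexed image
rgb = ind2rgb(idx, winter(256)); %Apply the winter colormap
figure
mapshow(rgb, R); %Display as a georeferenced RGB image
axis image off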

Creating intensity band across image border using matlab

I have this image (8 bit, pseudo-colored, gray-scale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other morphological operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut away part of the border.
My code so far looks like:
clear all
clc
x = imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y = imerode(x,strel('disk',2)); %Lightly eroded copy
y1 = imerode(y,strel('disk',7)); %More heavily eroded copy
z = y - y1; %Difference leaves a band along the border
figure
z(z<30) = 0; %Suppress low-intensity residue
imshow(z)
The main problem I am encountering with this is that it somewhat changes the intensity of the original image, as follows:
So my question is, how do I create such a band across image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary where false represents the background and true represents the foreground. When you're done, you then erode this image using a good structuring element that preserves the roundness of the contours of your objects (disk in your example).
The output of this would be the interior of the large object that is in the image. What you can do is use this mask and set these locations in the image to black so that you can preserve the outer band. As such, try doing something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map and we read both of these into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we use the grayscale image and threshold the image so that we capture all of the object pixels. I threshold the image with a value of 10 to escape some quantization noise that is seen in the image. This binary image is what we will operate on to determine those pixels we want to set to 0 to get the outer border.
Next, we declare a structuring element that is a disk of a radius of 7, then erode the mask. Once I'm done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the locations of the mask to set the values that are internal to the object to 0. The result would be the original image, but having the outer contours of all of the objects remain.
The result I get is:

How do I denoise a simple grayscale image

Here is the original image for better viewing: we can see a lot of noise around the main skeleton (the circle-like shape), which I want to remove without affecting the main skeleton. I'm not sure if it is called noise.
I'm doing this as part of deblurring an image, and this image is the motion blur kernel, which identifies the camera motion when the camera captured the image.
PS: this image is the kernel for one case; what I need here is a general method. Thank you for your help.
There is a paper in CVPR 2014 named "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust. Sorry for the explanation; my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using Matlab?
Here are some other kernel images:
As @rayryeng explained well, median filtering is the best option for cleaning noise in an image, which I learned when I studied image restoration. However, in your case, what you need does not seem to be cleaning noise in the image; more likely, you want to eliminate the sparks in the image.
I simply applied a single threshold to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % // Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % // Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % // Threshold value (Otsu), scaled back to 0-255
S = size(nImgray);
R = zeros(S) + 5; % // Your expected image looks bluish, so fill the background with a dark blue
G = zeros(S) + 3; % // Your expected image looks bluish, so fill the background with a dark blue
B = zeros(S) + 20; % // Your expected image looks bluish, so fill the background with a dark blue
logInd = nImgray > T; % // Logical index of pixels excluding the spark component
R(logInd) = nImgray(logInd); % // Get original pixels without sparks
G(logInd) = nImgray(logInd); % // Get original pixels without sparks
B(logInd) = nImgray(logInd); % // Get original pixels without sparks
rgbImage = cat(3, R, G, B); % // Concatenating Red Green Blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded in imread and M and N are the rows and columns of the size of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
The first line I had to convert your image into grayscale because the original image was in fact RGB. I had to use rgb2gray to do that. The second line performs median filtering on your image with a 7 x 7 neighbourhood and the final line shows the image in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. A question poser asked to implement the filtering mechanism without using medfilt2, and I provided an answer.
Matlab Median Filter Code
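If you just want the gist, here is a minimal hand-rolled sketch of the same idea (not the code from that post), assuming im is the grayscale image from above:
M = 7; N = 7; %Neighbourhood size
padded = padarray(double(im), [floor(M/2) floor(N/2)], 'replicate'); %Pad the borders by replication
out = zeros(size(im));
for r = 1:size(im,1)
    for c = 1:size(im,2)
        block = padded(r:r+M-1, c:c+N-1); %Local M x N neighbourhood centred on (r,c)
        out(r,c) = median(block(:)); %Sort (implicitly) and take the middle intensity
    end
end
out = uint8(out);
imshow(out);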
Hope this helps.
Good luck!

How to remove noise near the edge of an object in an image

I have an image like this:
I would like to remove the background (part A) near the edge of the object. I plan to use color detection, since the colors of the object and the noise are a little bit different, but maybe that is not a good idea.
I would appreciate any ideas.
Thanks
If you're performing any sort of color-based segmentation of images, you may find it easier to convert to HSV color space first to select specific color ranges. I outline how you can do this sort of thing in an answer I gave to a similar question. The steps you would likely want to follow would be the following:
Convert to HSV and create a binary mask by selecting pixels with hues within a green color range and with a minimum amount of saturation and value.
Erode the resulting mask by a certain amount to remove small spurious clusters.
Dilate the eroded mask to get back to a smoother edge for your selected pixels.
I can't give an exact example of how you would apply this analysis to your data since the image you provide in the question actually has a higher resolution than the data it displays, but here's a general solution that uses the functions RGB2HSV, IMERODE, and IMDILATE (the last two are from the Image Processing Toolbox):
rgbImage = imread('data.jpg'); %# Load the RGB image
hsvImage = rgb2hsv(rgbImage); %# Convert to HSV color space
hPlane = 360.*hsvImage(:,:,1); %# Get the hue plane, scaled from 0 to 360
vPlane = hsvImage(:,:,3); %# Get the value plane
mask = (hPlane >= 80) & (hPlane <= 140) & (vPlane >= 0.3); %# Select a mask
SE = strel('disk',10); %# Create a disk-shaped element
mask = imdilate(imerode(mask,SE),SE); %# Erode and dilate the mask
And here's some code to visualize the edges created by the above analysis:
edgeMask = mask-imerode(mask,strel('disk',1)); %# Create an edge mask
edgeImage = zeros([size(edgeMask) 3]); %# Create an RGB image for the edge
edgeImage(find(edgeMask)) = 1; %# that's colored red
image(rgbImage); %# Plot the original image
hold on;
image(edgeImage,'AlphaData',edgeMask); %# Plot the edge image over it
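If you then want to suppress the background itself rather than just visualize the edge, a minimal sketch (assuming the mask above captures the object) is:
cleaned = rgbImage; %# Copy the original image
cleaned(repmat(~mask,[1 1 3])) = 0; %# Black out everything outside the object mask
figure;
image(cleaned); %# Display the cleaned image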