imnoise gaussian and dct2() in MATLAB

I'm trying to do a discrete cosine transform on a black-and-white image, and also on the same image with some noise added to it. The problem is that imnoise adds what I would call "RGB noise", and because of that I can't run dct2() on my image. A simple example of what I tried:
B = imnoise(A,'gaussian');
C = dct2(B);
How can I add noise to my image so that it stays in grayscale mode?
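One common fix (a minimal sketch, assuming A is an RGB image; the filename is just a placeholder) is to collapse the image to a single grayscale channel before adding the noise, so that dct2 receives a plain 2-D matrix:
A = imread('myimage.png');       % placeholder filename
Agray = rgb2gray(A);             % single grayscale channel
B = imnoise(Agray, 'gaussian');  % noise is added to the 2-D grayscale image
C = dct2(B);                     % dct2 now operates on a 2-D matrix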

Related

Canny Edge detector in Matlab returns "ant race" image

I am trying to use the edge function in Matlab on an image of black and white shock waves. The image that I get from using this function is an "ant race" image. I am wondering if there is a different way to go about finding the edge of the shockwaves. Below is my code:
edgeimage = edge(Imagematrix(:,:,45),'Canny');
This is the original picture
Imagematrix is a three dimensional 2048x2040x90 matrix.
This is the image I get when I run the edge function
Your whole image is "non-flat", so edges are everywhere. Try using the third parameter of edge, the threshold. Example using your image:
% ofc this is not your real data, but close
I=imread('http://i.stack.imgur.com/ZaVGh.png');
edgeimage = edge(I(:,:,1),'Canny',0.22);
imshow(edgeimage)
Play with different values of the threshold. You can also play with the sigma parameter, which defines the size of the Gaussian smoothing kernel.
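For example (a hedged sketch building on the snippet above; the sigma value is only an illustration to experiment with):
% same image as above; a larger sigma smooths more before edges are detected
edgeimage = edge(I(:,:,1), 'Canny', 0.22, 2);
imshow(edgeimage)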

Creating intensity band across image border using matlab

I have this image (8 bit, pseudo-colored, gray-scale):
And I want to create an intensity band of a specific width around its border.
I tried erosion and other mathematical operations, including filtering, to achieve the desired band, but the actual image intensity changes as soon as I use erosion to cut away part of the border.
My code so far looks like:
clear all
clc
x=imread('8-BIT COPY OF EGFP001.tif');
imshow(x);
y = imerode(x,strel('disk',2));
y1=imerode(y,strel('disk',7));
z=y-y1;
figure
z(z<30)=0;
imshow(z)
The main problem I am encountering with this is that it somewhat changes the intensity of the original image, as follows:
So my question is: how do I create such a band across the image border without changing any other attribute of the original image?
Going with what beaker was talking about and what you would like done, I would personally convert your image into binary where false represents the background and true represents the foreground. When you're done, you then erode this image using a good structuring element that preserves the roundness of the contours of your objects (disk in your example).
The output of this would be the interior of the large object that is in the image. What you can do is use this mask and set these locations in the image to black so that you can preserve the outer band. As such, try doing something like this:
%// Read in image (directly from StackOverflow) and pseudo-colour the image
[im,map] = imread('http://i.stack.imgur.com/OxFwB.png');
out = ind2rgb(im, map);
%// Threshold the grayscale version
im_b = im > 10;
%// Create structuring element that removes border
se = strel('disk',7);
%// Erode thresholded image to get final mask
erode_b = imerode(im_b, se);
%// Duplicate mask in 3D
mask_3D = cat(3, erode_b, erode_b, erode_b);
%// Find indices that are true and black out result
final = out;
final(mask_3D) = 0;
figure;
imshow(final);
Let's go through the code slowly. The first two lines take your PNG image, which contains a grayscale image and a colour map, and read both into MATLAB. Next, we use ind2rgb to convert the image into its pseudo-coloured version. Once we do this, we threshold the grayscale image so that we capture all of the object pixels. I threshold the image with a value of 10 to avoid some quantization noise that is present in the image. This binary image is what we will operate on to determine which pixels to set to 0 in order to keep only the outer border.
Next, we declare a structuring element that is a disk of radius 7, then erode the mask. Once that's done, I duplicate this mask in 3D so that it has the same number of channels as the pseudo-coloured image, then use the locations of the mask to set the values that are internal to the object to 0. The result is the original image, but with only the outer contours of all of the objects remaining.
The result I get is:

How do I denoise a simple grayscale image

Here is the original image, enlarged for better viewing: we can see a lot of noise around the main skeleton (the circle-like structure), which I want to remove without affecting the main skeleton itself. I'm not sure if "noise" is the right word for it.
I'm doing this to deblur an image, and this image is the motion blur kernel, which describes the camera motion while the camera captured the image.
PS: this image is the kernel for one case; what I need here is a general method. Thank you for your help.
There is a paper from CVPR 2014 called "Separable Kernel for Image Deblurring" which talks about this. I want to extract the main skeleton of the image to make the kernel more robust. Sorry for the explanation; my English is not good.
And here is the true grayscale image:
I want it to be like this:
How can I do it using Matlab?
here are some other kernel images:
As @rayryeng explained well, median filtering is the best option for cleaning noise in an image, which I learned when I studied image restoration. However, in your case, what you need does not seem to be cleaning noise in the image; more likely, you want to eliminate the sparks in the image.
I simply applied a single threshold to your noisy image to eliminate the sparks.
Try this:
desIm = imread('http://i.stack.imgur.com/jyYUx.png'); % // Your expected (desired) image
nIm = imread('http://i.stack.imgur.com/pXO0p.png'); % // Your original image
nImgray = rgb2gray(nIm);
T = graythresh(nImgray)*255; % // Threshold value
S = size(nImgray);
R = zeros(S) + 5; % // The expected image has a dark bluish background, so
G = zeros(S) + 3; % // initialize each channel to a small constant that
B = zeros(S) + 20; % // approximates that bluish background colour
logInd = nImgray > T; % // Logical index of pixels above the threshold (excludes the spark component)
R(logInd) = nImgray(logInd); % // Get original pixels without sparks
G(logInd) = nImgray(logInd); % // Get original pixels without sparks
B(logInd) = nImgray(logInd); % // Get original pixels without sparks
rgbImage = cat(3, R, G, B); % // Concatenating Red Green Blue channels
figure,
subplot(1, 3, 1)
imshow(nIm); title('Original Image');
subplot(1, 3, 2)
imshow(desIm); title('Desired Image');
subplot(1, 3, 3)
imshow(uint8(rgbImage)); title('Restoration Result');
What I got is:
The only thing I can see that is different between the two images is that there is some quantization noise / error around the perimeter of the object. This resembles salt and pepper noise and the best way to remove that noise is to use median filtering. The median filter basically analyzes local overlapping pixel neighbourhoods in your image, sorts the intensities and chooses the median value as the output for each pixel neighbourhood. Salt and pepper noise corrupts image pixels by randomly selecting pixels and setting their intensities to either black (pepper) or white (salt). By employing the median filter, sorting the intensities puts these noisy pixels at the lower and higher ends and by choosing the median, you would get the best intensity that could have possibly been there.
To do median filtering in MATLAB, use the medfilt2 function. This is assuming you have the Image Processing Toolbox installed. If you don't, then what I am proposing won't work. Assuming that you do have it, you would call it in the following way:
out = medfilt2(im, [M N]);
im would be the image loaded with imread, and M and N are the number of rows and columns of the pixel neighbourhood you want to analyze. By choosing a 7 x 7 pixel neighbourhood (i.e. M = N = 7), and reading your image directly from StackOverflow, this is the result I get:
Compare this image with your original one:
If you also look at your desired output, this more or less mimics what you want.
Also, the code I used was the following... only three lines!
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
out = medfilt2(im, [7 7]);
imshow(out);
In the first line I had to convert your image to grayscale because the original image was in fact RGB; I used rgb2gray to do that. The second line performs median filtering on the image with a 7 x 7 neighbourhood, and the final line shows the result in a separate window with imshow.
Want to implement median filtering yourself?
If you want to get an idea of how to actually write a median filtering algorithm yourself, check out my recent post here. Another poster asked how to implement the filtering mechanism without using medfilt2, and I provided an answer:
Matlab Median Filter Code
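For reference, here is a minimal hand-rolled sketch of the same idea (this is not the code from the linked post; the window size and padding mode are assumptions):
im = rgb2gray(imread('http://i.stack.imgur.com/pXO0p.png'));
w = 7;                                            % neighbourhood size (w x w), odd
half = floor(w/2);
padded = padarray(im, [half half], 'symmetric');  % pad borders so every pixel has a full window
out = zeros(size(im), 'uint8');
for r = 1:size(im,1)
    for c = 1:size(im,2)
        block = padded(r:r+w-1, c:c+w-1);           % w x w neighbourhood centred at (r,c)
        out(r,c) = uint8(median(double(block(:)))); % replace centre pixel with the median
    end
end
imshow(out);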
Hope this helps.
Good luck!

Identifying White Cars in an Image using Matlab

I am currently writing a program in Matlab for image processing. I am using an image (below) to attempt to count the number of white cars in the image. I have used filtering commands and strel('disk', 2), and have managed to detect the two white cars in the image, but due to the way the binary image (below) displays a car, it counts one car as two.
Are there any solutions to overcome this problem or are there any particular methods I should be using as an alternative to the code below?
a = imread('Cars2.jpg'); %Read the image Car1.jpg
subplot(3,3,1), imshow (a); %Display RGB image Car1.jpg
b = rgb2gray(a); %Turn Car1 from RGB to greyscale
subplot(3,3,2), imshow (b); %Display greyscale image Car1.jpg
c = graythresh (a); %Automatically set appropriate threshold for foreground & background (Otsu's Method)
d = im2bw (b,0.8); %Convert from greyscale to binary image
subplot (3,3,3), imshow(d); %Display binary image Car1.jpg
subplot(3,3,4), imhist (b,256); %Display histogram for greyscale values (image, samples)
SE = strel ('disk',2); %Set Disk radius for filtering unnecessary pixels
e = imopen (d,SE); %Erode then Dilate image with Disk radius
subplot (3,3,5), imshow(e); %Display opened/filtered image Car1.jpg
B = bwboundaries(e);
imshow(e)
text(10,10,strcat('\color{red}Objects Found:',num2str(length(B))))
hold on
EDIT: As I have under 10 reputation I can't post the image displayed by the code, but the theory is pretty generic, so I hope you understand what I'm getting across. The images are similar to http://www.mathworks.co.uk/help/images/examples/detecting-cars-in-a-video-of-traffic.html
Instead of using bwboundaries I would use regionprops(e). You can then apply some additional logic, looking at the area of the object and the shape of its bounding box, to infer whether the object is one car or two.
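A rough sketch of that idea (the area and aspect-ratio thresholds are assumptions you would tune for your image), continuing from the opened binary image e above:
stats = regionprops(e, 'Area', 'BoundingBox');   % one struct per connected blob
minCarArea = 200;                                % assumed minimum blob size; tune this
carCount = 0;
for k = 1:numel(stats)
    if stats(k).Area < minCarArea
        continue;                                % skip small noise blobs
    end
    bb = stats(k).BoundingBox;                   % [x y width height]
    if bb(3)/bb(4) > 2.5                         % unusually wide box: possibly two merged cars
        carCount = carCount + 2;
    else
        carCount = carCount + 1;
    end
end
fprintf('Cars found: %d\n', carCount);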
If you are only interested in detecting white cars, your overall algorithm could be improved by converting the image into HSV colour space and thresholding on the saturation and value channels instead of using im2bw. If you have a video sequence I would segment using vision.ForegroundDetector or another Gaussian mixture model segmentation technique.
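A quick sketch of the HSV idea (the saturation and value thresholds are assumptions to tune): white cars have low saturation and high value, so threshold those channels instead of the grayscale intensity.
hsv = rgb2hsv(a);                                 % a is the RGB image from the question's code
whiteMask = hsv(:,:,2) < 0.2 & hsv(:,:,3) > 0.8;  % assumed thresholds; tune per image
whiteMask = imopen(whiteMask, strel('disk', 2));  % remove small speckles
figure, imshow(whiteMask);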

LED Screen recognition in image using MATLAB

I'm trying to detect the screen border in the image (I need the 4 corners).
This is the Image:
I used HOUGH transform to detect lines and intersection points (the black circles) and this is the result:
Now I need to find the 4 corners or the 4 lines, anything that will help me to crop the image. What can I do?
Maybe use the screen aspect ratio? But how?
I'm using Matlab.
Thanks.
A naive first approach that would do the trick if and only if you have the same image conditions (background and laptop):
1. Convert your image to HSV (note that in HSV, the area inside the screen is the only portion of the image with high Saturation and Value values)
2. Create a mask by hard thresholding the Saturation and Value channels
3. Dilate the mask to connect disconnected regions
4. Compute the convex hull to get the mask boundaries
See a quick result:
Here is the same mask with the original image portion that makes it through the mask:
Here is the code to do so:
img = imread( 'imagename.jpg'); % change the image name
hsv = rgb2hsv( img);
mask = hsv(:,:,2)>0.25 & hsv(:,:,3)>0.5;
strel_size = round(0.025*max(size(mask)));
dilated_mask=imdilate(mask,strel('square',strel_size));
s=regionprops(dilated_mask,'BoundingBox','ConvexHull');
% here BoundingBox produces a box from the minimum and maximum white pixel positions, but the screen region is not actually rectangular due to perspective...
imshow(uint8(img.*repmat(dilated_mask,[1 1 3])));
line(s.ConvexHull(:,1),s.ConvexHull(:,2),'Color','red','LineWidth',3);
You may, of course, apply more sophisticated processing to be a lot more accurate and to correct the convex hull into a rectangular shape with proper perspective handling, but this is just a 5-minute attempt to showcase the approach...
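As a rough follow-up sketch (not part of the original answer, and it works best when the screen is not heavily rotated), you could approximate the four corners by taking the convex-hull points that are extreme along the two diagonal directions:
hullPts = s.ConvexHull;                          % N x 2 list of [x y] hull vertices
[~, iTL] = min(hullPts(:,1) + hullPts(:,2));     % top-left: smallest x + y
[~, iBR] = max(hullPts(:,1) + hullPts(:,2));     % bottom-right: largest x + y
[~, iTR] = max(hullPts(:,1) - hullPts(:,2));     % top-right: largest x - y
[~, iBL] = min(hullPts(:,1) - hullPts(:,2));     % bottom-left: smallest x - y
corners = hullPts([iTL iTR iBR iBL], :);
hold on;
plot(corners(:,1), corners(:,2), 'go', 'MarkerSize', 12, 'LineWidth', 2);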