I am trying to use MATLAB's activecontour code to segment a region. The example uses a grayscale image, while I am using a binary image. It works when I run the code on the binary image by itself; however, when I combined the code, nothing happens. It skips the iteration part and generates the same binary image. For your reference, below is my code.
%% snake
figure
x = imread('1.jpg');
threshold = 160;
I = rgb2gray(x);
I = I > threshold;
imshow(I);
I = imresize(I,.5);
imshow(I)
title('Original Image')
mask = zeros(size(I));
mask(25:end-25,25:end-25) = 1;
imshow(mask)
title('Initial Contour Location')
bw = activecontour(I,mask,300);
imshow(bw)
title('Segmented Image, 300 Iterations')
No processing happens starting from the snake code; it eventually only generates the binary image. I hope someone could try running this and help me find my mistake. Thank you in advance.
MATLAB's activecontour function uses the Chan–Vese (active contours without edges) method by default, as Cris said. The implementation "uses the Sparse-Field level-set method, similar to the method described in [3]", citing Whitaker, "A level-set approach to 3D reconstruction from range data". (Besides Chan–Vese, activecontour has an optional method argument that can be set to 'edge' to use an alternative "edge-based model" based on the (older) geodesic active contours method of Caselles, Kimmel, and Sapiro.)
The Chan–Vese method segments a grayscale image by looking for a two-valued image, equal to some constant "c1" inside the contour and "c2" outside the contour, that both has a smooth contour and is a good approximation of the original image. The method optimizes c1, c2, and the shape of the contour, beginning from some initial contour and evolving it by an iterative process.
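Concretely (omitting the optional area term from the original paper), the energy being minimized is

E(c_1, c_2, C) = \mu \, \mathrm{Length}(C) + \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2 \, dx + \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2 \, dx

The length term is what penalizes wiggly contours; activecontour exposes its relative weight through the 'SmoothFactor' option mentioned below.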
If you'll excuse a self-citation, you can find an article, open source C code, and online demo about Chan–Vese on the IPOL journal at http://www.ipol.im/pub/art/2012/g-cv/, which you may find helpful.
So why is it not working in your case? Some thoughts:
In your case, since the input image is already binary, the method can trivially set c1 = 0, c2 = 1, and the contour to the edges of the input, so "nothing happens". Try setting the optional 'SmoothFactor' argument (possibly to a large value) to force the method to look for a smoother contour.
It's conceivably a datatype problem, since image I is passed as a logical array to activecontour, but normally the function takes a numeric array. Try casting I to a double array before passing.
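A minimal sketch combining both suggestions (untested on your image, so treat the smoothing value as a placeholder to experiment with):

I = double(I); % cast the logical image to double before segmenting
% a larger SmoothFactor puts more weight on contour smoothness
bw = activecontour(I, mask, 300, 'Chan-Vese', 'SmoothFactor', 1.5);
imshow(bw); title('Segmented Image')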
I'd like to know if it is possible to create a custom heatmap in MATLAB. What I mean by this is: is it possible to display an image in MATLAB (the image in this case would be the state I reside in) and produce a heat color in a particular region of the map? If it is, please send me a link from which I can teach myself how to write a script for this.
Totally possible! Instructions here: https://www.mathworks.com/help/thingspeak/create-heatmap-overlay-image.html
The first step is reading in an image file and showing it on the "graph" that you are forming. This step is detailed by the link as follows:
picture = imread('https://www.mathworks.com/help/examples/optim/win64/officeassign_01.png');
[height,width,depth] = size(picture);
imshow(picture);
hold on
The next step for you, presuming you already have a state map, is going to be a little trickier: you are going to have to know the x,y positions you would like to map heat onto and the intensity at those points. You are going to need an overlay, either blocks (like a grid) or a smooth map. I assume you'll want some smoothing so you will use a linear interpolation between points. Once you've decided on the x,y,heat intensity mapping, you can do the following:
OverlayImage = [];
F = scatteredInterpolant(Y, X, strengthPercent, 'linear');
for i = 1:height-1
    for j = 1:width-1
        OverlayImage(i,j) = F(i,j);
    end
end
alpha = (~isnan(OverlayImage))*0.6;
To deconstruct what they were doing here a little, they first made an empty overlay. They then made a map that "blended" between points using a linear interpolation. Finally, they made an alpha layer that was a fraction of that intensity.
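Note that Y, X, and strengthPercent are assumed to already exist when scatteredInterpolant is called above; they should be column vectors of the same length. Purely illustrative values (you would substitute your own points) could look like this:

X = [100; 250; 400];                % x pixel coordinates of your data points
Y = [ 80; 300; 150];                % y pixel coordinates of your data points
strengthPercent = [0.2; 0.9; 0.5];  % heat intensity at each point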
The final and most important step -- more central to what you are trying to do -- is place the "image" of that heat matrix over the actual image:
OverlayImage = imshow( OverlayImage );
% Set the color limits to be relative to the data values
caxis auto
colormap( OverlayImage.Parent, jet );
colorbar( OverlayImage.Parent );
% Set the AlphaData to be the transparency matrix created earlier
set( OverlayImage, 'AlphaData', alpha );
The final line of this sets the transparency of the layer (to alpha) allowing for viewing of the image under the heat map. Really, in combination with the first code block, these are the two steps that should set you on your way. Let me know if you need any help!
I'm trying to find a starting point, but I can't seem to find the right answer. I'd be very grateful for some guidance. I also don't know the proper terminology, hence the title.
I took an image of a bag with a black background behind it.
And I want to extract the bag, similar to this.
And if possible, find the center, like this.
Essentially, I want to be able to extract the blob of pixels and then find the center point.
I know these are two separate questions, but I figured if someone can do the latter, then they can do the first. I am using MATLAB, but would like to write my own code and not use their image processing functions, like edge(). What methods/algorithms can I use? Any papers/links would be nice (:
Well, assuming that your image only consists of a black background and a bag inside it, a very common way to perform what you're asking is to threshold the image, then find the centroid of all of the white pixels.
I did a Google search and the closest thing that I can think of that matches what you want looks like this:
http://ak.picdn.net/shutterstock/videos/3455555/preview/stock-footage-single-blank-gray-shopping-bag-loop-rotate-on-black-background.jpg
This image is RGB for some reason, even though it's essentially grayscale, so we're going to convert it to grayscale. I'm assuming you can't use any built-in MATLAB functions, so rgb2gray is out. You can still implement it yourself though, as rgb2gray uses the standard Rec. 601 luma coefficients (0.299, 0.587, 0.114), which is exactly what the code below does.
Once we read in the image, you can threshold the image and then find the centroid of all of the white pixels. That can be done using find to determine the non-zero row and column locations and then you'd just find the mean of both of them separately. Once we do that, we can show the image and plot a red circle where the centroid is located. As such:
im = imread('http://ak.picdn.net/shutterstock/videos/3455555/preview/stock-footage-single-blank-gray-shopping-bag-loop-rotate-on-black-background.jpg');
%// Convert colour image to grayscale
im = double(im);
im = 0.299*im(:,:,1) + 0.587*im(:,:,2) + 0.114*im(:,:,3);
im = uint8(im);
thresh = 30; %// Choose threshold here
%// Threshold image
im_thresh = im > thresh;
%// Find non-zero locations
[rows,cols] = find(im_thresh);
%// Find the centroid
mean_row = mean(rows);
mean_col = mean(cols);
%// Show the image and the centroid
imshow(im); hold on;
plot(mean_col, mean_row, 'r.', 'MarkerSize', 18);
When I run the above code, this is what we get:
Not bad! Now your next concern is the case of handling multiple objects. As you have intelligently determined, this code only detects one object. For the case of multiple objects, we're going to have to do something different. What you need to do is identify all of the objects in the image by an ID. This means that we need to create a matrix of IDs, where each pixel in this matrix denotes which object that pixel belongs to. After that, we iterate through each object ID and find each centroid. This is performed by creating a mask for each ID, finding the centroid of that mask and saving the result. This is what is known as finding the connected components.
regionprops is the most common way to do this in MATLAB, but as you want to implement this yourself, I will refer you to a post I wrote a while ago on how to find the connected components of a binary image:
How to find all connected components in a binary image in Matlab?
Mind you, that algorithm is not the most efficient one so it may take a few seconds, but I'm sure you don't mind the wait :) So let's deal with the case of multiple objects now. I also found this image on Google:
We'd threshold the image as normal, then what's going to be different is performing a connected components analysis, then we iterate through each label and find the centroid. However, an additional constraint that I'm going to enforce is that we're going to check the area of each object found in the connected components result. If it's less than some number, this means that the object is probably attributed to quantization noise and we should skip this result.
Therefore, take the code in the referenced post and place it into a function called conncomptest.m with the following header:

function B = conncomptest(A)

where A is the input binary image and B is the matrix of IDs. You would then do something like this:
im = imread('http://cdn.c.photoshelter.com/img-get2/I0000dqEHPhmGs.w/fit=1000x750/84483552.jpg');
im = double(im);
im = 0.299*im(:,:,1) + 0.587*im(:,:,2) + 0.114*im(:,:,3);
im = uint8(im);
thresh = 30; %// Choose threshold here
%// Threshold image
im_thresh = im > thresh;
%// Perform connected components analysis
labels = conncomptest(im_thresh);
%// Find the total number of objects in the image
num_labels = max(labels(:));
%// Find centroids of each object and show the image
figure;
imshow(im);
hold on;
for idx = 1 : num_labels
    %// Find the idx'th object mask
    mask = labels == idx;
    %// Find the area
    arr = sum(mask(:));
    %// If the area is less than a threshold,
    %// don't process this object
    if arr < 50
        continue;
    end
    %// Else, find the centroid normally
    %// Find non-zero locations
    [rows,cols] = find(mask);
    %// Find the centroid
    mean_row = mean(rows);
    mean_col = mean(cols);
    %// Plot the centroid on the image
    plot(mean_col, mean_row, 'r.', 'MarkerSize', 18);
end
We get:
I have no intention of detracting from Ray's (@rayryeng) excellent advice and, as usual, beautifully crafted, reasoned and illustrated answer; however, I note that you are interested in solutions other than MATLAB and actually want to develop your own code, so I thought I would provide some additional options for you.
You could look to the excellent ImageMagick, which is installed in most Linux distros and available for OS X, other good operating systems and Windows. It includes a "Connected Components" method and if you apply it to this image:
like this at the command-line:
convert bags.png -threshold 20% \
-define connected-components:verbose=true \
-define connected-components:area-threshold=600 \
-connected-components 8 -auto-level output.png
The output will be:
Objects (id: bounding-box centroid area mean-color):
2: 630x473+0+0 309.0,252.9 195140 srgb(0,0,0)
1: 248x220+0+0 131.8,105.5 40559 srgb(249,249,249)
7: 299x231+328+186 507.5,304.8 36620 srgb(254,254,254)
3: 140x171+403+0 458.0,80.2 13671 srgb(253,253,253)
12: 125x150+206+323 259.8,382.4 10940 srgb(253,253,253)
8: 40x50+339+221 357.0,248.0 1060 srgb(0,0,0)
which shows 6 objects, one per line, and gives the bounding boxes, centroids and mean-colour of each. So, the 3rd line means a box 299 pixels wide by 231 pixels tall, with its top-left corner at 328 across from the top-left of the image and 186 pixels down from the top-left corner.
If I draw in the bounding boxes, you can see them here:
The centroids are also listed for you.
The output image from the command above is like this, showing each shape shaded in a different shade of grey. Note that the darkest one has come up black so is VERY hard to see - nearly impossible :-)
If, as you say, you wish to look at writing your own connected component code, you could look at my code in another answer of mine... here
Anyway, I hope this helps and you see it just as an addition to the excellent answer Ray has already provided.
The image below shows a cow where the boundary has been detected using a combination of thresholding and subtracting a background from a 3D depth image.
My goal is to perform feature extraction on the area INSIDE the boundary. I have read the other questions and have struggled to implement the steps referred to in them. I do not want to extract the area inside the boundary; I simply want to use it for feature extraction.
Could someone please offer a solution that is perhaps simpler? For example, is there a way to give detectSURFFeatures the boundary coordinates within which to work?
Below is my boundary code, which receives my processed thresholded image (BW1).
figure(1);
imshow(ImageCell_int{i-269});
%title('Outlines, from bwboundaries()'); axis square;
hold on;
boundaries = bwboundaries(BW1);
numberOfBoundaries = size(boundaries, 1);
for k = 1 : numberOfBoundaries
    thisBoundary = boundaries{k};
    plot(thisBoundary(:,2), thisBoundary(:,1), 'g', 'LineWidth', 2);
end
hold off;
I would be extremely grateful for any assistance on this.
Great, now I see the cow! :)
You cannot specify an irregularly-shaped region of interest for the detectSURFFeatures function. However, you can detect the features in the whole image, then create a binary mask of the region of interest and use it to exclude the keypoints that fall outside it.
Edit: If your boundary is represented as a polygon, you can use roipoly function to create a binary mask from it.
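A rough sketch of that filtering step (assuming grayImage is your grayscale image and BW1 is a filled binary mask of the cow; clamping of keypoints rounded to the image border is omitted for brevity):

pts = detectSURFFeatures(grayImage);
loc = round(pts.Location);                             % [x y] keypoint coordinates
inside = BW1(sub2ind(size(BW1), loc(:,2), loc(:,1)));  % true where a keypoint falls in the mask
ptsInside = pts(inside);                               % keep only keypoints inside the boundary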
Having said that, features that are outside your object's boundary can actually be useful, because they capture information about the shape of the object.
Also, what is your final goal? If you want to recognize individual cows, then local features may not be the best approach. You may do better with a global HOG descriptor (extractHOGFeatures) or with a color histogram, or both.
This answer was discovered on Matlab Central and completely solves the problem above for anyone struggling with a similar issue.
Start with a grey scale outline of the object of interest (BW1).
% Make the mask black and white (logical)
BW2 = logical(BW1);

Next, the mask is cast to the same class as the original image (here called normalImage) and multiplied into it, zeroing out everything outside the object:

mask = cast(BW2, class(normalImage));  % match the image's numeric class
maskedImage = normalImage .* mask;     % zero out everything outside the mask
imshow(maskedImage);
Yields the following result:
It is now possible to perform feature extraction on the object of interest.
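For instance, something like this should then work (a sketch, assuming the Computer Vision System Toolbox is available and maskedImage is grayscale):

pts = detectSURFFeatures(maskedImage);                    % keypoints on the masked object
[features, validPts] = extractFeatures(maskedImage, pts); % SURF descriptors at those points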
I am trying to plot the FFT magnitude of an image using the following code in the command window:
a= imread('lena','png')
figure,imshow(a)
ffta=fft2(a)
fftshift1=fftshift(ffta)
magnitude=abs(fftshift1)
figure,imshow(magnitude),title('magnitude')
However, the figure with the title "magnitude" shows nothing, even though MATLAB shows that it has computed abs() on the shifted FFT. The figure is blank, and there is no error. Also, why do we need to apply fftshift before taking the magnitude?
The reason why this is probably happening is because of the following things:
When you take the 2D FFT of your image, it will produce a double-valued result, even though your image is most likely unsigned 8-bit integer. MATLAB assumes that double-formatted images have their intensities / colours between [0,1]. By doing imshow on just the magnitude itself, you will most likely get an entirely white image, because I suspect a good majority of the FFT coefficients are bigger than 1. This is probably the blank figure that you're referring to.
Even if you rescale the magnitude so that it is between [0,1], the DC coefficient will be so large that if you try to display the image, you'll only see a white dot in the middle while every other component will be black.
As a side note, the reason why you are doing fftshift is because by default, MATLAB assumes that the origin of the FFT for 2D is located at the top left corner. Doing fftshift will allow the origin to be in the middle, which is what we would intuitively expect of the 2D FFT.
In order to remedy this situation, I would suggest doing a log transformation on the FFT coefficients so you can visually see the results, and then normalizing the transformed values so that they go between [0,1]. Do not actually modify the FFT coefficients themselves, as this would be improper: if you intend to do any processing on the spectrum, such as filter design, that processing will require the raw coefficients untouched, so the log transform should only be part of your visualization, not your pipeline. As such, this can be done through the following MATLAB code:
imshow(log(1 + magnitude), []);
I'm going to show an example, using your code that you have provided but using another image as you haven't provided one here. I'm going to use the cameraman.tif image that's part of the MATLAB system path. As such:
a= imread('cameraman.tif');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
figure;
imshow(log(1 + magnitude), []); %// NEW
title('magnitude')
This is what I get:
As you can see, the magnitude is displayed more nicely. Also, the DC coefficient is in the middle of the spectrum thanks to fftshift.
If you want to apply this for colour images, fft2 should still work. It will apply the 2D fft to each colour plane by itself. However, if you want this to work, you'll not only need to take the log transform, but you'll also need to normalize each plane separately. You have to do this because if we tried doing the imshow command we did earlier, it would normalize it so that the greatest value in the spectrum of the colour image gets normalized to 1. This will inevitably produce that same small dot effect that we talked about earlier.
Let's try a colour image that's built-in to MATLAB: onion.png. We will use the same code that you used above, but we need an additional step of normalizing each colour plane by itself. As such:
a = imread('onion.png');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
logMag = log(1 + magnitude); %// New
for c = 1 : size(a,3) %// New - normalize each plane
logMag(:,:,c) = mat2gray(logMag(:,:,c));
end
figure; imshow(logMag); title('magnitude');
Note that I had to loop through each colour plane and use mat2gray to normalize each plane to [0,1]. Also, I had to create a new variable called logMag because I have to modify each colour plane individually, and you can't do this with a single imshow call.
With this, these are the results I get:
What's different with this spectrum is that we are applying the FFT to each colour plane separately, and so you'll see a whole bunch of colour spatters because for each location in this image, we are visualizing a linear combination of components from the red, green and blue channels. For each location, we have a value in between [0,1] for each colour plane, and the combination of these give you a colour at this location. You could say that darker colours are for locations that have a relatively low magnitude for at least one of the colour channels, while locations that are brighter have a relatively high magnitude for at least one of the colour channels.
Hope this helps!
Can't be sure about your version of "lena.png", but if it's a color RGB picture, you need to convert it first to grayscale, or at least select which RGB plane you want to examine.
I.e., the following works for http://optipng.sourceforge.net/pngtech/img/lena.png (color png):
clear; close all;
a = imread('lena','png');
ag = rgb2gray(a);
ag = im2double(ag);
figure(1);
imshow(ag);
F = fftshift( fft2(ag) ); % also try fft2(ag, N, N) where N < image size. Say N=128.
magnitude=abs(F);
figure(2);
imshow(magnitude);
I got some code from the book Feature Extraction & Image Processing.
As I am a total beginner in MATLAB, I don't know how to run these functions to see results.
Are they complete?
First one : Hough Transform for Lines
%Polar Hough Transform for Lines
function HTPLine(inputimage)
%image size
[rows,columns]=size(inputimage);
%accumulator
rmax=round(sqrt(rows^2+columns^2));
acc=zeros(rmax,180);
%image
for x=1:columns
    for y=1:rows
        if(inputimage(y,x)==0)
            for m=1:180
                r=round(x*cos((m*pi)/180)+y*sin((m*pi)/180));
                if(r<rmax & r>0) acc(r,m)=acc(r,m)+1; end
            end
        end
    end
end
Second one : Hough Transform for Circles
%Hough Transform for Circles
function HTCircle(inputimage,r)
%image size
[rows,columns]=size(inputimage);
%accumulator
acc=zeros(rows,columns);
%image
for x=1:columns
    for y=1:rows
        if(inputimage(y,x)==0)
            for ang=0:360
                t=(ang*pi)/180;
                x0=round(x-r*cos(t));
                y0=round(y-r*sin(t));
                if(x0<columns & x0>0 & y0<rows & y0>0)
                    acc(y0,x0)=acc(y0,x0)+1;
                end
            end
        end
    end
end
Third one : Hough Transform for Ellipses
%Hough Transform for Ellipses
function HTEllipse(inputimage,a,b)
%image size
[rows,columns]=size(inputimage);
%accumulator
acc=zeros(rows,columns);
%image
for x=1:columns
    for y=1:rows
        if(inputimage(y,x)==0)
            for ang=0:360
                t=(ang*pi)/180;
                x0=round(x-a*cos(t));
                y0=round(y-b*sin(t));
                if(x0<columns & x0>0 & y0<rows & y0>0)
                    acc(y0,x0)=acc(y0,x0)+1;
                end
            end
        end
    end
end
I have images (PNG) that I need to run these programs on.
But I cannot seem to run them.
I create a new script, paste the code, save it, and in the main window I call the function, passing the path to the image as a parameter. It does nothing, no message or anything.
Your functions don't return any value. That means you have to add a return argument to those functions (in the case of the Hough transform, you'd want to return the accumulator array acc) and, as described in the manual (http://www.mathworks.de/de/help/matlab/ref/function.html), change the function headers to:
function acc = HTPLine(inputimage)
and then also call it like this from the commandline (or from another script):
IMG = imread('some_image.jpg');
% e.g. convert to grayscale:
IMG = rgb2gray(IMG);
acc = HTPLine(IMG);
You then still need to find the maximum (or several maxima, depending on how many lines you'd like to fit) in the accumulator and display the fitted line (ellipse/circle) on your own via plot in a figure.
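For example, a rough sketch for the line case (hypothetical code; it ignores the degenerate case where sin(theta) is zero):

[~, idx] = max(acc(:));               % strongest accumulator cell
[r, m] = ind2sub(size(acc), idx);     % r = radius, m = angle in degrees
theta = (m*pi)/180;
x = 1:size(IMG,2);
y = (r - x*cos(theta)) / sin(theta);  % from r = x*cos(theta) + y*sin(theta)
figure; imshow(IMG); hold on;
plot(x, y, 'r', 'LineWidth', 2);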
EDIT: One thing to watch out for: when this code is copied from the book or from websites, the comparison operators in conditions like if(r<rmax & r>0) are sometimes stripped out, leaving something like if(r0). That is not valid MATLAB; r0 is not defined anywhere, so it will definitely give you an error message. The same applies to the x0/y0 bounds checks in the other two functions.
Also, why do you have the if(inputimage(y,x)==0) check in your code? The pixel at the current point doesn't have to be black in order to calculate the Hough transform for that single pixel (just think of a grey image with a white line in it: your code wouldn't do anything, as the image wouldn't contain any black pixels). So that's something you need to rethink.
If you want to, you could also use MATLAB's built-in Hough transform: http://www.mathworks.de/de/help/images/ref/hough.html // http://www.mathworks.de/de/help/images/ref/houghpeaks.html
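A quick sketch of the built-in route (houghlines is assumed here in addition to the two functions linked above):

BW = edge(rgb2gray(imread('some_image.jpg')), 'canny');  % binary edge map
[H, theta, rho] = hough(BW);                             % accumulator and its axes
peaks = houghpeaks(H, 5);                                % pick the 5 strongest peaks
lines = houghlines(BW, theta, rho, peaks);               % line segments in the image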