I have a set of points that I want to propagate onto the edge of a shape boundary defined by a binary image. The boundary is a 1 px wide white edge.
The coordinates of these points are stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes and is made up of around 2500 points. I have approximately 80 to 150 points that I wish to propagate onto the shape boundary.
I want to cast a ray from each point in an orthogonal direction and detect where it intersects the shape boundary. The orthogonal direction has already been determined: for each point it is the normal of the contour, computed from the neighbouring points point-1 and point+1.
What would be the best method to do this?
Is there some sort of ray-tracing algorithm that could be used?
Thank you very much in advance for any help!
EDIT: I have tried to make the question much clearer and added an image describing the problem. In the image the grey line represents the shape contour, the red dots the points I want to propagate, and the green line an imaginary orthogonally cast ray.
(Image: http://img504.imageshack.us/img504/3107/orth.png)
ANOTHER EDIT: For clarification I have posted the code used to calculate the normals for each point, where xt and yt are vectors storing the coordinates of each point. After calculating the normal it can be propagated using the linspace function and the requested length of the orthogonal line.
% derivatives of the contour (central differences, one-sided at the ends)
dx = [xt(2)-xt(1), (xt(3:end)-xt(1:end-2))/2, xt(end)-xt(end-1)];
dy = [yt(2)-yt(1), (yt(3:end)-yt(1:end-2))/2, yt(end)-yt(end-1)];
% unit normals of the contour points
l  = sqrt(dx.^2 + dy.^2);
nx = -dy./l;
ny =  dx./l;
normals = [nx; ny];   % 2-by-n, matching the 2-row point layout
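For illustration only, here is a possible sketch of the propagation step mentioned above, sampling points along the normal of the k-th contour point with linspace (the index k, the line length L and the sample count are assumptions, not part of the original code):

k = 10;                         % index of the point to propagate (assumed)
L = 20;                         % requested length of the orthogonal line (assumed)
t = linspace(0, L, 50);         % 50 samples along the ray
rayX = xt(k) + t*nx(k);         % x coordinates of the cast ray
rayY = yt(k) + t*ny(k);         % y coordinates of the cast ray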
It depends on how many rays you want to test against one shape. If you have one shape and many tests, the easiest thing is probably to convert your shape coordinates to polar coordinates, which already implicitly represent your solution. This may not be a very efficient approach, however, if you have different shapes and only a few tests per shape.
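As a minimal sketch of that polar-coordinate idea, assuming all rays start at the origin and shapeBoundary is an n-by-2 list of [x y] boundary pixels (shapeBoundary, xv and yv are placeholder names, not from the original answer):

[phi, r]   = cart2pol(shapeBoundary(:,1), shapeBoundary(:,2));  % boundary in polar form
rayAngle   = atan2(yv, xv);                                     % direction of the ray to test
angDiff    = abs(mod(phi - rayAngle + pi, 2*pi) - pi);          % wrapped angular difference
candidates = find(angDiff < 0.5 ./ max(r, 1));                  % ~half a pixel of angular tolerance
[~, k]     = min(r(candidates));                                % nearest boundary point on the ray
hitIdx     = candidates(k);                                     % index into shapeBoundary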
Update based on the edited question:
If the rays can start from arbitrary points, not only from the origin, you have to test against all the boundary points. This can be done easily by transforming your shape boundary so that the ray under test starts at the origin and points along a coordinate direction (positive x in my example code):
% vector of shape boundary points (assumed to be image coordinates, i.e. integers)
shapeBoundary = [xs, ys];

% define the start point and direction you want to test
startPoint = [xsp, ysp];
testVector = [xv, yv] / norm([xv, yv]);   % unit direction vector

% now transform the shape boundary: translate the start point to the origin...
shapeBoundaryTrans(:,1) = shapeBoundary(:,1) - startPoint(1);
shapeBoundaryTrans(:,2) = shapeBoundary(:,2) - startPoint(2);

% ...and rotate so that testVector maps onto the positive x-axis
rotMatrix = [ testVector(1), testVector(2); ...
             -testVector(2), testVector(1)];
% somewhat strange transformation to keep it vectorized
shapeBoundaryTrans = shapeBoundaryTrans * rotMatrix';

% now the test is easy: find the points close to the positive x-axis
selector = (abs(shapeBoundaryTrans(:,2)) < 0.5) & (shapeBoundaryTrans(:,1) > 0);
% store the original point indices in the second column before selecting
shapeBoundaryTrans(:,2) = (1:size(shapeBoundaryTrans, 1))';
shapeBoundaryReduced = shapeBoundaryTrans(selector, :);
if (~isempty(shapeBoundaryReduced))
    % the nearest selected point along the ray is the intersection
    [dummy, idx] = min(shapeBoundaryReduced(:, 1));
    collIdx = shapeBoundaryReduced(idx, 2);
    % you have a collision with point collIdx of your shapeBoundary
else
    % no collision
end
This could be done in a nicer way probably, but you get the idea...
If I understand your problem correctly (project each point onto the closest point of the shape boundary), you can
use sub2ind to turn the "2 row by n column matrix" of coordinates into a BW image with white pixels, something like
myimage = zeros(imagesize);
myimage(sub2ind(imagesize, y_coords, x_coords)) = 1;
use imfill to fill the outside of the boundary
run [D,L] = bwdist(BW) on the resulting image, and just read the answers from L.
Should be fairly straightforward.
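Putting those steps together under my reading of the problem (project each query point onto the nearest boundary pixel), a minimal sketch could look like the following; boundaryBW (the 1 px wide boundary image) and points (the 2-row by n-column coordinate matrix) are assumed names:

[~, L] = bwdist(boundaryBW);                        % L(i) = index of nearest boundary pixel
queryIdx = sub2ind(size(boundaryBW), ...
                   round(points(2,:)), round(points(1,:)));
[by, bx] = ind2sub(size(boundaryBW), double(L(queryIdx)));  % nearest boundary pixel per point
projected = [bx; by];                               % 2-by-n, same layout as the input points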
I'm trying to get the trajectory from a Voronoi diagram using the voronoi function in MATLAB. I'm using this code:
vo = (all the obstacle points from a binary picture, plotted in a figure), where:
vo(1,:) : x-axis points
vo(2,:) : y-axis points
Code:
figure; hold on;
plot(vo(1,:),vo(2,:),'sr');
[vx,vy] = voronoi(vo(1,:),vo(2,:));
plot(vx,vy,'-b');
Obtaining a figure with the obstacle points and all Voronoi edges, including many spurious edges that pass through the obstacles.
In other words, how can I separate all the useless lines from the real trajectory?
The distinguishing property of the useless lines in this case is that they contain at least one vertex inside the polygonal obstacles.
There are many ways you might choose to decide if a vertex satisfies this condition but given that the coordinates come from a binary image, one of the most straightforward might be to check if the vertex comes within a pixel's distance of any point in vo:
[~,D] = knnsearch(vo',[vx(:),vy(:)]);   % vo' so the obstacle points are rows
inObstacle = any(reshape(D,size(vx)) < 1);
plot(vx(:,~inObstacle),vy(:,~inObstacle),'-b');
If you cannot rely on the obstacles being filled with pixels (e.g. they may be hollow) then you probably need to determine which pixels belong to the same obstacle (perhaps using kmeans or bwconncomp) and then eliminate Voronoi edges that intrude into each object's convex hull. For a given obstacle made up of the points in the columns of vo indexed by idx:
K = convhull(vo(1,idx), vo(2,idx));
inObstacle = inpolygon(vx, vy, vo(1,idx(K)), vo(2,idx(K)));
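If you go the bwconncomp route, a possible sketch of the full loop might look like this; imgSize (the size of the original binary picture) is an assumed name, and vo is the 2-by-n point list from the question:

% Rebuild a binary image from the obstacle points, label connected obstacles,
% and drop Voronoi edges that have a vertex inside any obstacle's convex hull.
BW = false(imgSize);
BW(sub2ind(imgSize, round(vo(2,:)), round(vo(1,:)))) = true;
CC = bwconncomp(BW);

inObstacle = false(size(vx));
for k = 1:CC.NumObjects
    [yk, xk] = ind2sub(imgSize, CC.PixelIdxList{k});   % pixels of obstacle k
    K = convhull(xk, yk);                              % its convex hull (needs >= 3 pixels)
    inObstacle = inObstacle | inpolygon(vx, vy, xk(K), yk(K));
end

badEdge = any(inObstacle, 1);                          % edge has a vertex in an obstacle
plot(vx(:,~badEdge), vy(:,~badEdge), '-b');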
For my project I have to segment the abnormalities in a CT brain image.
I want to do that by comparing the right side of the brain with the left side. This can be done using the intensity differences in the image; for example, blood is brighter than brain tissue in CT images. Because the right and left sides of the brain are nearly symmetric, it is possible to find an abnormality on one side by comparing it with the other side. Using MATLAB, I want to work with the DICOM files of the CT images and segment the abnormal area by comparing both sides of the brain.
After segmenting the abnormalities in 2D, I want to register the 2D images and create a 3D reconstruction.
Does anyone know what the best coding approach is (in MATLAB) for comparing the left and right side of a DICOM image?
Maybe take a look at this paper: http://www.sciencedirect.com/science/article/pii/S0167865503000497
It explains how to find the symmetry plane in 3D MRI images, but the method should also work on CT. First you search for the center of mass in your image. Next you calculate the axes of the ellipsoid of inertia and evaluate the symmetry. Finally, you can improve the symmetry plane using the downhill simplex method.
Hope this helps!
EDIT: here is how I would tackle this problem:
Search for the center of mass R:
% linear indices of every voxel, converted to subscripts
ind = find(ones(size(image)));
ind = reshape(ind, size(image,1), size(image,2), size(image,3)); % for a 3D volume
[x,y,z] = ind2sub(size(image), ind); % for a 3D volume
% intensity-weighted coordinates, normalised by the total intensity
Rx = image.*x;
Ry = image.*y;
Rz = image.*z;
Rx = round(1/sum(image(:)) * sum(Rx(:)));
Ry = round(1/sum(image(:)) * sum(Ry(:)));
Rz = round(1/sum(image(:)) * sum(Rz(:)));
Rx, Ry and Rz now contain the position of the center of mass in your image. The code is easily adaptable to 2D.
Now, look for the axes of the ellipsoid of inertia:
M = zeros(3);   % matrix of second-order central moments
for p = 0:2
    for q = 0:2
        for r = 0:2
            if p+q+r == 2
                integr = image.*(x-Rx).^p.*(y-Ry).^q.*(z-Rz).^r;
                m = sum(integr(:));
                if p==2,           xx=1; yy=1;
                elseif p>0 && q>0, xx=1; yy=2;
                elseif p>0 && r>0, xx=1; yy=3;
                elseif q==2,       xx=2; yy=2;
                elseif q>0 && r>0, xx=2; yy=3;
                elseif r==2,       xx=3; yy=3;
                end
                M(xx,yy) = m;
                M(yy,xx) = m;
            end
        end
    end
end
[V,~] = eig(M);
The matrix V contains the directions of the axes of the ellipsoid of inertia. These are first guesses for the symmetry plane.
Evaluate the symmetry. This is the hard part, because you have to mirror the image across each of the three (or two, in 2D) candidate symmetry planes. I have used the affine3d and imwarp commands, but it's quite cumbersome. Make sure to define the different axes through the center of mass found before. A possible measure of symmetry is mu = 1 - ||image - mirrored_image||^2 / (2*||image||^2). The axis with the highest mu value gives the best symmetry plane.
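For illustration, a minimal sketch of that symmetry measure, assuming img and imgMirrored are same-sized double arrays (imgMirrored being the image reflected across one candidate plane; both names are assumptions):

% mu = 1 - ||img - imgMirrored||^2 / (2*||img||^2); values close to 1
% indicate a good symmetry plane.
diffNormSq = sum((img(:) - imgMirrored(:)).^2);
imgNormSq  = sum(img(:).^2);
mu = 1 - diffNormSq / (2 * imgNormSq);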
If you are not happy about the symmetry axis, you can improve it using the downhill simplex method, see e.g. https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
Now you have your original image and the mirrored image around the midsagittal plane. Subtracting the two should give you an idea of the abnormalities.
I hope this is clear. For more information, please check the excellent paper by Tuzikov et al. mentioned above.
I have written MATLAB code to create the attached figure using the voronoi function. My region of interest is the red circle, hence the seeds for voronoi were kept within that region. I now want to reduce the area of each Voronoi cell so that adjacent cells are separated by a gap.
Idea: One approach would be to use a homothetic transformation of the Voronoi cell C{k} about the corresponding point X(k,:), with a ratio R such that 0 < R < 1. The shape of the cells (the number of corners and their associated angles) will be preserved, and the areas will be reduced proportionally (i.e. by a factor R^2, not by a constant amount).
Please note that this will "destroy" your cells, because the reduced Voronoi cells will no longer share vertices/edges, so the [V,C] representation no longer works as it is. Also, the distances between what were once common edges will depend on the areas of the original cells (bigger cells, bigger distances between adjacent edges).
Transformation example for two 2D points:
A = [1,2];          % centre
B = [10,1];         % point to be transformed
R = 0.8;            % transformation ratio
trB = A + R*(B-A);  % transformed point
I couldn't follow your implementation of CST-link's idea, but here is one that works (I tested it in MATLAB, but not yet in Abaqus; the code it spits out looks like something Abaqus should be happy with):
rng(0);
x = rand(40,2);
plot(x(:,1), x(:,2), 'x')
[v,c] = voronoin(x);
fact = 0.9;
for i = 1:length(c)
    cur_cell = c{i};
    coords = v(cur_cell,:);
    if all(isfinite(coords(:)))          % skip unbounded cells
        %fact = somefunctionofarea?;
        centre = x(i,:);                 % I used the Voronoi seeds as my centres
        coords = bsxfun(@minus, coords, centre);   % move everything to a local coord sys centred on the seed point
        [theta, rho] = cart2pol(coords(:,1), coords(:,2));
        [xnew, ynew] = pol2cart(theta, rho*fact);
        xnew = xnew + centre(1);         % put back in real coords
        ynew = ynew + centre(2);
        xnew2 = circshift(xnew,1);
        ynew2 = circshift(ynew,1);
        fprintf('s1.Line(point1=(%f,%f),point2=(%f,%f))\n', ...
            [xnew, ynew, xnew2, ynew2]');
        line(xnew, ynew);                % testing purposes - doesn't plot last side in matlab
    end
end
Having seen the results of this one, I think you will need a different factor to shrink your sides: either subtract a fixed amount or use some other formula.
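One simple variant along those lines (my own sketch, not tested in Abaqus): shrink each cell by a fixed radial margin d instead of a ratio, so the gap between neighbouring cells stays roughly constant. This would replace the pol2cart line inside the loop above; d is an assumed parameter:

d = 0.01;                                          % fixed margin, same units as the coordinates
[theta, rho] = cart2pol(coords(:,1), coords(:,2));
[xnew, ynew] = pol2cart(theta, max(rho - d, 0));   % clamp so vertices never cross the seed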
What we want is to draw several solid circles at random locations, with random gray scale colors, on a dark gray background. How can we do this? Also, if the circles overlap, we need them to change color in the overlapping part.
Since this is an assignment for school, we are not looking for ready-made answers, but for a guide which tools to use in MATLAB!
Here's a checklist of things I would investigate if you want to do this properly:
Figure out how to draw circles in MATLAB. Because you don't have the Image Processing Toolbox (see comments), you will probably have to make a function yourself. I'll give you some starter code:
function [xout, yout] = circle(x,y,r,rows,cols)
    [X,Y] = meshgrid(x-r:x+r, y-r:y+r);
    ind = find((X-x).^2 + (Y-y).^2 <= r^2 & X >= 1 & X <= cols & Y >= 1 & Y <= rows);
    xout = X(ind);
    yout = Y(ind);
end
The above function takes in an (x,y) co-ordinate as well as the radius of the circle. You also need to specify how many rows and columns you want in your image; this is so the function avoids giving you co-ordinates that are out of bounds and cannot be drawn. The final output gives you the co-ordinates of all values inside and along the boundary of the circle. These co-ordinates are already integers, so there is no need for any rounding, and they will fit directly when you assign them to locations in your image. One caveat to note is that the co-ordinates assume an inverted Cartesian system: the top left corner is the origin, x values increase from left to right, and y values increase from top to bottom. You'll need to keep this convention in mind when drawing circles in your image.
Take a look at the rand class of functions. rand will generate random values for you and so you can use these to generate a random set of co-ordinates - each of these co-ordinates can thus serve as your centre. In addition, you can use this class of functions to help you figure out how big you want your circles and also what shade of gray you want your circles to be.
Take a look at set operations (logical AND, logical OR) etc. You can use a logical AND to find any circles that are intersecting with each other. When you find these areas, you can fill each of these areas with a different shade of gray. Again, the rand functions will also be of use here.
As such, here is a (possible) algorithm to help you do this:
Take a matrix of whatever size you want, and initialize all of the elements to dark gray. Perhaps an intensity of 32 may work.
Generate a random set of (x,y) co-ordinates, a random set of radii and a random set of intensity values for each circle.
For each pair of circles, check to see if there are any co-ordinates that intersect with each other. If there are such co-ordinates, generate a random shade of gray and fill in these co-ordinates with this new shade of gray. A possible way to do this would be to take each set of co-ordinates of the two circles and draw them on separate temporary images. You would then use the logical AND operator to find where the circles intersect.
Now that you have your circles, you can plot them all. Take a look at how plot works with plotting matrices. That way you don't have to loop through all of the circles as it'll be inefficient.
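As a rough illustration of steps 1 and 2 above, painting a single random circle with the circle() helper could look like this (the sizes and the variable names img, cx, cy, r and shade are my own assumptions):

rows = 200; cols = 200;
img = 32*ones(rows, cols);              % dark gray background
cx = randi(cols);  cy = randi(rows);    % random centre
r  = randi([10 40]);                    % random radius
shade = randi([100 255]);               % random gray shade for this circle
[xout, yout] = circle(cx, cy, r, rows, cols);
img(sub2ind([rows cols], yout, xout)) = shade;
imagesc(img); colormap gray; axis image;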
Good luck!
Let's get you home, shall we? This stays away from Image Processing Toolbox functions, so hopefully it will work for you too.
Code
%% Parameters
numc = 5;
graph_size = [300 300];
max_r = 100;
r_arr = randperm(max_r/2,numc)+max_r/2
cpts = [randperm(graph_size(1)-max_r,numc)' randperm(graph_size(2)-max_r,numc)']
color1 = randperm(155,numc)+100
prev = zeros(graph_size(1),graph_size(2));
for k = 1:numc
    r = r_arr(k);
    curr = zeros(graph_size(1),graph_size(2));
    curr(cpts(k,1):cpts(k,1)+r-1, cpts(k,2):cpts(k,2)+r-1) = color1(k)*imcircle(r);
    common_blob = prev & curr;
    curr = prev + curr;
    curr(common_blob) = min(color1(1),color1(2))-50;
    prev = curr;
end
figure, imagesc(curr), colormap gray
% Please note that the code uses a MATLAB file-exchange tool called
% imcircle, which is available at -
% http://www.mathworks.com/matlabcentral/fileexchange/128-imcircle
Screenshot of a sample run
As you said your problem is an assignment for school, I will not tell you exactly how to do it, but rather what you should look at.
You should be familiar with how 2D arrays (matrices) work and how to display them using image/imagesc/imshow;
you should look at the strel function;
you should look at the rand/randn functions;
such concepts should be enough for the assignment.
I found an implementation of the Hough transform in MATLAB at Rosetta Code, but I'm having trouble understanding it. Also I would like to modify it to show the original image and the reconstructed lines (de-Houghing).
Any help in understanding it and de-Houghing is appreciated. Thanks
Why is the image flipped?
theImage = flipud(theImage);
I can't wrap my head around the norm function. What is its purpose, and can it be avoided?
EDIT: norm here just computes the Euclidean length of the vector: sqrt(width^2 + height^2), i.e. the image diagonal.
rhoLimit = norm([width height]);
Can someone provide an explanation of how/why rho, theta, and houghSpace are calculated?
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
How would I de-Hough the Hough space to recreate the lines?
Calling the function using a 10x10 image of a diagonal line created using the identity (eye) function
theImage = eye(10)
thetaSampleFrequency = 0.1
[rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
The actual function
function [rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)

    %Define the hough space
    theImage = flipud(theImage);
    [width,height] = size(theImage);

    rhoLimit = norm([width height]);
    rho   = (-rhoLimit:1:rhoLimit);
    theta = (0:thetaSampleFrequency:pi);

    numThetas  = numel(theta);
    houghSpace = zeros(numel(rho),numThetas);

    %Find the "edge" pixels
    [xIndicies,yIndicies] = find(theImage);

    %Preallocate space for the accumulator array
    numEdgePixels = numel(xIndicies);
    accumulator   = zeros(numEdgePixels,numThetas);

    %Preallocate cosine and sine calculations to increase speed. In
    %addition to precalculating sine and cosine we are also multiplying
    %them by the proper pixel weights such that the rows will be indexed by
    %the pixel number and the columns will be indexed by the thetas.
    %Example: cosine(3,:) is 2*cosine(0 to pi)
    %         cosine(:,1) is (0 to width of image)*cosine(0)
    cosine = (0:width-1)'*cos(theta);  %matrix outer product
    sine   = (0:height-1)'*sin(theta); %matrix outer product

    accumulator((1:numEdgePixels),:) = cosine(xIndicies,:) + sine(yIndicies,:);

    %Scan over the thetas and bin the rhos
    for i = (1:numThetas)
        houghSpace(:,i) = hist(accumulator(:,i),rho);
    end

    pcolor(theta,rho,houghSpace);
    shading flat;
    title('Hough Transform');
    xlabel('Theta (radians)');
    ylabel('Rho (pixels)');
    colormap('gray');

end
The Hough Transform is a "voting" approach where each image point casts a vote on the existence of a certain line (not a line segment) in an image. The voting is carried out in the parameter space for a line: the polar coordinate representation of normal vectors.
We discretize the parameter space and allow each image point to suggest parameters which would be compatible with a line through the point. Each of your questions can be addressed in terms of how the parameter space is treated in code. Wikipedia has a good article with worked examples that might clarify things (if you are having any conceptual troubles).
For your specific questions:
The image is flipped so the origin is the bottom left corner. As far as I can tell this step is not technically necessary. It does change the outcome somewhat due to discretization issues. The other implementations on Rosetta Code do not flip the image.
rhoLimit holds the maximum radius of an image point in polar coordinates (recall the norm of a vector is its magnitude).
rho and theta are discretizations of the polar coordinate plane according to a sampling rate. houghSpace creates a matrix with an element for each possible combination of the discrete rho/theta values.
The Hough Transform does not specify the lengths of putative lines; the peaks in the voting space just specify the polar coordinates of the normal vector of the line. You can "de-Hough" by selecting the peaks and drawing the corresponding lines, or perhaps by drawing every possible line and using the number of votes as a grayscale weight. It is not possible to re-create the original image from the Hough Transform, just the lines identified by the transform (and your thresholding scheme on the votes).
Following the example from the question produces the following graph. The placement of grid lines and the datatips cursor can be a bit misleading (though the variable values in the tip are correct). Since this is an image of the parameter space and not the image space, the sampling rate we chose determines the number of bins in each variable. At this sampling rate, the image points are compatible with more than one possible line; in other words our lines have subpixel resolution, in the sense that they cannot be drawn without overlap in a 10x10 image.
Once we have chosen a peak, such as the one corresponding to the line with normal (rho,theta) = (6.858,0.9), we can draw that line in an image however we choose. Automated peak picking, that is thresholding to find the highly up-voted lines, is its own problem - you could ask another question about the topic on the DSP site or about a particular algorithm here.
For example methods see the code and documentation of MATLAB's houghpeaks and houghlines functions.
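As an illustration only (not part of the original answer), drawing the line for a single peak such as (rho,theta) = (6.858,0.9) could be sketched as follows, using the same coordinate convention as houghTransform above (x runs over rows of the flipped image, y over columns, both zero-based):

peakRho = 6.858;  peakTheta = 0.9;              % example peak from the text
imagesc(flipud(theImage)); colormap gray; axis image; hold on;
% points on the line satisfy x*cos(theta) + y*sin(theta) = rho
xLine = 0:0.1:size(theImage,1)-1;               % zero-based row coordinate
yLine = (peakRho - xLine*cos(peakTheta)) / sin(peakTheta);   % zero-based column coordinate
plot(yLine+1, xLine+1, 'r', 'LineWidth', 1.5);  % +1 for MATLAB's 1-based pixels
hold off;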