Determine the orientation of a triangle in MATLAB

I am working on a project to automatically land a quadrotor by visual recognition of a target. I have code to detect the target through HOG features. Now the idea is to find the triangle, which is isosceles, and measure its sides so that I can determine the orientation that way. I have tried a Hough transform, but I could not make it work.
The target is a proposed one, and it consists of an isosceles triangle inside a circle. But if you can think of a better one, please let me know.
Please ask if anything is unclear. Thank you very much.
Update 1:
@McMa's idea works well when I deal with the target image alone. This is the code:
clc; close all;
im = imread('target.bmp');
im = rgb2gray(im);
im2 = imcrop(im, [467.51 385.51 148.98 61.98]);
im2 = imcomplement(im2);
im2 = imrotate(im2, 0);
s = regionprops(im2, 'Area', 'Centroid', 'Extrema', 'Orientation');
[imH, imW] = size(im2);
if imH - s(end).Centroid(2) < imH/2
    state = 1; % Upright
else
    state = 2; % Upside down
end
imshow(im2); hold on
plot(s(end).Centroid(1), s(end).Centroid(2), 'b*')
if s(end).Orientation > 0
    degrees = s(end).Orientation;
else
    degrees = s(end).Orientation + 180;
end
if (0 < degrees) && (degrees < 89.99) && state == 2
    degrees = degrees + 180;
elseif (90 < degrees) && (degrees < 179) && state == 1
    degrees = degrees + 180;
end
fprintf('The orientation is %g degrees\n', degrees)
Update 2:
Now I have another problem: before computing the orientation, I need to know whether the camera is seeing the whole target or only the small circle plus triangle.
I have tried many options. For example, I wanted to count the circles: if there are 2, the camera sees the big target; if there is 1, just the small one. But they are not detected reliably, and even playing with the sensitivity does not make this a robust method.
Image: https://www.dropbox.com/s/7mbpna3xfquq5n7/P0016.bmp?dl=0
Classifier: https://www.dropbox.com/s/236vm3romw56983/Cascade1Matlab.xml?dl=0
im = imread('P0016.bmp');
detector = vision.CascadeObjectDetector('Cascade1Matlab.xml');
bbox = step(detector, im); % Detect the target.
detectedImg = insertObjectAnnotation(im, 'rectangle', bbox, 'target'); % Insert bounding boxes and return the marked image.
imshow(detectedImg)
BW = rgb2gray(im);
BW = imcrop(BW, bbox(1,:) + [0 0 10 10]);
[imH, imW, ~] = size(im); % third output needed: im is RGB, so plain [imH,imW]=size(im) would give imW = width*3
centers = imfindcircles(im, [1 round(imH)]);
figure; hold on;
imshow(im);
plot(centers(:,1), centers(:,2), 'r*', 'LineWidth', 4)
I also tried other approaches, such as the Euler number, but with no success; I can't find anything that works properly.

I think the easiest and fastest way would be to find your target, binarize the image, and then use regionprops() and read its "Orientation" property.
If you can't use that toolbox, the function is easily implemented by calculating the covariance matrix of your region. Let me know if you need some tips on this.
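For instance, a minimal sketch (bw here is assumed to be your already-binarized target):
s = regionprops(bw, 'Orientation');
angle = s(1).Orientation; % degrees from the x-axis, in the range [-90, 90]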
Edit:
I just so happened to have some nicely vectorized functions around here ;) so if speed is a top priority, you can easily write your own regionprops() trimmed to the bare minimum like this:
function M = ImMoment(Image, ii, jj)
    ImSize = size(Image);
    K = repmat((1:ImSize(1))', 1, ImSize(2)).^ii;
    J = repmat(1:ImSize(2), ImSize(1), 1).^jj;
    M = K.*J.*Image;
    M = sum(M(:));
end
for the image moments and
function [Matrix, Centroid, Angle] = CovMat(Image)
    Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
                ImMoment(Image,1,0)/ImMoment(Image,0,0)];
    Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
    Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
    Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
    Matrix = [Miu20, Miu11   % Covariance matrix, in case you need it for anything...
              Miu11, Miu02];
    Angle = 1/2*atand(2*Miu11/(Miu20 - Miu02)); % Your orientation
end
for your orientation and covariance matrix. More about it here.
Image moments are very mighty, have fun!
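For instance, a minimal usage sketch (the file name and the binarization step are assumptions, not part of the question):
bw = im2bw(rgb2gray(imread('target.bmp'))); % binarize the target first
[C, c, angle] = CovMat(double(bw));         % covariance matrix, centroid, orientation
fprintf('Orientation: %.2f degrees\n', angle);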

I have a very blunt solution in mind. It may work. I have not actually tried it, since there is no image to work on, so if it fails, post the error.
Assumption: you have filtered the image and obtained a binary image that contains only the triangle, or the triangle plus uniform noise.
Now take a 0-degree image (image1), filter it, and obtain a binary image (bw1).
When you are trying to land your quadrotor, take an image (image2) and convert it to binary (bw2).
Now find the correlation between these two images, corr2(bw1, bw2), and store it in a variable.
Rotate the image by a step angle; let the angle be 5 degrees: imrotate(bw2, 5).
Now again find the correlation between the two images.
Do this for all angles.
The orientation would be the angle (number of rotations * 5) at which the correlation is maximum.
The term maximum signifies that you may not find a correlation of exactly 1, as this depends heavily on how clean a binary image your filtering produces.
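A minimal sketch of this rotate-and-correlate loop (assuming bw1 and bw2 are same-size binary images; the 5-degree step and the 'crop' option are just the choices discussed above):
step = 5;                       % step angle in degrees
angles = 0:step:360-step;
scores = zeros(size(angles));
for k = 1:numel(angles)
    rot = imrotate(bw2, angles(k), 'crop'); % 'crop' keeps the output the same size as bw2
    scores(k) = corr2(double(bw1), double(rot));
end
[~, best] = max(scores);
fprintf('Estimated orientation: %g degrees\n', angles(best));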
I also accept that computing the correlation for all angles requires high computation power as well as a long time; this would be really difficult to achieve in real time without serious hardware. (In that case you can look into the Parallel Computing Toolbox, especially parfor.)
Hope this was useful to you. Post a comment if you face any error.
Finally, good luck. Nice project.
P.S. Pad with white or black pixels (depending on your binary image) while rotating the image.

Related

Rotating a template to match an edge

(I use MATLAB R2015a). I have a plot of the edge points of an object (obtained using edge detection) and a plot of the template of the object. I want to rotate the template until it matches the detected edge points. (Figure link included: solid blue is the template, red dots are the edge-detected points; the rotation is subtle, but it's there.)
I plan to rotate the template in a loop about the centroid through different values of theta (which I know how to do), and ask the code to stop executing when the template matches the edge (which is what I want to know how to do) and return the corresponding theta.
The number of points making up the template and the number making up the edges are not the same, so splitting the plots into 3 lines and 1 half-ellipse and comparing directly does not work.
Using the regionprops 'Orientation' property does not give the expected result for each frame, because of the way the edges are detected in each frame. (I can elaborate on this if required.)
I have intentionally plotted the points using plot, rather than keeping the edge as a BW image, because otherwise I would have to round off indices while creating the template, and for my application I cannot afford to lose precision like that.
I'm not lazy; I don't want somebody to just code it up for me. None of my ideas worked, and I'm unable to think differently, so perhaps somebody with a fresh mind and more experience in MATLAB will have an idea.
Assuming both the image and the template are bitmaps (i.e. the template isn't given as 3 lines + half an ellipse), and you know the position and size of the template, just not the angle:
For each edge point, find the closest point on the template and sum all the distances, or perhaps the sum of the square roots of the distances, or some other metric that penalizes many small deviations more than a few large ones: many points will be slightly wrong if the rotation is slightly wrong, while the few points that are completely wrong are noise to ignore. So, something like:
minDist = inf;
minAngle = -1;
for t = 1:length(thetas)
    templatePoints = ...; % calculate template points rotated by thetas(t), as an Nx2 array
    currentDist = 0;
    for i = 1:size(edges, 1)
        edge = edges(i, :); % (x, y) edge point
        % distance from this edge point to every template point
        d = sqrt(sum(bsxfun(@minus, templatePoints, edge).^2, 2));
        currentDist = currentDist + min(d); % accumulate the closest-point distance
    end
    if currentDist < minDist
        minDist = currentDist;
        minAngle = thetas(t);
    end
end
When you complete the full circle of template rotations, select the rotation with the lowest difference.
This procedure can be a bit problematic: the template might be rotated by 10.5 deg while you are stepping in 1-deg increments, so you will not end up with the exactly optimal angle. Plus you will try tons of completely wrong angles, slowing you down.
But to find the optimal angle you can vary the angle step: say you first try every 10 deg, then every 1 deg around the minimum, then every 0.1 deg, etc. Or use an optimization method. Gradient descent should work fine if you don't have too much noise; use simulated annealing for noisy images. Use the same code as above; currentDist is a good enough optimization objective, and it should end up as close to 0 as possible.
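A sketch of that coarse-to-fine schedule (distFor is a hypothetical helper that returns currentDist for a given rotation angle):
thetas = 0:10:350;                           % coarse pass
[~, idx] = min(arrayfun(@distFor, thetas));
best = thetas(idx);
for step = [1 0.1]                           % refine around the current best
    thetas = (best - 5*step):step:(best + 5*step);
    [~, idx] = min(arrayfun(@distFor, thetas));
    best = thetas(idx);
end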
If the template size or position is unknown too, you definitely should use simulated annealing; gradient descent will almost surely get stuck in a local minimum. Use code similar to the above to calculate the difference between the edges and the template, and hand all the unknown parameters to the method.
One option is to make an image of your result for each rotation, but do not make it binary; make it grayscale. There are ways of rendering this so that the "maximum interpolated edge" lies where your continuous function is (such as Xiaolin Wu's line algorithm, for example).
As you are matching an image, you are not losing precision but putting the precision of your target at the same level as that of your image.
Once you have this, for each rotation use a metric to evaluate how well the two grayscale (yes, not binary) images match, e.g. the correlation coefficient, mutual information, or the universal quality index (UQI).
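For example, with the correlation coefficient (A and B being the two grayscale renderings; the names are hypothetical):
score = corr2(double(A), double(B)); % closer to 1 means a better match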

How to separate overlapping blobs? [duplicate]

I'm using the function regionprops to detect the number of trees in an image taken by a drone.
First I removed the ground using Blue NDVI:
Image with threshold:
Then I used the function regionprops to detect the number of trees in the image:
But there is a problem with region 15: all the trees in that region are connected, so they are detected as one tree.
I tried to separate the trees in that region using watershed segmentation, but it's not working:
Am I doing this the wrong way?
Is there a better method to separate the trees?
If anyone can help me with this problem, I will appreciate it. Here is region 15 without the ground:
If it helps, here is the Gradient Magnitude image:
It has been some time since this question was asked; I hope it is not too late for an answer. I see a general problem with using watershed segmentation in similar questions. Sometimes the objects are apart, not touching each other, like in this example. In such cases, blurring the image is enough before using watershed segmentation. Sometimes the objects are located close together and touch each other, so the boundaries of the objects are not clear, like in this example. In such cases, distance transform --> blur --> watershed helps. In this question, the logical approach is to use the distance transform. However, this time the boundaries are not clear due to shadows on and near the trees. In such cases, it is good to use any information that helps to separate the objects, as in here, or to emphasise the objects themselves.
In this question, I suggest using colour information to emphasise tree pixels.
Here is the MATLAB code and the results.
im = imread('https://i.stack.imgur.com/aBHUL.jpg');
im = im(58:500, 86:585, :);
imOrig = im;
%% Emphasize trees
im = double(im);
r = im(:,:,1);
g = im(:,:,2);
b = im(:,:,3);
tmp = (g - r)./(r - b);
figure
subplot(121); imagesc(tmp), axis image; colorbar
subplot(122); imagesc(tmp > 0), axis image; colorbar
%% Transforms
% Distance transform
im_dist = bwdist(tmp < 0);
% Blur
sigma = 10;
kernel = fspecial('gaussian', 4*sigma + 1, sigma);
im_blured = imfilter(im_dist, kernel, 'symmetric');
figure
subplot(121); imagesc(im_dist), axis image; colorbar
subplot(122); imagesc(im_blured), axis image; colorbar
% Watershed
L = watershed(max(im_blured(:)) - im_blured);
[x, y] = find(L == 0);
figure
subplot(121);
imagesc(imOrig), axis image
hold on, plot(y, x, 'r.', 'MarkerSize', 3)
%% Local thresholding
trees = zeros(size(im_dist));
centers = [];
for i = 1:max(L(:))
    ind = find(L == i & im_blured > 1);
    mask = L == i;
    [thr, metric] = multithresh(g(ind), 1);
    trees(ind) = g(ind) > thr*1;
    trees_individual = trees*0;
    trees_individual(ind) = g(ind) > thr*1;
    s = regionprops(trees_individual, 'Centroid');
    centers = [centers; cat(1, [], s.Centroid)];
end
subplot(122);
imagesc(trees), axis image
hold on, plot(y, x, 'r.', 'MarkerSize', 3)
subplot(121);
hold on, plot(centers(:,1), centers(:,2), 'k^', 'MarkerFaceColor', 'r', 'MarkerSize', 8)
You could try a marker-based watershed. Vanilla watershed transforms never work out of the box in my experience. One way to perform one is to first create a distance map of the segmented area using bwdist(). Then suppress the shallow local minima of the negated map by calling imhmin(). Calling watershed() after that usually performs noticeably better.
Here's a sample script on how to do it:
bwTrees = imopen(bwTrees, strel('disk', 10)); % stabilize the borders to lessen oversegmentation
distTrees = -bwdist(~bwTrees);    % distance transform (bwdist; MATLAB is case-sensitive)
distTrees(~bwTrees) = -Inf;       % set background to -Inf
distTrees = imhmin(distTrees, 3); % suppress shallow local minima
basins = watershed(distTrees);
ridges = basins == 0;
segmentedTrees = bwTrees & ~ridges; % segment
segmentedTrees = imopen(segmentedTrees, strel('disk', 2)); % remove 'segmentation trash' caused by oversegmentation near the borders
I fiddled around with the parameters for about 10 minutes but got fairly poor results:
You'd need to pour work into this, mostly in pre- and post-processing via morphology. More curvature would help the segmentation, if you could lower the sensitivity of the segmentation in the first part. The size of the h-minima transform is also a parameter of interest. You can probably get adequate results this way.
Probably a better approach would come from the world of clustering techniques. If you have, or can find, a way to estimate the number of trees in the forest, you should be able to use traditional clustering methods to segment out the trees. A Gaussian mixture model or k-means with k equal to the number of trees would probably work much better than a marker-based watershed, if you get even nearly the right number of trees. Normally I'd estimate the number of trees from the number of suppressed maxima in an h-maxima transform, but your labels might be a bit too sausagey for that. It's worth a try, though.
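For instance, a hedged sketch of the k-means route (bwTrees as the binary tree mask and the tree-count estimate nTrees are assumptions, not given in the question; kmeans needs the Statistics and Machine Learning Toolbox):
[rows, cols] = find(bwTrees);                 % coordinates of foreground pixels
nTrees = 12;                                  % hypothetical estimate of the tree count
labels = kmeans([rows, cols], nTrees, 'Replicates', 5);
figure, gscatter(cols, rows, labels); axis ij; axis image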

Evaluate straightness of an arbitrary contour

I want a metric for the straightness of a contour in my binary image (one that is relatively fast to compute). The image looks as follows:
Now, the contours in the red box are the ones I would preferably like removed, since they are not straight. These are the things I have tried; I am implementing in MATLAB.
1. Collect the row and column coordinates of each contour, then take the derivative. For straight objects (such as a rectangle), the derivative will be mostly low, with a few spikes (at the corners of the rectangle).
Problem: the coordinates are not collected in order, i.e. not in the order in which the contour would be traversed if we imagine it as a path. Therefore the derivative sometimes gives absurdly high values. Also, the contour is not absolutely straight; it is the output of an edge detection algorithm, so you can imagine there might be some discontinuity (see the rectangle at the bottom: the human eye can tell it is a rectangle even though it is not absolutely straight).
2. I thought about polyfit, but again the ordering issue comes up. Since it is a rectangle, I don't know how to apply polyfit to that point set.
Also, I would like to remove contours that are oriented vertically/horizontally. Basically this is a lane detection algorithm, and lanes cannot be absolutely vertical/horizontal.
Any ideas?
You should look deeper into the features of regionprops. To be fair, I took the script from this answer, but here it is:
BW = imread('lanes.png');
BW = im2bw(BW);
figure(1);
subplot(1,2,1);
imshow(BW);
cc = bwconncomp(BW);
l = labelmatrix(cc);
a_rp = regionprops(cc, 'Area', 'MajorAxisLength', 'MinorAxisLength', 'Orientation', 'PixelList', 'Eccentricity'); % note: cc, not the undefined CC
idx = [a_rp.Eccentricity] > 0.99 & [a_rp.Area] > 100 & [a_rp.Orientation] < 70 & [a_rp.Orientation] > -90;
BW2 = ismember(l, find(idx));
subplot(1,2,2);
imshow(BW2);
You can mess around with the properties; 'Orientation', 'Eccentricity', and 'Area' are probably the parameters you want to tune. I also played with the ratio of the major/minor axis lengths, but eccentricity basically captures that (eccentricity is a measure of how far an ellipse is from being circular). Here's the output:
I actually saw a good video from MATLAB specifically on lane detection using regionprops. I'll see if I can find it and link it.
You can segment your image using bwlabel, then work separately on each labelled connected object, using find. This should help solve your ordering problem.
About a metric, the only thing that comes to mind at the moment is to fit an ellipse and use the a/b (major axis/minor axis) ratio, basically the eccentricity, as the parameter. For example, a straight line (even if not perfect) will be fitted by an ellipse with a very large major axis and a very small minor axis, so you could set a ratio threshold of >10, etc. Fitting an ellipse can be done using this FEX submission, for example.
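A hedged sketch of the same axis-ratio idea using regionprops instead of an explicit ellipse fit (BW is assumed to be the binary contour image; the >10 threshold is the example value from above):
cc = bwconncomp(BW);
s = regionprops(cc, 'MajorAxisLength', 'MinorAxisLength');
ratio = [s.MajorAxisLength] ./ [s.MinorAxisLength];
BW2 = ismember(labelmatrix(cc), find(ratio > 10)); % keep only straight-ish contours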

How to remove camera noise in a CMOS camera

I have attached two consecutive frames captured by a CMOS camera with an IR filter. The object, a checkerboard, was stationary while the images were captured, yet the difference between the two images is nearly 31000 pixels, which could affect my results. Can you tell me what kind of noise this is and how I can remove it? Please suggest any algorithms or functions that could remove this noise.
Thank you. Sorry for my poor English.
Image1 : [1]: http://i45.tinypic.com/2wptqxl.jpg
Image2: [2]: http://i45.tinypic.com/v8knjn.jpg
That noise appears to come from the camera sensor (the Bayer-to-RGB conversion); the checkerboard pattern is still left in it.
Lossy JPEG compression also contributes a lot to the process. You should first get access to the raw images.
From those particular images, I'd first try edge detection filters (horizontal and vertical Sobel) to make a mask that selects between some median / local histogram equalization for the flat areas and some checkerboard-reducing filter for the edges. The point is that probably no single filter can deal well with both the JPEG ringing artifacts and the jagged edges. Then the real question is: what other kinds of images need to be processed?
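A hypothetical sketch of that Sobel-based mask (the grayscale image grayIm and the threshold value are assumptions):
hS = fspecial('sobel');             % kernel emphasizing horizontal edges
eH = imfilter(double(grayIm), hS);
eV = imfilter(double(grayIm), hS'); % transposed kernel for vertical edges
edgeMask = hypot(eH, eV) > 50;      % edge areas get the checkerboard-reducing filter, the rest gets smoothing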
From the comments: if the corner points are to be made exact, then the solution is more likely to search for features (corner points with subpixel resolution), build a mapping from one image's set of points to the other image's set of corners, and search for the best affine transformation matrix that converts one set to the other. With this matrix one can then resample the other image.
One can fortunately estimate motion vectors with subpixel resolution without brute-force searching all possible subpixel locations: when calculating a matched filter, one gets local maxima as candidates for exact matches. But this is not all there is: one can calculate a more precise approximation of the peak location by studying the matched filter outputs at the nearby pixels. For an exact match the output should be symmetric; otherwise the 'energies' of the matched filter are biased towards the second-best location. (A 2nd-degree polynomial fit + finding its maximum can work.)
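A 1-D illustration of that polynomial refinement (r is a hypothetical vector of matched-filter responses at integer offsets, with the peak assumed away from the borders):
[~, p] = max(r);                        % integer peak index
y1 = r(p-1); y2 = r(p); y3 = r(p+1);
delta = 0.5*(y1 - y3)/(y1 - 2*y2 + y3); % parabola vertex offset, within [-0.5, 0.5]
subpixelPeak = p + delta;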
Looking closely at these images, I must agree with @Aki Suihkonen.
In my view, the main noise comes from the JPEG compression, which causes sharp edges to "ring". I'd try a de-speckle type of filter on the images and see if that makes a difference. Some info that can help you implement this can be found in this link.
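For instance, a minimal de-speckle pass (a median filter is one common choice; the 3x3 window is an assumption):
a = medfilt2(a, [3 3]); % 3x3 median filtering on both frames
b = medfilt2(b, [3 3]);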
In a more quick-and-dirty fashion, you can apply one of the many standard tools. For example, given that the images are a and b:
(i) Just smooth the images with a Gaussian filter; this can reduce the noise differences between the images by an order of magnitude. For example:
h=fspecial('gaussian',15,2);
a=conv2(a,h,'same');
b=conv2(b,h,'same');
(ii) Reduce Noise By Adaptive Filtering
a = wiener2(a,[5 5]);
b = wiener2(b,[5 5]);
(iii) Adjust Intensity Values Using Histogram Equalization
a = histeq(a);
b = histeq(b);
(iv) Adjust Intensity Values to a Specified Range
a = imadjust(a,[0 0.2],[0.5 1]);
b = imadjust(b,[0 0.2],[0.5 1]);
If your images are supposed to be black and white but you have captured them in grayscale, there can be differences due to noise.
You can convert the images to black and white by defining a threshold: any pixel with a value below the threshold is assigned 0, and anything at or above it is assigned 1, or whatever the top of your grayscale range is (maybe 255).
Assume your image is I. To make it black and white, assuming your grayscale levels run from 0 to 255 and you choose a threshold of 100:
I(I < 100) = 0;
I(I >= 100) = 255;
Now you have a black-and-white image. Do the same for the other image, and you should see a very small difference if the camera and the subject have not moved.
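As a quick check, you can count the differing pixels afterwards (Ia and Ib are hypothetical names for the two thresholded frames):
nDiff = nnz(Ia ~= Ib); % number of pixels that still differ after thresholding
fprintf('%d differing pixels\n', nDiff);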