Breadth-first search in a coordinate-based system - MATLAB

I am not an experienced programmer (my language is MATLAB) and hence have barely any knowledge of advanced algorithms. One of these is breadth-first search. I understand the concept, but applying it to my problem is difficult for me.
My problem:
I place disks of equal size randomly in a square and, when they form one connected network, place the coordinates of the disks into separate matrices. I colorized them for clarity (see image). Now I have to find the shortest path through the network that spans from left to right, and I want to do this based on the coordinates. The disks have to touch in order to be connected; disks that do not touch cannot form a path.
So this is what I currently have:
I have a matrix with x-coordinates and y-coordinates in columns 1 and 2, each row representing one of the disks (for ease, let's just take the coordinates of all the connecting disks, excluding those which do not span from left to right when connected).
The diameter of the disks is known (0.2).
We can easily identify which disks are on the left boundary of the square and which disks are on the right boundary of the square. These represent the possible starting coordinates and the possible goal coordinates.
% Coordinates of group of disks, where the group connects from left to right.
0.0159 0.1385
0.0172 0.2194
0.0179 0.4246
0.0231 0.0486
0.0488 0.1392
0.0709 0.2109
0.0813 0.0595
0.0856 0.3530
0.1119 0.3756
0.1275 0.2530
0.1585 0.4751
0.1702 0.2926
0.1908 0.3828
0.1961 0.3277
0.2427 0.4001
0.2492 0.4799
0.2734 0.4788
0.3232 0.3547
0.3399 0.3275
0.3789 0.3716
0.4117 0.3474
0.4579 0.3961
0.4670 0.3394
0.4797 0.3279
0.4853 0.4786
0.3495 0.4455
0.4796 0.2736
0.0693 0.0746
0.1288 0.4204
0.1271 0.4071
0.1218 0.4646
0.1255 0.3080
0.4154 0.2926
Figure: positions of the disks, with the connecting disks colorized. The image is very schematic; many more disks should be expected in a much larger area (keeping the same disk size).
My strategy was to set up a breadth-first search, taking the starting coordinates as one of the disks (can be any) on the left side of the square. The goal will be to find the shortest path to the right side of the square.
To my understanding, I want to pick a starting coordinate and check all disks to see whether they are within one diameter's distance (centre to centre) of it. If they are within range of my starting coordinate, I want to place them in a 'queue' (not natively supported by MATLAB, but let's set one up ourselves). The next step is to take the first disk that was close enough and do the same for it. I can do this, but once I reach the second disk that was within range of my first disk, I am lost as to what data structure I should use and how to save the 'path' it is finding. This means I can find a path, but not all paths, and hence not the shortest path.
I appreciate any help! Maybe some documentation which I have not seen yet or maybe an example which is very comparable.

If they are within range of my starting coordinate I want to place
them in a 'queue'
Before you add a disk to the queue, make sure it has not been processed (put in the queue) before. Each disk should be processed only once, so check that the "neighbor" disk has not been processed, then mark it as processed and add it to the queue.
the next step is to take the first disk which was close enough and do
the same for this one.
Actually, the next disk to process is simply the one at the head of the queue.
Continue to do so until you hit the target / stop criteria.
how to save the 'path' which it is finding
There are several techniques to do so. An easy one is to maintain a "come from" value for each disk. This value points to the "parent" of the disk.
Since each disk is processed at most once, it will have one "come from" value or none.
When the target is reached, the path can be reconstructed backwards, starting from the "come from" value of the target.
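Putting the pieces above together, here is a minimal sketch of the idea in Python for brevity (the same logic ports directly to MATLAB using a vector as the queue); the function name, the 0.2 diameter, and the coordinate layout are illustrative assumptions, not the asker's actual code:

```python
from collections import deque

def bfs_path(coords, start, goals, diameter=0.2):
    """coords: list of (x, y) disk centres; start: index of a left-boundary
    disk; goals: set of indices of right-boundary disks."""
    came_from = {start: None}          # marks a disk as processed and records its parent
    queue = deque([start])
    while queue:
        current = queue.popleft()      # next disk to process: the head of the queue
        if current in goals:           # stop criterion: a right-boundary disk reached
            path = []
            while current is not None: # walk the "come from" links back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        cx, cy = coords[current]
        for i, (x, y) in enumerate(coords):
            touching = (x - cx) ** 2 + (y - cy) ** 2 <= diameter ** 2
            if touching and i not in came_from:  # enqueue each disk at most once
                came_from[i] = current
                queue.append(i)
    return None                        # no left-to-right path exists
```

Note that plain BFS finds the path with the fewest disks, not the geometrically shortest one; for true distances you need weighted edges (see the accepted solution below the answer).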

This question has now been solved!
The way I solved this was close to what was already suggested in the answer to my question, with help from some of the comments.
The distances between coordinates can be put into a matrix. Consider disk 1 and disk 3: they correspond to elements (1,3) and (3,1). If the two disks are within touching distance, these two elements are set to 1, and otherwise to 0. Doing this for all pairs of disks creates the adjacency matrix.
With the built-in function G = graph(A), where A is the adjacency matrix, we can create an undirected graph. Then with the built-in function [path, d] = shortestpath(G,s,t), where G is the graph and s and t are the start and target disks (indicated by integer node indices), the shortest path from disk s to disk t can be found.
There is, however, one thing we must pay attention to, and that is representing the actual distance between disks. If we look at G, we see that it contains two objects: one representing the nodes and the other representing the edges. The edges are crucial for coordinate-based distances, as we can set the 'weight' of each edge to the distance between the two disks it connects. This can be done by looping over the edges, calculating the distance between the two nodes each edge connects, and storing it in the weight (G.Edges.Weight(i) = distance between the respective nodes).
How do I find the optimal path from left to right? I loop over all starting disks (defined as touching the left side of the square) and find the shortest path to every disk that touches the right side of the square. By saving the distances of all these paths, the overall shortest path can be found.
To give you an example of what can be achieved, the following video shows the paths found from every starting disk, with the final frame showing the shortest path. Video of path finding. I have also attached the shortest path here:
Shortest path left to right.
If there are any questions you would like to ask me about specifics, let me know.
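For readers outside MATLAB, here is a hedged pure-Python sketch of the same pipeline (in the actual solution MATLAB's graph and shortestpath do the adjacency and shortest-path work; the function names below are illustrative):

```python
import heapq
from math import dist

def build_edges(coords, diameter=0.2):
    """Weighted adjacency list: disks within one diameter of each other are
    connected, with the edge weight set to the centre-to-centre distance."""
    edges = {i: [] for i in range(len(coords))}
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = dist(coords[i], coords[j])
            if d <= diameter:               # touching disks become weighted edges
                edges[i].append((j, d))
                edges[j].append((i, d))
    return edges

def dijkstra(edges, s, t):
    """Length of the shortest weighted path from s to t (inf if unreachable)."""
    best = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float("inf")):   # stale heap entry, skip
            continue
        for v, w in edges[u]:
            if d + w < best.get(v, float("inf")):
                best[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

def shortest_left_to_right(coords, left, right, diameter=0.2):
    """Loop over all left-boundary disks and all right-boundary disks,
    keeping the overall shortest path length, as described above."""
    edges = build_edges(coords, diameter)
    return min(dijkstra(edges, s, t) for s in left for t in right)
```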

Related

How to perform matching of markers from two images which are taken from different perspective?

I have a markered robot with circular markers and two images from different perspective as shown: (Circular white rings are the markers)
I want to match the markers in the two images. By matching I mean that the bottommost marker of the 1st image should be treated as the correspondence point of the bottommost marker of the 2nd image, and so on.
The finger-like robot given in the image can bend in any direction given in space (can also bend in a U-like manner).
If it helps, the camera geometry is fixed and known beforehand.
I am lost, as a simple correspondence algorithm would not work since the perspectives are very different. How should I go about matching the two images?
You can start like this:
You know the position of the mounting point on the base panel for each perspective.
You know the positions of the white rings for each perspective as discussed here.
You can derive the direction of the arm at each ring by its tilt.
So you can easily determine the sequence of the positions starting with the mounting point stepping from ring to ring even if the arm is bent. With this you can match the rings from both images. If you have any situation where this fails, please add an according example to your question!
Unfortunately, you don't have matching points but matching curves. You might try to fit ellipses on the rings and take the ellipse centers for points to be matched.
This is an approximation, as the center of a circle does not exactly project as the center of the ellipse, but I don't think that this will be the major source of error: as you only see half circles, the fitting will not be that accurate.
If all nine circles remain visible and are ordered vertically, the matching of the centers is trivial. If they are not ordered but don't form a loop, you can probably start from the lowest and follow the chain of nearest neighbors.

Intersecting Frustum

I am trying to find a way to determine whether two frusta intersect and, if so, how big the intersection is (for example, 100% if the two frusta are in exactly the same location, 0% if they don't touch).
I have the position, volume, and all sorts of data about the two frusta; I just have no idea how to use it. I took a look at the Separating Axis Theorem for collision detection, but I can't figure out whether it's what I'm looking for.
Does anybody have any suggestion on the direction to go?
The SAT will only tell you if they are touching; it won't give you a percentage overlap. To calculate the frusta overlap percentage, I think you will need to calculate the volume of the polyhedron created by intersecting the frusta and divide it by the volume of the "main" frustum.
Calculating the intersection of the frusta will tell you if they are overlapping. One way to do it is to build a BSP tree out of each one and perform a CSG intersection operation.
Once you have the intersection polyhedron, you can calculate its volume by splitting it into tetrahedra and adding up their volumes. There are academic papers out there that do tetrahedralization directly from the BSP representation.
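To illustrate the last step: a tetrahedron's volume follows from the scalar triple product, V = |(b-a) . ((c-a) x (d-a))| / 6. A minimal Python sketch, assuming the intersection polyhedron has already been split into a list of tetrahedra (the CSG/BSP part is not shown):

```python
def tet_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d (3-tuples),
    via the scalar triple product divided by 6."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    cross = [ac[1] * ad[2] - ac[2] * ad[1],     # (c-a) x (d-a)
             ac[2] * ad[0] - ac[0] * ad[2],
             ac[0] * ad[1] - ac[1] * ad[0]]
    triple = sum(ab[i] * cross[i] for i in range(3))
    return abs(triple) / 6.0

def polyhedron_volume(tetrahedra):
    """Sum the volumes of the tetrahedra the polyhedron was split into."""
    return sum(tet_volume(*t) for t in tetrahedra)
```

The overlap percentage is then polyhedron_volume(intersection_tets) divided by the volume of the "main" frustum.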

Why do MSER results have overlapping pixels

First, I'm using OpenCV MSER from MATLAB via 2 different methods:
1) Through MATLAB's detectMSERFeatures function. This seems to call the OpenCV MSER (via the call to ocvExtractMSER inside detectMSERFeatures).
2) Through a more direct approach: the OpenCV 3.0 wrapper/MATLAB bindings found in https://github.com/kyamagu/mexopencv
Either way, I get back lists of lists of pixels (aka regions) that I imagine are a translation of the second argument of OpenCV's MSER::detectRegions, "std::vector< std::vector< Point > > &msers".
The result can be a list of multiple regions, each region with its own set of points. However, the points of the regions are not mutually exclusive. In fact, for my data, in which the foreground is typically roundish blobs, they tend to all be part of the same single connected component. This is true even if the blob doesn't have any holes (I might understand it if the regions corresponded to contours and the blob had holes).
I'm assuming that this many-regions-to-one mapping, even for a solid blob, is due to OpenCV's MSER, in its native C++ implementation, doing the same, but I confess I haven't verified that (and I surely don't understand it).
So, does anybody know why MSER would yield multiple overlapping regions for a single solid connected component? Is there any sense to choosing one and if so how? (Right now I just combine them all)
EDIT - I tried an image with one blob, which I then replicated to get a single image where the left half was the same as the right (each half containing the same blob). MSER returned 9 lists/regions, all corresponding to the two blobs. So I would have to do connected-component analysis just to figure out which subsets of the regions belonged to which blob. Apparently, there can't be any straightforward way to choose a particular subset of the returned regions that would give the best representation of the two blobs (if such a thing were even sensible when you knew there was just one blob, as per my last pre-edit question).
The picture below was made by plotting all 4 regions (lists of points) returned for my single blob image. The overlay was created by:
obj = cv.MSER('MinArea',20,'MaxArea',3000,'Delta',2.5);
[chains, bboxes] = obj.detectRegions(Region8b);
a = cellfun(@(x) cat(1,x{:}), chains, 'UniformOutput', false); % get rid of the extra layer of cells that detectRegions seems to add
% b = cat(1,a{:}); % all the region points in a single list. Not used here.
ptsstrs = {'rx','wo','cd','k.'};
for k = 1:4
    plot(a{k}(:,1), a{k}(:,2), ptsstrs{k}, 'MarkerSize', 15);
end
So, you can see they overlap, but there also seems to be an order to it: each subsequent region/list appears to be a superset of the one before it.
"The MSER detector incrementally steps through the intensity range of the input image to detect stable regions. The ThresholdDelta parameter determines the number of increments the detector tests for stability. " This from Matlab help. It's reasonable that you find overlap and subsets. Apparently, the region changes as the algorithm moves up or down in intensity.

Identify Lobed and bumps of Leaves

I need some help: I have to make a project about leaves, and I want to do it in MATLAB.
My input is an image of one leaf (on a white background), and I need to know two things about the leaf:
1) Find the lobed leaf (the pixels of each lobed leaf):
Lay the leaf on a table or workspace where you can examine it.
Look at the leaf you are trying to identify. If the leaf looks like it has fingers, these are considered lobes. There can be anywhere from two to many lobes on a leaf.
Distinguish pinnate leaves from palmate leaves by looking at the veins on the underside of the leaf. If the veins all come from the same place at the base of the leaf, it is considered palmately lobed. If they are formed at various places on the leaf from one centre line, the leaf is pinnately lobed.
Identify the type of leaf by using a leaf dictionary.
2) Find approximately the number of bumps of the leaf: in other words, find the "swollen points" of each leaf.
these are examples of leaves:
I've found some leaves examples in here.
Here is my attempt to solve the problem.
In the images that I've found, the background is completely black. If that is not the case in your images, you should use Otsu's thresholding method.
I assumed that there can be only 3 types of leaves, according to your image:
The idea is to do blob analysis. I use the morphological operation of opening, to separate the leaves. If there is only one blob after the opening, I assume it is not compound. If the leaves are not compound, I analyze the solidity of the blobs. Non-solid enough means they are lobed.
Here are some examples:
function IdentifyLeaf(dirName,fileName)
figure();
im = imread(fullfile(dirName,fileName));
subplot(1,3,1); imshow(im);
% Threshold on the green channel. The background is completely black,
% so any nonzero pixel belongs to the leaf; use graythresh (Otsu)
% instead if your background is not black:
% thresh = graythresh(im(:,:,2));
imBw = im(:,:,2) > 0;
subplot(1,3,2); imshow(imBw);
% Morphological opening with a disk proportional to the image size,
% to separate the leaflets of a compound leaf.
radiusOfStrel = round(size(im,1)/20);
imBwOpened = imopen(imBw, strel('disk',radiusOfStrel));
subplot(1,3,3); imshow(imBwOpened);
% More than one blob after the opening means the leaf is compound.
rpOpened = regionprops(imBwOpened,'Area');
if numel(rpOpened) > 1
    title('Pinnately Compound');
else
    rp = regionprops(imBw,'Area','Solidity');
    % Keep only the largest blob
    area = [rp.Area];
    [~,maxIndex] = max(area);
    rp = rp(maxIndex);
    % Low solidity (area / convex area) indicates lobes
    if rp.Solidity < 0.9
        title('Pinnately Lobed');
    else
        title('Pinnately Veined');
    end
end
end
I would approach this problem by converting it from 2D to 1D: scan the perimeter of the leaf into a vector using the "right hand on the wall" algorithm.
From that data, I presume, one can find a dominant axis of symmetry (e.g. by fitting a line); the distance of each perimeter point from that axis would then be calculated, and one could simply use a threshold plus filtering to find local maxima and minima, revealing the number of lobes/fingers. The histogram of distances could differentiate between pinnately lobed and pinnately compound leaves.
Another single metric to check the curvature of the perimeter (between two extreme points) would be sinuosity: http://en.wikipedia.org/wiki/Sinuosity
Recognizing veins is unfortunately a completely different topic.
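As a rough illustration of the 1-D reduction, here is a Python sketch that measures distance from the centroid (a stand-in for the fitted symmetry axis, which is a simplifying assumption on my part) and counts local maxima above a threshold as lobes; the function name and threshold are illustrative:

```python
from math import hypot

def count_lobes(perimeter, threshold):
    """perimeter: ordered list of (x, y) boundary points, e.g. from a
    right-hand-on-the-wall trace. Returns the number of local maxima of
    the centroid-distance signal that exceed the threshold."""
    cx = sum(p[0] for p in perimeter) / len(perimeter)
    cy = sum(p[1] for p in perimeter) / len(perimeter)
    # 1-D signal: distance of each boundary point from the centroid
    r = [hypot(x - cx, y - cy) for x, y in perimeter]
    lobes = 0
    n = len(r)
    for i in range(n):
        # circular comparison with both neighbours (the perimeter is a loop)
        if r[i] > threshold and r[i] > r[i - 1] and r[i] > r[(i + 1) % n]:
            lobes += 1
    return lobes
```

On real data the signal should be smoothed (the "filtering" mentioned above) before peak counting, or noise will produce spurious maxima.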

How do I optimize point-to-circle matching?

I have a table that contains a bunch of Earth coordinates (latitude/longitude) and associated radii. I also have a table containing a bunch of points that I want to match with those circles, and vice versa. Both are dynamic; that is, a new circle or a new point can be added or deleted at any time. When either is added, I want to be able to match the new circle or point with all applicable points or circles, respectively.
I currently have a PostgreSQL module containing a C function to find the distance between two points on earth given their coordinates, and it seems to work. The problem is scalability. In order for it to do its thing, the function currently has to scan the whole table and do some trigonometric calculations against each row. Both tables are indexed by latitude and longitude, but the function can't use them. It has to do its thing before we know whether the two things match. New information may be posted as often as several times a second, and checking every point every time is starting to become quite unwieldy.
I've looked at PostgreSQL's geometric types, but they seem more suited to rectangular coordinates than to points on a sphere.
How can I arrange/optimize/filter/precalculate this data to make the matching faster and lighten the load?
You haven't mentioned PostGIS - why have you ruled that out as a possibility?
http://postgis.refractions.net/documentation/manual-2.0/PostGIS_Special_Functions_Index.html#PostGIS_GeographyFunctions
Thinking out loud a bit here... you have a point (lat/long) and a radius, and you want to find all existing point-radius combinations that may overlap (or something like that)?
It seems you might be able to store a few more bits of information along with those numbers that could help you rule out candidates that are nowhere close during your query. This might avoid a lot of trig operations.
For example, for a point (x, y) with radius r, you could easily calculate a range of feasible latitudes/longitudes (a squarish area) that can be used to rule out needless calculations against other points.
You could then store the max and min latitude and longitude along with that point in the database. Then, before running your trig on every row, you can filter your results to eliminate points that are obviously out of bounds.
If I understand you correctly, my first idea would be to cache some data and eliminate most of the checking.
Imagine your circle is actually a box with 4 sides. You could store the coordinates of those sides, much like the grid lines (a mesh) on a real map: store the east, west, north, and south edge of each circle.
If your coordinate is outside that box, you can be sure it won't be inside the circle either, since the box is bigger than the circle.
If it isn't outside, then you have to check as you do now. But I guess you can eliminate most of the steps already.