I have a task in which I need to split one connected object into two separate ones.
In the right image, at the bottom right, the two objects are connected; I need to cut this connection so I have two objects for my further processing.
The final task is to find the slope of the inner surface, as in the first picture.
You can see the image:
I want to end up with something like this.
If you can assume the object is thinner at the junction, you can apply morphological opening (using the imopen function; you need the Image Processing Toolbox). On your example this should work, but it depends on the other cases you need to process.
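A minimal sketch of that idea, assuming BW is your binary mask; the disk radius of 5 is a guess you would tune (it must be wider than half the junction, but smaller than the objects themselves):

BW2 = imopen(BW, strel('disk', 5)); % erosion removes the thin neck, dilation restores the rest
CC = bwconncomp(BW2);               % CC.NumObjects should now be 2
L = labelmatrix(CC);                % one label per object, for the further processing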
I am totally new to OpenSCAD.
I am trying to generate two overlapping polygons (2D). I would just like to observe the outlines of both shapes at the same time.
I have managed to generate two different shapes. Although the shapes overlap, the renderer appears to show the outline of the combined shape, with the inner part filled in with colour.
How might I achieve my goal if the shapes were simply two overlapping squares?
I would be glad to see your code to understand exactly what you're trying to describe.
In any case, you should know first that OpenSCAD has several rendering modes:
F5 (preview) is the quickest; it doesn't really calculate the final result, only its image on the screen (that's why you cannot export from it)
F6 (render) does the full calculation of the mesh points and then renders the result (that's what you use to export)
the debug modes, which are similar to F5
I think F5 could be your solution, but it will look a bit "glitchy" because of the superposition of the two shapes. The fact is, I don't think OpenSCAD is made for what you want to do: you can consider that everything you put in your script sits in one big union() block, so when you press F6 all the independent shapes are combined into one, and I don't think there is a way to prevent that. I should add that the 2D functions of OpenSCAD are probably meant to be used with the extrude functions to make 3D volumes, for which overlapping doesn't make much sense.
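To see this concretely, here is a minimal sketch with just two overlapping squares (nothing assumed beyond the built-in 2D primitives):

// F5 previews the two squares as separate (slightly "glitchy") shapes;
// F6 implicitly unions them into a single combined outline.
square(10);
translate([5, 5]) square(10);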
I am using SUMO to simulate the LuST scenario from https://github.com/lcodeca/LuSTScenario. However, since the scenario is rather large, I would like to start with a simulation constrained to a region of interest. Is there a straightforward way to select such a region and have vehicles simulated only in that part of the map?
You can crop the network using netedit by selecting the region of interest (change to select mode, then draw a rectangle while holding the shift key), inverting the selection (invert button), and deleting the rest. Or, if you already know the boundaries or the edges you want to keep, you can use netconvert, for instance: netconvert --keep-edges.in-boundary minX,minY,maxX,maxY -s large.net.xml -o small.net.xml. See here for more netconvert options.
The next step is cutting the routes, which usually means a call like this:
$SUMO_HOME/tools/route/cutRoutes.py small.net.xml large.rou.xml --routes-output small.rou.xml --orig-net large.net.xml
This will not only remove the missing edges from the routes but also try to adapt the departure times.
First, I'm using OpenCV's MSER via MATLAB, through two different methods:
Through MATLAB's detectMSERFeatures function, which seems to call the OpenCV MSER (via the call to ocvExtractMSER inside detectMSERFeatures)
Through a more direct approach: the OpenCV 3.0 wrapper/MATLAB bindings found at https://github.com/kyamagu/mexopencv
Either way, I can get back lists of lists of pixels (aka regions) that I imagine are a translation of the second argument of OpenCV's MSER::detectRegions, std::vector<std::vector<Point>> &msers.
The result can be a list of multiple regions, each region with its own set of points. However, the points of the regions are not mutually exclusive. In fact, for my data, where the foreground is typically roundish blobs, they tend to all be part of the same single connected component. This is true even if the blob doesn't have any holes (I might understand it if the regions corresponded to contours and the blob had holes).
I'm assuming that this many-regions-to-one mapping, even for a solid blob, is due to OpenCV's MSER doing the same in its native C++ implementation, but I confess I haven't verified that (and I certainly don't understand it).
So, does anybody know why MSER would yield multiple overlapping regions for a single solid connected component? Is there any sense in choosing one, and if so, how? (Right now I just combine them all.)
EDIT - I tried an image with one blob, which I then replicated to get a single image whose left half was the same as its right (each half containing the same blob). MSER returned 9 lists/regions, all corresponding to the two blobs. So I would have to do connected-component analysis just to figure out which subsets of the regions belong to which blob; apparently there can't be any straightforward way to choose a particular subset of the returned regions that best represents the two blobs (if such a thing were even sensible when you knew there was just one blob, as per my last pre-edit question).
The picture below was made by plotting all 4 regions (lists of points) returned for my single-blob image. The overlay was created by:
obj = cv.MSER('MinArea',20,'MaxArea',3000,'Delta',2.5);
[chains, bboxes] = obj.detectRegions(Region8b);
% Flatten the extra layer of cells that detectRegions seems to add.
a = cellfun(@(x) cat(1, x{:}), chains, 'UniformOutput', false);
% b = cat(1, a{:}); % all the region points in a single list; not used here
imshow(Region8b); hold on;       % show the blob, then overlay each region
ptsstrs = {'rx','wo','cd','k.'}; % a distinct marker per region
for k = 1:4
    plot(a{k}(:,1), a{k}(:,2), ptsstrs{k}, 'MarkerSize', 15);
end
So you can see they overlap, but there also seems to be an order to it: each subsequent region/list appears to be a superset of the one before it.
"The MSER detector incrementally steps through the intensity range of the input image to detect stable regions. The ThresholdDelta parameter determines the number of increments the detector tests for stability. " This from Matlab help. It's reasonable that you find overlap and subsets. Apparently, the region changes as the algorithm moves up or down in intensity.
Given are two monochromatic images of the same size. Both are pre-aligned/anchored to one common point. Some points of the original image have moved to a new position in the new image, but not in a linear fashion.
Below you see a picture of an overlay of the original (red) and transformed (green) images. What I am looking for now is a measure of how much the individual points shifted.
At first I thought of a simple average correlation of the whole matrix or some kind of phase correlation, but I was wondering whether there is a better way of doing so.
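For reference, by the simple version I mean a single correlation coefficient per pair, e.g. with corr2 from the Image Processing Toolbox (I1 and I2 standing for the two images):

score = corr2(im2double(I1), im2double(I2)); % 1 = identical, near 0 = unrelated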
I already found that link, but it didn't help that much. Currently I'm implementing this in Matlab, but that shouldn't matter, I guess.
Update, for clarity: I have hundreds of these image pairs and I want to compare how similar each pair is. It doesn't have to be the fanciest algorithm; rather, it should be easy to implement and yield a good estimate of similarity.
An unorthodox approach uses RASL to align an image pair. A Python implementation is here: https://github.com/welch/rasl, and it also provides a link to the RASL authors' original MATLAB implementation.
You can give RASL a pair of related images, and it will solve for the transformation (scaling, rotation, translation, you choose) that best overlays the pixels in the images. A transformation parameter vector is found for each image, and the difference in parameters tells you how "far apart" they are (in terms of transform parameters).
This is not the intended use of RASL, which is designed to align large collections of related images while being indifferent to changes in alignment and illumination. But I just tried it out on a pair of jittered images and it worked quickly and well.
I may add a shell command that explicitly does this (I'm the author of the Python implementation) if I receive encouragement :) (today, you'd need to write a few lines of Python to load your images and return the resulting alignment difference).
You can try using optical flow: http://www.mathworks.com/discovery/optical-flow.html.
It is usually used to measure the movement of objects from frame T to frame T+1, but you can also use it in your case. You would get a map that tells you the offset by which each point in Image1 moved to reach its position in Image2.
Then, if you want a metric that gives you a "distance" between the images, you can for example average the values of that offset map or something similar.
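A minimal sketch of that, assuming the Computer Vision Toolbox and grayscale inputs (the file names are placeholders):

I1 = imread('original.png');      % hypothetical file names, images assumed grayscale
I2 = imread('transformed.png');
model = opticalFlowFarneback;     % dense flow estimator
estimateFlow(model, I1);          % prime the estimator with the first "frame"
flow = estimateFlow(model, I2);   % flow field from I1 to I2
score = mean(flow.Magnitude(:));  % mean per-pixel displacement as one similarity score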
I'm just beginning with image analysis in MATLAB.
My goal is to do automated image segmentation on images of plant leaves.
I have had reasonable success here thanks to multiple online resources.
The current objective, and the reason why I'm placing this question here, is to be able to place 25 equidistant points along each half of the margin/outline of the leaf, as described in the following image:
For the script to be able to recognize each half of the leaf, the user can place two points within the GUI. One of these user-defined points will be on the base of the leaf and the other on the tip. It would be even better if the script were able to recognize these two features of the leaf automatically.
For the output, I would like a plain-text file containing the image coordinates of each point.
I'm not asking for a ready-made script here, but looking for a starting point.
One way I think this can be done is by linearizing/opening up the outline so that it becomes a straight line. This can be done by treating one of the user-placed points/landmarks as a breakpoint. Once a linear outline is obtained, it can be broken into two halves at the other user-defined point, and the points can then be placed. One thing to bear in mind here is that the placement of points for each half should start from the end that corresponds to the same breakpoint/user-defined point in each half. These straight lines can then be superimposed on the original image for reconstruction.
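A very rough sketch of that idea, assuming BW is a binary leaf mask; the equispace helper and all choices here are illustrative, not a ready-made script:

imshow(BW); hold on;                             % display the mask so the user can click
B = bwboundaries(BW, 8, 'noholes');
outline = fliplr(B{1});                          % outer boundary as [x y] points
[px, py] = ginput(2);                            % user clicks the base, then the tip
[~, iBase] = min(sum((outline - [px(1) py(1)]).^2, 2));
outline = circshift(outline, 1 - iBase);         % "break" the closed loop at the base
[~, iTip] = min(sum((outline - [px(2) py(2)]).^2, 2));
half1 = outline(1:iTip, :);                      % base -> tip along one side
half2 = outline([1, end:-1:iTip], :);            % base -> tip along the other side
pts = [equispace(half1, 25); equispace(half2, 25)];
writematrix(pts, 'leaf_points.txt');             % plain-text image coordinates (R2019a+)

function p = equispace(curve, n)
% Resample a polyline at n points equally spaced along its arc length.
d = [0; cumsum(hypot(diff(curve(:,1)), diff(curve(:,2))))];
p = interp1(d, curve, linspace(0, d(end), n));
end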
Thank you very much.
Parashar