How to incrementally loop an expression in the QGIS geometry generator?

In QGIS, I have a self-made floor plan layer that uses polygon features. To enhance the appearance of the floor plan with simple architectural features, I've added a separate linestring layer that lets me overlay simple line segments which use the geometry generator to render features such as left- or right-swing doors.
I'm now in the process of making a 'stair' feature and have come up with the following expression, which draws one perpendicular line (representing one stair tread) at the starting point of a two-segment line, i.e. a line whose first segment is snapped to the length of the stairwell polygon and whose second segment is snapped to the width of the stairwell polygon:
make_line(
  line_interpolate_point($geometry, 0),
  make_point(
    x(end_point($geometry)) - x(point_n($geometry, 2)) + x(line_interpolate_point($geometry, 0)),
    y(end_point($geometry)) - y(point_n($geometry, 2)) + y(line_interpolate_point($geometry, 0))
  )
)
The second stair tread (which should be 10 inches, or 0.833 feet, from the first) would be drawn with the following expression:
make_line(
  line_interpolate_point($geometry, 0.833),
  make_point(
    x(end_point($geometry)) - x(point_n($geometry, 2)) + x(line_interpolate_point($geometry, 0.833)),
    y(end_point($geometry)) - y(point_n($geometry, 2)) + y(line_interpolate_point($geometry, 0.833))
  )
)
And so on, repeating ten or more times.
I would assume the incremental, repetitive process of drawing each 'stair tread' along the line segment could also be accomplished with a 'for' expression or some other type of looping structure, but I have been unable to find specific how-to instructions or an example.
In short, how do I 'build out' my simplistic expression with a more elegant looping feature?
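For reference, QGIS 3's expression engine does have array functions (e.g. generate_series and array_foreach, whose results can be combined with collect_geometries) that can express this kind of loop, so they may be worth exploring in the geometry generator. As a way to check the geometric logic outside QGIS, here is a minimal Python/shapely sketch of the same incremental construction; the 0.833 ft tread spacing matches the expressions above, while the tread count of 10 and the example guide line are assumed values:

from shapely.geometry import LineString

def stair_treads(guide, spacing=0.833, count=10):
    # Build one perpendicular 'tread' per step along the first segment of a
    # two-segment guide line, mirroring the make_line()/make_point() idea.
    coords = list(guide.coords)
    # Offset from the second vertex to the end point gives the tread
    # direction and length (the end_point - point_n(..., 2) part above).
    dx = coords[-1][0] - coords[1][0]
    dy = coords[-1][1] - coords[1][1]
    treads = []
    for i in range(count):
        start = guide.interpolate(i * spacing)   # like line_interpolate_point
        treads.append(LineString([(start.x, start.y),
                                  (start.x + dx, start.y + dy)]))
    return treads

guide = LineString([(0, 0), (10, 0), (10, 3)])   # stair run, then stair width
for tread in stair_treads(guide):
    print(tread.wkt)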

Related

How to perform matching of markers from two images which are taken from different perspective?

I have a robot with circular markers and two images taken from different perspectives, as shown (the circular white rings are the markers).
I want to match the markers in the two images; by matching I mean the bottommost marker of the 1st image should be treated as the correspondence point of the bottommost marker of the 2nd image, and so on.
The finger-like robot shown in the image can bend in any direction in space (it can also bend into a U shape).
If it helps, the camera geometry is fixed and known beforehand.
I am lost, as a simple correspondence algorithm would not work since the perspectives are very different. How should I go about matching the two images?
You can start like this:
You know the position of the mounting point on the base panel for each perspective.
You know the positions of the white rings for each perspective as discussed here.
You can derive the direction of the arm at each ring by its tilt.
So you can easily determine the sequence of positions, starting at the mounting point and stepping from ring to ring, even if the arm is bent. With this you can match the rings from both images. If you have any situation where this fails, please add a corresponding example to your question!
Unfortunately, you don't have matching points but matching curves. You might try fitting ellipses to the rings and taking the ellipse centers as the points to be matched.
This is an approximation, as the center of a circle does not project exactly to the center of the ellipse, but I don't think this will be the major source of error: since you only see half circles, the fitting will not be that accurate anyway.
If all nine circles remain visible and are ordered vertically, matching the centers is trivial. If they are not ordered but don't form a loop, you can probably start from the lowest one and follow the chain of nearest neighbors.
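As a rough illustration of the ellipse-center idea combined with the nearest-neighbor chaining from the first answer, here is a Python/OpenCV sketch. It assumes you can already segment the white rings into a binary mask (e.g. by thresholding) and that the mounting-point coordinates are known; the function and variable names are placeholders, not the poster's actual setup:

import cv2
import numpy as np

def ring_centers(mask):
    # Fit an ellipse to each ring contour and keep the centers
    # (OpenCV 4.x findContours signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        if len(c) >= 5:                       # fitEllipse needs >= 5 points
            (cx, cy), _axes, _angle = cv2.fitEllipse(c)
            centers.append((cx, cy))
    return centers

def order_from_base(centers, base_point):
    # Walk from the known mounting point to the nearest unvisited ring,
    # then on to its nearest unvisited neighbor, and so on.
    remaining = list(centers)
    ordered, current = [], base_point
    while remaining:
        nxt = min(remaining, key=lambda p: np.hypot(p[0] - current[0],
                                                    p[1] - current[1]))
        ordered.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return ordered

# Ordering both views from their mounting points lets you pair the rings
# index by index: ordered_view1[i] corresponds to ordered_view2[i].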

QGIS - Create a shape layer polygon within the empty space of other shapes

I've got a few shape layers with some polygons which all join up. I want to create a new shape which is the hole in the middle of the other layers.
I've tried 'snapping', but it locks to vertices and requires a lot of manual accuracy. Ideally I'd like to select the lines where they join and then 'fill in' the area, but I don't know how to do this in QGIS.
You could use an algorithmic approach like the following (it assumes a situation like the one in the question, i.e. no other holes, with the polygons coming from different layers); a PyQGIS sketch of the same chain follows the list:
1. Merge vector layers to combine your different layers into one layer.
2. Dissolve to combine all your features into one feature with one hole.
3. Delete holes to get a layer with the hole in the middle filled.
4. Symmetrical Difference of the outputs of 3. and 2. to get a layer where overlapping areas are removed, i.e. only the hole remains as a new layer.
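Here is a hedged PyQGIS sketch of that four-step chain, runnable from the QGIS Python console; the algorithm IDs and parameter names follow the standard 'native' processing provider as far as I know, and the layer names are placeholders for your own layers:

import processing

merged = processing.run("native:mergevectorlayers", {
    "LAYERS": ["layer_a", "layer_b", "layer_c"],    # your polygon layers
    "OUTPUT": "memory:merged"})["OUTPUT"]

dissolved = processing.run("native:dissolve", {     # one feature, one hole
    "INPUT": merged,
    "OUTPUT": "memory:dissolved"})["OUTPUT"]

filled = processing.run("native:deleteholes", {     # hole filled in
    "INPUT": dissolved,
    "OUTPUT": "memory:filled"})["OUTPUT"]

hole = processing.run("native:symmetricaldifference", {
    "INPUT": filled,
    "OVERLAY": dissolved,                           # only the hole remains
    "OUTPUT": "memory:hole"})["OUTPUT"]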

How can I split an image based on the longest horizontal edge?

For example, how can I split the two rows of books on this shelf along the horizontal edge between them? I have used a Sobel edge detector to detect the edges, but I don't know what condition to use to split the image.
I can recommend two different approaches to this problem.
1) Machine-learning approach. This requires some labeled data indicating the y coordinate of the edge position; then HOG features plus a random-forest classifier will do the job.
2) Image-processing approach. First, let's look at the output of what I have done:
The blue color indicates the score for being the desired y position of the separating edge.
Such an approach always relies on some assumptions about your data; here we assume that the target horizontal edge separates rows of books, which contain a lot of vertical lines. In other words, we are looking for a y coordinate where long horizontal lines are located and are not cut by vertical lines.
Once our objective is defined, the rest becomes very easy.
First, we need a straight-line detector; the Hough transform will do.
Second, we vote for each y coordinate as the best separator using two scores:
1) The first score describes how many long horizontal lines (found previously) are located in the neighborhood. Let's call it s_h.
2) The second score describes how many long vertical lines are located in the neighborhood. Let's call it s_v.
Finally, we only need to combine s_v and s_h to make a final score. For example,
s = s_h / (s_v + 1)
Using this, we get the scoring map posted at the beginning. Some further post-processing needs to be done, but it should not be difficult.
This is just one possible way to solve it; you can find my code presented in a notebook.
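Since the notebook itself isn't reproduced here, the following is only a rough Python/OpenCV sketch of the scoring described above; the filename, the Canny/Hough thresholds, the 5-pixel orientation tolerance and the length weighting are all assumptions to tune:

import cv2
import numpy as np

img = cv2.imread("shelf.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder filename
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=60, maxLineGap=5)
if lines is None:
    raise SystemExit("no lines found - relax the Hough parameters")

s_h = np.zeros(img.shape[0])          # horizontal-line votes per row
s_v = np.zeros(img.shape[0])          # vertical-line votes per row
for x1, y1, x2, y2 in lines[:, 0]:
    if abs(y2 - y1) < 5:              # roughly horizontal: weight by length
        s_h[min(y1, y2):max(y1, y2) + 1] += abs(x2 - x1)
    elif abs(x2 - x1) < 5:            # roughly vertical: count crossings
        s_v[min(y1, y2):max(y1, y2) + 1] += 1

score = s_h / (s_v + 1)               # the combined score s = s_h / (s_v + 1)
split_y = int(np.argmax(score))       # best row to cut at
top, bottom = img[:split_y], img[split_y:]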

Automated placement of points/landmarks on shape outline using MATLAB

I'm just beginning with Image analysis in MATLAB.
My goal is to do automated image segmentation on images of plant leaves.
I have had reasonable success here thanks to multiple online resources.
The current objective, and the reason I'm placing this question here, is to be able to place 25 equidistant points along each half of the margin/outline of a leaf, as described in the following image:
For the script to be able to recognize each half of the leaf, the user can place two points within the GUI. One of these user-defined points will be at the base of the leaf and the other at the tip. It would be even better if the script could recognize these two features of the leaf automatically.
For the output, I would like a plain-text file containing the image coordinates of each point.
I'm not asking for a ready-made script here, just a starting point.
One way I think this can be done is by linearizing/opening up the outline so that it becomes a straight line, treating one of the user-placed points/landmarks as a breakpoint. Once a linear outline is obtained, it can be broken into two halves at the other user-defined point, and the points can then be placed. One thing to bear in mind is that the placement of points for each half should start from the end that corresponds to the same breakpoint/user-defined point in each half. These straight lines can then be superimposed on the original image for reconstruction.
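To make that concrete, here is a small sketch of the idea in Python/NumPy rather than MATLAB (bwboundaries and interp1 give you the same building blocks there). It assumes the outline is already available as an ordered (N, 2) array of boundary coordinates and that the indices of the two user-placed points (base and tip) are known:

import numpy as np

def equidistant_points(path, n=25):
    # Place n equally spaced points along an ordered open polyline.
    seg = np.diff(path, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.linspace(0.0, dist[-1], n)
    x = np.interp(targets, dist, path[:, 0])
    y = np.interp(targets, dist, path[:, 1])
    return np.column_stack([x, y])

def leaf_landmarks(outline, base_idx, tip_idx, n=25):
    # 'Open up' the closed outline at the base, then break it at the tip so
    # that both halves run from the same breakpoint (the base) to the tip.
    opened = np.roll(outline, -base_idx, axis=0)
    tip = (tip_idx - base_idx) % len(outline)
    half1 = opened[:tip + 1]
    half2 = np.vstack([opened[tip:], opened[:1]])[::-1]
    return equidistant_points(half1, n), equidistant_points(half2, n)

# np.savetxt("landmarks.txt", np.vstack(leaf_landmarks(outline, b, t)))
# would write the plain-text coordinate file asked for above.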
Thank you very much.
Parashar

Kink detection in drawn polylines

Users can sketch in my app using a very simple tool (move mouse while holding LMB). This results in a series of mousemove events and I record the cursor location at each event. The resulting polyline curve tends to be rather dense, with recorded points almost every other pixel. I'd like to smooth this pixelated polyline, but I don't want to smooth intended kinks. So how do I figure out where the kinks are?
The image shows the recorded trail (red pixels) and the 'implied' shape as a human would understand it. People tend to slow down near corners, so there is usually even more noise here than on the straight bits.
Polyline tracker http://www.freeimagehosting.net/uploads/c83c6b462a.png
What you're describing may be related to gesture recognition techniques, so you could search on them for ideas.
The obvious approach is to apply a curve fit, but that will have the effect of smoothing away all the interesting details and kinks. Another suggested approach is to look at speeds and accelerations, but that can get hairy (direction changes can be very fast, or very slow and deliberate).
A fairly basic but effective approach is to simplify the samples directly into a polyline.
For example, work your way through the samples from, say, sample 1 to sample 4, and check whether all four samples lie within a reasonable error of the straight line between 1 and 4. If they do, extend this to points 1..5 and repeat until the straight line from the start point to the end point no longer provides a reasonable approximation to the curve defined by those samples. Then create a line segment up to the previous sample point and start accumulating a new line segment.
You have to be careful about your thresholds when the samples are too close to each other, so you might want to adjust the sensitivity when regarding samples fewer than 4-5 pixels away from each other.
This will give you a set of straight lines that will follow the original path fairly accurately.
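A minimal Python sketch of that segment-growing loop; the 2-pixel tolerance is an assumed value to tune against your recorded samples:

import numpy as np

def point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    ab, ap = b - a, p - a
    if not ab.any():
        return np.hypot(ap[0], ap[1])
    return abs(ab[0] * ap[1] - ab[1] * ap[0]) / np.hypot(ab[0], ab[1])

def simplify(samples, tolerance=2.0):
    # Grow each straight segment until some sample deviates too far, then
    # close the segment at the previous sample and start a new one.
    result, start = [samples[0]], 0
    for i in range(2, len(samples)):
        ok = all(point_line_dist(samples[k], samples[start], samples[i]) <= tolerance
                 for k in range(start + 1, i))
        if not ok:
            result.append(samples[i - 1])
            start = i - 1
    result.append(samples[-1])
    return result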
If you require additional smoothing, or want to create a scalable vector graphic, then you can then curve-fit from the polyline. First, identify the kinks (the places in your polyline where the angle between one line and the next is sharp - e.g. anything over 140 degrees is considered a smooth curve, anything less than that is considered a kink) and break the polyline at those discontinuities. Then curve-fit each of these sub-sections of the original gesture to smooth them. This will have the effect of smoothing the smooth stuff and sharpening the kinks. (You could go further and insert small smooth corner fillets instead of these sharp joints to reduce the sharpness of the joins)
Brute force, but it may just achieve what you want.
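A short sketch of the 140-degree kink test mentioned above, applied to the simplified polyline; only the threshold comes from the text, the rest is an illustrative assumption:

import numpy as np

def split_at_kinks(polyline, smooth_deg=140.0):
    # Break the polyline wherever the interior angle at a vertex drops below
    # smooth_deg (i.e. the joint is sharp enough to count as a kink).
    pts = np.asarray(polyline, dtype=float)
    pieces, start = [], 0
    for i in range(1, len(pts) - 1):
        v1, v2 = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        interior = 180.0 - np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if interior < smooth_deg:
            pieces.append(pts[start:i + 1])
            start = i
    pieces.append(pts[start:])
    return pieces   # curve-fit each piece separately to keep the kinks sharp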
Rather than trying to do this from the resultant data, have you considered looking at the timing of the data as it comes in? If the mouse stops or slows noticeably, you use the trend since the last 'kink' (the last time the mouse slowed) to establish the direction of travel. If the user goes off in a new direction, you call it a kink; otherwise, you ignore the current slowing trend and wait for the next one.
Well, one way would be to use a true curve-fitting algorithm. Generate a bezier curve (with exact endpoints, using Catmull-Rom or something similar), then optimize & recursively subdivide (using distance from actual line points as a cost metric). This may be too complicated for your use-case, though.
Record the order the pixels are drawn in. Then compute the slope between pixels that are "near" but not "close". I'm guessing a graph of the slope between pixel(i) and pixel(i+7) might exhibit easily identifiable "jumps" around kinks in the curve.
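A quick sketch of that lagged-slope idea; the lag of 7 samples comes from the suggestion above, while the 30-degree jump threshold is an assumed value to experiment with:

import numpy as np

def kink_indices(points, lag=7, jump_deg=30.0):
    pts = np.asarray(points, dtype=float)
    # Direction of travel between pixel(i) and pixel(i + lag).
    d = pts[lag:] - pts[:-lag]
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0]))
    # A kink shows up as a large jump in that direction.
    change = np.abs(np.diff(angles))
    change = np.minimum(change, 360.0 - change)   # handle wrap-around
    return np.where(change > jump_deg)[0] + lag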