Create/export x,y,z coordinates of roads from a shp file in QGIS: how to get the endnodes of road network? - qgis

In relation to What is the way to create/export x,y,z coordinates of roads from a shp file in QGIS?, and in particular, to:
(1) Try using the Extract Vertices tool (QGIS Version 3.20.2). This may be called Extract Nodes in previous versions.
(2) Once you have the output from the nodes, you can use Add Coordinates to Points to get the X, Y, and Z values.
I got the vertices, and I am trying to run "Add coordinates to points" (I had some issues/errors with SAGA, then I installed SAGA Next Gen, which seems to work, even though it takes a long time!)
Now, how can I get just the endnodes of the TLM3D road network and not all the vertices of TLM3D?
EDIT:
The part about "Add coordinates to points" did not finish and I got this error:
C:\Users\los\Desktop>exit
Execution completed in 2446.05 seconds (40 minutes 46 seconds)
Results:
{'OUTPUT': 'C:/Users/los/AppData/Local/Temp/processing_zCjQzI/c11c256eed374e348c4b4664cd4881f7/OUTPUT.shp'}
Loading resulting layers
The following layers were not correctly generated.
C:/Users/los/AppData/Local/Temp/processing_zCjQzI/c11c256eed374e348c4b4664cd4881f7/OUTPUT.shp
You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
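The endnode question boils down to keeping only the first and last vertex of each line feature instead of every vertex; in QGIS 3.x the Processing algorithm "Extract specific vertices" with vertex indices 0,-1 does exactly that. The idea can be sketched in plain Python (the coordinate lists below are illustrative, not real TLM3D data):

```python
def road_endnodes(roads):
    """Given each road as a list of (x, y, z) vertices, keep only its two endnodes."""
    endnodes = []
    for vertices in roads:
        endnodes.append(vertices[0])       # start node
        if len(vertices) > 1:
            endnodes.append(vertices[-1])  # end node
    return endnodes

# Two illustrative road segments with x, y, z vertices.
roads = [
    [(0.0, 0.0, 400.0), (10.0, 5.0, 402.0), (20.0, 7.0, 405.0)],
    [(20.0, 7.0, 405.0), (30.0, 9.0, 410.0)],
]
print(road_endnodes(roads))
```

Running the real extraction on the endnode-only layer rather than on all vertices should also make the coordinate step much faster.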

Related

How to get the right coordinates on a QGIS map?

I am using QGIS and I imported the Google Maps satellite map. Then I drew a line and measured the distance using the Measure Tool, but the distance is inaccurate (it says about 1200 m, but I know it should be 780 m). Also, when I look at the coordinates of a point on the map (shown in Figure 1: coordinates of the point with a star on it), they are different from the coordinates I find when looking them up online (https://www.gps-coordinates.net/) (shown in Figure 2: coordinates of the same point as in Figure 1), so there is probably something wrong there.
I imported the Google Maps satellite map via: Browser panel -> XYZ Tiles -> Satellite -> New Connection -> URL = http://mt0.google.com/vt/lyrs=s&hl=en&x={x}&y={y}&z={z}.
I drew the line in a 'lines layer'.
I already changed the CRS to ETRS89/UTM zone 32N (I am looking at a place in eastern Germany) both in the general project properties and in the layer which includes the line I drew. I also checked whether the unit of distance was right, and it is indeed meters. Lastly, I changed the coordinates from X and Y to degrees/minutes/seconds. Nothing worked and the result stays about 1200 m.
I hope you can help, thanks in advance!
I just figured out how to fix the problem. It turns out the map I was using did not use the right coordinates (I still don't know why). I added another map (QuickMapServices) and this one does use the right coordinates. The Measure Tool also gives the right distance now.
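The roughly 1200 m reading is consistent with measuring planar distances on a Web Mercator (EPSG:3857) layer, where east-west distances are inflated by a factor of 1/cos(latitude). This is an assumption about the cause, since the exact layer setup isn't shown, but the numbers line up for a latitude of about 51° N (eastern Germany):

```python
import math

def mercator_inflation(true_distance_m, latitude_deg):
    """Planar distance measured in Web Mercator overstates the
    ground distance by a factor of 1 / cos(latitude)."""
    return true_distance_m / math.cos(math.radians(latitude_deg))

apparent = mercator_inflation(780, 51.0)  # assumed latitude for eastern Germany
print(round(apparent))  # roughly 1239
```

780 m on the ground near 51° N shows up as about 1239 m when measured naively in Web Mercator, which is close to the "about 1200 m" reported above.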

Visualising road segments as heatmap in Leaflet efficiently

I have data consisting of parts of road segments of a city, with different number of visits. I want to plot the data on a Map and visualise it in the form of a heatmap.
I have two related questions:
I have the data from OpenStreetMap (OSM) in the form of pairs of node IDs, where a node ID corresponds to the unique ID assigned to a point by OSM. I also have a mapping from each node ID to its corresponding coordinates. Is there any Leaflet or Mapbox utility or plugin which can plot a trip / highlight the road segment using 2 node IDs? I can always do it manually (by using the coordinate mapping and converting it into GeoJSON), but the problem occurs with the line width -- I have to make it exactly overlap with the width of the road, so that it seems that I am highlighting a road segment.
Is there any plugin / utility for Leaflet or Mapbox which can be used for plotting polylines or GeoJSON as a heatmap efficiently? My current approach is calculating the color for each polyline and encoding that as a GeoJSON property. But the problem is that as the number of lines increases (> 1K) the rendering becomes a pain and the method is not feasible. There are some plugins for Leaflet for plotting heatmaps, but all of them are for points only, not lines. Any approach using WebGL would be really great.
An approach I thought of could be converting my data into a shapefile, uploading it to Mapbox Studio and using it as a layer directly. But I have no idea how to go about doing that, i.e. creating a shapefile and encoding the information in such a way that the complete road segment gets highlighted in the correct color.

Check intersections between points and polygons in QGIS

I have two data layers, one with points and one with polygons. Both layers have IDs, and I would like to check whether the points with ID x lie inside or outside the polygon with ID x.
Does someone know how to do this?
Thanks,
Marie
One potential solution, which prints a comma-separated list in the Python console, is to run a small script:
mapcanvas = iface.mapCanvas()
layers = mapcanvas.layers()
for a in layers[0].getFeatures():
    for b in layers[1].getFeatures():
        if a.geometry().intersects(b.geometry()):
            print(a.id(), ",", b.id())
This should produce a result of cases where one feature intersects the other. The order of the layers did not matter in my testing; however, both layers had to use the same coordinate reference system, so you might need to re-project your data if the layers have different reference systems. This worked for points in polygons and polygons intersecting polygons (I'm sure it would work with lines as well).
Answers such as this one: https://gis.stackexchange.com/questions/168266/pyqgis-a-geometry-intersectsb-geometry-wouldnt-find-any-intersections may help with further refinement of such a script, and it was a primary source for this answer.
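Outside QGIS, the underlying point-in-polygon test can be sketched in plain Python with the standard ray-casting algorithm (a minimal sketch; the square polygon below is illustrative):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count how often a ray from (x, y) towards +x
    crosses the polygon's edges; an odd count means 'inside'."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

For the ID-matching part of the question, you would simply look up the point and the polygon with the same ID before running the test, instead of looping over all pairs.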

Region of Interest in nighttime vehicle detection

I am developing a project to detect vehicle headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computational load low. I have researched many papers, and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one, which changes in each frame. My experimental images are shown here:
If anyone has any idea, please give me some suggestions.
I would turn the problem around and say that we are looking for headlights ABOVE a certain line, rather than saying that the headlights are below a certain line (i.e. the horizon).
Your images have very strong reflections on the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the maximum amount of light and use that as our floor, then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. at night) where the reflection of the headlights on the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys. We are trying to smooth the headlights and the reflection so that there will be a nice peak; ideally, the headlights and the reflection merge into one area.
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r, :]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution; the right graph shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y-coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile == max_value)[0][0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile.
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.
I display the result below.
I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.
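The pipeline above can be condensed into a self-contained sketch using NumPy only, with a synthetic "frame" standing in for the night image (the bright band around row 371 is an assumption mimicking the headlights and their tarmac reflection, since the original photos aren't available here):

```python
import numpy as np

# Synthetic night frame: mostly dark, with a bright horizontal band
# standing in for the headlights and their reflection on the tarmac.
frame = np.full((480, 640), 10, dtype=np.float64)
frame[365:378, :] = 200

# Sum intensities row by row, then smooth the profile with a moving average.
intensity_profile = frame.sum(axis=1)
window = 10
weights = np.repeat(1.0, window) / window
profile = np.convolve(intensity_profile, weights, mode='same')

# The row with the maximum summed intensity is the "floor";
# headlights are then searched for above this row only.
horizon = int(np.argmax(profile))
print(horizon)
```

On a real frame you would replace the synthetic array with the blurred grey image from the answer above; the row-sum, smoothing and argmax steps are identical.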

Joining discontinuous skeleton shapes and discontinuous lines

I have a binary image in which I want to detect discontinuous lines and link them.
I don't know anything about them in advance (coordinates, angles, etc.).
Can anyone guide me on how I should start? Suppose I have this image:
I want to join discontinuous lines. And I want to store information of lines joining (in an array) to use afterward.
I found your problem interesting and I will try to give you just some ideas but unfortunately not a complete algorithm (you know, it takes time...). I will also leave you with some unanswered questions.
I consider the image you posted as a binary image, that is, the black pixels have a value of zero and the white pixels a value of one. I ignore the red pixels because I think you drew them to highlight where you would like to connect the broken lines; ignoring the red pixels means I will set their value to zero.
First of all we need some definitions.
A non-border pixel has 8 neighbors (north-west, north, north-east, east, south-east, south, south-west, west):
abc
h*d
gfe
in the above diagram the pixel is indicated by * and its 8 neighbors by a, b, c, d, e, f, g and h.
I define an endpoint pixel as a pixel with a value of one and just one neighbor with a value of one; the remaining neighbors have a value of zero. For example, this diagram shows an endpoint pixel:
000
011
000
because d=1 and all the remaining neighbors are zero.
The following diagram shows instead a pixel which is not an endpoint pixel, because it has two neighbors equal to one (a=1 and e=1):
100
010
001
Now we can start to describe a part of a simple algorithm.
In the first step find all the endpoint pixels and put them in a vector: in the following image I marked the endpoints from 1 to 15 (note that the endpoint 15 was not highlighted in the image you posted).
In the second step, for each endpoint, find its closest endpoint: for example, consider endpoint 4; its closest endpoint is 5. Now, if you follow the simple rule of connecting each endpoint with its closest endpoint, you will have segments connecting 4-5, 10-11 and 13-14, which are all fine. But consider 1: its closest endpoint is 2, or maybe 3, but I would like the algorithm to connect just 2 and 3 while connecting 1 to the leftmost vertical line. I would also like the same behavior for 6, 9 and 12.
Now a different situation: what about 6, 7 and 8? Ignoring 8 for a moment, the closest endpoint of 6 is 7, but they are already connected; how can we manage this case?
And, last, consider 15: why did you not highlight it in the image you posted? Maybe it should be ignored?
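The endpoint definition above is easy to check mechanically: a pixel is an endpoint if its own value is one and exactly one of its 8 neighbors is one. A minimal sketch (NumPy, on a tiny illustrative image rather than the one posted):

```python
import numpy as np

def find_endpoints(img):
    """Return (row, col) of pixels with value 1 that have
    exactly one 8-neighbor equal to 1."""
    padded = np.pad(img, 1)  # zero border so every pixel has 8 neighbors
    endpoints = []
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            if img[r, c] == 1:
                # 3x3 neighborhood in the padded image, minus the center pixel
                neighbors = padded[r:r + 3, c:c + 3].sum() - 1
                if neighbors == 1:
                    endpoints.append((r, c))
    return endpoints

# A short horizontal segment: both extremities are endpoints.
img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1
print(find_endpoints(img))  # [(2, 1), (2, 3)]
```

The first step of the algorithm is then just collecting this list; the pairing rules discussed above operate on it afterwards.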
Maybe this will help.
Increase the thickness of the lines to at least 2.
Find the runs of consecutive foreground pixels in the vertical direction whose previous or next column contains only background pixels within the run length. This gives the locations of the points that need to be processed.
For each point, find the nearest steep point. This will be either a branch point or a point where an abrupt change in angle has occurred (for example, point 15 in the image of the previous answer). If there is no such point, there is obviously another endpoint. Call these reference points. The vector between an endpoint and its reference point gives the direction of extension.
Now there can be various ways to decide which point to join with. You can take the nearest foreground point in that direction. You can also pick features based on the angle and distance between an endpoint and a candidate extension point, to use in a KNN classifier.
Hope it helps.
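The "nearest foreground point in that direction" rule can be sketched as a cone search along the extension vector. This is an illustrative helper, not part of either answer; the 20° cone half-angle is an arbitrary assumption:

```python
import math

def nearest_point_in_direction(endpoint, direction, candidates, max_angle_deg=20.0):
    """Among candidate points, pick the nearest one lying within a cone
    of max_angle_deg around the extension direction from endpoint."""
    ex, ey = endpoint
    dx, dy = direction
    dnorm = math.hypot(dx, dy)
    best, best_dist = None, float('inf')
    for (px, py) in candidates:
        vx, vy = px - ex, py - ey
        vnorm = math.hypot(vx, vy)
        if vnorm == 0:
            continue
        cos_angle = (vx * dx + vy * dy) / (vnorm * dnorm)
        if cos_angle >= math.cos(math.radians(max_angle_deg)) and vnorm < best_dist:
            best, best_dist = (px, py), vnorm
    return best

# Extension direction points right; the point behind the endpoint is ignored,
# and the nearer of the two forward points wins.
print(nearest_point_in_direction((0, 0), (1, 0), [(5, 0), (3, 0.5), (-2, 0)]))
```

Widening or narrowing the cone trades off bridging larger gaps against accidentally joining unrelated lines.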