Manually set the map scale in tmap

I am creating a map with tmap to plot specific coordinates as dots.
I would like to save my map as a .png, which works well using tmap_leaflet and mapshot (see code below).
library(sf)
library(tmap)
library(mapview)

coord <- data.frame(
  Lat = c(0.92, 0.92, 0.93, 0.92, 0.93, 0.93, 1.00, 1.00, 0.99, 0.93),
  Lon = c(104.58, 104.51, 104.57, 104.50, 104.55, 104.51, 104.59, 104.49, 104.6, 104.61)
)

sdat <- st_as_sf(coord, coords = c("Lon", "Lat"),
                 crs = "+proj=longlat +datum=WGS84 +no_defs")

tmap_mode("view")

Map <- tm_basemap("Esri.WorldTopoMap") +
  tm_shape(sdat) +
  tm_dots(alpha = 1, title = "Location")

lf <- tmap_leaflet(Map)
mapshot(lf, file = "Map.png")  # save map
My issue is the default area that gets visualized in tmap view mode.
I would like to zoom out to visualize a larger area of the map (setting specific boundaries, for instance), not only the area immediately around the plotted points. I couldn't find a solution online yet.
This is how I get the .png:
And this is what I would like to get for instance (made with a screenshot):

You can adjust the zoom levels using tm_view():
library(sf)
library(tmap)
library(mapview)

coord <- data.frame(
  Lat = c(0.92, 0.92, 0.93, 0.92, 0.93, 0.93, 1.00, 1.00, 0.99, 0.93),
  Lon = c(104.58, 104.51, 104.57, 104.50, 104.55, 104.51, 104.59, 104.49, 104.6, 104.61)
)

sdat <- st_as_sf(coord, coords = c("Lon", "Lat"),
                 crs = "+proj=longlat +datum=WGS84 +no_defs")

tmap_mode("view")

Map <- tm_basemap("Esri.WorldTopoMap") +
  tm_shape(sdat) +
  tm_dots(alpha = 1, title = "Location") +
  tm_view(set.zoom.limits = c(10, 20))  # min and max zoom allowed in view mode

lf <- tmap_leaflet(Map)
mapshot(lf, file = "Map.png")
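If you want to pin the starting view itself rather than only clamp the zoom range, tmap v3's tm_view() also accepts set.view (a sketch; the lon/lat/zoom values here are illustrative, not from the question):

Map + tm_view(set.view = c(104.55, 0.95, 11))  # c(longitude, latitude, zoom level)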

Related

How to display labels for overlapping polygons with leaflet

I'm trying to display labels for several overlapping polygons, but unfortunately only the uppermost polygon gets a label.
Here is a reproducible example:
library(sf)
library(leaflet)

# build each polygon with a named geometry column directly
# (avoids renaming the auto-generated column)
poly_a <- st_sf(
  ID = "poly_a",
  geometry = st_sfc(st_polygon(list(as.matrix(data.frame(
    lng = c(0, 0.5, 2, 3, 0),
    lat = c(0, 4, 4, 0, 0)
  )))), crs = 4326)
)

poly_b <- st_sf(
  ID = "poly_b",
  geometry = st_sfc(st_polygon(list(as.matrix(data.frame(
    lng = c(1, 1.5, 1.8, 1),
    lat = c(2, 3, 2, 2)
  )))), crs = 4326)
)

layer <- rbind(poly_a, poly_b)

pal <- colorFactor(
  palette = c("red", "green"),
  domain = layer$ID
)

label <- sprintf("<strong>%s</strong>", layer$ID) %>%
  lapply(htmltools::HTML)

leaflet(layer) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addPolygons(
    fillColor = ~pal(ID),
    color = "black",
    label = label,
    opacity = 0.5,
    fillOpacity = 0.5
  ) %>%
  addLegend("topleft",
            pal = pal, values = layer$ID,
            title = "Legend", opacity = 0.25)
I would like to see "poly_a, poly_b" when I hover over the smallest polygon and "poly_a" when the mouse is only over the biggest one.
I saw that maybe I could use the "point in polygon" plugin, but I don't know how.
Thank you for your help!
Paco
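One way to get the combined labels (a sketch, not from the original post): derive each polygon's label from every polygon that covers it, using sf's st_covered_by(), so the inner polygon inherits the outer one's ID:

# for each feature, the indices of the features that cover it (self included)
covering <- st_covered_by(layer, layer)
combined <- sapply(covering, function(i) paste(layer$ID[i], collapse = ", "))
label <- lapply(sprintf("<strong>%s</strong>", combined), htmltools::HTML)
# pass this `label` to addPolygons() as before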

How to plot a 3d graph in Matlab with my data?

Right now I am doing a parameter sweep and I am trying to turn my data into a 3D graph to present the results nicely. The problem is that I don't quite know how to plot it, as I am having an issue with the results variable.
mute_rate = [0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625];
mute_step = linspace(-2.5, 2.5, 6);
results = [949.58, 293.53,  57.69, 53.65, 293.41, 1257.49;
           279.19,  97.94,  32.60, 29.52,  90.52,  286.94;
            32.96,  28.06,  19.56,  6.44,  13.47,   55.80;
             2.01,   1.52,   5.38,  1.00,   0.89,    1.41;
             0.61,   0.01,  18.59,  0.03,   0.56,    1.22;
             1.85,   1.51,  18.64, 18.57,  18.54,    6.90];
So the first row of the results variable holds the results for the first mute rate paired with each mute step, as applied to the population from my genetic algorithm. For example:
0.5, -2.5 = 949.58,
0.5, -1.5 = 293.53,
0.5, -0.5 = 57.69
etc
It sounds like you want something akin to:
mesh(mute_step, mute_rate, results);
shading interp;
Other styles of plot would be surf or pcolor (for a 2d view).
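For instance, a slightly fuller sketch (the axis labels and the log scale are my additions; mute_rate halves from row to row, so a log axis spaces the rows evenly):

figure;
surf(mute_step, mute_rate, results);
shading interp;
set(gca, 'YScale', 'log');  % mute_rate is geometric: 0.5, 0.25, ..., 0.015625
xlabel('mute step');
ylabel('mute rate');
zlabel('result');
colorbar;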

Find pixel coordinate of world/geographic coordinate in tile

I'm trying to use Mapbox Terrain RGB to get the elevation of specific points in space. I used mercantile.tile to get the coordinates of the tile containing my point at zoom level 15, which for -43º, -22º (for simplicity's sake) is 12454, 18527; then mercantile.xy to get the corresponding world coordinates: -4806237.7150042495, -2621281.2257876047.
Shouldn't the integer part of -4806237.7150042495 / 256 (tile size) equal the x coordinate of the tile containing the point, that is, 12454? If this calculation checked out, I'd figure that I'm looking for the pixel column (x axis) corresponding to the decimal part of the result, like column 127 (256 * 0.5) for 12454.5. However, the division results in -18774.366 (which is curiously close to the tile y coordinate, but it looks like a coincidence). What am I missing here?
As an alternative, I thought of using mercantile.bounds, assigning the first and last pixel columns to the westmost and eastmost longitudes, and finding my position with interpolation, but I wanted to check if I'm doing this the right/recommended way. I'm interested in point elevations, so everything said here goes for the Y axis as well.
Here's what I got so far:
import re
from io import BytesIO

import mercantile
import numpy as np
import requests
import matplotlib.pyplot as plt

# find_with_re() (extracts a tag's value from the KML text) and the `png` reader
# (read_png_int returns an integer pixel array) come from the original post's
# environment and are not defined here.

def correct_altitude_mode(kml):
    with open(kml, "r+") as f:
        txt = f.read()
    if re.search(r"(?<=<altitudeMode>)relative(?=</altitudeMode>)", txt):
        lat = round(float(find_with_re("latitude", txt)), 5)
        lng = round(float(find_with_re("longitude", txt)), 5)
        alt = round(float(find_with_re("altitude", txt)), 5)
        z = 15
        tile = mercantile.tile(lng, lat, z)
        westmost, southmost, eastmost, northmost = mercantile.bounds(tile)
        # interpolate the point's position between the tile's edges
        pixel_column = np.interp(lng, [westmost, eastmost], [0, 256])
        pixel_row = np.interp(lat, [southmost, northmost], [256, 0])
        response = requests.get(f"https://api.mapbox.com/v4/mapbox.terrain-rgb/{z}/{tile.x}/{tile.y}.pngraw?access_token=pk.eyJ1IjoibWFydGltcGFzc29zIiwiYSI6ImNra3pmN2QxajBiYWUycW55N3E1dG1tcTEifQ.JFKSI85oP7M2gbeUTaUfQQ")
        buffer = BytesIO(response.content)
        tile_img = png.read_png_int(buffer)
        # assuming an RGBA pixel: take the first three channels
        R, G, B = tile_img[int(pixel_row), int(pixel_column)][:3]
        print(tile_img[int(pixel_row), int(pixel_column)])
        # Mapbox Terrain-RGB elevation decoding
        height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1)
        print(f"R:{R},G:{G},B:{B}\n{height}")
        plt.hlines(pixel_row, 0.0, 256.0, colors="r")
        plt.vlines(pixel_column, 0.0, 256.0, colors="r")
        plt.imshow(tile_img)
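For the arithmetic itself, the missing piece is that 256 is the tile size in pixels, not in meters: in Web Mercator the world spans 2 * pi * 6378137 m per axis, so at zoom z each tile covers that span divided by 2^z. A sketch (the function and variable names are mine, not from the post):

import math

ORIGIN = math.pi * 6378137  # half the world width in Web Mercator meters (~20037508.34)

def world_to_tile_pixel(mx, my, z, tile_size=256):
    """Map Web Mercator meters to (tile_x, tile_y, pixel_col, pixel_row)."""
    span = 2 * ORIGIN / 2**z          # meters covered by one tile at this zoom
    fx = (mx + ORIGIN) / span         # fractional tile index; x grows eastward
    fy = (ORIGIN - my) / span         # tile y grows southward
    tx, ty = int(fx), int(fy)
    col = int((fx - tx) * tile_size)  # fractional part -> pixel within the tile
    row = int((fy - ty) * tile_size)
    return tx, ty, col, row

# world_to_tile_pixel(-4806237.715, -2621281.226, 15) lands in tile (12454, 18527),
# matching the tile from the question; the bounds-interpolation approach above is
# an equivalent way to get the same pixel.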

How to create Bezier curves from B-Splines in Sympy?

I need to draw a smooth curve through some points, which I then want to show as an SVG path. So I create a B-Spline with scipy.interpolate, and can access some arrays that I suppose fully define it. Does someone know a reasonably simple way to create Bezier curves from these arrays?
import numpy as np
from scipy import interpolate
x = np.array([-1, 0, 2])
y = np.array([ 0, 2, 0])
x = np.r_[x, x[0]]
y = np.r_[y, y[0]]
tck, u = interpolate.splprep([x, y], s=0, per=True)
cx = tck[1][0]
cy = tck[1][1]
print( 'knots: ', list(tck[0]) )
print( 'coefficients x: ', list(cx) )
print( 'coefficients y: ', list(cy) )
print( 'degree: ', tck[2] )
print( 'parameter: ', list(u) )
The red points are the 3 initial points in x and y. The green points are the 6 coefficients in cx and cy. (Their values repeat after the 3rd, so each green point has two green index numbers.)
Return values tck and u are described in the scipy.interpolate.splprep documentation.
knots: [-1.0, -0.722, -0.372, 0.0, 0.277, 0.627, 1.0, 1.277, 1.627, 2.0]
# 0 1 2 3 4 5
coefficients x: [ 3.719, -2.137, -0.053, 3.719, -2.137, -0.053]
coefficients y: [-0.752, -0.930, 3.336, -0.752, -0.930, 3.336]
degree: 3
parameter: [0.0, 0.277, 0.627, 1.0]
Not sure starting with a B-Spline makes sense: form a Catmull-Rom curve through the points (with the virtual "before first" and "after last" points overlaid on real points) and then convert that to a Bezier curve using a relatively trivial transform. E.g. given your points p0, p1, and p2, the Catmull-Rom window {p0,p1,p2,p0} yields the segment p1--p2, {p1,p2,p0,p1} yields p2--p0, and {p2,p0,p1,p2} yields p0--p1 (a window {a,b,c,d} draws the curve between its two middle points, b and c). Then you trivially convert those and now you have your SVG path.
As demonstrator, hit up https://editor.p5js.org/ and paste in the following code:
var points = [{x: 150, y: 100}, {x: 50, y: 300}, {x: 300, y: 300}];

// add virtual points:
points = points.concat(points);

function setup() {
  createCanvas(400, 400);
  tension = createSlider(1, 200, 100);
}

function draw() {
  background(220);
  points.forEach(p => ellipse(p.x, p.y, 4));
  for (let n = 0; n < 3; n++) {
    let [c1, c2, c3, c4] = points.slice(n, n + 4);
    let t = 0.06 * tension.value();
    bezier(
      // on-curve start point
      c2.x, c2.y,
      // control point 1
      c2.x + (c3.x - c1.x) / t,
      c2.y + (c3.y - c1.y) / t,
      // control point 2
      c3.x - (c4.x - c2.x) / t,
      c3.y - (c4.y - c2.y) / t,
      // on-curve end point
      c3.x, c3.y
    );
  }
}
Which will look like this:
Converting that to Python code should be an almost effortless exercise: there is barely any code for us to write =)
And, of course, now you're left with creating the SVG path, but that's hardly an issue: you know all the Bezier points now, so just start building your <path d=...> string while you iterate.
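A minimal sketch of that last step (the shape of the segment list is assumed here, not something the code above produces directly):

// segments: [{start, c1, c2, end}, ...] collected from the loop above (names are illustrative)
let d = `M ${segments[0].start.x},${segments[0].start.y}`;
for (const s of segments) {
  d += ` C ${s.c1.x},${s.c1.y} ${s.c2.x},${s.c2.y} ${s.end.x},${s.end.y}`;
}
const svgPath = `<path d="${d}" fill="none" stroke="black"/>`;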
A B-spline curve is just a collection of Bezier curves joined together. Therefore, it is certainly possible to convert it back to multiple Bezier curves without any loss of shape fidelity. The algorithm involved is called "knot insertion", and the two most famous variants are Boehm's algorithm and the Oslo algorithm. You can refer to this link for more details.
Here is an almost direct answer to your question (but for the non-periodic case):
import aggdraw
import numpy as np
import scipy.interpolate as si
from PIL import Image

# from https://stackoverflow.com/a/35007804/2849934
def scipy_bspline(cv, degree=3):
    """cv: array of control vertices
       degree: curve degree
    """
    count = cv.shape[0]
    degree = np.clip(degree, 1, count - 1)
    kv = np.clip(np.arange(count + degree + 1) - degree, 0, count - degree)
    max_param = count - degree  # non-periodic case (a periodic spline would use `count`)
    spline = si.BSpline(kv, cv, degree)
    return spline, max_param

# based on https://math.stackexchange.com/a/421572/396192
def bspline_to_bezier(cv):
    cv_len = cv.shape[0]
    assert cv_len >= 4, "Provide at least 4 control vertices"
    spline, max_param = scipy_bspline(cv, degree=3)
    # raise each interior knot to full multiplicity so every span becomes a Bezier segment
    for i in range(1, max_param):
        spline = si.insert(i, spline, 2)
    return spline.c[:3 * max_param + 1]

def draw_bezier(d, bezier):
    path = aggdraw.Path()
    path.moveto(*bezier[0])
    for i in range(1, len(bezier) - 1, 3):
        v1, v2, v = bezier[i:i + 3]
        path.curveto(*v1, *v2, *v)
    d.path(path, aggdraw.Pen("black", 2))

cv = np.array([[ 40., 148.], [ 40.,  48.],
               [244.,  24.], [160., 120.],
               [240., 144.], [210., 260.],
               [110., 250.]])

im = Image.fromarray(np.ones((400, 400, 3), dtype=np.uint8) * 255)
bezier = bspline_to_bezier(cv)
d = aggdraw.Draw(im)
draw_bezier(d, bezier)
d.flush()
# show/save im
I didn't look much into the periodic case, but hopefully it's not too difficult.

Convert coordinates between two rotated systems

I have a map with coordinates in meters and an overlaid building plan with pixel coordinates:
I already know the scale factor, and I am able to convert coordinates between the two systems when they are aligned (i.e. the overlay image is exactly horizontal, with no rotation), using a conversion factor (= the number of overlay pixels in one meter on the map):
MAPx(ImageX) = centerpointX + ImageX * conversionfactor
MAPy(ImageY) = centerpointY + ImageY * conversionfactor
How can I convert between the coordinates if the overlay is rotated, assuming I have the above formulas and want to include a rotation angle?
EDIT (#tsauerwein):
Here is the marker style that you have requested:
planStyle = function(feature, resolution) {
  var style = new ol.style.Style({
    image: new ol.style.Icon({
      src: feature.dataURL,
      scale: feature.resolution / resolution,
      rotateWithView: true,
      rotation: feature.rotation * (Math.PI / 180),
      anchor: [.5, .5],
      anchorXUnits: 'fraction',
      anchorYUnits: 'fraction',
      opacity: feature.opacity
    })
  });
  return [style];
};
Assuming that you are using ol.source.ImageStatic: when you configure your layer, you have the size of the image in pixels (e.g. width=500, height=250) and also the extent that this image covers in coordinates.
Now, if you have a coordinate, you can easily check if the coordinate is inside the image extent (ol.extent.containsXY(extent, x, y)). Then you can also translate the real-world coordinate to a pixel coordinate:
// image size
var width = 500;
var height = 250;

// image extent
var extent = [2000, 0, 4000, 1000];

// coordinates
var x = 3000;
var y = 500;

if (ol.extent.containsXY(extent, x, y)) {
  var pixelX = width * (x - extent[0]) / ol.extent.getWidth(extent);
  var pixelY = height * (y - extent[1]) / ol.extent.getHeight(extent);
}
Doing it like this, it doesn't matter if the map is rotated or not.
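And if you do want the explicit transform the question asked about (rotating the overlay's pixel offsets before scaling), the standard 2D rotation applies. A sketch keeping the question's centerpoint/conversionfactor convention (the function name and angle handling are mine, not from the answer above):

// angle in radians; positive = counter-clockwise rotation of the overlay
function imageToMap(imageX, imageY, centerpointX, centerpointY, conversionfactor, angle) {
  var cos = Math.cos(angle), sin = Math.sin(angle);
  var rx = imageX * cos - imageY * sin;  // rotate the pixel offset first...
  var ry = imageX * sin + imageY * cos;
  return [centerpointX + rx * conversionfactor,  // ...then scale and translate as before
          centerpointY + ry * conversionfactor];
}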