Converting from WGS84 to EPSG:27700 raster tile coordinates without drawing a map - Leaflet

Using this example from OS Data Hub - https://labs.os.uk/public/os-data-hub-examples/os-maps-api/zxy-27700-basic-map
I can get a list of the tiles displayed on the map, but I would like to get the tile coordinates without drawing the map.
Starting from a single point in WGS84 (lat/long), I can convert it to EPSG:27700 using Proj4js:
var source = new proj4.Proj('EPSG:4326');
proj4.defs("EPSG:27700","+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs");
var dest = new proj4.Proj('EPSG:27700');
// note: proj4 expects [longitude, latitude] order for EPSG:4326
var coords = proj4.transform(source, dest, [lon, lat]);
I then need to translate this into coordinates for the raster tile, which the Leaflet example does with this code:
var crs = new L.Proj.CRS('EPSG:27700', '+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +towgs84=446.448,-125.157,542.06,0.15,0.247,0.842,-20.489 +units=m +no_defs', {
    resolutions: [896.0, 448.0, 224.0, 112.0, 56.0, 28.0, 14.0, 7.0, 3.5, 1.75],
    origin: [-238375.0, 1376256.0]
});
How can I replicate this step to produce the tile coordinates, without having to draw the Leaflet map?
I ultimately want to use the coordinates to grab & save a single tile from the OS Data Hub with this format:
https://api.os.uk/maps/raster/v1/zxy/layer/{z}/{x}/{y}.png?key=

Using the EPSG:27700 coordinates calculated with proj4, together with the zoom-level resolutions (which are in meters per pixel) and the tile-grid origin used in the CRS definition, you can calculate the {x} and {y} values in https://api.os.uk/maps/raster/v1/zxy/layer/{z}/{x}/{y}.png?key= for any zoom level {z}. Based on the standard tile size of 256 pixels:
x = Math.floor((coords[0] - origin[0]) / (resolutions[z] * 256));
y = Math.floor((origin[1] - coords[1]) / (resolutions[z] * 256));
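Putting the pieces together, here is a self-contained sketch of the tile-index calculation. The easting/northing pair and the layer name are illustrative assumptions (the point would normally come out of the proj4 transform above):

```javascript
// Tile-grid parameters from the OS Data Hub EPSG:27700 Leaflet example
var resolutions = [896.0, 448.0, 224.0, 112.0, 56.0, 28.0, 14.0, 7.0, 3.5, 1.75];
var origin = [-238375.0, 1376256.0];
var tileSize = 256;

// coords: [easting, northing] in EPSG:27700, z: zoom level index
function tileIndices(coords, z) {
    var span = resolutions[z] * tileSize; // ground size of one tile, in meters
    return {
        x: Math.floor((coords[0] - origin[0]) / span),
        y: Math.floor((origin[1] - coords[1]) / span)
    };
}

// Hypothetical example point near Southampton (easting/northing assumed)
var t = tileIndices([437300, 115500], 7);
// t is { x: 377, y: 703 }
var url = 'https://api.os.uk/maps/raster/v1/zxy/Road_27700/7/' + t.x + '/' + t.y + '.png?key=YOUR_KEY';
```

The resulting URL can then be fetched and saved directly, with no map involved.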

Related

PNG Overlay not correct using L.CRS.EPSG3857 projection

I am using Leaflet and OpenStreetMap. I created a PNG using EPSG:3857; however, the image is not lying correctly on the map.
If you look at the Baja region and Florida you will see the data on land. The data should be over the water, not the land.
// Note: L.map takes a single options object; passing crs in a second
// object (as originally written) silently discards it.
var map = L.map('map', { editable: true, crs: L.CRS.EPSG3857 }).setView(initialCoordinates, initialZoom),
    tilelayer = L.tileLayer(url_tile, {
        noWrap: true,
        maxZoom: 12,
        minZoom: 2,
        attribution: 'Data \u00a9 OpenStreetMap Contributors Tiles \u00a9 HOT'
    }).addTo(map);
var overlay_image = 'images/webmercator-google.png';
imageBounds = [[-90, -180], [90, 180]];
L.imageOverlay(overlay_image, imageBounds, { opacity: 0.8 }).addTo(map);
When using EPSG:3857, Leaflet clamps all latitude data to ±85.0511° (or, to be precise, to ±20037508.34 on the EPSG:3857 Y coordinate). This is done to prevent data appearing outside of the coverage area of default EPSG:3857 tiles.
To illustrate this, consider the following bit of code:
for (var i=83; i<90; i+=0.1) {
L.marker([i, i]).addTo(map);
}
That should (naïvely) display a lot of markers in a diagonal-ish line. But when you actually do that, the markers don't go north of the 85.0511° parallel, which matches the limit of the tiles (blue sea versus grey out-of-map background).
Remember, EPSG:3857 and any other (non-transverse, non-oblique) cylindrical projection cannot display the north/south poles, because they are projected to an infinite Y coordinate.
OK, so what does this have to do with your problem? You're using:
imageBounds = [[-90, -180], [90, 180]];
But, since Leaflet will clamp latitudes, that's actually the same as doing:
imageBounds = [[-85.0511, -180], [85.0511, 180]];
Keep this in mind when using L.ImageOverlays that cover areas near the poles. You probably want to recreate your image using a narrower band of latitudes.
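The 85.0511° figure isn't arbitrary: Web Mercator's world is a square, so the Y extent equals the X extent (±π·R meters), and inverting the Mercator formula at that Y gives the clamping latitude. A quick sketch:

```javascript
// Derive the Web Mercator clamping latitude from the projection itself.
var R = 6378137;                 // WGS84 equatorial radius in meters
var maxY = Math.PI * R;          // half the world height, ≈ 20037508.34 m
// Inverse Mercator: lat = 2·atan(exp(y/R)) − π/2, converted to degrees
var maxLat = (2 * Math.atan(Math.exp(maxY / R)) - Math.PI / 2) * 180 / Math.PI;
// maxLat ≈ 85.0511 — the parallel Leaflet clamps to under L.CRS.EPSG3857
```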

Leaflet convert meters to pixels

I am creating an app using Leaflet.
I have the map up and running, and a GeoServer service that returns points which are shown on the map.
I have one additional field, the distance between my points on the map, which is in meters.
My question is: how can I convert it to pixels? Is there a Leaflet function for this?
You can use this function with the L.GeometryUtil library.
L.GeometryUtil CDN
function distToPixelDistance(distance) {
    // Project a point `distance` meters due east (bearing 90°) of the map center
    var l2 = L.GeometryUtil.destination(map.getCenter(), 90, distance);
    var p1 = map.latLngToContainerPoint(map.getCenter());
    var p2 = map.latLngToContainerPoint(l2);
    return p1.distanceTo(p2);
}
But bear in mind that the pixel distance changes every time you zoom.
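If you would rather avoid the extra dependency, the same conversion can be approximated from the Web Mercator ground resolution at the map's latitude and zoom. This is a sketch under the assumption of 256-px tiles; 156543.03392 is the meters-per-pixel at zoom 0 on the equator (Earth circumference / 256):

```javascript
// Approximate meters-per-pixel for EPSG:3857 at a given latitude and zoom
function metersPerPixel(lat, zoom) {
    return 156543.03392 * Math.cos(lat * Math.PI / 180) / Math.pow(2, zoom);
}

// Convert a ground distance in meters to on-screen pixels at that latitude/zoom
function metersToPixels(meters, lat, zoom) {
    return meters / metersPerPixel(lat, zoom);
}
```

As with the GeometryUtil approach, the result is only valid for the current zoom level (and, here, the current latitude).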

Assigning a 3d polygon based on value in Mapbox Studio

I am pretty new to GIS as a whole. I have a simple flat file in a csv format, as an example:
name, detail, long, lat, value
a, 123, 103, 22, 5000
b, 356, 103, 45, 6000
What I am trying to achieve is to assign a 3D polygon in Mapbox, as in this example. While the settings might be quite straightforward in Mapbox, where you assign a height and color value based on a data range, it obviously does not work in my case.
I think I am missing other files mentioned in the blog post, like shapefiles or some other file required to assign 3D layouts to the extrusion.
I need to know what I am missing in configuring a 3D polygon, say a cube, in Mapbox based on the value data column in my CSV.
I figured out that what I was missing were the coordinates that make up the polygons I want to display. These can easily be defined in the GeoJSON file format; if you are interested in the standards, refer here. For the visual I need, I would require:
Points (typically your long and lat coordinates)
Polygon (a square would require 5 vertices, the lines connecting and defining your polygon)
Features (your data points)
FeatureCollection (a collection of features)
These are all parts of the GeoJSON format. I used Python and its geojson module, which comes with everything I need to do the job.
Using the helper function below, I can compute square/rectangular boundaries based on a single point. The height and width define how big the square/rectangle appears.
def create_rec(pnt, width=0.00005, height=0.00005):
    pt1 = (pnt[0] - width, pnt[1] - height)
    pt2 = (pnt[0] - width, pnt[1] + height)
    pt3 = (pnt[0] + width, pnt[1] + height)
    pt4 = (pnt[0] + width, pnt[1] - height)
    pt5 = (pnt[0] - width, pnt[1] - height)
    return Polygon([[pt1, pt2, pt3, pt4, pt5]])  # assign to a Polygon class from geojson
From there it is pretty straightforward to append them to a list of features, build a FeatureCollection, and output a GeoJSON file:
with open('path/coordinates.csv', 'r') as f:
    headers = next(f)
    reader = csv.reader(f)
    data = list(reader)

transform = []
for i in data:
    # 3rd last value is x and 2nd last is the y
    point = Point([float(i[-3]), float(i[-2])])
    polygon = create_rec(point['coordinates'])
    # in my case I used a collection to store both points and polygons
    col = GeometryCollection([point, polygon])
    properties = {'Name': i[0]}
    feature = Feature(geometry=col, properties=properties)
    transform.append(feature)

fc = FeatureCollection(transform)
with open('target_doc_u.geojson', 'w') as f:
    dump(fc, f)
The output file target_doc_u.geojson contains all the items listed above, which lets me plot my points and continue with the blog post's steps in Mapbox to assign my fill extrusion.
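For comparison, the rectangle-building helper translates almost line for line to JavaScript, should you want to generate the GeoJSON client-side instead. This is a sketch producing plain GeoJSON object literals (no library needed); the function name is illustrative:

```javascript
// Build a closed rectangular GeoJSON Polygon around a [lon, lat] point
function createRec(pnt, width, height) {
    width = width || 0.00005;
    height = height || 0.00005;
    var ring = [
        [pnt[0] - width, pnt[1] - height],
        [pnt[0] - width, pnt[1] + height],
        [pnt[0] + width, pnt[1] + height],
        [pnt[0] + width, pnt[1] - height],
        [pnt[0] - width, pnt[1] - height] // repeat the first vertex to close the ring
    ];
    return { type: 'Polygon', coordinates: [ring] };
}

// e.g. a small square around the first CSV row's coordinates
var poly = createRec([103, 22]);
```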

Displaying georeferenced images using OpenLayers 5

I'm trying to make an application where the user can georeference scanned maps. You can look at an example here: https://codesandbox.io/s/2o99jvrnyy
There are two images:
assets/test.png - without rotation
assets/test_rotation.png - with rotation
The first image is loaded correctly on the map but the one with rotation is not.
I can't find information on whether OpenLayers 5 can handle images with transformation parameters stored in a world file. Probably I'm missing something, but I can't figure out what.
This is how my logic works:
Transformation parameters are calculated with an affine transformation using 4 points. You can see the logic in the Affine.js file. At least 4 points are picked from the source image and the map, and from these the transformation parameters are calculated. After that I calculate the extent of the image:
// image.width and image.height are the image size in pixels
var width = image.width;
var height = image.height;
width *= Math.sqrt(Math.pow(parameters.A, 2) + Math.pow(parameters.D, 2));
height *= Math.sqrt(Math.pow(parameters.B, 2) + Math.pow(parameters.E, 2));
// the extent in projection units is then
var extent = [parameters.C, parameters.F - height, parameters.C + width, parameters.F];
World file parameters are calculated as defined here.
Probably the problem is that the image with rotation is not rotated when loaded as a static image in OpenLayers 5, but I can't find a way to do it.
I tried to load both images in QGIS and ArcMap with calculated parameters and both of them are loaded correctly. You can see the result for the second picture:
You can see the parameters for each image here:
Image: test.png
Calculated extent: [436296.79726721847, 4666723.973240128, 439864.3389057907, 4669253.416495154]
Calculated parameters (for world file):
3.8359372067274027
-0.03146800786355865
-0.03350636818089405
-3.820764346376064
436296.79726721847
4669253.416495154
Image: test_rotation.png
Calculated extent: [437178.8291026594, 4667129.767589236, 440486.91675884253, 4669768.939256327]
Calculated parameters (for world file):
3.506332904308879
-1.2831186688536016
-1.3644002712982917
-3.7014921022625864
437178.8291026594
4669768.939256327
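As a concrete check, the quoted extent for test.png can be reproduced from its world-file parameters with the formula above. The image pixel size used here (930 × 662) is an assumption, chosen because it reproduces the quoted extent to within a few millimetres:

```javascript
// World-file parameters quoted for test.png (order A, D, B, E, C, F)
var parameters = {
    A: 3.8359372067274027,  D: -0.03146800786355865,
    B: -0.03350636818089405, E: -3.820764346376064,
    C: 436296.79726721847,   F: 4669253.416495154
};
// Image size in pixels (assumed for illustration)
var image = { width: 930, height: 662 };

// Per-pixel scale along each axis, including the small rotation terms
var width = image.width * Math.sqrt(Math.pow(parameters.A, 2) + Math.pow(parameters.D, 2));
var height = image.height * Math.sqrt(Math.pow(parameters.B, 2) + Math.pow(parameters.E, 2));
var extent = [parameters.C, parameters.F - height, parameters.C + width, parameters.F];
// extent ≈ [436296.797, 4666723.973, 439864.339, 4669253.416]
```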
I realized that my approach was wrong. There is no need to calculate the extent of the image in the map projection and set it on the layer. I can simply add a transformation function responsible for transforming coordinates between the image projection and the map projection. This way the image layer always has its projection set to the image projection, and its extent set to the size of the image in pixels.
The transformation function is added like this:
import { addCoordinateTransforms } from 'ol/proj.js';
addCoordinateTransforms(
    mapProjection,
    imageProjection,
    coords => {
        // forward
        return Affine.transform(coords);
    },
    coords => {
        // inverse
    }
);
Affine parameters are again calculated from at least 3 points:
// mapPoints - coordinates in map projection
// imagePoints - coordinates in image projection
Affine.calculate(mapPoints, imagePoints);
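The affine step itself is just six numbers. A minimal sketch of the forward transform (the function name and parameter values here are illustrative, not the actual Affine.js implementation):

```javascript
// Forward affine transform: image point [col, row] -> map point [x, y]
//   x = A·col + B·row + C
//   y = D·col + E·row + F
function affineTransform(p, coords) {
    return [
        p.A * coords[0] + p.B * coords[1] + p.C,
        p.D * coords[0] + p.E * coords[1] + p.F
    ];
}

// Illustrative parameters: pure scale + translation, no rotation.
// Note E is negative because image rows grow downwards while map Y grows upwards.
var p = { A: 2, B: 0, C: 1000, D: 0, E: -2, F: 5000 };
var mapPoint = affineTransform(p, [10, 20]); // [1020, 4960]
```

The inverse is obtained by solving the same two equations for col and row, which is what the omitted inverse callback above would do.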
You can see a complete example here - https://kw9l85y5po.codesandbox.io/

How to set correct image dimensions by LatLngBounds using ImageOverlay?

I want to use ImageOverlays as markers, because I want the images to scale with zoom. Marker icons always keep the same pixel size when you zoom.
My problem is that I can't figure out how to convert pixels to coordinates, so that my image isn't stretched.
For instance, I decided my south-west LatLng to be [50, 50]. My image dimensions are 24px/24px.
How do I calculate the north-east LatLng based on the image pixels?
You are probably looking for map conversion methods.
In particular, you could use:
latLngToContainerPoint: Given a geographical coordinate, returns the corresponding pixel coordinate relative to the map container.
containerPointToLatLng: Given a pixel coordinate relative to the map container, returns the corresponding geographical coordinate (for the current zoom level).
// 1) Convert LatLng into container pixel position.
var originPoint = map.latLngToContainerPoint(originLatLng);
// 2) Add the image pixel dimensions.
// Positive x to go right (East).
// Negative y to go up (North).
var nextCornerPoint = originPoint.add({x: 24, y: -24});
// 3) Convert back into LatLng.
var nextCornerLatLng = map.containerPointToLatLng(nextCornerPoint);
var imageOverlay = L.imageOverlay(
    'path/to/image',
    [originLatLng, nextCornerLatLng]
).addTo(map);
Demo: http://playground-leaflet.rhcloud.com/tehi/1/edit?html,output