Leaflet.js (or other solution) zoom to magnified pixels without blur

I've been using Leaflet to display raster images lately.
What I would like to do for a particular project is be able to zoom in to an image so the pixels become magnified on the screen in a sharply delineated way, such as you would get when zooming in to an image in Photoshop or the like. I would also like to retain, at some zoom level before maximum, a 1:1 correspondence between image pixel and screen pixel.
I tried going beyond maxNativeZoom as described here and here, which works but the interpolation results in pixel blurring.
I thought of an alternative which is to make the source image much larger using 'nearest neighbour' interpolation to expand each pixel into a larger square: when zoomed to maxNativeZoom the squares then look like sharply magnified pixels even though they aren't.
Problems with this are:
image size and tile count get out of hand quickly (original image is 4096 x 4096)
you never get the 'pop' of a 1:1 correspondence between image pixel and screen pixel
I have thought about using two tile sets: the first from the original image up to its maxNativeZoom, and then the larger 'nearest neighbour' interpolated image past that, following something like this.
But, this is more complex, doesn't avoid the problem of large tile count, and just seems inelegant.
So:
Can Leaflet do what I need it to, and if so, how?
If not, can you point me in the right direction to something that can (for example, it would be interesting to know how this is achieved)?
Many thanks

One approach is to leverage the image-rendering CSS property. This can hint the browser to use nearest-neighbour interpolation on <img> elements, such as Leaflet map tiles.
e.g.:
img.leaflet-tile {
    image-rendering: pixelated;
}
See a working demo. Beware of incomplete browser support.
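For instance, with that CSS rule in place, a tile layer set up roughly like the following keeps the 1:1 image-pixel-to-screen-pixel correspondence at maxNativeZoom and simply magnifies (now sharply) past it. This is only a sketch: the URL template, the zoom numbers and the map variable are placeholders, not taken from your setup.
// Sketch only: adjust the URL template and zoom levels to your own tile pyramid.
L.tileLayer('tiles/{z}/{x}/{y}.png', {
    maxNativeZoom: 4, // image pixel : screen pixel is 1:1 at this zoom
    maxZoom: 7        // beyond this, tiles are scaled up and stay crisp thanks to the CSS above
}).addTo(map);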

A more complicated approach (but one that works across more browsers) is to leverage WebGL; in particular Leaflet.TileLayer.GL.
This involves some internal changes to Leaflet.TileLayer.GL to support a per-tile uniform, most critically setting the uniform value to the tile coordinate in each tile render...
gl.uniform3f(this._uTileCoordsPosition, coords.x, coords.y, coords.z);
...having an L.TileLayer that "displays" a non-overzoomed tile for overzoomed tile coordinates (instead of just skipping the non-existent tiles)...
var hackishTilelayer = new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    'attribution': 'Map data © OpenStreetMap contributors',
    maxNonPixelatedZoom: 3
});

hackishTilelayer.getTileUrl = function(coords) {
    if (coords.z > this.options.maxNonPixelatedZoom) {
        return this.getTileUrl({
            x: Math.floor(coords.x / 2),
            y: Math.floor(coords.y / 2),
            z: coords.z - 1
        });
    }
    // Skip L.TileLayer.prototype.getTileUrl.call(this, coords), instead
    // apply the URL template directly to avoid maxNativeZoom shenanigans
    var data = {
        r: L.Browser.retina ? '@2x' : '',
        s: this._getSubdomain(coords),
        x: coords.x,
        y: coords.y,
        z: coords.z // *not* this._getZoomForUrl() !
    };
    var url = L.Util.template(this._url, L.Util.extend(data, this.options));
    return url;
}
... plus a fragment shader that rounds down texel coordinates prior to texel fetches (plus a tile-coordinate-modulo-dependent offset), to actually perform the nearest-neighbour oversampling...
var fragmentShader = `
    highp float factor = max(1., pow(2., uTileCoords.z - uPixelatedZoomLevel));
    vec2 subtileOffset = mod(uTileCoords.xy, factor);

    void main(void) {
        vec2 texelCoord = floor(vTextureCoords.st * uTileSize / factor) / uTileSize;
        texelCoord.xy += subtileOffset / factor;
        vec4 texelColour = texture2D(uTexture0, texelCoord);

        // This would output the image colours "as is"
        gl_FragColor = texelColour;
    }
`;
...all tied together in an instance of L.TileLayer.GL (which syncs some numbers for the uniforms around):
var pixelated = L.tileLayer.gl({
    fragmentShader: fragmentShader,
    tileLayers: [hackishTilelayer],
    uniforms: {
        // The shader will need the zoom level as a uniform...
        uPixelatedZoomLevel: hackishTilelayer.options.maxNonPixelatedZoom,
        // ...as well as the tile size in pixels.
        uTileSize: [hackishTilelayer.getTileSize().x, hackishTilelayer.getTileSize().y]
    }
}).addTo(map);
You can see everything working together in this demo.

Related

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
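Written out explicitly (for a view of width w and height h, with the origin at the top left and y increasing downward), that transformation amounts to:
clipX = 2x / w - 1
clipY = 1 - 2y / h
or, as a matrix applied to the column vector (x, y, 0, 1):
| 2/w    0   0  -1 |
|   0  -2/h  0   1 |
|   0    0   1   0 |
|   0    0   0   1 |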
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
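Plugging in: the upper-left corner (0, 0) gives clipX = 2·0/w - 1 = -1 and clipY = 1 - 2·0/h = +1, i.e. (-1, +1), the top-left of clip space; the lower-right corner (w, h) gives (2w/w - 1, 1 - 2h/h) = (+1, -1), the bottom-right.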
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1 / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
Translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}

Displaying georeferenced images using OpenLayers 5

I'm trying to make an application where the user can georeference scanned maps. You can look at an example here: https://codesandbox.io/s/2o99jvrnyy
There are two images:
assets/test.png - without rotation
assets/test_rotation.png - with rotation
The first image is loaded correctly on the map but the one with rotation is not.
I can't find information on whether OpenLayers 5 can handle images with transformation parameters stored in a world file. Probably I'm missing something but can't figure out what.
This is how my logic works:
Transformation parameters are calculated with an affine transformation using 4 points. You can see the logic in the Affine.js file. At least 4 points are picked from the source image and the map. Then, using these 4 points, the transformation parameters are calculated. After that I'm calculating the extent of the image:
// width  = image.width in pixels
// height = image.height in pixels
width *= Math.sqrt(Math.pow(parameters.A, 2) + Math.pow(parameters.D, 2));
height *= Math.sqrt(Math.pow(parameters.B, 2) + Math.pow(parameters.E, 2));
// then the extent in projection units is
extent = [parameters.C, parameters.F - height, parameters.C + width, parameters.F];
World file parameters are calculated as defined here.
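For reference, under the standard world-file convention the six parameters (in file order A, D, B, E, C, F) define the affine transform from pixel coordinates to map coordinates:
x_map = A * column + B * row + C
y_map = D * column + E * row + F
where (column, row) is measured from the centre of the upper-left pixel.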
Probably the problem is that the image with rotation is not rotated when loaded as a static image in OpenLayers 5, but I can't find a way to do it.
I tried to load both images in QGIS and ArcMap with calculated parameters and both of them are loaded correctly. You can see the result for the second picture:
You can see the parameters for each image here:
Image: test.png
Calculated extent: [436296.79726721847, 4666723.973240128, 439864.3389057907, 4669253.416495154]
Calculated parameters (for world file):
3.8359372067274027
-0.03146800786355865
-0.03350636818089405
-3.820764346376064
436296.79726721847
4669253.416495154
Image: test_rotation.png
Calculated extent: [437178.8291026594, 4667129.767589236, 440486.91675884253, 4669768.939256327]
Calculated parameters (for world file):
3.506332904308879
-1.2831186688536016
-1.3644002712982917
-3.7014921022625864
437178.8291026594
4669768.939256327
I realized that my approach was wrong. There is no need to calculate the extent of the image in map projection and set it in the layer. I can simply add a transformation function responsible for transforming coordinates between the image projection and the map projection. This way the image layer always has its projection set to the image projection and its extent set to the size of the image in pixels.
The transformation function is added like this:
import { addCoordinateTransforms } from 'ol/proj.js';

addCoordinateTransforms(
    mapProjection,
    imageProjection,
    coords => {
        // forward
        return Affine.transform(coords);
    },
    coords => {
        // inverse
    }
)
Affine parameters are again calculated from at least 3 points:
// mapPoints - coordinates in map projection
// imagePoints - coordinates in image projection
Affine.calculate(mapPoints, imagePoints);
You can see a complete example here - https://kw9l85y5po.codesandbox.io/

leaflet editable restrict draw to a specific area

In Leaflet.Editable I want to confine/limit my customers to drawing only in a specific area/bounds.
Actually I'm trying to limit them to the (90, -90, 180, -180) bounds of the map:
maxBounds: [[-90, -180], [90, 180]]
I was not able to find anything anywhere, and it seems that I am missing something.
CODEPEN DEMO
Please help.
EDIT:
The Y axis is blocking correctly and the mouse cannot stretch the shape beyond the top and bottom.
The problem is in the X axis (as seen in the pictures).
For now I have solved it with an after-save check that clears the shape if it is out of the map bounds (BAD USER EXPERIENCE). I need a mouse confinement just like the Y axis has.
Without knowing your use case (why the whole world map??), the quickest and easiest fix would be to simply set the map's minZoom to something a bit higher. For example, I found that minZoom: 5 was adequate except for cases where the map was both really short and really wide (which is rarely the case in most apps I've seen).
But the real fix involves writing your own custom overrides for dragging markers and shapes.
According to the API doc, the L.Editable plugin allows you to override a bunch of stuff, including the VertexMarker class, via map.editTools.options.vertexMarkerClass.
Fixed codepen: http://codepen.io/anon/pen/GrPpRY?editors=0010
The snippet of code that constrains the longitude while dragging vertex markers, by correcting values under -180 and over 180, is this:
// create custom vertex marker editor
var vertexMarkerClass = L.Editable.VertexMarker.extend({
    onDrag: function(e) {
        e.vertex = this;
        var iconPos = L.DomUtil.getPosition(this._icon),
            latlng = this._map.layerPointToLatLng(iconPos);
        // fix out of range vertex
        if (latlng.lng < -180) {
            e.latlng.lng = latlng.lng = -180;
            this.setLatLng(latlng);
        }
        if (latlng.lng > 180) {
            e.latlng.lng = latlng.lng = 180;
            this.setLatLng(latlng);
        }
        this.editor.onVertexMarkerDrag(e);
        this.latlng.update(latlng);
        this._latlng = this.latlng; // Push back to Leaflet our reference.
        this.editor.refresh();
        if (this.middleMarker) this.middleMarker.updateLatLng();
        var next = this.getNext();
        if (next && next.middleMarker) next.middleMarker.updateLatLng();
    }
});
// attach custom editor
map.editTools.options.vertexMarkerClass = vertexMarkerClass;
I didn't code for dragging the shape as a whole (the rectangle, in this case). While the VertexMarker fix should address all kinds of vertex dragging, you need to override each shape's drag handler to properly constrain the bounds, and crop the shape appropriately if the bounds are exceeded. As was pointed out, Leaflet already does this for latitude, but because Leaflet allows wrapping the map around horizontally, you have your essential problem. Using rec.on("drag") to correct the bounds when they cross over your min/max longitude is the only way to address it. It is basically the same solution as I have laid out for the vertexMarkerClass - actual code left as an exercise for the diligent reader.
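That said, here is a minimal sketch of what such a handler could look like for the rectangle (the layer name rect is a placeholder; this only clamps longitude, mirroring the vertex fix above):
rect.on('drag', function () {
    var b = rect.getBounds();
    // clamp the rectangle's west/east edges to the valid longitude range
    var west = Math.max(b.getWest(), -180);
    var east = Math.min(b.getEast(), 180);
    if (west !== b.getWest() || east !== b.getEast()) {
        rect.setBounds(L.latLngBounds([b.getSouth(), west], [b.getNorth(), east]));
    }
});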

OpenGL ES orthographic projection matrix not working

So my goal is simple. I am trying to get my coordinate space set up, so that the origin is at the bottom left of the screen, and the top right coordinates are (screen.width, screen.height).
Also this is a COMPLETELY 2d engine, so no 3d stuff is needed. I just need those coordinates to work.
Right now I am trying to plot a couple of points on the screen, mostly at places like (0, 0), (width, height), (width / 2, height / 2), etc., so I can see if things are working right.
Unfortunately right now my efforts to get this going are in vain, and instead of having multiple points I have one in the dead center of the device (obviously they are all overlapping).
So here is my code; what exactly am I doing wrong?
Vertex Shader
uniform vec4 color;
uniform float pointSize;
uniform mat4 orthoMatrix;
attribute vec3 position;
varying vec4 outColor;
varying vec3 center;
void main() {
    center = position;
    outColor = color;
    gl_PointSize = pointSize;
    gl_Position = vec4(position, 1) * orthoMatrix;
}
And here is how I make the matrix. I am using GLKit, so it is theoretically making the orthographic matrix for me. However, if you have a custom function you think would do this better, then that is fine! I can use it too.
var width:Int32 = 0
var height:Int32 = 0
var matrix:[GLfloat] = []
func onload()
{
    width = Int32(self.view.bounds.size.width)
    height = Int32(self.view.bounds.size.height)
    glViewport(0, 0, GLsizei(height), GLsizei(width))
    matrix = glkitmatrixtoarray(GLKMatrix4MakeOrtho(0, GLfloat(width), 0, GLfloat(height), -1, 1))
}
func glkitmatrixtoarray(mat: GLKMatrix4) -> [GLfloat]
{
    var buildme: [GLfloat] = []
    buildme.append(mat.m.0)
    buildme.append(mat.m.1)
    buildme.append(mat.m.3)
    buildme.append(mat.m.4)
    buildme.append(mat.m.5)
    buildme.append(mat.m.6)
    buildme.append(mat.m.7)
    buildme.append(mat.m.8)
    buildme.append(mat.m.9)
    buildme.append(mat.m.10)
    buildme.append(mat.m.11)
    buildme.append(mat.m.12)
    buildme.append(mat.m.13)
    buildme.append(mat.m.15)
    return buildme
}
Passing it over to the shader
func draw()
{
    //Setting up shader for use
    let loc3 = glGetUniformLocation(program, "orthoMatrix")
    if (loc3 != -1)
    {
        //glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), &matrix[0])
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), &matrix[0])
    }
    //Passing points and extra data
}
Note: If you remove the multiplication with the matrix in the vertex shader, the points show up; however, obviously most of them are off screen because of how default OpenGL coordinates work.
Also: I have tried using this function rather than GLKit's method. Same results. Perhaps I am not passing the right things into the matrix-making function, or maybe I'm not getting it to the shader properly.
EDIT: I have thrown up the project file in case you want to see how everything goes.
OK, I finally figured this out! What I did:
1. I miscounted when turning the GLKit matrix into an array.
2. When passing the matrix as a uniform, you actually want the address of the whole array, not just the beginning element.
3. GL_TRUE is not a proper argument for the transpose parameter when passing the matrix to the shader (OpenGL ES 2.0 only accepts GL_FALSE there).
Thank you, reto matic.

Real-Time glow shader confusion

So I have a rather simple real-time 2D game that I am trying to add some nice glow to. To take it down to its most basic form, it is simply circles and lines drawn on a black surface. And if you consider the scene from an HSV color space perspective, all colors (except for black) have a "V" value of 100%.
Currently I have a sort of "accumulation" buffer where the current frame is joined with the previous frame. It works by using two off-screen buffers and a black texture.
1. Buffer one activated
2. Lines and dots drawn
3. Buffer one deactivated
4. Buffer two activated
5. Buffer two contents drawn as a full screen quad
6. Black texture drawn with slight transparency over full screen
7. Buffer one contents drawn
8. Buffer two deactivated
9. On-screen buffer activated
10. Buffer two's contents drawn to screen
Right now all "lag" by far comes from latency on the cpu. The GPU handles all of this really well.
So I was thinking of maybe trying to spice things up abit by adding a glow effect to things. I was thinking perhaps for step 10 instead of using a regular texture shader, I could use one that draws the texture except with glow!
Unfortunately I am a bit confused about how to do this. Here are some reasons:
Blur stuff. Mostly that some people claim that a Gaussian blur can be done in real time while others say you shouldn't. Also, people mention another type of blur called a "focus" blur that I don't know anything about.
Most of the examples I can find use XNA. I need one that is written in a shader language like OpenGL ES 2.0.
Some people call it glow, others call it bloom.
Different blending modes (?) can be used to add the glow to the original texture.
How to combine vertical and horizontal blur? Perhaps in one draw call?
Anyway the process as I understand it for rendering glow is thus
Cut out dark data from it
Blur the light data (using Gaussian?)
Blend the light data on-top of the original (screen blending?)
So far I have gotten to the point where I have a shader that draws a texture. What does my next step look like?
//Vertex
precision highp float;
attribute vec2 positionCoords;
attribute vec2 textureCoords;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;

void main()
{
    gl_Position = matrix * vec4(positionCoords, 0.0, 1.0);
    v_texcoord = textureCoords.xy;
    o_alpha = alpha;
}
//Fragment
varying vec2 v_texcoord;
uniform sampler2D s_texture;
varying float o_alpha;
void main()
{
    vec4 color = texture2D(s_texture, v_texcoord);
    gl_FragColor = vec4(color.r, color.g, color.b, color.a - o_alpha);
}
Also is this a feasible thing to do in real-time?
Edit: I probably want to do a 5px or less blur
To address your initial confusion items:
Any kind of blur filter will effectively spread each pixel into a blob based on its original position, and accumulate this result additively for all pixels. The difference between filters is the shape of the blob.
For a Gaussian blur, this blob should be a smooth gradient, feathering gradually to zero around the edges. You probably want a Gaussian blur.
A "focus" blur would be an attempt to emulate an out-of-focus camera: rather than fading gradually to zero, its blob would spread each pixel over a hard-edged circle, giving a subtly different effect.
For a straightforward, one-pass effect, the computational cost is proportional to the width of the blur. This means that a narrow (e.g. 5px or less) blur is likely to be feasible as a real-time one-pass effect. (It is possible to achieve a wide Gaussian blur in real-time by using multiple passes and a multi-resolution pyramid, but I'd recommend trying something simpler first...)
You could reasonably call the effect either "glow" or "bloom". However, to me, "glow" connotes a narrow blur leading to a neon-like effect, while "bloom" connotes using a wide blur to emulate the visual effect of bright objects in a high-dynamic-range visual environment.
The blend mode determines how what you draw is combined with the existing colors in the target buffer. In OpenGL, activate blending with glEnable(GL_BLEND) and set the mode with glBlendFunc(); a short example follows this list.
For a narrow blur, you should be able to do horizontal and vertical filtering in one pass.
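For example, in WebGL-style syntax (the equivalent GL_ONE and GL_ONE_MINUS_SRC_COLOR constants exist in the OpenGL ES 2.0 C API), two blend modes commonly used for compositing a blurred glow over the scene might look like this; gl here is assumed to be an initialized rendering context:
gl.enable(gl.BLEND);

// Pick one of the following:
// additive: out = glow + scene (bright areas can clip to white)
gl.blendFunc(gl.ONE, gl.ONE);
// "screen": out = glow + scene * (1 - glow), which saturates more gently
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_COLOR);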
To do fast one-pass full-screen sampling, you will need to determine the pixel increment in your source texture. It is fastest to determine this statically, so that your fragment shader doesn't need to compute it at run-time:
float dx = 1.0 / x_resolution_drawn_over;
float dy = 1.0 / y_resolution_drawn_over;
You can do a 3-pixel (1,2,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 4 samples from source texture t as follows:
float dx2 = 0.5*dx; float dy2 = 0.5*dy; // filter steps
[...]
vec2 a1 = vec2(x+dx2, y+dy2);
vec2 a2 = vec2(x+dx2, y-dy2);
vec2 b1 = vec2(x-dx2, y+dy2);
vec2 b2 = vec2(x-dx2, y-dy2);
result = 0.25*(texture(t,a1) + texture(t,a2) + texture(t,b1) + texture(t,b2));
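As a quick sanity check of those weights (assuming (x, y) sits on a texel centre): each GL_LINEAR sample taken at a half-texel offset returns the even average of two adjacent texels, so along each axis the four samples weight the texels at offsets -1, 0, +1 by 1/4, 1/2, 1/4, which is the (1, 2, 1)/4 kernel; the full 2D kernel is the outer product of that with itself, (1, 2, 1) ⊗ (1, 2, 1) / 16.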
You can do a 5-pixel (1,4,6,4,1) Gaussian blur in one pass by setting your texture sampling mode to GL_LINEAR, and taking 9 samples from source texture t as follows:
float dx12 = 1.2*dx; float dy12 = 1.2*dy; // filter steps
float k0 = 0.375; float k1 = 0.3125; // filter constants
vec4 filter(vec4 a, vec4 b, vec4 c) {
    return k1*a + k0*b + k1*c;
}
[...]
vec2 a1 = vec2(x+dx12, y+dy12);
vec2 a2 = vec2(x, y+dy12);
vec2 a3 = vec2(x-dx12, y+dy12);
vec4 a = filter(sample(t,a1), sample(t,a2), sample(t,a3));
vec2 b1 = vec2(x+dx12, y );
vec2 b2 = vec2(x, y );
vec2 b3 = vec2(x-dx12, y );
vec4 b = filter(sample(t,b1), sample(t,b2), sample(t,b3));
vec2 c1 = vec2(x+dx12, y-dy12);
vec2 c2 = vec2(x, y-dy12);
vec2 c3 = vec2(x-dx12, y-dy12);
vec4 c = filter(sample(t,c1), sample(t,c2), sample(t,c3));
result = filter(a,b,c);
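The same kind of check works here (again assuming texel-centre alignment): a GL_LINEAR sample at a 1.2-texel offset mixes the texels at offsets 1 and 2 with weights 0.8 and 0.2, so along each axis the per-texel weights are k1·0.2, k1·0.8, k0, k1·0.8, k1·0.2 = 0.0625, 0.25, 0.375, 0.25, 0.0625, i.e. exactly (1, 4, 6, 4, 1)/16.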
I can't tell you if these filters will be real-time feasible on your platform; 9 samples/pixel at full resolution could be slow.
Any wider Gaussian would make separate horizontal and vertical passes advantageous; a substantially wider Gaussian would require multi-resolution techniques for real-time performance. (Note that, unlike the Gaussian, filters such as the "focus" blur are not separable, which means they cannot be split into horizontal and vertical passes...)
Everything that @comingstorm has said is true, but there's a much easier way. Don't write the blur or glow yourself. Since you're on iOS, why not use CoreImage, which has a number of interesting filters to choose from and which work in real time already? For example, they have a Bloom filter which will likely produce the results you want. Also of interest might be the Gloom filter.
Chaining together CoreImage filters is much easier than writing shaders. You can create a CIImage from an OpenGL texture via [+CIImage imageWithTexture:size:flipped:colorSpace:].