Displaying georeferenced images using OpenLayers 5

I'm trying to make an application where the user can georeference scanned maps. You can look at an example here: https://codesandbox.io/s/2o99jvrnyy
There are two images:
assets/test.png - without rotation
assets/test_rotation.png - with rotation
The first image is loaded correctly on the map but the one with rotation is not.
I can't find information on whether OpenLayers 5 can handle images with transformation parameters stored in a world file. I'm probably missing something, but I can't figure out what.
This is how my logic works:
The transformation parameters are calculated with an affine transformation; you can see the logic in the Affine.js file. At least 4 corresponding points are picked on the source image and on the map, and the transformation parameters are calculated from them. After that I calculate the extent of the image:
// image dimensions in pixels
let width = image.width;
let height = image.height;
// scale by the world file parameters
width *= Math.sqrt(Math.pow(parameters.A, 2) + Math.pow(parameters.D, 2));
height *= Math.sqrt(Math.pow(parameters.B, 2) + Math.pow(parameters.E, 2));
// then the extent in projection units is
const extent = [parameters.C, parameters.F - height, parameters.C + width, parameters.F];
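For context, the parameter calculation itself comes down to solving two small linear systems. This is not the actual Affine.js code, just a minimal sketch of the idea for exactly three point pairs (the helper names solve3 and calculateAffine are made up):
// Solve a 3x3 linear system M * x = b by Gaussian elimination with pivoting.
function solve3(M, b) {
  const a = M.map((row, i) => [...row, b[i]]); // augmented matrix copy
  for (let col = 0; col < 3; col++) {
    let pivot = col;
    for (let r = col + 1; r < 3; r++) {
      if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
    }
    [a[col], a[pivot]] = [a[pivot], a[col]];
    for (let r = col + 1; r < 3; r++) {
      const f = a[r][col] / a[col][col];
      for (let c = col; c < 4; c++) a[r][c] -= f * a[col][c];
    }
  }
  const x = [0, 0, 0];
  for (let r = 2; r >= 0; r--) {
    let s = a[r][3];
    for (let c = r + 1; c < 3; c++) s -= a[r][c] * x[c];
    x[r] = s / a[r][r];
  }
  return x;
}

// Given three pixel points [col, row] and their map coordinates [x, y],
// solve x = A*col + B*row + C and y = D*col + E*row + F for A..F
// (the world file parameter names used above).
function calculateAffine(imagePoints, mapPoints) {
  const M = imagePoints.map(([col, row]) => [col, row, 1]);
  const [A, B, C] = solve3(M, mapPoints.map(p => p[0]));
  const [D, E, F] = solve3(M, mapPoints.map(p => p[1]));
  return { A, B, C, D, E, F };
}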
World file parameters are calculated as defined here.
The problem is probably that the rotated image is not being rotated when it is loaded as a static image in OpenLayers 5, but I can't find a way to apply the rotation.
I tried to load both images in QGIS and ArcMap with the calculated parameters, and both of them are loaded correctly, including the rotated one.
The parameters for each image are listed below (in world file line order: A, D, B, E, C, F):
Image: test.png
Calculated extent: [436296.79726721847, 4666723.973240128, 439864.3389057907, 4669253.416495154]
Calculated world file parameters:
A = 3.8359372067274027
D = -0.03146800786355865
B = -0.03350636818089405
E = -3.820764346376064
C = 436296.79726721847
F = 4669253.416495154
Image: test_rotation.png
Calculated extent: [437178.8291026594, 4667129.767589236, 440486.91675884253, 4669768.939256327]
Calculated world file parameters:
A = 3.506332904308879
D = -1.2831186688536016
B = -1.3644002712982917
E = -3.7014921022625864
C = 437178.8291026594
F = 4669768.939256327

I realized that my approach was wrong. There is no need to calculate the extent of the image in the map projection and set it on the layer. I can simply add a transformation function responsible for transforming coordinates between the image projection and the map projection. This way the image layer always has its projection set to the image projection and its extent set to the size of the image in pixels.
The transformation function is added like this:
import { addCoordinateTransforms } from 'ol/proj.js';

addCoordinateTransforms(
  mapProjection,
  imageProjection,
  coords => {
    // forward: map projection -> image pixels
    return Affine.transform(coords);
  },
  coords => {
    // inverse: image pixels -> map projection
    // (apply the inverse affine transformation here)
  }
);
Affine parameters are again calculated from at least 3 points:
// mapPoints - coordinates in map projection
// imagePoints - coordinates in image projection
Affine.calculate(mapPoints, imagePoints);
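For completeness, the image layer itself then stays entirely in pixel space. A minimal sketch of that setup (the projection code 'image-pixels' is an arbitrary identifier, and image is assumed to be the loaded HTMLImageElement):
import ImageLayer from 'ol/layer/Image.js';
import Static from 'ol/source/ImageStatic.js';
import Projection from 'ol/proj/Projection.js';

// The layer lives in image (pixel) coordinates; the registered
// coordinate transforms take care of placing it on the map.
const imageExtent = [0, 0, image.width, image.height];
const imageProjection = new Projection({
  code: 'image-pixels', // arbitrary identifier
  units: 'pixels',
  extent: imageExtent,
});

const imageLayer = new ImageLayer({
  source: new Static({
    url: 'assets/test_rotation.png',
    projection: imageProjection,
    imageExtent: imageExtent,
  }),
});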
You can see a complete example here - https://kw9l85y5po.codesandbox.io/

Related

Leaflet.js (or other solution) zoom to magnified pixels without blur

I've been using Leaflet to display raster images lately.
What I would like to do for a particular project is be able to zoom in to an image so the pixels become magnified on the screen in a sharply delineated way, such as you would get when zooming in to an image in Photoshop or the like. I would also like to retain, at some zoom level before maximum, a 1:1 correspondence between image pixel and screen pixel.
I tried going beyond maxNativeZoom as described here and here, which works but the interpolation results in pixel blurring.
I thought of an alternative which is to make the source image much larger using 'nearest neighbour' interpolation to expand each pixel into a larger square: when zoomed to maxNativeZoom the squares then look like sharply magnified pixels even though they aren't.
Problems with this are:
image size and tile count get out of hand quickly (original image is 4096 x 4096)
you never get the 'pop' of a 1:1 correspondence between image pixel and screen pixel
I have thought about using two tile sets: the first from the original image up to its maxNativeZoom, and then the larger 'nearest neighbour' interpolated image past that, following something like this.
But, this is more complex, doesn't avoid the problem of large tile count, and just seems inelegant.
So:
Can Leaflet do what I need it to and if so how?
If not can you point me in the right direction to something that can (for example, it would be interesting to know how this is achieved)?
Many thanks
One approach is to leverage the image-rendering CSS property. This can hint the browser to use nearest-neighbour interpolation on <img> elements, such as Leaflet map tiles.
e.g.:
img.leaflet-tile {
  image-rendering: pixelated;
}
See a working demo. Beware of incomplete browser support.
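If you only want the effect on one specific layer rather than on every tile image on the map, one option (a sketch; the crisp-tiles class name is made up) is to scope the rule through the layer's className option:
var pixelatedLayer = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  className: 'crisp-tiles' // class added to the layer's container element
}).addTo(map);
with the matching CSS rule:
.crisp-tiles img.leaflet-tile {
  image-rendering: pixelated;
}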
A more complicated approach (but one that works across more browsers) is to leverage WebGL; in particular Leaflet.TileLayer.GL.
This involves some internal changes to Leaflet.TileLayer.GL to support a per-tile uniform, most critically setting the uniform value to the tile coordinate in each tile render...
gl.uniform3f(this._uTileCoordsPosition, coords.x, coords.y, coords.z);
...having a L.TileLayer that "displays" a non-overzoomed tile for overzoomed tile coordinates (instead of just skipping the non-existent tiles)...
var hackishTilelayer = new L.TileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: 'Map data © OpenStreetMap contributors',
  maxNonPixelatedZoom: 3
});
hackishTilelayer.getTileUrl = function (coords) {
  if (coords.z > this.options.maxNonPixelatedZoom) {
    return this.getTileUrl({
      x: Math.floor(coords.x / 2),
      y: Math.floor(coords.y / 2),
      z: coords.z - 1
    });
  }
  // Skip L.TileLayer.prototype.getTileUrl.call(this, coords); instead,
  // apply the URL template directly to avoid maxNativeZoom shenanigans.
  var data = {
    r: L.Browser.retina ? '@2x' : '',
    s: this._getSubdomain(coords),
    x: coords.x,
    y: coords.y,
    z: coords.z // *not* this._getZoomForUrl() !
  };
  var url = L.Util.template(this._url, L.Util.extend(data, this.options));
  return url;
};
... plus a fragment shader that rounds down texel coordinates prior to texel fetches (plus a tile-coordinate-modulo-dependent offset), to actually perform the nearest-neighbour oversampling...
var fragmentShader = `
void main(void) {
  // Scale factor between the pixelated zoom level and this tile's level,
  // and the offset of this tile within its upscaled ancestor tile.
  highp float factor = max(1., pow(2., uTileCoords.z - uPixelatedZoomLevel));
  vec2 subtileOffset = mod(uTileCoords.xy, factor);

  // Round texel coordinates down prior to the texel fetch, so each
  // on-screen block samples a single texel.
  vec2 texelCoord = floor(vTextureCoords.st * uTileSize / factor) / uTileSize;
  texelCoord.xy += subtileOffset / factor;
  vec4 texelColour = texture2D(uTexture0, texelCoord);
  // This outputs the image colours "as is"
  gl_FragColor = texelColour;
}
`;
...all tied together in an instance of L.TileLayer.GL (which syncs some numbers for the uniforms around):
var pixelated = L.tileLayer.gl({
  fragmentShader: fragmentShader,
  tileLayers: [hackishTilelayer],
  uniforms: {
    // The shader will need the zoom level as a uniform...
    uPixelatedZoomLevel: hackishTilelayer.options.maxNonPixelatedZoom,
    // ...as well as the tile size in pixels.
    uTileSize: [hackishTilelayer.getTileSize().x, hackishTilelayer.getTileSize().y]
  }
}).addTo(map);
You can see everything working together in this demo.

Geometrical transformation of a polygon to a higher resolution image

I'm trying to resize and reposition a ROI (region of interest) correctly from a low resolution image (256x256) to a higher resolution image (512x512). It should also be mentioned that the two images cover different fields of view: the low and high resolution images have 330mm x 330mm and 180mm x 180mm FoV, respectively.
What I've got at my disposal are:
Physical reference points (in mm) in the 256x256 and 512x512 images: refpoint_lowres=(-164.424,-194.462) and refpoint_highres=(-94.3052,-110.923). The reference points are located at the top-left pixel (1,1) of their respective images.
Pixel coordinates of the ROI in the 256x256 image (named pxX and pxY). These coordinates are positioned relative to the reference point of the lower resolution image, refpoint_lowres=(-164.424,-194.462).
Pixel spacings for the 256x256 and 512x512 images, which are 0.7757 pixels/mm and 2.8444 pixels/mm respectively.
How can I rescale and reposition the ROI (the binary mask) to the correct pixel location in the 512x512 image? Many thanks in advance!!
Attempt
% This gives a correctly placed and scaled binary array in the 256x256 image
mask_lowres = double(poly2mask(pxX, pxY, 256, 256));

% Compute the translational shift in pixels
mmShift = refpoint_lowres - refpoint_highres;
pxShift = abs(mmShift./pixspacing_highres)

% This produces a binary array that is only positioned correctly in the
% 512x512 image, but it is not upscaled correctly...(?)
mask_highres = double(poly2mask(pxX + pxShift(1), pxY + pxShift(2), 512, 512));
So you have coordinates pxX and pxY, in pixels, with respect to the low-resolution image. You can transform these coordinates to real-world coordinates:
pxX_rw = pxX / 0.7757 - 164.424;
pxY_rw = pxY / 0.7757 - 194.462;
Next you can transform these coordinates to high-res coordinates:
pxX_hr = (pxX_rw + 94.3052) * 2.8444;
pxY_hr = (pxY_rw + 110.923) * 2.8444;
Since the original coordinates fit in the low-res image, but the high-res image covers a smaller area (in physical coordinates) than the low-res one, it is possible that these new coordinates do not fit in the high-res image. If that is the case, cropping the polygon is a non-trivial exercise; it cannot be done by simply moving the vertices to be inside the field of view. MATLAB R2017b introduced the polyshape object type, which you can intersect:
bbox = polyshape([0 0 180 180] - 94.3052, [180 0 0 180] - 110.923);
poly = polyshape(pxX_rw, pxY_rw);
poly = intersect([poly bbox]);
pxX_rw = poly.Vertices(:,1);
pxY_rw = poly.Vertices(:,2);
If you have an earlier version of MATLAB, maybe the easiest solution is to make the field of view larger when drawing the polygon, and then crop the resulting image back to the right size. This does require some care with the offsets to get it right.
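That workaround could look roughly like this (a sketch; the margin pad is an arbitrary example value, and pxX_hr, pxY_hr are the high-res pixel coordinates computed above):
% Rasterize the polygon on a canvas padded by 'pad' pixels on each side,
% so vertices outside the 512x512 field of view are still representable,
% then crop the central 512x512 block back out.
pad = 256;  % example margin, must be large enough to contain all vertices
mask_padded = poly2mask(pxX_hr + pad, pxY_hr + pad, 512 + 2*pad, 512 + 2*pad);
mask_highres = double(mask_padded(pad+1 : pad+512, pad+1 : pad+512));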

How to set correct image dimensions by LatLngBounds using ImageOverlay?

I want to use ImageOverlays as markers, because I want the images to scale with zoom. Marker icons are always resized so that they keep the same screen size when you zoom.
My problem is that I can't figure out how to transform pixels to coordinates, so that my image isn't stretched.
For instance, I decided my south-west LatLng to be [50, 50]. My image dimensions are 24px/24px.
How do I calculate the north-east LatLng based on the image pixels?
You are probably looking for map conversion methods.
In particular, you could use:
latLngToContainerPoint: Given a geographical coordinate, returns the corresponding pixel coordinate relative to the map container.
containerPointToLatLng: Given a pixel coordinate relative to the map container, returns the corresponding geographical coordinate (for the current zoom level).
// 1) Convert LatLng into container pixel position.
var originPoint = map.latLngToContainerPoint(originLatLng);

// 2) Add the image pixel dimensions.
//    Positive x to go right (East).
//    Negative y to go up (North).
var nextCornerPoint = originPoint.add({x: 24, y: -24});

// 3) Convert back into LatLng.
var nextCornerLatLng = map.containerPointToLatLng(nextCornerPoint);

var imageOverlay = L.imageOverlay(
  'path/to/image',
  [originLatLng, nextCornerLatLng]
).addTo(map);
Demo: http://playground-leaflet.rhcloud.com/tehi/1/edit?html,output

How to preserve spatial reference using imcrop with MATLAB

I have an image and the spatial reference object of that image.
Now I want to crop the image by coordinates according to the spatial reference object.
The function imcrop can only crop according to pixel coordinates. Is there a way to crop based on world coordinates?
I tried to use imcrop and compute the new reference object, but I got lost in the coordinate transformation.
Here is an example of the reference object after warping an image:
imref2d with properties:
XWorldLimits: [-775.4357 555.5643]
YWorldLimits: [-488.3694 523.6306]
ImageSize: [1012 1331]
PixelExtentInWorldX: 1
PixelExtentInWorldY: 1
ImageExtentInWorldX: 1331
ImageExtentInWorldY: 1012
XIntrinsicLimits: [0.5000 1.3315e+03]
YIntrinsicLimits: [0.5000 1.0125e+03]
What I actually want to do is to crop the image such that the point (0,0) is the center of the cropped image.
According to your spatial reference, each pixel has dimensions of 1 x 1 in world coordinates. Therefore, if you want to convert between a world coordinate (Xw,Yw) and an image coordinate (Xi,Yi), do the following:
Xi = round(abs(-775.4357 - Xw))
Yi = round(abs(-488.3694 - Yw))
So if you want to crop the image such that the real-world coordinate (0,0) is the center of the new cropped image, and the size of the new image is width by height, then the rectangle for imcrop will be:
[(775 - width/2) (488 - height/2) width height]
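The same thing can be written with imref2d's own conversion method, which avoids hard-coding the world limits (a sketch; I is the warped image, R is the imref2d object shown above, and the 200-pixel output size is just an example):
wWidth  = 200;  % desired crop width in pixels (example value)
wHeight = 200;  % desired crop height in pixels (example value)

% Convert the world-coordinate center (0,0) to intrinsic (pixel) coordinates.
[xCenter, yCenter] = worldToIntrinsic(R, 0, 0);

% imcrop takes [xmin ymin width height]; offset by half the size to center.
rect = [xCenter - wWidth/2, yCenter - wHeight/2, wWidth, wHeight];
cropped = imcrop(I, rect);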

Get a relationship between the position vector returned by the getPosition method in MATLAB

I wrote code for image cropping. I used imrect to draw a rectangle on the image and then got its position using the getPosition method. I wrote a function which uses image pixel coordinates for the cropping operation. How can I create a relationship between the values returned by getPosition and image pixel coordinates? My code for cropping is as follows:
[rnum, cnum, dim] = size(img);
for h = 1:dim
  for i = 1:width
    for j = 1:height
      negative(i, j, h) = img(xmin + i, ymin + j, h);
    end
  end
end
width, height, xmin, and ymin have to be found from the getPosition method.
Like you said, imrect's getPosition method returns a 1x4 position vector:
pos = getPosition(h);   % pos = [xmin ymin width height]
The first two values are the top-left corner of the rectangle, and the next two values are the lengths of its sides. These are all in pixel coordinates if you are using imrect.
To crop an image based on these position values, you start at the top-left corner (xmin, ymin) and go to the bottom-right corner at (xmin+width-1, ymin+height-1).
You should not use for loops to get the pixel data; you can take advantage of MATLAB's vectorization and do the following:
CroppedImageMatrix = OriginalImageMatrix(ymin : ymin+height-1, xmin : xmin+width-1, :);
This immediately "crops" the image and places the cropped data into the new matrix. You can do this because you are using a rectangular crop, so the indices form a rectangular lattice of points; it would be trickier if the crop were not rectangular.
This also works the same for color and grayscale images, because you do not index the channel dimension; you just take the values from every available channel.
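Tying it together (note that getPosition returns doubles, so round the values before using them as array indices):
pos = round(getPosition(h));   % [xmin ymin width height] as integers
xmin = pos(1); ymin = pos(2);
width = pos(3); height = pos(4);
CroppedImageMatrix = OriginalImageMatrix(ymin : ymin+height-1, xmin : xmin+width-1, :);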
P.S. - Documentation page for imrect: http://www.mathworks.com/help/images/ref/imrect.html