GeoTools OSM tile layer: Failed to load image, can't create an ImageInputStream - openstreetmap

I am trying to render OSM tiles in a given bounding box using GeoTools and the tile client. I have implemented a WMS service that can read a WMS request and render the image in a given bounding box; the OSM layer is used as a base layer, and there are other vector layers that I can add later. The vector layers are displayed correctly in the given bounding box, but the OSM tiles are not: the URL used to send the request to the OSM server doesn't get any response. I get the following error:
2020-05-27 16:03:00.291 ERROR 26094 --- [pool-6-thread-1] org.geotools.tile : Failed to load image: https://tile.openstreetmap.org/8/123/106.png
java.io.IOException: Can't create an ImageInputStream!
at org.geotools.image.io.ImageIOExt.read(ImageIOExt.java:339) ~[gt-coverage-22.2.jar:na]
at org.geotools.image.io.ImageIOExt.readBufferedImage(ImageIOExt.java:402) ~[gt-coverage-22.2.jar:na]
at org.geotools.tile.Tile.loadImageTileImage(Tile.java:175) ~[gt-tile-client-22.2.jar:na]
at org.geotools.tile.Tile.getBufferedImage(Tile.java:163) ~[gt-tile-client-22.2.jar:na]
at org.geotools.tile.util.TileLayer.getTileImage(TileLayer.java:143) [gt-tile-client-22.2.jar:na]
at org.geotools.tile.util.TileLayer.renderTile(TileLayer.java:131) [gt-tile-client-22.2.jar:na]
at org.geotools.tile.util.TileLayer.renderTiles(TileLayer.java:125) [gt-tile-client-22.2.jar:na]
at org.geotools.tile.util.TileLayer.draw(TileLayer.java:86) [gt-tile-client-22.2.jar:na]
at org.geotools.renderer.lite.CompositingGroup$WrappingDirectLayer.draw(CompositingGroup.java:228) [gt-render-22.2.jar:na]
at org.geotools.renderer.lite.StreamingRenderer$RenderDirectLayerRequest.execute(StreamingRenderer.java:3850) [gt-render-22.2.jar:na]
at org.geotools.renderer.lite.StreamingRenderer$PainterThread.run(StreamingRenderer.java:3911) [gt-render-22.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_232]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
As for the source code, this is how I add the OSM tile layer:
MapContent mapContent = new MapContent();
String baseURL = "https://tile.openstreetmap.org/";
TileService service = new OSMService("OSM", baseURL);
mapContent.addLayer(new TileLayer(service));
Later I render it using gt-render:
StreamingRenderer renderer = new StreamingRenderer();
renderer.setMapContent(map);
ReferencedEnvelope mapBounds = mapRequest.getReferencedEnvelope();
Rectangle imageBounds = new Rectangle(0, 0, mapRequest.getWidth(), mapRequest.getHeight());
map.getViewport().setScreenArea(imageBounds);
map.getViewport().setBounds( mapBounds );
BufferedImage image = new BufferedImage(mapRequest.getWidth(), mapRequest.getHeight(),BufferedImage.TYPE_4BYTE_ABGR);
Graphics2D gr = image.createGraphics();
gr.setRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON));
int threads = Runtime.getRuntime().availableProcessors();
ExecutorService fixedPool = Executors.newFixedThreadPool(threads - 1);
renderer.setThreadPool(fixedPool);
try {
renderer.paint(gr, imageBounds, mapBounds);
ImageIO.write(image, imageExtension, os);
}
catch (IOException e) {
throw new RuntimeException(e);
}
gr.dispose();
map.dispose();
I have a hard-coded version of the source code to test it:
public void test(OutputStream os) throws NoSuchAuthorityCodeException, FactoryException {
MapContent map = new MapContent();
String baseURL = "https://tile.openstreetmap.org/";
TileService service = new OSMService("OSM", baseURL);
map.addLayer(new TileLayer(service));
StreamingRenderer renderer = new StreamingRenderer();
renderer.setMapContent(map);
CoordinateReferenceSystem crs = CRS.decode("EPSG:3857");
ReferencedEnvelope mapBounds = new ReferencedEnvelope(-939258.203568246,-626172.1357121639,3130860.67856082,3443946.746416902,crs);
Rectangle imageBounds = new Rectangle(0, 0, 256, 256);
map.getViewport().setScreenArea(imageBounds);
map.getViewport().setBounds( mapBounds );
BufferedImage image = new BufferedImage(256, 256,
BufferedImage.TYPE_4BYTE_ABGR);
Graphics2D gr = image.createGraphics();
gr.setRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON));
try {
renderer.paint(gr, imageBounds, mapBounds);
ImageIO.write(image, imageExtension, os);
} catch (IOException e) {
throw new RuntimeException(e);
}
gr.dispose();
map.dispose();
}

The problem seems to be that OpenStreetMap will return an HTTP-429 error (Too Many Requests) if you don't set a valid User-Agent header in your requests.
I'm assuming this is a new requirement since the OSM tile code was last used or tested. Though looking at the test it may not actually fetch a tile to render.
I've raised a bug against gt-tile-client in the issue tracker, and I have a PR to fix the issue by adding a header, as the WMTSTile implementation does.
It should be available in the master nightly build later today (30/5/2020).
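Until the fix lands in a release, a possible workaround is to rely on the JDK's `HttpURLConnection` (which gt-tile-client uses under the hood): it includes the value of the `http.agent` system property in its User-Agent header. This is a sketch, not the library's own fix; the agent string below is an example and should identify your own application, as OSM's tile usage policy requests:

```java
public class OsmUserAgent {
    public static void main(String[] args) {
        // Set this once, before the first tile request is made.
        // HttpURLConnection will send this value as part of its User-Agent.
        System.setProperty("http.agent", "MyGeotoolsApp/1.0 (you@example.com)");
        System.out.println(System.getProperty("http.agent"));
    }
}
```

Note that the JDK appends its own `Java/<version>` token to the header, which OSM accepts as long as an identifying agent string is present.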

Related

How to do a preprocessing on tiles of a TileLayer before displaying them on Leaflet?

I have a large image that I broke up into 256x256 tiles using gdal2tiles.py.
I display it on Leaflet as an L.tileLayer, and it works fine.
Now I'd like to preprocess the 256x256 tiles before they are rendered in the tileLayer.
I need to apply an algorithm to these stored tiles; the algorithm generates tiles of the same size but with different content, so it can be thought of as changing the stored tiles' content dynamically.
Is it possible to replace the tiles in the TileLayer with processed tiles?
How should I proceed?
I would like to process these tiles only once, so I guess I should take advantage of caching.
@IvanSanchez thank you for your answer.
I created a new tileLayer that calls a REST API to run predictions on tiles (taking and returning a base64-encoded PNG image).
/*
* L.TileLayer.Infer, inspired by L.TileLayer.PixelFilter (https://github.com/GreenInfo-Network/L.TileLayer.PixelFilter/)
*/
L.tileLayerInfer = function (url, options) {
return new L.TileLayer.Infer(url, options);
}
L.TileLayer.Infer = L.TileLayer.extend({
// the constructor saves settings and throws a fit if settings are bad, as typical
// then adds the all-important 'tileload' event handler which basically "detects" an unmodified tile and performs the pixel-swap
initialize: function (url, options) {
L.TileLayer.prototype.initialize.call(this, url, options);
// and add our tile-load event hook which triggers us to do the infer
this.on('tileload', function (event) {
this.inferTile(event.tile);
});
},
// extend the _createTile function to add the .crossOrigin attribute, since loading tiles from a separate service is a pretty common need
// and the Canvas is paranoid about cross-domain image data. see issue #5
// this is really only for Leaflet 0.7; as of 1.0 L.TileLayer has a crossOrigin setting which we define as a layer option
_createTile: function () {
var tile = L.TileLayer.prototype._createTile.call(this);
tile.crossOrigin = "Anonymous";
return tile;
},
// the heavy lifting to do the pixel-swapping
// called upon 'tileload' and passed the IMG element
// tip: when the tile is saved back to the IMG element that counts as a tileload event too! thus an infinite loop, as well as comparing the pixelCodes against already-replaced pixels!
// so, we tag the already-swapped tiles so we know when to quit
// if the layer is redrawn, it's a new IMG element and that means it would not yet be tagged
inferTile: function (imgelement) {
// already processed, see note above
if (imgelement.getAttribute('data-InferDone')) return;
// copy the image data onto a canvas for manipulation
var width = imgelement.width;
var height = imgelement.height;
var canvas = document.createElement("canvas");
canvas.width = width;
canvas.height = height;
var context = canvas.getContext("2d");
context.drawImage(imgelement, 0, 0);
// encode image to base64
var uri = canvas.toDataURL('image/png');
var b64 = uri.replace(/^data:image.+;base64,/, '');
var options = this.options;
// call to Rest API
fetch('/api/predict', {
method: 'POST',
mode: 'no-cors',
credentials: 'include',
cache: 'no-cache',
headers: {
'Content-type': 'application/json',
'Accept': 'application/json',
'Access-Control-Allow-Origin': '*',
},
body: JSON.stringify({
'image': [b64]
})
})
.then((response) => response.json())
.then((responseJson) => {
// Perform success response.
const obj = JSON.parse(responseJson);
const image = "data:image/png;base64," + obj["predictions"][0];
var img = new Image();
img.onload = function() {
// draw retrieve image on tile (replace tile content)
context.globalAlpha = options.opacity
context.drawImage(img, 0, 0, canvas.width, canvas.height);
}
img.src = image;
imgelement.setAttribute('data-InferDone', true);
imgelement.src = image;
})
.catch((error) => {
console.log(error)
});
}
});

What is the best way to load KML layers on Leaflet?

I have to load a KML layer on a Leaflet app. After some browsing I found a library called leaflet-kml that does this. There are two ways that I can load the KML layer: either by the KML layer's URI or a KML string. The KML is stored in a server and I have backend code that retrieves both the URI and string representation.
Here is the approach using the URI.
function LoadKML(containerName, name)
{
let kmlURL = GetKmlURI(containerName, name);
let kmlLayer = new L.KML(kmlURL);
map.addLayer(kmlLayer);
}
Here is the approach using the kml string.
function LoadKML(containerName, name)
{
let kmlString = GetKmlString(containerName, name);
let kmlLayer = new L.KML.parseKML(kmlString);
map.addLayer(kmlLayer);
}
I could not get a URL with the first method due to the CORS restriction. The second method returns a string, but could not be parsed correctly.
KML.js:77 Uncaught TypeError: this.parseStyles is not a function
at new parseKML (KML.js:77)
at LoadKML (Account:470)
at Account:461
How should I call the function in leaflet-kml? Are there any libraries that can easily load KML into leaflet?
You can use leaflet-omnivore. It is the best plugin for loading KML files (https://github.com/mapbox/leaflet-omnivore)
var kmlData = omnivore.kml('data/kmlData.kml', null, customLayer);
There is the plugin leaflet-kml (https://github.com/windycom/leaflet-kml). Using it, you can write your code like this:
<head>
<script src="L.KML.js"></script>
</head>
<script type='text/javascript'>
// Make basemap
const map = new L.Map('map', {center: new L.LatLng(58.4, 43.0), zoom: 11},)
, osm = new L.TileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png')
map.addLayer(osm)
// Load kml file
fetch('Coventry.kml')
.then( res => res.text() )
.then( kmltext => {
// Create new kml overlay
parser = new DOMParser();
kml = parser.parseFromString(kmltext,"text/xml");
console.log(kml)
const track = new L.KML(kml)
map.addLayer(track)
// Adjust map to show the kml
const bounds = track.getBounds()
map.fitBounds( bounds )
})
</script>
</body>
It should work, rgds
You were close! parseKML requires a parsed XML DOM, and the result is a list of features, which you have to wrap as a layer too.
function LoadKML(containerName, name)
{
let kmlString = GetKmlString(containerName, name);
const domParser = new DOMParser();
const parsed = domParser.parseFromString(kmlString, 'text/xml');
const kmlGeoItems = L.KML.parseKML(parsed); // this is an array of layers
const layer = L.layerGroup(kmlGeoItems);
map.addLayer(layer);
}

zxing Datamatrix generator creating rectangular barcode which can't be scanned

I am using BarcodeWriter to write a DataMatrix barcode. While most of the time it creates a correct square-style DataMatrix barcode, for some texts it creates a rectangular barcode.
For input data like the following it creates a rectangular barcode:
8004600000070000017
C/TH PAUL PENGELLY
C/TH NICKY PARSONS
C/TH ROSEMARIE BARTOLOME
while for others it creates a square-styled barcode: CTH HEKT-WOODROW MORGAN
800460000007
800460000007000001700000
I am using this code to generate the barcode:
BarcodeWriter writer = new BarcodeWriter() { Format = BarcodeFormat.DATA_MATRIX };
var img = writer.Write(inputData);
img.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
dto.BarcodeImage = ms.ToArray();
return new Bitmap(img);
How can I make sure that I always get a square-shaped DataMatrix?
I have already tried adding the Height and Width options.
Thanks
There is a SymbolShape option which can be used to force the shape.
DatamatrixEncodingOptions options = new DatamatrixEncodingOptions()
{
Height = 300,
Width = 300,
PureBarcode = true,
Margin = 1,
SymbolShape = SymbolShapeHint.FORCE_SQUARE
};
writer.Options = options; // apply to the BarcodeWriter before calling Write
It is not easy to find, but after careful review I found how to do it.
readonly DataMatrixWriter DMencoder = new();
readonly Dictionary<EncodeHintType, object> DMencodeType = new()
{
[EncodeHintType.DATA_MATRIX_DEFAULT_ENCODATION] = Encodation.C40, //Optional
[EncodeHintType.DATA_MATRIX_SHAPE] = SymbolShapeHint.FORCE_SQUARE
};
DMencoder.encode(matrixText, BarcodeFormat.DATA_MATRIX, 100, 100, DMencodeType)

JGraphT Image Creation

I need to create a directed graph and an image displaying this graph.
I tried using DirectedGraph, which works just fine to create the graph; it is stored correctly internally (I tested this), but I fail at creating an image from it to display in an E4 RCP application.
This is my code:
import org.jgraph.JGraph;
import org.jgrapht.DirectedGraph;
import org.jgrapht.ext.JGraphModelAdapter;
import org.jgrapht.graph.DefaultDirectedGraph;
import org.jgrapht.graph.DefaultEdge;
DirectedGraph <String, DefaultEdge> graph = new DefaultDirectedGraph<String, DefaultEdge>(DefaultEdge.class);
addVertexes();
addEdges();
//Create image from graph
JGraphModelAdapter<String, DefaultEdge> graphModel = new JGraphModelAdapter<String, DefaultEdge>(graph);
JGraph jgraph = new JGraph (graphModel);
BufferedImage img = jgraph.getImage(Color.WHITE, 5);
but apparently img is always null. Why is that, and how can I change this to work properly?
I just read about JGraphX and tried using that; for me it works just fine! This is an example of my code now (with a reduced set of vertices and edges).
mxGraph graphMx = new mxGraph();
graphMx.insertVertex(graphMx.getDefaultParent(), "Start", "Start", 0.0, 0.0, 50.0, 30.0, "rounded");
graphMx.insertVertex(graphMx.getDefaultParent(), "Ende", "Ende", 0.0, 0.0, 50.0, 30.0, "rounded");
graphMx.insertEdge(graphMx.getDefaultParent(), null, "", ((mxGraphModel)graphMx.getModel()).getCell("Start"), ((mxGraphModel)graphMx.getModel()).getCell("Ende"));
mxIGraphLayout layout = new mxHierarchicalLayout(graphMx);
layout.execute(graphMx.getDefaultParent());
BufferedImage image = mxCellRenderer.createBufferedImage(graphMx, null, 1, Color.WHITE, true, null);
return image;
My answer is similar to @HexFlex's, but I got this to work the same way, although I'm not sure how to customize the drawing:
String GRAPH_FILE_PATH = "somwhere/u/want.png";
public static <V, E> File drawGraph(Graph<V, E> graph) throws IOException {
JGraphXAdapter<V, E> graphAdapter = new JGraphXAdapter<V, E>(graph);
mxIGraphLayout layout = new mxCircleLayout(graphAdapter);
layout.execute(graphAdapter.getDefaultParent());
BufferedImage image = mxCellRenderer.createBufferedImage(graphAdapter, null, 2, new Color(0f,0f,0f,.5f), true, null);
File imgFile = new File(GRAPH_FILE_PATH);
ImageIO.write(image, "PNG", imgFile);
return imgFile;
}
Edit:
By the way, the code is from https://www.baeldung.com/jgrapht; I just refactored it into a function.
You can also put it on a JFrame and generate an image from that frame.
-Jan
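The JFrame suggestion above can be sketched without ever showing a window: any Swing component can be painted straight into an offscreen BufferedImage. The component and dimensions below are purely illustrative stand-ins for a frame's content pane holding the rendered graph:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JComponent;
import javax.swing.JLabel;

public class ComponentToImage {
    // Paint a Swing component (e.g. the pane holding the graph) into an image.
    public static BufferedImage render(JComponent comp, int w, int h) {
        comp.setSize(w, h);
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        comp.paint(g); // draws the component and its children into the image
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = render(new JLabel("graph goes here"), 200, 100);
        System.out.println(img.getWidth() + "x" + img.getHeight());
    }
}
```

The resulting image can then be written out with ImageIO.write, exactly as in the answers above.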

Trying to make use of Jcrop and serverside image resizing with scala Scrimage lib

I'm trying to combine Jcrop and Scrimage, but I'm having trouble understanding the documentation of Scrimage.
The user uploads an image. When the upload is done, the user is able to choose a fixed area to crop with Jcrop:
upload.js
$(function () {
$('#fileupload').fileupload({
dataType: 'json',
progress: function (e, data) {
var progress = parseInt(data.loaded / data.total * 100, 10);
$("#progress").find(".progress-bar").css(
"width",
progress + "%"
);
},
done: function (e, data) {
$("#cropArea").empty();
var cropWidth = 210;
var cropHeight = 144;
if(data.result.status == 200) {
var myImage = $("<img></img>", {
src: data.result.link
}).appendTo('#cropArea');
var c = myImage.Jcrop({
allowResize: false,
allowSelect: false,
setSelect:[0,0,cropWidth,cropHeight],
onSelect: showCoords
});
}
}
});
});
Example:
When the user is satisfied the coordinates will be posted to the server and there is where the magic should happen.
Controller:
def uploadFile = Action(multipartFormDataAsBytes) { request =>
val result = request.body.files.map {
case FilePart(key, filename, contentType, bytes) => {
val coords = request.body.dataParts.get("coords")
val bais = new ByteArrayInputStream(bytes)
Image(bais).resize(magic stuff with coords)
Ok("works")
}
}
result(0)
}
If i read the docs for scrimage and resize:
Resizes the canvas to the given dimensions. This does not scale the
image but simply changes the dimensions of the canvas on which the
image is sitting. Specifying a larger size will pad the image with a
background color and specifying a smaller size will crop the image.
This is the operation most people want when they think of crop.
But when trying to implement resize with an input stream, Image(is).resize(), I'm not sure how I should do this. Resize takes a scaleFactor, position and color... I guess I should populate the position with the coords I get from Jcrop?? And what do I do with the scaleFactor? Has anybody got a good example of how to do this?
Thank you for two great libs!
Subimage is what you want. That lets you specify coordinates rather than offsets.
So simply,
val image = // original
val resized = image.subimage(x,y,w,h) // you'll get these from jcrop somehow