Can't load geoJSON layer dynamically into Leaflet map - leaflet

If I load some geoJSON via a JS file
<script type="text/javascript" src="xyz.js"></script>
where the .js file has data such as
var regions = {"type":"FeatureCollection","features":[ ....(etc)
I can then use this to add it to my map, thus:
var newGeoJsonLayerGroup = L.geoJson(regions, {
    // do stuff
}).addTo(map);
And this works a treat. However, if I want to get this data dynamically via an AJAX call, it fails. I first amend the data file to omit the "var regions = " prefix, then fetch it and try to load it like this:
$.ajax({
    url: "getRegions.ashx",
    cache: false,
    type: "post",
    async: false,
    success: function (response) {
        var regions = response;
        var newGeoJsonLayerGroup = L.geoJson(regions, {
            // do stuff
        }).addTo(map);
    }
});
The returned data from the AJAX call (response) is as expected, so why won't it load into the map?

OK, silly me - my AJAX response was being returned with the wrong Response.ContentType - I was using "text/plain"; changed to "application/json" and all is good :)
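A defensive alternative, in case the server's Content-Type cannot be changed: a small helper (the name toGeoJson is illustrative, not part of jQuery or Leaflet) that falls back to JSON.parse when jQuery hands the callback a raw string:

```javascript
// Hedged sketch: tolerate a text/plain response by parsing it manually.
// jQuery passes success() a string when the body is labelled text/plain,
// and an already-parsed object when it is labelled application/json.
function toGeoJson(response) {
    return (typeof response === "string") ? JSON.parse(response) : response;
}
```

Inside the question's success callback this would be `var regions = toGeoJson(response);` before calling `L.geoJson(regions).addTo(map);`. Setting `dataType: "json"` on the $.ajax call achieves the same thing declaratively.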

Related

How do I get the contents of href and img attributes from the CGV website using axios and jsdom

I am trying to target and extract the href and img contents from this website https://www.cgv.id/en/movies/now_playing, but I keep getting it wrong. This is my code:
const { default: axios } = require("axios");
const { JSDOM } = require("jsdom");

(async () => {
    try {
        const { data } = await axios.get(
            "https://www.cgv.id/en/movies/now_playing"
        );
        let dom = new JSDOM(data).window.document;
        let list = [...dom.getElementsByClassName('movie-list-body').querySelectorAll('li')]
        list = list.map(v => v.document.querySelectorAll('li a[href]').textContent.trimEnd())
        console.log(list);
    } catch (error) {
        console.log(error);
    }
})()
My code throws an error. How do I fix it so I can extract the href and img contents?
There are a couple of issues with using JSDOM here, especially the way you are using it.
Firstly, the HTML you retrieve with Axios does not contain any markup for the element with the class name movie-list-body.
On further inspection I realised the site uses a jQuery AJAX call to retrieve all the links and images from a JSON file.
Following is the script they are using to do so.
<script>
$(function() {
    $.ajax({
        type: 'GET',
        url: '/en/loader/home_movie_list',
        success: function(data) {
            $('.movie-list-body').html(data.now_playing);
            $('.comingsoon-movie-list-body').html(data.comingsoon);
            $('.lazy').lazy({
                combined: true
            });
        }
    });
});
</script>
In my opinion you should just use that JSON file. However, if you still want to use JSDOM following are some of the approaches.
Given that the site requires resource processing, if you want to parse the whole page using JSDOM you will have to pass the options described in the JSDOM documentation, as follows:
const options = {
    contentType: "text/html",
    includeNodeLocations: true,
    resources: "usable",
};
let dom = new JSDOM(data, options).window.document;
These options allow JSDOM to load all the resources, including jQuery, which in turn allows Node to make the AJAX call, populate the element and, in theory, lets you extract the links. However, there are some CSS files that JSDOM is unable to parse.
Therefore, I think your best bet is to do something along the following lines:
const { default: axios } = require("axios");
const { JSDOM } = require("jsdom");

(async () => {
    try {
        const data = await axios.get(
            "https://www.cgv.id/en/loader/home_movie_list"
        );
        const base_url = 'https://www.cgv.id';
        var dom = new JSDOM(data.data.now_playing).window.document;
        var lists = [...dom.getElementsByTagName('ul')[0].children];
        var list = lists.map(list => [base_url + list.firstChild.href, list.firstChild.firstChild.dataset.src]);
        console.log(list);
    } catch (error) {
        console.log(error);
    }
})()
Note:
There is only one catch with the approach above: if the author of the website changes the endpoint for the JSON file, your solution will stop working.

Loading using ajax request - onEachFeature leaflet - multiple requests

Hi, I am trying to add a loading indicator to an AJAX call, but there is one issue: in the example below the loader hides after the first request completes, whereas I need it to hide only when the last request has finished.
onEachFeature/geoJson creates multiple markers on my map, and each marker has unique data, so the AJAX call runs once per marker.
function onEachFeature(feature, layer) {
    if (feature.properties && feature.properties.name) {
        function getImage(value) {
            function applyImage(result, resultB) {
                // do something...
            }
            $.ajax({
                url: "libs/php/getImage.php",
                type: 'POST',
                dataType: 'json',
                data: {
                    id: value,
                },
                success: function (result) {
                    $("#iframeloading").show();
                    let valueA = result.A;
                    let valueB = result.B;
                    applyImage(valueA, valueB);
                },
                error: function () {
                    console.log('error');
                },
                complete: function () {
                    // hide goes here, but there's no effect...
                }
            });
        }
        getImage(feature.properties.wikidata);
    }
}
This works in my code; just put it before the AJAX calls and it will apply to every one of them:
$(document).ajaxStart(function () {
    $("#iframeloading").show();
});
$(document).ajaxStop(function () {
    $("#iframeloading").hide();
});
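If ajaxStart/ajaxStop is too broad for your page (those handlers react to every jQuery AJAX request, including unrelated ones), a manual pending-request counter does the same job scoped to just the marker requests. This is a sketch; makeLoader and the show/hide callbacks are illustrative names, not jQuery APIs:

```javascript
// Show the spinner when the first tracked request starts and hide it only
// when the last one finishes. Call loader.start() before each $.ajax and
// loader.done() in its complete callback.
function makeLoader(show, hide) {
    var pending = 0;
    return {
        start: function () { if (pending++ === 0) show(); },
        done: function () { if (--pending === 0) hide(); }
    };
}
```

With jQuery this would be wired up as `var loader = makeLoader(function () { $("#iframeloading").show(); }, function () { $("#iframeloading").hide(); });`.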

How to add custom Authorization Header for tile requests in Leaflet

I am using the Leaflet.VectorGrid plugin to load pbf vector tiles on a Leaflet map. The API from which I retrieve the vector tiles requires an authorization header to be passed. In Mapbox GL JS this can be solved with the transformRequest option. Example:
var baseLayer = L.mapboxGL({
    accessToken: token,
    style: 'myStyle',
    transformRequest: (url, resourceType) => {
        if (resourceType == 'Tile' && url.startsWith(TILE_SERVER_HOST)) {
            return {
                url: url,
                headers: { 'Authorization': 'Basic ' + key }
            };
        }
    }
}).addTo(leafletMap);
How can I do something similar in Leaflet to get past the 401 Unauthorized response that I am getting?
For reference, the vector layer constructor from the plugin:
var vectorTileOptions = {
    rendererFactory: L.canvas.tile
};
var pbfLayer = L.vectorGrid.protobuf(vectorTileUrl, vectorTileOptions).addTo(leafletMap);
This GitHub issue https://github.com/Leaflet/Leaflet.VectorGrid/issues/89 describes a fetchOptions attribute you can pass when instantiating your layer; it is used as the fetch options when retrieving the tiles.
You should be able to do
var vectorTileOptions = {
    rendererFactory: L.canvas.tile,
    fetchOptions: {
        headers: {
            Authorization: 'Basic ' + key
        }
    }
};
var pbfLayer = L.vectorGrid.protobuf(vectorTileUrl, vectorTileOptions).addTo(leafletMap);

How to do a preprocessing on tiles of a TileLayer before displaying them on Leaflet?

I have a large image that I broke up into 256x256 tiles using gdal2tiles.py.
I display it in Leaflet as an L.tileLayer, and it works fine.
Now I'd like to preprocess the 256x256 tiles before they are rendered in the tileLayer.
I need to apply an algorithm to these stored tiles; the algorithm generates tiles of the same size but with different content, so the effect is like changing the stored tiles' content dynamically.
Is it possible to replace the tiles in the TileLayer with processed tiles?
How should I proceed?
I would like to process these tiles only once, so I guess I should take advantage of caching.
@IvanSanchez thank you for your answer.
I created a new tileLayer that calls a REST API to run predictions on tiles (taking and returning a base64-encoded PNG image).
/*
 * L.TileLayer.Infer, inspired by L.TileLayer.PixelFilter (https://github.com/GreenInfo-Network/L.TileLayer.PixelFilter/)
 */
L.tileLayerInfer = function (url, options) {
    return new L.TileLayer.Infer(url, options);
}

L.TileLayer.Infer = L.TileLayer.extend({
    // the constructor saves settings and throws a fit if settings are bad, as typical
    // then adds the all-important 'tileload' event handler which basically "detects" an unmodified tile and performs the pixel swap
    initialize: function (url, options) {
        L.TileLayer.prototype.initialize.call(this, url, options);
        // and add our tile-load event hook which triggers us to do the infer
        this.on('tileload', function (event) {
            this.inferTile(event.tile);
        });
    },
    // extend the _createTile function to add the .crossOrigin attribute, since loading tiles from a separate service is a pretty common need
    // and the Canvas is paranoid about cross-domain image data. see issue #5
    // this is really only for Leaflet 0.7; as of 1.0 L.TileLayer has a crossOrigin setting which we define as a layer option
    _createTile: function () {
        var tile = L.TileLayer.prototype._createTile.call(this);
        tile.crossOrigin = "Anonymous";
        return tile;
    },
    // the heavy lifting to do the pixel-swapping
    // called upon 'tileload' and passed the IMG element
    // tip: when the tile is saved back to the IMG element, that counts as a tileload event too! thus an infinite loop, as well as comparing the pixelCodes against already-replaced pixels!
    // so, we tag the already-swapped tiles so we know when to quit
    // if the layer is redrawn, it's a new IMG element and that means it would not yet be tagged
    inferTile: function (imgelement) {
        // already processed, see note above
        if (imgelement.getAttribute('data-InferDone')) return;
        // copy the image data onto a canvas for manipulation
        var width = imgelement.width;
        var height = imgelement.height;
        var canvas = document.createElement("canvas");
        canvas.width = width;
        canvas.height = height;
        var context = canvas.getContext("2d");
        context.drawImage(imgelement, 0, 0);
        // encode image to base64
        var uri = canvas.toDataURL('image/png');
        var b64 = uri.replace(/^data:image.+;base64,/, '');
        var options = this.options;
        // call to REST API
        fetch('/api/predict', {
            method: 'POST',
            mode: 'no-cors',
            credentials: 'include',
            cache: 'no-cache',
            headers: {
                'Content-type': 'application/json',
                'Accept': 'application/json',
                'Access-Control-Allow-Origin': '*',
            },
            body: JSON.stringify({
                'image': [b64]
            })
        })
            .then((response) => response.json())
            .then((responseJson) => {
                // Perform success response.
                const obj = JSON.parse(responseJson);
                var image = "data:image/png;base64," + obj["predictions"][0];
                var img = new Image();
                img.onload = function () {
                    // draw retrieved image on tile (replace tile content)
                    context.globalAlpha = options.opacity;
                    context.drawImage(img, 0, 0, canvas.width, canvas.height);
                };
                img.src = image;
                imgelement.setAttribute('data-InferDone', true);
                imgelement.src = image;
            })
            .catch((error) => {
                console.log(error);
            });
    }
});
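The data-URL handling inside inferTile can be isolated into a tiny helper if you want to unit-test it outside the browser; the name dataUrlToBase64 is illustrative, not part of the layer above:

```javascript
// Strip the "data:image/...;base64," prefix from a canvas data URL,
// using the same regex as inferTile, leaving only the base64 payload
// that gets POSTed to the prediction API.
function dataUrlToBase64(uri) {
    return uri.replace(/^data:image.+;base64,/, '');
}
```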

Calling an ashx handler with jquery causes form action to post back to it

I'm calling an ashx handler with jquery ajax:
$.ajax({
    type: "GET",
    url: "handlers/getpage.ashx?page=" + pageName,
    dataType: "html",
    success: function (response) {
        $('.hidden-slide-panel').append(response);
    }
});
However, once this hidden-slide-panel div is populated and I click on anything inside it, the form's action value has been set to getpage.ashx rather than the calling page's form action. Is there a way to force it to use the calling page's action?
Use the "data" property for ajax():
http://api.jquery.com/jQuery.ajax/
Example:
$.ajax({
    type: "GET",
    url: "whatever.ashx",
    data: { page: pageName },
    success: function (data) { alert(data); }
});
Sounds like you just need to set the form's action back to its original value if it's changing:
document.forms[0].action = 'whatever';
// or
document.YourFormNameHere.action = 'whatever';