How to convert lat/lon to correct pixel location in GridLayer Tile - leaflet

I'm playing with creating a Konva-based GridLayer for Leaflet (basically an abstraction around canvas elements to try and render tens of thousands of features efficiently). I have some code that seems to work to some degree (the lines in my sample data seem to line up with what I would expect), but I am getting strange behavior. Specifically, features will seem to visibly "teleport" or disappear completely. Additionally, it is not uncommon to see breaks in lines at the edges of the tiles. I suspect this means I'm calculating the pixel location within each tile incorrectly (although it's certainly possible something else is wrong). I am basically identifying the pixel location of the tile (x, y in renderStage()), and am translating the map pixel position by that many pixels (pt.x and pt.y, generated by projecting the lat/lon). This is intended to create an array of [x1, y1, x2, y2, ...], which can be rendered to the individual tile. Everything is expected to be in EPSG:4326.
Does anyone know how to properly project lat/lon to pixel coordinates within individual tiles of a GridLayer? There are plenty of examples for doing it for the entire map, but this doesn't seem to translate cleanly into how to find those same pixel locations in tiles offset from the upper left of the map.
import { GridLayer, withLeaflet } from "react-leaflet";
import { GridLayer as LeafletGridLayer } from "leaflet";
import { Stage, Line, FastLayer } from "konva";
import * as Util from "leaflet/src/core/Util";
import _ from "lodash";

export const CollectionLayer = LeafletGridLayer.extend({
  options: {
    tileSize: 256
  },
  initialize: function (collection, props) {
    Util.setOptions(this, props);
    this.collection = collection;
    this.stages = new Map();
    this.shapes = {};
    this.cached = {};
    this.on("tileunload", (e) => {
      // Key the Map by the tile key string; using the coords object directly
      // with bracket access would collapse every tile to "[object Object]".
      const key = this._tileCoordsToKey(e.coords);
      const stage = this.stages.get(key);
      if (stage) {
        this.stages.delete(key);
        stage.destroy();
      }
    });
  },
  renderStage: function (stage, coords, tileBounds) {
    const x = coords.x * this._tileSize.x;
    const y = coords.y * this._tileSize.y;
    const z = coords.z;
    const layer = stage.getLayers()[0];
    if (!layer || !tileBounds) return;
    _.each(this.collection.data, (entity, id) => {
      if (entity.bounds && tileBounds.intersects(entity.bounds)) {
        let shape = this.shapes[id];
        if (!shape) {
          shape = new Line();
          shape.shadowForStrokeEnabled(false);
          this.shapes[id] = shape;
        }
        layer.add(shape);
        const points = entity.position.reduce((pts, p) => {
          const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom);
          pts.push(pt.x - x);
          pts.push(pt.y - y);
          return pts;
        }, []);
        shape.points(points);
        shape.stroke("red");
        shape.strokeWidth(2);
        this.shapes[id] = shape;
      }
    });
    layer.batchDraw();
  },
  createTile: function (coords) {
    const tile = document.createElement("div");
    const tileSize = this.getTileSize();
    const stage = new Stage({
      container: tile,
      width: tileSize.x,
      height: tileSize.y
    });
    const bounds = this._tileCoordsToBounds(coords);
    const layer = new FastLayer();
    stage.add(layer);
    this.stages.set(this._tileCoordsToKey(coords), stage);
    this.renderStage(stage, coords, bounds);
    return tile;
  }
});

class ReactCollectionLayer extends GridLayer {
  createLeafletElement(props) {
    console.log("PROPS", props);
    return new CollectionLayer(props.collection.data, this.getOptions(props));
  }
  updateLeafletElement(fromProps, toProps) {
    super.updateLeafletElement(fromProps, toProps);
    if (this.leafletElement.collection !== toProps.collection) {
      this.leafletElement.collection = toProps.collection;
      this.leafletElement.redraw();
    }
  }
}

export default withLeaflet(ReactCollectionLayer);

Everything is expected to be in EPSG:4326.
No.
Once you are dealing with raster data (image tiles), everything is expected to be either in the map's display CRS, which is (by default) EPSG:3857, or in pixels relative to the CRS origin. These concepts are explained a bit more in-depth in one of Leaflet's tutorials.
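For instance (an illustrative aside; map.project is a documented Leaflet method that returns pixel coordinates relative to the CRS origin at the given zoom):
// Project a lat/lon into absolute pixel space at zoom 3.
const pixelPoint = map.project(L.latLng(51.5, -0.09), 3);
console.log(pixelPoint.x, pixelPoint.y); // pixels, not degrees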
In fact, you seem to be working in pixels here, at least for your points:
const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom)
However, your calculation of the pixel offset for each tile is too naïve:
const x = coords.x * this._tileSize.x
const y = coords.y * this._tileSize.y
That should instead rely on the private method _getTiledPixelBounds of L.GridLayer, e.g.:
const tilePixelBounds = this._getTiledPixelBounds();
const x = tilePixelBounds.min.x;
const y = tilePixelBounds.min.y;
And use these bounds to add some sanity checks while looping through the points:
const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom);
if (!tilePixelBounds.contains(pt)) { console.error(....); }
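Putting the two pieces together, a minimal sketch of the revised reduce loop might look like this (it reuses the question's variable names; keep in mind _getTiledPixelBounds is private and may change between Leaflet versions):
const tilePixelBounds = this._getTiledPixelBounds();
const points = entity.position.reduce((pts, p) => {
    const pt = this._map.project([p.value[1], p.value[0]], this._tileZoom);
    if (!tilePixelBounds.contains(pt)) {
        // A point landing outside the tiled pixel bounds is a strong hint
        // that the offset math for this tile is wrong.
        console.error('Point outside tiled pixel bounds:', pt);
    }
    pts.push(pt.x - tilePixelBounds.min.x);
    pts.push(pt.y - tilePixelBounds.min.y);
    return pts;
}, []);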
On the other hand:
[...] an abstraction around canvas elements to try and render tens of thousands of features efficiently
I don't think using Konva to actually draw items on a <canvas> is going to improve performance - the methods are the same ones used by Leaflet (and, if we're talking about tiling vector data, the same ones used by Leaflet.VectorGrid). Ten thousand calls to canvas draw functions are going to take the same time no matter what library sits on top. If you have time to consider other alternatives, Leaflet.GLMarkers and its WebGL rendering might offer better performance at the price of less compatibility and higher integration costs.

Related

How to create a hybrid route (Pedestrian and Car)?

I have a route that includes vehicle and pedestrian modes together. When HERE creates this route, I want to show the pedestrian parts with dashed lines or a different color so users can tell them apart.
I parsed the route's sections for sectionTransportMode like this:
_routeCalculator.calculatePedestrianRoute(waypoints, (HERE.RoutingError? routingError, List<HERE.Route>? routeList) async {
  if (routingError == null) {
    HERE.Route _calculatedRoute = routeList!.first;
    _calculatedRoute.sections.forEach((element) {
      print('TransportMode: ' + element.sectionTransportMode.toString());
    });
    _showRouteOnMap(_calculatedRoute);
    _startNavigationOnRoute(isSimulated, _calculatedRoute);
  } else {
    final error = routingError.toString();
    _showDialog('Error', 'Error while calculating a pedestrian route: $error');
  }
});
But how can I do that after this code snippet?
A MapPolyline consists of three elements:
A list of two or more geographic coordinates that define where to place the polyline on the map.
A GeoPolyline that contains this list of coordinates.
Style parameters such as DashPattern or LineCap to define how to visualize the polyline.
https://developer.here.com/documentation/android-sdk-navigate/4.8.3.0/dev_guide/topics/map-items.html#add-map-polylines
When calling showRouteOnMap, you can create the MapPolyline with dashed lines. Here is an example:
private void showRouteOnMap(Route route) {
    // Show route as polyline.
    GeoPolyline routeGeoPolyline;
    try {
        routeGeoPolyline = new GeoPolyline(route.getPolyline());
    } catch (InstantiationErrorException e) {
        // It should never happen that a route polyline contains less than two vertices.
        return;
    }
    float widthInPixels = 20;
    // Blue color
    // Color lineColor = Color.valueOf(0, 0f, 0f, 139f);
    MapPolyline routeMapPolyline = new MapPolyline(routeGeoPolyline,
            widthInPixels,
            Color.valueOf(0, 0.56f, 0.54f, 0.63f)); // RGBA
    // Setting polyline to DashPattern
    routeMapPolyline.setDashPattern(new DashPattern(10));
    // routeMapPolyline.setDashFillColor(lineColor);
    mapView.getMapScene().addMapPolyline(routeMapPolyline);
    mapPolylines.add(routeMapPolyline);
    // Draw a circle to indicate starting point and destination.
    addCircleMapMarker(startGeoCoordinates, R.drawable.green_dot);
    addCircleMapMarker(destinationGeoCoordinates, R.drawable.green_dot);
    // Log maneuver instructions per route section.
    List<Section> sections = route.getSections();
    for (Section section : sections) {
        logManeuverInstructions(section);
    }
}
Please refer to this example available in git:
https://github.com/heremaps/here-sdk-examples/tree/master/examples/latest/navigate/flutter/routing_hybrid_app

MapboxGL Render Function Issue

I'm using Mapbox GL and I'm also using Three.js to be able to import a 3D model into the scene. The 3D model that I'm using has a very high polygon count. Because Mapbox GL's render function triggers on each frame, my browser becomes very laggy. Is it possible to trigger the render function only once, or which function should I use instead of the render function? I would like to render my 3D model only once on the map.
Here is my code:
mapBoxGLSetup: function () {
    mapboxgl.accessToken = "";
    oOriginPoint = [29.400261610397465, 40.87692013157027, 1];
    oMap = new mapboxgl.Map({
        logoPosition: "bottom-right",
        container: oSceneContainer.id,
        style: 'mapbox://styles/mapbox/streets-v11',
        center: oOriginPoint,
        zoom: 15,
        pitch: 0,
        antialias: true
    });
    var modelOrigin = oOriginPoint;
    var modelAltitude = 0;
    var modelRotate = [Math.PI / 2, Math.PI / 6.5, 0];
    var modelAsMercatorCoordinate = mapboxgl.MercatorCoordinate.fromLngLat(
        modelOrigin,
        modelAltitude
    );
    o3DModelTransform = {
        translateX: modelAsMercatorCoordinate.x,
        translateY: modelAsMercatorCoordinate.y,
        translateZ: modelAsMercatorCoordinate.z,
        rotateX: modelRotate[0],
        rotateY: modelRotate[1],
        rotateZ: modelRotate[2],
        scale: (modelAsMercatorCoordinate.meterInMercatorCoordinateUnits() / 1000) * 0.85
    };
},
oSceneMapSetup: function () {
    oMap.on('style.load', function () {
        oMap.addLayer({
            id: 'custom_layer',
            type: 'custom',
            renderingMode: '3d',
            onAdd: function (oMapElement, oGlElement) {
                base.oMapElement = oMapElement;
                base.setupRenderer(oMapElement, oGlElement);
                base.setupLayout(); // I'm loading the 3D model in this function
                base.setupRayCaster();
            },
            render: function (gl, matrix) {
                // This render function triggers on each frame
                var rotationX = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), o3DModelTransform.rotateX);
                var rotationY = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 1, 0), o3DModelTransform.rotateY);
                var rotationZ = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), o3DModelTransform.rotateZ);
                var oMatrix = new THREE.Matrix4().fromArray(matrix);
                var oTranslation = new THREE.Matrix4().makeTranslation(o3DModelTransform.translateX, o3DModelTransform.translateY, o3DModelTransform.translateZ)
                    .scale(new THREE.Vector3(o3DModelTransform.scale, -o3DModelTransform.scale, o3DModelTransform.scale))
                    .multiply(rotationX)
                    .multiply(rotationY)
                    .multiply(rotationZ);
                oCamera.projectionMatrix = oMatrix.multiply(oTranslation);
                oRenderer.resetState();
                oRenderer.render(oScene, oCamera);
                base.oMapElement.triggerRepaint();
            }
        });
    });
},
Thanks for your help and support.
As long as you are still calling triggerRepaint on each layer render loop, you will repaint the full map; it's inherent to the way CustomLayerInterface and layer updates work in Mapbox.
When I did my first research on the triggerRepaint topic, I found a fairly old issue in Mapbox where someone tested all the different options, including having a fully separated context and even two Mapbox instances, one of them empty. Here is the link.
The performance was obviously better in terms of FPS/memory, but there were other collateral effects that I personally wouldn't accept for threebox, like losing the depth calculation between Mapbox fill-extrusions and the 3D custom layer.
(Comparison captures from that issue: "Sharing context" vs. "Different contexts & canvas".)
The second issue is the delay between the movement of both cameras. While the current shared context ensures the objects are fixed and stuck to a coordinate set, creating different contexts will produce a soft dragging effect where the delay between the two contexts' renders can be visually perceived when the map moves first and the 3D objects follow. It's perceivable even with one single cube, so with thousands of objects it will definitely be clearer.
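If the model itself never animates, one thing worth trying (a sketch derived from the question's code, not an official Mapbox recipe) is dropping the per-frame triggerRepaint() call entirely: Mapbox GL only invokes a custom layer's render during a map repaint anyway, so without the forced repaint the model is re-rendered only on map interaction (pan/zoom/rotate) instead of on every frame:
render: function (gl, matrix) {
    // Same transform math as in the question's render function.
    var rotationX = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), o3DModelTransform.rotateX);
    var rotationY = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 1, 0), o3DModelTransform.rotateY);
    var rotationZ = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), o3DModelTransform.rotateZ);
    var oMatrix = new THREE.Matrix4().fromArray(matrix);
    var oTranslation = new THREE.Matrix4().makeTranslation(o3DModelTransform.translateX, o3DModelTransform.translateY, o3DModelTransform.translateZ)
        .scale(new THREE.Vector3(o3DModelTransform.scale, -o3DModelTransform.scale, o3DModelTransform.scale))
        .multiply(rotationX).multiply(rotationY).multiply(rotationZ);
    oCamera.projectionMatrix = oMatrix.multiply(oTranslation);
    oRenderer.resetState();
    oRenderer.render(oScene, oCamera);
    // No base.oMapElement.triggerRepaint() here: the layer is redrawn only
    // when the map itself repaints, which is enough for a static model.
}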

How to change the zoom centerpoint in an ILNumerics scene viewed with a camera

I would like to be able to zoom into an ILNumerics scene viewed by a camera (as in scene.Camera) with the center point of the zoom determined by where the mouse pointer is located when I start spinning the mouse scroll wheel. The default zoom behavior is for the zoom center to be at the scene.Camera.LookAt point. So I guess this would require the mouse to be tracked in (X,Y) continuously and for that point to be used as the new LookAt point? This seems similar to this post on getting the 3D coordinates from a mouse click, but in my case there's no click to indicate the location of the mouse.
Tips would be greatly appreciated!
BTW, this kind of zoom method is standard operating procedure in CAD software to zoom in and out on an assembly of parts. It's super convenient for the user.
One approach is to overload the MouseWheel event handler. The current coordinates of the mouse are available here, too.
Use the mouse screen coordinates to acquire (to "pick") the world coordinate corresponding to the primitive under the mouse.
Adjust the Camera.Position and Camera.ZoomFactor to 'move' the camera closer to the point under the mouse and to achieve the required 'directional zoom' effect.
Here is a complete example from the ILNumerics website:
using System;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
using static ILNumerics.Globals;
using static ILNumerics.ILMath;

namespace ILNumerics.Examples.DirectionalZoom {
    public partial class Form1 : Form {
        public Form1() {
            InitializeComponent();
        }

        private void panel2_Load(object sender, EventArgs e) {
            Array<float> X = 0, Y = 0, Z = CreateData(X, Y);
            var surface = new Surface(Z, X, Y, colormap: Colormaps.Winter);
            surface.UseLighting = true;
            surface.Wireframe.Visible = false;
            panel2.Scene.Camera.Add(surface);
            // setup mouse handlers
            panel2.Scene.Camera.Projection = Projection.Orthographic;
            panel2.Scene.Camera.MouseDoubleClick += Camera_MouseDoubleClick;
            panel2.Scene.Camera.MouseWheel += Camera_MouseWheel;
            // initial zoom all
            ShowAll(panel2.Scene.Camera);
        }

        private void Camera_MouseWheel(object sender, Drawing.MouseEventArgs e) {
            // Update: added comments.
            // The next conditionals help to sort out some calls that are not needed. Helpful for performance.
            if (!e.DirectionUp) return;
            if (!(e.Target is Triangles)) return;
            // Make sure to start with the SceneSyncRoot - the copy of the scene which receives
            // user interaction and is eventually used for rendering. See: https://ilnumerics.net/scene-management.html
            var cam = panel2.SceneSyncRoot.First<Camera>();
            if (Equals(cam, null)) return; // TODO: error handling. (Should not happen in regular setup, though.)
            // In case the user has configured limited interaction:
            if (!cam.AllowZoom) return;
            if (!cam.AllowPan) return; // this kind of directional zoom "comprises" a pan operation, to some extent.
            // Find mouse coordinates. Works only if the mouse is over a Triangles shape (surfaces, but not wireframes):
            using (var pick = panel2.PickPrimitiveAt(e.Target as Drawable, e.Location)) {
                if (pick.NextVertex.IsEmpty) return;
                // Acquire the target vertex coordinates (world coordinates) of the mouse ...
                Array<float> vert = pick.VerticesWorld[pick.NextVertex[0], r(0, 2), 0];
                // ... and transform them into a Vector3 for easier computations.
                var vertVec = new Vector3(vert.GetValue(0), vert.GetValue(1), vert.GetValue(2));
                // Perform zoom: we move the camera closer to the target.
                float scale = Math.Sign(e.Delta) * (e.ShiftPressed ? 0.01f : 0.2f); // adjust for faster / slower zoom
                var offs = (cam.Position - vertVec) * scale; // direction on the line cam.Position -> target vertex
                cam.Position += offs; // move the camera on that line
                cam.LookAt += offs;   // keep the camera orientation
                cam.ZoomFactor *= (1 + scale);
                // TODO: consider adding: the LookAt point has now moved away from the center / the surface due to our zoom.
                // In order for better rotations it makes sense to place the LookAt point back onto the surface,
                // by adjusting cam.LookAt appropriately. Otherwise, one could use cam.RotationCenter.
                e.Cancel = true;  // don't execute common mouse wheel handlers
                e.Refresh = true; // immediate redraw at the end of event handling
            }
        }

        private void Camera_MouseDoubleClick(object sender, Drawing.MouseEventArgs e) {
            var cam = panel2.Scene.Camera;
            ShowAll(cam);
            e.Cancel = true;
            e.Refresh = true;
        }

        // Some sample data. Replace this with your own data!
        private static RetArray<float> CreateData(OutArray<float> Xout, OutArray<float> Yout) {
            using (Scope.Enter()) {
                Array<float> x_ = linspace<float>(0, 20, 100);
                Array<float> y_ = linspace<float>(0, 18, 80);
                Array<float> Y = 1, X = meshgrid(x_, y_, Y);
                Array<float> Z = abs(sin(sin(X) + cos(Y))) + .01f * abs(sin(X * Y));
                if (!isnull(Xout)) {
                    Xout.a = X;
                }
                if (!isnull(Yout)) {
                    Yout.a = Y;
                }
                return -Z;
            }
        }

        // See: https://ilnumerics.net/examples.php?exid=7b0b4173d8f0125186aaa19ee8e09d2d
        public static double ShowAll(Camera cam) {
            // Update: adjusts the camera Position too.
            // This example works only with orthographic projection. You will need to take the view frustum
            // into account if you want to make this method work with perspective projection also. However,
            // the general functioning would be similar...
            if (cam.Projection != Projection.Orthographic) {
                throw new NotImplementedException();
            }
            // Get the overall extent of the camera's scene content.
            var limits = cam.GetLimits();
            // Take the maximum of width / height.
            var maxExt = limits.HeightF > limits.WidthF ? limits.HeightF : limits.WidthF;
            // Make sure the camera looks at the unrotated bounding box.
            cam.Reset();
            // Center the camera view.
            cam.LookAt = limits.CenterF;
            cam.Position = cam.LookAt + Vector3.UnitZ * 10;
            // Apply the zoom factor: the zoom factor will scale the 'left', 'top', 'bottom', 'right' limits
            // of the view. In order to fit exactly, we must take the "radius".
            cam.ZoomFactor = maxExt * .50;
            return cam.ZoomFactor;
        }
    }
}
Note that the new handler performs the directional zoom only when the mouse is located over an object held by this Camera! If, instead, the mouse is placed on the background of the scene or over some other Camera / plot cube object, no effect will be visible and the common zoom feature is performed (zooming in/out to the look-at point).

Drag, drop and shape rotation with Raphael JS

I'm using RaphaelJS 2.0 to create several shapes in a div. Each shape needs to be able to be dragged and dropped within the bounds of the div, independently. Upon double clicking a shape, that shape needs to rotate 90 degrees. It may then be dragged and dropped and rotated again.
I've loaded some code onto jsFiddle: http://jsfiddle.net/QRZMS/. It's basically this:
window.onload = function () {
    var angle = 0;
    var R = Raphael("paper", "100%", "100%"),
        shape1 = R.rect(100, 100, 100, 50).attr({ fill: "red", stroke: "none" }),
        shape2 = R.rect(200, 200, 100, 50).attr({ fill: "green", stroke: "none" }),
        shape3 = R.rect(300, 300, 100, 50).attr({ fill: "blue", stroke: "none" }),
        shape4 = R.rect(400, 400, 100, 50).attr({ fill: "black", stroke: "none" });
    var start = function () {
            this.ox = this.attr("x");
            this.oy = this.attr("y");
        },
        move = function (dx, dy) {
            this.attr({ x: this.ox + dx, y: this.oy + dy });
        },
        up = function () {
        };
    R.set(shape1, shape2, shape3, shape4).drag(move, start, up).dblclick(function () {
        angle -= 90;
        shape1.stop().animate({ transform: "r" + angle }, 1000, "<>");
    });
}
The drag and drop is working and also one of the shapes rotates on double click. However, there are two issues/questions:
How can I attach the rotation onto each shape automatically without having to hard-code each item reference into the rotate method? I.e. I just want to draw the shapes once, then have them all automatically exposed to the same behaviour, so they can each be dragged/dropped/rotated independently without having to explicitly apply that behaviour to each shape.
After a shape has been rotated, it no longer drags correctly - as if the drag mouse movement relates to the original orientation of the shape rather than updating when the shape is rotated. How can I get this to work correctly, so that shapes can be dragged and rotated many times, seamlessly?
Many thanks for any pointers!
I've tried several times to wrap my head around the new transform engine, to no avail. So, I've gone back to first principles.
I've finally managed to correctly drag and drop an object that's undergone several transformations, after trying to work out the impact of the different transformations - t,T,...t,...T,r,R etc...
So, here's the crux of the solution
var ox = 0;
var oy = 0;
function drag_start(e) {
}
function drag_move(dx, dy, posx, posy) {
    r1.attr({ fill: "#fa0" });
    //
    // Here's the interesting part: apply an absolute transform
    // with the dx,dy coordinates minus the previous value for dx and dy.
    //
    r1.attr({
        transform: "...T" + (dx - ox) + "," + (dy - oy)
    });
    //
    // Store the previous versions of dx,dy for use in the next move call.
    //
    ox = dx;
    oy = dy;
}
function drag_up(e) {
    // nothing here
}
That's it. Stupidly simple, and I'm sure it's occurred to loads of people already, but maybe someone might find it useful.
Here's a fiddle for you to play around with.
... and this is a working solution for the initial question.
I solved the drag/rotate issue by re-applying all transformations when a value changes. I created a plugin for it.
https://github.com/ElbertF/Raphael.FreeTransform
Demo here:
http://alias.io/raphael/free_transform/
As amadan suggests, it's usually a good idea to create functions when multiple things have the same (initial) attributes/properties. That is indeed the answer to your first question. As for the second question, that is a little more tricky.
When a Raphael object is rotated, its coordinate plane is rotated as well. For some reason, dmitry and a few other sources on the web seem to agree that this is the correct way to implement it. I, like you, disagree. I've not managed to find an all-round good solution, but I did manage to create a workaround. I'll briefly explain and then show the code.
Create a custom attribute to store the current state of rotation
Depending on that attribute you decide how to handle the move.
Providing that you are only going to be rotating shapes by 90 degrees (if not it becomes a lot more difficult) you can determine how the coordinates should be manipulated.
var R = Raphael("paper", "100%", "100%");

// Create the custom attribute which will hold the current rotation of the object {0,1,2,3}.
R.customAttributes.rotPos = function (num) {
    this.node.rotPos = num;
};

var shape1 = insert_rect(R, 100, 100, 100, 50, { fill: "red", stroke: "none" });
var shape2 = insert_rect(R, 200, 200, 100, 50, { fill: "green", stroke: "none" });
var shape3 = insert_rect(R, 300, 300, 100, 50, { fill: "blue", stroke: "none" });
var shape4 = insert_rect(R, 400, 400, 100, 50, { fill: "black", stroke: "none" });

// Generic insert rectangle function.
function insert_rect(paper, x, y, w, h, attr) {
    var angle = 0;
    var rect = paper.rect(x, y, w, h);
    rect.attr(attr);
    // On creation of the object, set the rotation position to 0.
    rect.attr({ rotPos: 0 });
    rect.drag(drag_move(), drag_start, drag_up);
    // Each time you double-click the shape it gets rotated, so increment its rotation state (looping round 4).
    rect.dblclick(function () {
        var pos = this.attr("rotPos");
        pos = (pos + 1) % 4; // was "(pos++) % 4", which discarded the wrapped value
        this.attr({ rotPos: pos });
        angle -= 90;
        rect.stop().animate({ transform: "r" + angle }, 1000, "<>");
    });
    return rect;
}

// ELEMENT/SET dragger functions.
function drag_start(e) {
    this.ox = this.attr("x");
    this.oy = this.attr("y");
}

// Now here is the complicated bit.
function drag_move() {
    return function (dx, dy) {
        // Default position: treat drag and drop as normal.
        if (this.attr("rotPos") == 0) {
            this.attr({ x: this.ox + dx, y: this.oy + dy });
        }
        // The shape has now been rotated -90.
        else if (this.attr("rotPos") == 1) {
            this.attr({ x: this.ox - dy, y: this.oy + dx });
        }
        else if (this.attr("rotPos") == 2) {
            this.attr({ x: this.ox - dx, y: this.oy - dy });
        }
        else if (this.attr("rotPos") == 3) {
            this.attr({ x: this.ox + dy, y: this.oy - dx });
        }
    }
}

function drag_up(e) {
}
I can't really think of a clear, concise way to explain how drag_move works. I think it's probably best that you look at the code and see how it works. Basically, you just need to work out how the x and y variables are now treated from this new rotated state. Without drawing lots of graphics I'm not sure I could be clear enough. (I did a lot of turning my head sideways to work out what it should be doing.)
There are a few drawbacks to this method though:
It only works for 90-degree rotations (a huge amount more calculation would be needed to do 45 degrees, never mind any arbitrary angle).
There is a slight movement upon drag start after a rotation. This is because the drag takes the old x and y values, which have been rotated. This isn't a massive problem for a shape of this size, but with bigger shapes you will really start to notice shapes jumping across the canvas.
I'm assuming the reason you are using transform is so that you can animate the rotation. If that isn't necessary, you could use the .rotate() function, which always rotates around the center of the element and so would eliminate the second drawback I mentioned.
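For illustration, a minimal sketch of that alternative (assuming Element.rotate is available in your Raphael build; with no origin arguments it rotates around the element's center):
rect.dblclick(function () {
    // Instant 90-degree rotation about the shape's own center; no animation.
    this.rotate(-90);
});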
This isn't a complete solution, but it should definitely get you going along the correct path. I would be interested to see a full working version.
I've also created a version of this on jsfiddle which you can view here: http://jsfiddle.net/QRZMS/3/
Good luck.
I usually create an object for my shape and write the event handling into the object.
function shape(x, y, width, height, a) {
    var that = this;
    that.angle = 0;
    that.rect = R.rect(x, y, width, height).attr(a);
    that.rect.dblclick(function () {
        that.angle -= 90;
        that.rect.stop().animate({
            transform: "r" + that.angle
        }, 1000, "<>");
    });
    return that;
}
In the above, the constructor not only creates the rectangle, but sets up the double click event.
One thing to note is that a reference to the object is stored in "that". This is because the "this" reference changes depending on the scope. In the dblclick function I need to refer to the rect and angle values from my object, so I use the stored references that.rect and that.angle.
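Creating the shapes is then a one-liner each (an illustrative snippet, assuming the same R paper as in the question):
var s1 = new shape(100, 100, 100, 50, { fill: "red", stroke: "none" });
var s2 = new shape(200, 200, 100, 50, { fill: "green", stroke: "none" });
// Every instance wires up its own dblclick handler, so each shape
// rotates independently with no extra hook-up code.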
See this example (updated from a slightly dodgy previous instance)
There may be better ways of doing what you need, but this should work for you.
Hope it helps,
Nick
Addendum: Dan, if you're really stuck on this, and can live without some of the things that Raphael2 gives you, I'd recommend moving back to Raphael 1.5.x. Transforms were just added to Raphael2, the rotation/translation/scale code is entirely different (and easier) in 1.5.2.
Look at me, updating my post, hoping for karma...
If you don't want to use ElbertF's library, you can transform Cartesian coordinates into polar coordinates.
After that, you must add or subtract the angle and transform back into Cartesian coordinates.
We can see this in an example with a rect rotated 45 degrees (a rhombus) and then moved.
HTML
<div id="foo">
</div>
JAVASCRIPT
var paper = Raphael(40, 40, 400, 400);
var c = paper.rect(40, 40, 40, 40).attr({
    fill: "#CC9910",
    stroke: "none",
    cursor: "move"
});
c.transform("t0,0r45t0,0");
var start = function () {
        this.ox = this.type == "rect" ? this.attr("x") : this.attr("cx");
        this.oy = this.type == "rect" ? this.attr("y") : this.attr("cy");
    },
    move = function (dx, dy) {
        var r = Math.sqrt(Math.pow(dx, 2) + Math.pow(dy, 2));
        var ang = Math.atan2(dy, dx);
        ang = ang - Math.PI / 4;
        dx = r * Math.cos(ang);
        dy = r * Math.sin(ang);
        var att = this.type == "rect" ? { x: this.ox + dx, y: this.oy + dy } : { cx: this.ox + dx, cy: this.oy + dy };
        this.attr(att);
    },
    up = function () {
    };
c.drag(move, start, up);
DEMO
http://jsfiddle.net/Ef83k/74/
My first thought was to use getBBox(false) to capture the x,y coordinates of the object after the transform, then removeChild() the original Raphael obj from the canvas, then redraw the object using the coordinate data from getBBox(false). A hack, but I have it working.
One note though: since the object that getBBox(false) returns holds the CORNER coordinates (x, y) of the object, you need to calculate the center of the re-drawn object by doing ...
x = box['x'] + ( box['width'] / 2 );
y = box['y'] + ( box['height'] / 2 );
where
box = shapeObj.getBBox( false );
Another way to solve the same problem.

How to rotate/transform mapbox-gl-draw features?

I'm using mapbox-gl-draw to add move-able features to my map. In addition to movability functionality, I am needing rotate/transform -ability functionality for the features akin to Leaflet.Path.Transform.
As it stands, would my only option for achieving this be to create a custom mode?
e.g. something like:
map.on('load', function () {
    Draw.changeMode('transform');
});
I am not able to convert my map and its features to mapbox-gl-leaflet in order to implement Leaflet.Path.Transform, as losing rotation / bearing / pitch support is not an option.
Long answer incoming. (See http://mapster.me/mapbox-gl-draw-rotate-mode and http://npmjs.com/package/mapbox-gl-draw-rotate-mode for some final products, and https://github.com/mapstertech/mapbox-gl-draw-rotate-mode for the source.)
I've been working on something similar for a custom project, and not using a draw library. My project involves some pretty regularly sized objects, not very complex polygons, so the solution might be too simple for you but it may be the right path. I just have rotate and move.
Doing movement isn't too hard geographically. Here's some help to get you started. A basic JSBin is up at https://jsbin.com/yoropolewo/edit?html,output with some drag functionality (too tired to do rotate too).
First, register the necessary click events to have a dragging event. You can listen on the specific Mapbox layers for a mousedown, then on the whole document for a mousemove and mouseup.
To do individual shape rotation, you need to ensure that you are referring to the right feature. In this example I assume there's just one feature in the source data, but that's probably too simple for most uses, so you will have to extrapolate. The source data is what we affect when we call setData() later on. There are obviously numerous ways to do what I'm doing here, but I'm trying to be clear.
var currentDragging = false;
var currentDraggingFeature = false;
var currentDraggingType = false;
var firstDragEvent = false;
map.on('mousedown', 'my-layer-id', function (e) {
    currentDragging = 'my-source-id'; // this must correspond to the source-id of the layer
    currentDraggingFeature = e.features[0]; // you may have to filter this to make sure it's the right feature
    currentDraggingType = 'move'; // rotation or move
    firstDragEvent = map.unproject([e.originalEvent.layerX, e.originalEvent.layerY]);
});
window.addEventListener('mousemove', dragEvent);
window.addEventListener('mouseup', mouseUpEvent);
You will need a function, then, that takes an initial point, a distance, and a rotation, and returns a point back to you. Like this:
Number.prototype.toRad = function () {
    return this * Math.PI / 180;
}
Number.prototype.toDeg = function () {
    return this * 180 / Math.PI;
}
function getPoint(point, brng, dist) {
    dist = dist / 63.78137; // this number depends on how you calculate the distance
    brng = brng.toRad();
    var lat1 = point.lat.toRad(), lon1 = point.lng.toRad();
    var lat2 = Math.asin(Math.sin(lat1) * Math.cos(dist) +
        Math.cos(lat1) * Math.sin(dist) * Math.cos(brng));
    var lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(dist) *
        Math.cos(lat1),
        Math.cos(dist) - Math.sin(lat1) *
        Math.sin(lat2));
    if (isNaN(lat2) || isNaN(lon2)) return null;
    return [lon2.toDeg(), lat2.toDeg()];
}
Now, the key is the unproject method in Mapbox GL JS, which lets you move between x/y coordinates on the mouse and lng/lat on your map. Then use the map.getSource().setData() function to set new GeoJSON.
I am turning the x/y into coordinates immediately here but you can do it at any point. Something like the following for moving:
function moveEvent(e) {
    // In the case of move, you are just translating the points based on distance and angle of the drag.
    // Exactly how you translate your points here can depend on the shape.
    var geoPoint = map.unproject([e.layerX, e.layerY]);
    var xDrag = firstDragEvent.lng - geoPoint.lng;
    var yDrag = firstDragEvent.lat - geoPoint.lat;
    var distanceDrag = Math.sqrt(xDrag * xDrag + yDrag * yDrag);
    var angle = Math.atan2(xDrag, yDrag) * 180 / Math.PI;
    // Once you have this information, you loop over the coordinate points you have
    // and use a function to find a new point for each.
    var newFeature = JSON.parse(JSON.stringify(currentDraggingFeature));
    if (newFeature.geometry.type === 'Polygon') {
        var newCoordinates = [];
        newFeature.geometry.coordinates.forEach(function (coords) {
            newCoordinates.push(getPoint(coords, distanceDrag, angle));
        });
        newFeature.geometry.coordinates = newCoordinates;
    }
    map.getSource(currentDragging).setData(newFeature);
}
Rotating is a little harder because you want the shape to rotate around a central point, and you need to know the distance of each point to that central point in order to do that. If you have a simple square polygon this calculation would be easy. If not, then using something like this would be helpful (Finding the center of Leaflet polygon?):
var getCentroid2 = function (arr) {
    var twoTimesSignedArea = 0;
    var cxTimes6SignedArea = 0;
    var cyTimes6SignedArea = 0;
    var length = arr.length;
    var x = function (i) { return arr[i % length][0] };
    var y = function (i) { return arr[i % length][1] };
    for (var i = 0; i < arr.length; i++) {
        var twoSA = x(i) * y(i + 1) - x(i + 1) * y(i);
        twoTimesSignedArea += twoSA;
        cxTimes6SignedArea += (x(i) + x(i + 1)) * twoSA;
        cyTimes6SignedArea += (y(i) + y(i + 1)) * twoSA;
    }
    var sixSignedArea = 3 * twoTimesSignedArea;
    return [cxTimes6SignedArea / sixSignedArea, cyTimes6SignedArea / sixSignedArea];
}
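For a GeoJSON Polygon you would feed it a single ring of [lng, lat] pairs (an illustrative call, reusing the variables from the code above):
// The outer ring of a Polygon is coordinates[0].
var center = getCentroid2(currentDraggingFeature.geometry.coordinates[0]);
// center is an [lng, lat] pair you can use as the rotation origin.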
Once you have the ability to know the polygon's center, you're golden:
function rotateEvent(e) {
    // In the case of rotate, we keep the same distance from the center but change the angle.
    var findPolygonCenter = findCenter(currentDraggingFeature);
    var geoPoint = map.unproject([e.layerX, e.layerY]);
    var xDistanceFromCenter = findPolygonCenter.lng - geoPoint.lng;
    var yDistanceFromCenter = findPolygonCenter.lat - geoPoint.lat;
    var angle = Math.atan2(xDistanceFromCenter, yDistanceFromCenter) * 180 / Math.PI;
    var newFeature = JSON.parse(JSON.stringify(currentDraggingFeature));
    if (newFeature.geometry.type === 'Polygon') {
        var newCoordinates = [];
        newFeature.geometry.coordinates.forEach(function (coords) {
            var xDist = findPolygonCenter.lng - coords[0];
            var yDist = findPolygonCenter.lat - coords[1];
            var distanceFromCenter = Math.sqrt(xDist * xDist + yDist * yDist);
            var rotationFromCenter = Math.atan2(xDist, yDist) * 180 / Math.PI;
            newCoordinates.push(
                getPoint(coords, distanceFromCenter, rotationFromCenter + angle)
            );
        });
        newFeature.geometry.coordinates = newCoordinates;
        map.getSource(currentDragging).setData(newFeature); // apply the rotated geometry, as in moveEvent
    }
}
Of course, throughout, ensure that your coordinates are being passed and returned correctly from functions. Some of this code may have incorrect levels of arrays in it. It's very easy to run into bugs with the lat/lng object versus the geoJSON arrays.
I hope the explanation is brief but clear enough, and that you understand logically what we are doing to reorient these points. That's the main point, the exact code is details.
Maybe I should just make a module or fork GL Draw...