Google Earth API: Moving a polygon

I have just started coding with Google Earth using the GEPlugin control for .NET and still have a lot to learn.
What has me puzzled is dragging a polygon.
The method below is called whenever the mousemove event fires and should move each point of the polygon while retaining the original shape of the polygon. The lat/long for each point is changed, but the polygon does not move position on the map.
Will moving a point in a polygon cause it to redraw? Do I need to call a method to force a redraw, or perhaps do something else entirely?
Thanks!
private void DoMouseMove(IKmlMouseEvent mouseEvent)
{
    if (isDragging)
    {
        mouseEvent.preventDefault();
        var placemark = mouseEvent.getTarget() as IKmlPlacemark;
        if (placemark == null)
        {
            return;
        }

        IKmlPolygon polygon = placemark.getGeometry() as IKmlPolygon;
        if (polygon != null)
        {
            float latOffset = startLatLong.Latitude - mouseEvent.getLatitude();
            float longOffset = startLatLong.Longitude - mouseEvent.getLongitude();
            KmlLinearRingCoClass outer = polygon.getOuterBoundary();
            KmlCoordArrayCoClass coordsArray = outer.getCoordinates();
            for (int i = 0; i < coordsArray.getLength(); i++)
            {
                KmlCoordCoClass currentPoint = coordsArray.get(i);
                currentPoint.setLatLngAlt(currentPoint.getLatitude() + latOffset,
                                          currentPoint.getLongitude() + longOffset, 0);
            }
        }
    }
}

Consider voting for these issues to be resolved
http://code.google.com/p/earth-api-utility-library/issues/detail?id=33
http://code.google.com/p/earth-api-samples/issues/detail?id=167
You may find some hints at the following link:
http://earth-api-utility-library.googlecode.com/svn/trunk/extensions/examples/ruler.html

UPDATE:
I've released the extension library: https://bitbucket.org/mutopia/earth
See https://bitbucket.org/mutopia/earth/src/master/sample/index.html to run it.
See the drag() method in the sample code class, which calls setDragMode() and addDragEvent() to enable dragging of the KmlPolygon.
I successfully implemented this using takeOverCamera in the earth-api-utility-library and three events:
setDragMode: function (mode) {
    // summary:
    //      Sets dragging mode on and off
    if (mode == this.dragMode) {
        Log.info('Drag mode is already', mode);
    } else {
        this.dragMode = mode;
        Log.info('Drag mode set', mode);
        if (mode) {
            this.addEvent(this.ge.getGlobe(), 'mousemove', this.dragMouseMoveCallback);
            this.addEvent(this.ge.getGlobe(), 'mouseup', this.dragMouseUpCallback);
            this.addEvent(this.ge.getView(), 'viewchange', this.dragViewChange, false);
        } else {
            this.removeEvent(this.ge.getGlobe(), 'mousemove', this.dragMouseMoveCallback);
            this.removeEvent(this.ge.getGlobe(), 'mouseup', this.dragMouseUpCallback);
            this.removeEvent(this.ge.getView(), 'viewchange', this.dragViewChange, false);
        }
    }
},
This is in a utility library within a much larger project. dragMode is a boolean; toggling it adds or removes the events. These three events control what happens when you drag. addEvent and removeEvent are my own wrapper functions:
addEvent: function (targetObject, eventID, listenerCallback, capture) {
    // summary:
    //      Convenience method for google.earth.addEventListener
    capture = setDefault(capture, true);
    google.earth.addEventListener(targetObject, eventID, listenerCallback, capture);
},

removeEvent: function (targetObject, eventID, listenerCallback, capture) {
    // summary:
    //      Convenience method for google.earth.removeEventListener
    capture = setDefault(capture, true);
    google.earth.removeEventListener(targetObject, eventID, listenerCallback, capture);
},
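setDefault is not a built-in; presumably it is a tiny helper along these lines (a sketch, with the behavior inferred from its usage above):

// Hypothetical helper assumed by the wrappers above: fall back to a
// default when the caller omitted the argument.
function setDefault(value, fallback) {
    return (typeof value === 'undefined') ? fallback : value;
}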
Ignoring the minor details, all the important stuff is in the callbacks to those events. The mousedown event locks the camera and sets the polygon I'm dragging as the dragObject (it's just a variable I'm using). It saves the original lat long coordinates.
this.dragMouseDownCallback = lang.hitch(this, function (event) {
    var obj = event.getTarget();
    this.lockCamera(true);
    this.setSelected(obj);
    this.dragObject = obj;
    this.dragLatOrigin = this.dragLatLast = event.getLatitude();
    this.dragLngOrigin = this.dragLngLast = event.getLongitude();
});
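lockCamera is referenced here but not shown; the answer mentions takeOverCamera from the earth-api-utility-library for this. A minimal stand-in, assuming all it has to do is stop the globe from panning while a shape is dragged:

// Hypothetical stand-in for the lockCamera used above: toggle the
// plugin's own mouse navigation off during a drag and back on afterwards.
lockCamera: function (lock) {
    this.ge.getOptions().setMouseNavigationEnabled(!lock);
},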
The mousemove callback updates to the latest lat long coordinates:
this.dragMouseMoveCallback = lang.hitch(this, function (event) {
    if (this.dragObject) {
        var lat = event.getLatitude();
        var lng = event.getLongitude();
        var latDiff = lat - this.dragLatLast;
        var lngDiff = lng - this.dragLngLast;
        if (Math.abs(latDiff) > this.dragSensitivity || Math.abs(lngDiff) > this.dragSensitivity) {
            this.addPolyCoords(this.dragObject, [latDiff, lngDiff]);
            this.dragLatLast = lat;
            this.dragLngLast = lng;
        }
    }
});
Here I'm using some fancy sensitivity values to prevent updating this too often. Finally, addPolyCoords is also my own function which adds lat/long values to the existing coordinates of the polygon - effectively moving it across the globe. I do this with the built-in setLatitude() and setLongitude() functions for each coordinate. You can get the coordinates like so, where polygon is a KmlPolygon object:
polygon.getGeometry().getOuterBoundary().getCoordinates()
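addPolyCoords itself isn't shown in the answer. A minimal sketch of what it presumably looks like, given the description above; one detail worth flagging is that KmlCoordArray.get() hands back a copy of the coordinate, so the modified point has to be written back with set() before the polygon visibly updates (which is also the likely answer to the original question):

// Sketch of addPolyCoords (hypothetical implementation inferred from the
// answer's description). offset is [latDiff, lngDiff].
addPolyCoords: function (polygon, offset) {
    var coords = polygon.getGeometry().getOuterBoundary().getCoordinates();
    for (var i = 0; i < coords.getLength(); i++) {
        var coord = coords.get(i);               // get() returns a copy...
        coord.setLatitude(coord.getLatitude() + offset[0]);
        coord.setLongitude(coord.getLongitude() + offset[1]);
        coords.set(i, coord);                    // ...so write it back to trigger a redraw
    }
},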
And of course, the mouseup callback turns off the drag mode so that moving the mouse doesn't continue to drag the polygon:
this.dragMouseUpCallback = lang.hitch(this, function (event) {
    if (this.dragObject) {
        Log.info('Stop drag', this.dragObject.getType());
        setTimeout(lang.hitch(this, function () {
            this.lockCamera(false);
            this.setSelected(null);
        }), 100);
        this._dragEvent(event);
        this.dragObject = this.dragLatOrigin = this.dragLngOrigin = this.dragLatLast = this.dragLngLast = null;
    }
});
And finally, _dragEvent is called to ensure that the final coordinates are the actual coordinates the mouse event finished with (and not the latest mousemove call):
_dragEvent: function (event) {
    // summary:
    //      Helper function for moving drag object
    var latDiff = event.getLatitude() - this.dragLatLast;
    var lngDiff = event.getLongitude() - this.dragLngLast;
    if (!(latDiff == 0 && lngDiff == 0)) {
        this.addPolyCoords(this.dragObject, [latDiff, lngDiff]);
        Log.info('Moved ' + latDiff + ', ' + lngDiff);
    }
},
The mousemove callback isn't too important and can actually be ignored - the only reason I use it is to show the polygon moving as the user moves their mouse. Removing it will result in the object only being moved when they lift the mouse up.
Hopefully this incredibly long answer gives you some insights into how to implement dragging in the Google Earth API. And I also plan to release my library in the future when I've ironed out the kinks :)

Related

How to change the zoom centerpoint in an ILNumerics scene viewed with a camera

I would like to be able to zoom into an ILNumerics scene viewed by a camera (as in scene.Camera) with the center point of the zoom determined by where the mouse pointer is located when I start spinning the mouse scroll wheel. The default zoom behavior is for the zoom center to be at the scene.Camera.LookAt point. So I guess this would require the mouse to be tracked in (X,Y) continuously and for that point to be used as the new LookAt point? This seems to be like this post on getting the 3D coordinates from a mouse click, but in my case there's no click to indicate the location of the mouse.
Tips would be greatly appreciated!
BTW, this kind of zoom method is standard operating procedure in CAD software to zoom in and out on an assembly of parts. It's super convenient for the user.
One approach is to overload the MouseWheel event handler. The current coordinates of the mouse are available here, too. Use the mouse screen coordinates to acquire (to "pick") the world coordinate corresponding to the primitive under the mouse, then adjust the Camera.Position and Camera.ZoomFactor to 'move' the camera closer to the point under the mouse and to achieve the required 'directional zoom' effect.
Here is a complete example from the ILNumerics website:
using System;
using System.Windows.Forms;
using ILNumerics;
using ILNumerics.Drawing;
using ILNumerics.Drawing.Plotting;
using static ILNumerics.Globals;
using static ILNumerics.ILMath;

namespace ILNumerics.Examples.DirectionalZoom {
    public partial class Form1 : Form {
        public Form1() {
            InitializeComponent();
        }

        private void panel2_Load(object sender, EventArgs e) {
            Array<float> X = 0, Y = 0, Z = CreateData(X, Y);
            var surface = new Surface(Z, X, Y, colormap: Colormaps.Winter);
            surface.UseLighting = true;
            surface.Wireframe.Visible = false;
            panel2.Scene.Camera.Add(surface);
            // setup mouse handlers
            panel2.Scene.Camera.Projection = Projection.Orthographic;
            panel2.Scene.Camera.MouseDoubleClick += Camera_MouseDoubleClick;
            panel2.Scene.Camera.MouseWheel += Camera_MouseWheel;
            // initial zoom all
            ShowAll(panel2.Scene.Camera);
        }

        private void Camera_MouseWheel(object sender, Drawing.MouseEventArgs e) {
            // Update: added comments.
            // the next conditionals help to sort out some calls not needed. Helpful for performance.
            if (!e.DirectionUp) return;
            if (!(e.Target is Triangles)) return;
            // make sure to start with the SceneSyncRoot - the copy of the scene which receives
            // user interaction and is eventually used for rendering. See: https://ilnumerics.net/scene-management.html
            var cam = panel2.SceneSyncRoot.First<Camera>();
            if (Equals(cam, null)) return; // TODO: error handling. (Should not happen in regular setup, though.)
            // in case the user has configured limited interaction
            if (!cam.AllowZoom) return;
            if (!cam.AllowPan) return; // this kind of directional zoom "comprises" a pan operation, to some extent.
            // find mouse coordinates. Works only if mouse is over a Triangles shape (surfaces, but not wireframes):
            using (var pick = panel2.PickPrimitiveAt(e.Target as Drawable, e.Location)) {
                if (pick.NextVertex.IsEmpty) return;
                // acquire the target vertex coordinates (world coordinates) of the mouse
                Array<float> vert = pick.VerticesWorld[pick.NextVertex[0], r(0, 2), 0];
                // and transform them into a Vector3 for easier computations
                var vertVec = new Vector3(vert.GetValue(0), vert.GetValue(1), vert.GetValue(2));
                // perform zoom: we move the camera closer to the target
                float scale = Math.Sign(e.Delta) * (e.ShiftPressed ? 0.01f : 0.2f); // adjust for faster / slower zoom
                var offs = (cam.Position - vertVec) * scale; // direction on the line cam.Position -> target vertex
                cam.Position += offs;  // move the camera on that line
                cam.LookAt += offs;    // keep the camera orientation
                cam.ZoomFactor *= (1 + scale);
                // TODO: consider adding: the lookat point now moved away from the center / the surface due to our zoom.
                // In order for better rotations it makes sense to place the lookat point back to the surface,
                // by adjusting cam.LookAt appropriately. Otherwise, one could use cam.RotationCenter.
                e.Cancel = true;  // don't execute common mouse wheel handlers
                e.Refresh = true; // immediate redraw at the end of event handling
            }
        }

        private void Camera_MouseDoubleClick(object sender, Drawing.MouseEventArgs e) {
            var cam = panel2.Scene.Camera;
            ShowAll(cam);
            e.Cancel = true;
            e.Refresh = true;
        }

        // Some sample data. Replace this with your own data!
        private static RetArray<float> CreateData(OutArray<float> Xout, OutArray<float> Yout) {
            using (Scope.Enter()) {
                Array<float> x_ = linspace<float>(0, 20, 100);
                Array<float> y_ = linspace<float>(0, 18, 80);
                Array<float> Y = 1, X = meshgrid(x_, y_, Y);
                Array<float> Z = abs(sin(sin(X) + cos(Y))) + .01f * abs(sin(X * Y));
                if (!isnull(Xout)) {
                    Xout.a = X;
                }
                if (!isnull(Yout)) {
                    Yout.a = Y;
                }
                return -Z;
            }
        }

        // See: https://ilnumerics.net/examples.php?exid=7b0b4173d8f0125186aaa19ee8e09d2d
        public static double ShowAll(Camera cam) {
            // Update: adjusts the camera Position too.
            // this example works only with orthographic projection. You will need to take the view frustum
            // into account, if you want to make this method work with perspective projection also. however,
            // the general functioning would be similar....
            if (cam.Projection != Projection.Orthographic) {
                throw new NotImplementedException();
            }
            // get the overall extent of the camera's scene content
            var limits = cam.GetLimits();
            // take the maximum of width / height
            var maxExt = limits.HeightF > limits.WidthF ? limits.HeightF : limits.WidthF;
            // make sure the camera looks at the unrotated bounding box
            cam.Reset();
            // center the camera view
            cam.LookAt = limits.CenterF;
            cam.Position = cam.LookAt + Vector3.UnitZ * 10;
            // apply the zoom factor: the zoom factor will scale the 'left', 'top', 'bottom', 'right' limits
            // of the view. In order to fit exactly, we must take the "radius"
            cam.ZoomFactor = maxExt * .50;
            return cam.ZoomFactor;
        }
    }
}
Note that the new handler performs the directional zoom only when the mouse is located over an object held by this Camera! If, instead, the mouse is placed on the background of the scene or over some other Camera / plot cube object, no effect will be visible and the common zoom feature is performed (zooming in/out to the look-at point).

Controlling robot makes a shaking movement

I'm trying to control a robot by sending positions at 100 Hz. It makes a shaking movement when I send that many positions. When I send one position that is about 50 mm from its start position, it moves smoothly. When I use my sensor to steer (so it sends every position from 0 to 50 mm), it shakes. I'm probably sending something like X0-X1-X2-X1-X2-X3-X4-X5-X4-X5, and that may be why it shakes. How can I make the robot move smoothly when I steer it with my mouse?
The robot asks for positions at 125 Hz; the IR sensor sends at 100 Hz. Could the 25 Hz difference be part of the problem?
Here is my code.
while (true) {
    // If sensor 1 is recording IR light.
    if (listen1.newdata = true)
    {
        coX1 = (int) listen1.get1X();
        coY1 = (int) listen1.get1Y();
        newdata = true;
    } else {
        coX1 = 450;
        coY1 = 300;
    }
    if (listen2.newdata = true)
    {
        coX2 = (int) listen2.get1X();
        coY2 = (int) listen2.get1Y();
        newdata = true;
    } else {
        coY2 = 150;
    }
    // If the sensor gets further than the workspace, it will automatically correct it to these
    // coordinates.
    if (newdata = true)
    {
        if (coX1 < 200 || coX1 > 680)
        {
            coX1 = 450;
        }
        if (coY1 < 200 || coY1 > 680)
        {
            coY1 = 300;
        }
        if (coY2 < 80 || coY2 > 300)
        {
            coY2 = 150;
        }
    }
    // This is the actual command sent to the robot.
    Gcode = String.format("movej(p[0.%d,-0.%d, 0.%d, -0.5121, -3.08, 0.0005])" + "\n", coX1, coY1, coY2);
    // sends message to server
    send(Gcode, out);
    System.out.println(Gcode);
    newdata = false;
}
}

private static void send(String movel, PrintWriter out) {
    try {
        out.println(movel); /* Writes to server */
        // System.out.println("Writing: " + movel);
        // Thread.sleep(250);
    }
    catch (Exception e) {
        System.out.print("Error Connecting to Server\n");
    }
}
}
EDIT:
I discovered a way I can do this: via min and max. So basically what I think I have to do is (a sketch follows this list):
* put every individual coordinate in an array (12 coordinates)
* get the min and max out of this array
* output the average of the min and max
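A minimal sketch of that midrange idea (shown in JavaScript for brevity, although the code above is Java; all names are made up):

// Keep the last N samples and output the average of their min and max.
var WINDOW = 12;                         // the 12 coordinates mentioned above
var samples = [];
function smooth(value) {
    samples.push(value);
    if (samples.length > WINDOW) {
        samples.shift();                 // drop the oldest sample
    }
    var min = Math.min.apply(null, samples);
    var max = Math.max.apply(null, samples);
    return (min + max) / 2;              // midrange of the window
}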
Without knowing more about your robot's characteristics and how you can control it, here are some general considerations:
To get a smooth motion out of your robot, you should control it in speed, with a well designed PID controller algorithm.
If you can only control it in position, the best you can do is monitor the position and wait for it to be "near enough" to the targeted position before sending the next one.
If you want a more detailed answer, please give more information on the command you send to the robot (movej); I suspect you can do much more than just sending [x, y] coordinates.
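For reference on the PID suggestion above, a textbook position-to-speed PID step looks something like this (a sketch only, in JavaScript for consistency with the other snippets on this page; the gains are made up and would need tuning):

// Returns a controller function mapping position error to a speed command.
function makePID(kp, ki, kd) {
    var integral = 0, lastError = 0;
    return function (error, dt) {
        integral += error * dt;                      // accumulated error
        var derivative = (error - lastError) / dt;   // rate of change of error
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    };
}
// Usage at a 125 Hz control tick:
// var pid = makePID(0.8, 0.1, 0.05);
// var speed = pid(targetPos - currentPos, 1 / 125);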

Strange mouse event position problem

I have a function which returns mouse event positions.
// Returns the xy point where the mouse event occurred.
function getXY(ev) {
    var xypoint = new Point();
    if (ev.layerX || ev.layerY) { // Firefox
        xypoint.x = ev.layerX;
        xypoint.y = ev.layerY;
    } else if (ev.offsetX || ev.offsetX == 0) { // Opera
        xypoint.x = ev.offsetX;
        xypoint.y = ev.offsetY;
    }
    return xypoint;
}
I am capturing mouse events to perform drawings on an HTML5 canvas. Sometimes I get negative values for xypoint. When I debug the application using Firebug I see really strange behavior: for example, if I put my breakpoint at the 4th line of this function with the condition (xypoint.x < 0 || xypoint.y < 0), it stops at the breakpoint and I can see that layerX and layerY are positive and correct, but xypoint.x or xypoint.y is negative. If I reassign the values in the Firebug console, I get the correct values in xypoint. Can anyone explain what is happening?
The above works fine if I move the mouse at normal speed. If I move the mouse very rapidly, I get this behavior.
Thanks
Handling mouse position was an absolute pain with Canvas. You have to make a ton of adjustments. I use this, which has a few minor errors, but works even with the drag-and-drop divs I use in my app:
getCurrentMousePosition = function(e) {
    // Take mouse position, subtract element position to get relative position.
    if (document.layers) {
        xMousePos = e.pageX;
        yMousePos = e.pageY;
        xMousePosMax = window.innerWidth + window.pageXOffset;
        yMousePosMax = window.innerHeight + window.pageYOffset;
    } else if (document.all) {
        xMousePos = window.event.x + document.body.scrollLeft;
        yMousePos = window.event.y + document.body.scrollTop;
        xMousePosMax = document.body.clientWidth + document.body.scrollLeft;
        yMousePosMax = document.body.clientHeight + document.body.scrollTop;
    } else if (document.getElementById) {
        xMousePos = e.pageX;
        yMousePos = e.pageY;
        xMousePosMax = window.innerWidth + window.pageXOffset;
        yMousePosMax = window.innerHeight + window.pageYOffset;
    }
    elPos = getElementPosition(document.getElementById("cvs"));
    xMousePos = xMousePos - elPos.left;
    yMousePos = yMousePos - elPos.top;
    return {x: xMousePos, y: yMousePos};
}
getElementPosition = function(el) {
    var _x = 0,
        _y = 0;
    if (document.body.style.marginLeft == "" && document.body.style.marginRight == "") {
        _x += (window.innerWidth - document.body.offsetWidth) / 2;
    }
    if (el.offsetParent != null) {
        while (1) {
            _x += el.offsetLeft;
            _y += el.offsetTop;
            if (!el.offsetParent) break;
            el = el.offsetParent;
        }
    } else if (el.x || el.y) {
        if (el.x) _x = el.x;
        if (el.y) _y = el.y;
    }
    return { top: _y, left: _x };
}
Moral of the story? You need to figure in the offset of the canvas to have proper results. You're capturing the XY from the event, which has an offset that's not being captured in relation to the window's XY. Make sense?
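As an aside, on modern browsers a shorter route to canvas-relative coordinates is to subtract the canvas's bounding rectangle; a minimal sketch (not from the original answer):

// Convert a mouse event to coordinates relative to the canvas element.
function getMousePos(canvas, e) {
    var rect = canvas.getBoundingClientRect(); // canvas position in viewport coordinates
    return {
        x: e.clientX - rect.left,
        y: e.clientY - rect.top
    };
}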

Touches on transparent PNGs

I have a PNG in a UIImageView with alpha around the edges (let's say a circle). When I tap it, I want it to register as a tap for the circle if I'm touching the opaque bit, but a tap for the view behind if I touch the transparent bit.
(BTW: On another forum, someone said PNGs automatically do this, and a transparent PNG should pass the click on to the view below, but I've tested it and it doesn't, at least not in my case.)
Is there a flag I just haven't flipped, or do I need to create some kind of formula: "if tapped { get location; calculate distance from centre; if < r { touched circle } else { pass it on } }"?
-k.
I don't believe that PNGs automatically do this, but can't find any references that definitively say one way or the other.
Your radius calculation is probably simpler, but you could also manually check the alpha value of the touched pixel in your image to determine whether to count it as a hit. This code is targeted at OS X 10.5+, but with some minor modifications it should run on iPhone: Getting the pixel data from a CGImage object. Here is some related discussion on retrieving data from a UIImage: Getting data from an UIImage.
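To make the per-pixel test concrete, here is the same technique sketched in browser JavaScript against a canvas (a hedged illustration only; on iPhone you would go through CGImage/UIImage as in the links above):

// Returns true if the image is opaque (alpha above a small threshold) at
// image coordinates (x, y). Draws the image once to an offscreen canvas so
// its pixel data can be read back (same-origin images only).
function isOpaqueAt(img, x, y) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var alpha = ctx.getImageData(x, y, 1, 1).data[3]; // RGBA: index 3 is alpha
    return alpha > 10;
}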
I figured it out...the PNG, bounding box transparency issue and being able to click through to another image behind:
var hitTestPoint1:Boolean = false;
var myHitTest1:Boolean = false;
var objects:Array;

clip.addEventListener(MouseEvent.MOUSE_DOWN, doHitTest);
clip.addEventListener(MouseEvent.MOUSE_UP, stopDragging);
clip.buttonMode = true;
clip.mouseEnabled = true;
clip.mouseChildren = true;

clip2.addEventListener(MouseEvent.MOUSE_DOWN, doHitTest);
clip2.addEventListener(MouseEvent.MOUSE_UP, stopDragging);
clip2.buttonMode = true;
clip2.mouseEnabled = true;
clip2.mouseChildren = true;

clip.rotation = 60;

function doHitTest(event:MouseEvent):void
{
    objects = stage.getObjectsUnderPoint(new Point(event.stageX, event.stageY));
    trace("Which one: " + event.target.name);
    trace("What's under point: " + objects);
    // Walk the objects under the point and start dragging the first one that
    // passes the per-pixel realHitTest below.
    for (var i:int = 0; i < objects.length; i++)
    {
        if (objects[i] is DisplayObject && realHitTest(DisplayObject(objects[i]), new Point(event.stageX, event.stageY)))
        {
            DisplayObject(objects[i]).startDrag();
            break;
        }
    }
}

function stopDragging(event:MouseEvent):void
{
    event.target.stopDrag();
}
function realHitTest(object:DisplayObject, point:Point):Boolean
{
    /* If we're already dealing with a BitmapData object then we just use the hitTest
     * method of that BitmapData.
     */
    if (object is BitmapData)
    {
        return (object as BitmapData).hitTest(new Point(0, 0), 0, object.globalToLocal(point));
    }
    else
    {
        /* First we check if the hitTestPoint method returns false. If it does, that
         * means that we definitely do not have a hit, so we return false. But if this
         * returns true, we still don't know 100% that we have a hit because it might
         * be a transparent part of the image.
         */
        if (!object.hitTestPoint(point.x, point.y, true))
        {
            return false;
        }
        else
        {
            /* So now we make a new BitmapData object and draw the pixels of our object
             * in there. Then we use the hitTest method of that BitmapData object to
             * really find out if we have a hit or not.
             */
            var bmapData:BitmapData = new BitmapData(object.width, object.height, true, 0x00000000);
            bmapData.draw(object, new Matrix());
            var returnVal:Boolean = bmapData.hitTest(new Point(0, 0), 0, object.globalToLocal(point));
            bmapData.dispose();
            return returnVal;
        }
    }
}

How to rotate/transform mapbox-gl-draw features?

I'm using mapbox-gl-draw to add movable features to my map. In addition to movability, I need rotate/transform functionality for the features, akin to Leaflet.Path.Transform.
At present, would my only option be to create a custom mode?
e.g. something like:
map.on('load', function() {
    Draw.changeMode('transform');
});
I am not able to convert my map and its features to mapbox-gl-leaflet in order to implement Leaflet.Path.Transform, as losing rotation / bearing / pitch support is not an option.
Long answer incoming. (see http://mapster.me/mapbox-gl-draw-rotate-mode and http://npmjs.com/package/mapbox-gl-draw-rotate-mode for some final products, https://github.com/mapstertech/mapbox-gl-draw-rotate-mode)
I've been working on something similar for a custom project, and not using a draw library. My project involves some pretty regularly sized objects, not very complex polygons, so the solution might be too simple for you but it may be the right path. I just have rotate and move.
Doing movement isn't too hard geographically. Here's some help to get you started. A basic JSBin is up at https://jsbin.com/yoropolewo/edit?html,output with some drag functionality (too tired to do rotate too).
First, register the necessary click events to have a dragging event. You can listen on the specific Mapbox layers for a mousedown, then on the whole document for a mousemove and mouseup.
To do individual shape rotation, you need to ensure that you are referring to the right feature. In this example I assume there's just one feature in the source data, but that's probably too simple for most uses, so you have to extrapolate. The source data is what we affect when we setData() later on. There are obviously numerous ways to do what I'm doing here, but I'm trying to be clear.
var currentDragging = false;
var currentDraggingFeature = false;
var currentDraggingType = false;
var firstDragEvent = false;

map.on('mousedown', 'my-layer-id', function(e) {
    currentDragging = 'my-source-id'; // this must correspond to the source-id of the layer
    currentDraggingFeature = e.features[0]; // you may have to filter this to make sure it's the right feature
    currentDraggingType = 'move'; // rotation or move
    firstDragEvent = map.unproject([e.originalEvent.layerX, e.originalEvent.layerY]);
});
window.addEventListener('mousemove',dragEvent);
window.addEventListener('mouseup',mouseUpEvent);
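The dragEvent and mouseUpEvent handlers registered here aren't defined in the answer. A minimal sketch, assuming dragEvent just dispatches on the currentDraggingType set in the mousedown handler and mouseUpEvent resets the drag state:

// Hypothetical dispatcher: route mousemove to the right transform handler.
function dragEvent(e) {
    if (!currentDragging) return;
    if (currentDraggingType === 'move') {
        moveEvent(e);
    } else if (currentDraggingType === 'rotation') {
        rotateEvent(e);
    }
}
// Hypothetical mouseup handler: clear the drag state.
function mouseUpEvent(e) {
    currentDragging = false;
    currentDraggingFeature = false;
    currentDraggingType = false;
    firstDragEvent = false;
}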
You will need a function, then, that takes an initial point, a distance, and a rotation, and returns a point back to you. Like this:
Number.prototype.toRad = function() {
    return this * Math.PI / 180;
}
Number.prototype.toDeg = function() {
    return this * 180 / Math.PI;
}

function getPoint(point, brng, dist) {
    dist = dist / 63.78137; // this number depends on how you calculate the distance
    brng = brng.toRad();
    var lat1 = point.lat.toRad(), lon1 = point.lng.toRad();
    var lat2 = Math.asin(Math.sin(lat1) * Math.cos(dist) +
                         Math.cos(lat1) * Math.sin(dist) * Math.cos(brng));
    var lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(dist) * Math.cos(lat1),
                                 Math.cos(dist) - Math.sin(lat1) * Math.sin(lat2));
    if (isNaN(lat2) || isNaN(lon2)) return null;
    return [lon2.toDeg(), lat2.toDeg()];
}
Now, the key is the unproject method in Mapbox GL JS, which lets you move between the mouse's x/y coordinates and lng/lat on your map. Then use the map.getSource().setData() function to set a new GeoJSON.
I am turning the x/y into coordinates immediately here but you can do it at any point. Something like the following for moving:
function moveEvent(e) {
    // In the case of move, you are just translating the points based on distance and angle of the drag.
    // Exactly how you translate your points here can depend on the shape.
    var geoPoint = map.unproject([e.layerX, e.layerY]);
    var xDrag = firstDragEvent.lng - geoPoint.lng;
    var yDrag = firstDragEvent.lat - geoPoint.lat;
    var distanceDrag = Math.sqrt(xDrag * xDrag + yDrag * yDrag);
    var angle = Math.atan2(xDrag, yDrag) * 180 / Math.PI;
    // Once you have this information, you loop over the coordinate points you have
    // and use a function to find a new point for each
    var newFeature = JSON.parse(JSON.stringify(currentDraggingFeature));
    if (newFeature.geometry.type === 'Polygon') {
        var newCoordinates = [];
        newFeature.geometry.coordinates.forEach(function(coords) {
            newCoordinates.push(getPoint(coords, distanceDrag, angle));
        });
        newFeature.geometry.coordinates = newCoordinates;
    }
    map.getSource(currentDragging).setData(newFeature);
}
Rotating is a little harder because you want the shape to rotate around a central point, and you need to know the distance of each point to that central point in order to do that. If you have a simple square polygon this calculation would be easy. If not, then using something like this would be helpful (Finding the center of Leaflet polygon?):
var getCentroid2 = function (arr) {
    var twoTimesSignedArea = 0;
    var cxTimes6SignedArea = 0;
    var cyTimes6SignedArea = 0;
    var length = arr.length;
    var x = function (i) { return arr[i % length][0] };
    var y = function (i) { return arr[i % length][1] };
    for (var i = 0; i < arr.length; i++) {
        var twoSA = x(i) * y(i + 1) - x(i + 1) * y(i);
        twoTimesSignedArea += twoSA;
        cxTimes6SignedArea += (x(i) + x(i + 1)) * twoSA;
        cyTimes6SignedArea += (y(i) + y(i + 1)) * twoSA;
    }
    var sixSignedArea = 3 * twoTimesSignedArea;
    return [cxTimes6SignedArea / sixSignedArea, cyTimes6SignedArea / sixSignedArea];
}
Once you have the ability to know the polygon's center, you're golden:
function rotateEvent(e) {
    // In the case of rotate, we are keeping the same distance from the center but changing the angle
    var findPolygonCenter = findCenter(currentDraggingFeature);
    var geoPoint = map.unproject([e.layerX, e.layerY]);
    var xDistanceFromCenter = findPolygonCenter.lng - geoPoint.lng;
    var yDistanceFromCenter = findPolygonCenter.lat - geoPoint.lat;
    var angle = Math.atan2(xDistanceFromCenter, yDistanceFromCenter) * 180 / Math.PI;
    var newFeature = JSON.parse(JSON.stringify(currentDraggingFeature));
    if (newFeature.geometry.type === 'Polygon') {
        var newCoordinates = [];
        newFeature.geometry.coordinates.forEach(function(coords) {
            var xDist = findPolygonCenter.lng - coords[0];
            var yDist = findPolygonCenter.lat - coords[1];
            var distanceFromCenter = Math.sqrt(xDist * xDist + yDist * yDist);
            var rotationFromCenter = Math.atan2(xDist, yDist) * 180 / Math.PI;
            newCoordinates.push(
                getPoint(coords, distanceFromCenter, rotationFromCenter + angle)
            );
        });
        newFeature.geometry.coordinates = newCoordinates;
    }
    map.getSource(currentDragging).setData(newFeature); // update the source so the new coordinates render
}
Of course, throughout, ensure that your coordinates are being passed and returned correctly from functions. Some of this code may have incorrect levels of arrays in it. It's very easy to run into bugs with the lat/lng object versus the geoJSON arrays.
I hope the explanation is brief but clear enough, and that you understand logically what we are doing to reorient these points. That's the main point, the exact code is details.
Maybe I should just make a module or fork GL Draw...