How does one determine the (x,y) coordinates while dragging "dragover" and after the drop ("drop") in HTML5 DnD?
I found a webpage that describes how to get x,y during dragover (it says to check e.offsetX and e.layerX, since different browsers set different properties), but for the life of me they are NOT set.
What do you use for the drop(e) function to find out WHERE in the current drop target it was actually dropped?
I'm writing an online circuit design app and my drop target is large.
Do I need to drop down to mouse-level events to find out x,y, or does HTML5 provide a higher-level abstraction?
The properties clientX and clientY should be set (in modern browsers):
function drag_over(event) {
    // clientX/clientY are the pointer's coordinates relative to the viewport
    console.log(event.clientX);
    console.log(event.clientY);
    event.preventDefault();
    return false;
}

function drop(event) {
    console.log(event.clientX);
    console.log(event.clientY);
    event.preventDefault();
    return false;
}
Here's a (somewhat) practical example that works in Firefox and Chrome.
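If you need the drop position relative to the drop target itself rather than the viewport (which is what the question asks), a minimal sketch is to subtract the target's bounding rect; this assumes the handler is attached to the drop target:

function drop(event) {
    event.preventDefault();
    // clientX/clientY are viewport coordinates; subtracting the drop
    // target's bounding rect gives coordinates inside the target element.
    var rect = event.currentTarget.getBoundingClientRect();
    var x = event.clientX - rect.left;
    var y = event.clientY - rect.top;
    console.log(x, y);
    return false;
}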
To build on top of robertc's answer and work around the issue:
The clientX and clientY values will be off by an amount proportional to the distance of the cursor from the upper-left corner of the element that was dropped.
You may simply calculate the delta by subtracting the client coordinates recorded at dragStart from the client coordinates recorded at dragEnd. Bear in mind that the dropped element still has exactly the same position at dragEnd; update it by the delta to get its new left and top coordinates.
Here's a demo in React, although the same can be accomplished in plain JS.
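A minimal plain-JS sketch of that delta idea (draggedEl is a hypothetical absolutely positioned element, not something taken from the linked demo):

var startX, startY;

draggedEl.addEventListener('dragstart', function (e) {
    // Record where the drag started, in viewport coordinates.
    startX = e.clientX;
    startY = e.clientY;
});

draggedEl.addEventListener('dragend', function (e) {
    // The element still sits at its original position at dragend,
    // so shift it by the recorded delta.
    var dx = e.clientX - startX;
    var dy = e.clientY - startY;
    draggedEl.style.left = (draggedEl.offsetLeft + dx) + 'px';
    draggedEl.style.top = (draggedEl.offsetTop + dy) + 'px';
});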
In my board game, the points are determined by throwing 7 sea shells (cowry shells). These shells are dropped onto a sphere in Unity so they roll randomly to different places. Once rigidbody.IsSleeping() returns true, I do a Raycast (from the belly side downwards) to figure out the orientation of each shell. If it is NOT a hit, we know the shell's belly is facing up, which means a point.
All is good and very realistic in single-player mode. The reason is that I just enable gravity on the shells; they drop onto the sphere, roll randomly, and when they stop I score them as described above.
Now the problem is that I am making the game multiplayer. In this case, I send the randomly generated marks from the server and the client has to animate the shells to match those marks. For example, if the server sends 3, then 3 out of the 7 shells should end up with their belly facing up.
Trying to do this has been a major problem for me. I tried transform.Rotate() when the velocity is reduced, but it was not very reliable and sometimes acts crazy. Rotating after rigidbody.IsSleeping() works but looks very unrealistic.
I know I am trying to defy physics here, but there may be some ways to achieve what I want with minimum artificial effect.
I actually need some ideas.
Update - 1
Following the info I received below, I found some information here and some more advanced stuff here. Since the latter link covers more advanced material, I wanted to start small, so I followed the first link and ran the test below.
I recorded the position, rotation and velocity of a sea shell with auto-simulation enabled and logged them to a file. Then I used Physics.Simulate() for the same scenario and logged the same values.
Comparing the two tells me that the data in both cases is quite similar. So it seems that for my requirements I need to simulate the sea-shell drop and then apply that sequence to the actual object.
Now my problem is how to apply the results of Physics.Simulate() (position, rotation, velocity etc.) to the actual sea shell so the animation can be seen. If I set the position of my GameObject within the simulation loop, nothing happens.
public void Simulate()
{
    rbdy = GetComponent<Rigidbody>();
    rbdy.AddForce(new Vector3(0f, 0f, 10f));
    rbdy.useGravity = true;
    rbdy.mass = 1f;

    //Simulate where it will be in 5 seconds
    int i = 0;
    while (simulateTime >= Time.fixedDeltaTime)
    {
        simulateTime -= Time.fixedDeltaTime;
        Debug.Log($"position: {rbdy.position.ToString()} rotation: {rbdy.rotation.ToString()} Velocity {rbdy.velocity.magnitude}");
        gameObject.transform.position = rbdy.position;
        // Physics.Simulate only steps the world when automatic simulation is disabled
        // (Physics.autoSimulation = false). Note that this whole loop runs within a
        // single frame, so the intermediate positions assigned above are never rendered.
        Physics.Simulate(Time.fixedDeltaTime);
    }
}
So, how can I apply this simulated data to the actual GameObject in the scene?
Assuming physics is deterministic, just set the velocity and position and let it simulate on each client; the output should be the same. If the output differs slightly, you could adjust it, and the difference may be barely noticeable.
Physics.Simulate may be interesting to read about, even if it's kind of the opposite of what you want.
You can do the throw on one client, record the steps in real time or via Physics.Simulate (see point 2), and transmit the animation data as binary - then use it on the other clients to play back the animation.
Background:
I am working on a web-based mapping application for hiking. The map, based on Leaflet, offers labeled routes on hiking trails. As any hiking trail can be part of multiple routes, routes - or rather the polylines representing them - can overlap.
Problem:
Each route has its tooltip (triggered by mouseover, {sticky: true}) showing its label. This works as expected for non-overlapping polylines, but as soon as two or more routes overlap, only the polyline "on top" gets its tooltip opened. This behaviour is not bad per se, but as all routes are equally important I would like to show all labels of the routes at the pointer's location (or something like a maximum of 5 labels + x more). I wasn't able to find any issue related to this topic.
What I tried:
- Create a feature group for all routes and bind the tooltip to the group, hoping that the tooltip function provides an array of all polylines crossing the pointer's position. As it turned out, I only get information about the polyline on top
- I tried the same with a mousemove event on the map, no success
- Comparing the pointer's layerPoint coordinates with all routes' _rings & _parts layerPoint arrays to find matching layerPoints, but the success rate is only about 5%, as these layerPoints only cover the actual vertices of the polyline, not the connections between two points. Additionally, there is a margin around each polyline that triggers the tooltip before the pointer even touches the polyline (to improve touch interaction, I guess)
- A workaround for the margin problem is to add positive and negative margins to each polyline point before comparing it to the pointer coordinates, which improves the outcome but doesn't solve the main problem.
Sidenote:
- All routes are drawn into a single canvas
Long story short, I need external help to accomplish the goal. Maybe some of you have an idea or can provide a solution. Any input is appreciated.
UPDATE:
A working but pretty inefficient solution is as follows
Approach:
Calculate the shortest distance from the pointer to every route in the viewport. If the distance from the pointer to a route is under a certain threshold, add that route's label to the array of labels that should be displayed.
Steps:
1.) bind a blank tooltip to a feature group containing all routes
2.) bind a mousemove event to the feature group with the following function
var routesFeatureGroup = L.featureGroup(routesGroup)
    .bindTooltip('', {sticky: true})
    .on('mousemove', function(e){
        var routeLabels = [e.layer.options.label]; // add triggering route's label by default
        var mouseCoordAbs = el.$map.project(e.latlng);
        $.each(vars.objectsInViewport.routes, function(i, v){
            if (e.layer.options.id != el.$routes[i].options.id && el.$routes[i]._pxBounds.contains(e.layerPoint)){
                var nearestLatlngOnPolyline = getNearestPolylinePoint(e.latlng, el.$routes[i]);
                var polyPointCoordAbs = el.$map.project(nearestLatlngOnPolyline);
                var distToMouseX = polyPointCoordAbs.x - mouseCoordAbs.x;
                var distToMouseY = polyPointCoordAbs.y - mouseCoordAbs.y;
                var distToMouse = Math.sqrt(distToMouseX*distToMouseX + distToMouseY*distToMouseY);
                if (distToMouse < 15) {
                    routeLabels.push(el.$routes[i].options.label);
                }
            }
        });
        routesFeatureGroup.setTooltipContent(routeLabels.join('<br>'));
    });
Explanation:
I already gather all objects (routes and markers) in the current viewport for another part of the app. All currently visible routes (or rather their ids) are stored in vars.objectsInViewport.routes, so I don't have to go through all routes. The layer that triggered the mousemove event is added by default. I then check, for each of the currently visible routes:
- whether its id is different from the layer that triggered the mousemove event (as that label is added by default)
- whether its bounds (in cartesian coordinates: "_pxBounds") contain the cartesian layerPoint of the mousemove event (a rough check to exclude routes that don't intersect)
If these conditions are met for a route, I calculate the latlng point on the route closest to the pointer. I do this with a custom function, which is a bit too long to post in this context. (I will if someone asks for it.)
The mouse position and the latlng point on the polyline / route are then converted to absolute coordinates using the map's project method:
http://leafletjs.com/reference.html#map-project
At last, the distance between these two points is calculated using Pythagoras. It is pixel based, so the zoom level isn't a factor. If the distance is below a certain threshold (15px), the route is close enough to the pointer to be considered hovered (given the default margins around a polyline), so its label is added to the label array.
Finally the tooltip for the feature group is filled with all labels.
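For reference, the nearest point on a single segment can be computed with standard point-to-segment projection; a generic sketch in projected pixel space (not my actual getNearestPolylinePoint, just the core math) could look like this:

// p, a and b are point objects ({x, y}) obtained via map.project(latlng).
function nearestPointOnSegment(p, a, b) {
    var dx = b.x - a.x;
    var dy = b.y - a.y;
    var lengthSq = dx * dx + dy * dy;
    if (lengthSq === 0) return a; // degenerate segment
    var t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSq;
    t = Math.max(0, Math.min(1, t)); // clamp onto the segment
    return { x: a.x + t * dx, y: a.y + t * dy };
}

Iterating over consecutive vertex pairs of a route and keeping the candidate with the smallest distance to the pointer gives the nearest point on the whole polyline.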
Results are pretty promising even though the operation is pretty expensive. I added a timeout of 50ms to reduce the number of function calls a bit:
var tooltipTimeout;
var routesFeatureGroup = L.featureGroup(routesGroup)
    .bindTooltip('', {sticky: true})
    .on('mousemove', function(e){
        clearTimeout(tooltipTimeout);
        tooltipTimeout = setTimeout(function(){
            // collect labels
            // ...
        }, 50);
    })
    .on('mouseout', function(){
        clearTimeout(tooltipTimeout);
    });
I can give you an idea of how to do this, but I am not 100% sure that it will do the job. There is a plugin for Leaflet (from Mapbox) that can tell you whether a point is within a polygon, and it returns all the polygons that contain that point.
If this plugin doesn't work for polylines, you can create a polygon from a polyline by simply going back from the last point to the first and closing the line (I am not sure if this suits your solution). For example, if you have a polyline of connected points [0, 1, 2, ... n-1, n], you then go back connecting [n with n-1, n-1 with n-2, ... 1 with 0]. This way you will have the same shape as the polyline, but it will be a polygon. This isn't the most optimized solution; it is a quick fix that uses a known and available plugin.
Once you get all the tooltips, you can open all of them at once for each polygon/polyline. Or maybe open some helper tooltip where the user can select which one he wants to open.
I hope this helps! If you figure out a better solution (or find a plugin that does the job) please post it here.
I'm using iPhone ARToolkit and I'm wondering how it works.
I want to know how, given a destination location, a user location and a compass heading, this toolkit can tell whether the user is looking toward that destination.
How can I learn the maths behind these calculations?
The maths that AR ToolKit uses is basic trigonometry. It doesn't use the technique that Thomas describes, which I think would be a better approach (apart from step 5; see below).
Overview of the steps involved.
The iPhone's GPS supplies the device's location and you already have the coordinates of the location you want to look at.
First it calculates the difference between the latitude and the longitude values of the two points. These two differences mean you can construct a right-angled triangle and calculate at what angle another given position lies from your current position. This is the relevant code:
- (float)angleFromCoordinate:(CLLocationCoordinate2D)first toCoordinate:(CLLocationCoordinate2D)second {
    float longitudinalDifference = second.longitude - first.longitude;
    float latitudinalDifference = second.latitude - first.latitude;
    float possibleAzimuth = (M_PI * .5f) - atan(latitudinalDifference / longitudinalDifference);
    if (longitudinalDifference > 0) return possibleAzimuth;
    else if (longitudinalDifference < 0) return possibleAzimuth + M_PI;
    else if (latitudinalDifference < 0) return M_PI;
    return 0.0f;
}
At this point you can read the compass value from the phone and determine what specific compass angle (azimuth) your device is pointing at. The reading from the compass will be the angle directly in the center of the camera's view. The AR ToolKit then calculates the full range of angles currently displayed on screen, as the iPhone's field of view is known.
In particular, it does this by calculating what angle the leftmost edge of the view is showing:
double leftAzimuth = centerAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
    leftAzimuth = 2 * M_PI + leftAzimuth;
}
And then calculates the rightmost:
double rightAzimuth = centerAzimuth + VIEWPORT_WIDTH_RADIANS / 2.0;
if (rightAzimuth > 2 * M_PI) {
    rightAzimuth = rightAzimuth - 2 * M_PI;
}
We now have:
The angle relative to our current position of something we want to display
A range of angles which are currently visible on the screen
This is enough to plot a marker on the screen in the correct position (kind of...see problems section below)
It also does similar calculations for the device's inclination, so if you look at the sky you hopefully won't see a city marker up there, and if you point it at your feet you should in theory see cities on the opposite side of the planet. There are problems with these calculations in this toolkit, however.
The problems...
Device orientation is not perfect
The calculation I've just explained assumes you're holding the device in an exact position relative to the earth, i.e. perfectly landscape or portrait. Your user probably won't always be doing that. If you tilt the device slightly, your horizon line will no longer be horizontal on screen.
The earth is actually 3D!
The earth is 3-dimensional. Few of the calculations in the toolkit account for that. The calculations it performs are only really accurate when you're pointing the device towards the horizon.
For example, if you try to plot a point on the opposite side of the globe (directly under your feet) this toolkit behaves very strangely. The approach used to calculate the azimuth range on screen is only valid when looking at the horizon. If you point your camera at the floor you can actually see every single compass point. The toolkit, however, thinks you're still only looking at compass reading ± (width of view / 2). If you rotate on the spot you'll see your marker move to the edge of the screen, disappear and then reappear on the other side. What you would expect to see is the marker stay on screen as you rotate.
The solution
I've recently implemented an app with AR, which I initially hoped AR Toolkit would do the heavy lifting for. I came across the problems just described, which aren't acceptable for my app, so I had to roll my own solution.
Thomas' approach is a good method up to point 5, which, as I explained above, only works when pointing towards the horizon. If you need to plot anything outside of that, it breaks down. In my case I have to plot objects that are overhead, so it's completely unsuitable.
I addressed this by using OpenGL ES to plot my markers where they actually are in 3D space and move the OpenGL viewport around according to readings from the gyroscope while continuously re-calibrating against the compass. The 3D engine handles all the hard work of determining what's on screen.
Hope that's enough to get you started. I wish I could provide more detail than that but short of posting a lot of hacky code I can't. This approach however did address both problems described above. I hope to open source that part of my code at some point but it's very rough and coupled to my problem domain at the moment.
That is all the information needed. With the iPhone location and the destination location you can calculate the destination angle (with respect to true north).
The only missing piece is knowing where the iPhone is currently pointing, which is returned by the compass (magnetic north + current location -> true north).
edit: Calculations: (this is just an idea: there may be a better solution without a lot of coordinate transformations)
convert the current and destination locations to ECEF coordinates
transform the destination ECEF coordinate to the ENU (east, north, up) local coordinate system, with the current location as the reference location. You can also use this.
ignore the height value and use the ENU coordinate to get the direction: atan2(dEast, dNorth)
The compass already returns the angle the iPhone is pointing at
display the destination on the screen if dest_angle - 10° <= compass_angle <= dest_angle + 10° (with respect to the cyclic angle space). The constant of 10° is just a guessed value; you should either try some values to find a useful one, or analyse some properties of the iPhone camera.
The coordinate transformation equations become much simpler if you assume that the earth is a sphere rather than an ellipsoid. Most links I have posted assume a WGS-84 ellipsoid (because GPS does too, AFAIK). For nearby points there is also a compact spherical-earth shortcut for steps 1-3, sketched below.
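Under that spherical (locally flat) assumption, the east/north offsets and the bearing reduce to roughly:

$$\Delta E \approx R \cos\varphi_1 \,(\lambda_2 - \lambda_1), \qquad \Delta N \approx R \,(\varphi_2 - \varphi_1), \qquad \theta = \operatorname{atan2}(\Delta E,\ \Delta N)$$

where $(\varphi_1, \lambda_1)$ is the current location and $(\varphi_2, \lambda_2)$ the destination (latitude/longitude in radians), $R$ is the earth radius, and $\theta$ is the destination angle measured clockwise from true north.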
Hello, I have some problems with the Bing Maps control.
If I zoom in too close to the polylines, they begin to disappear (from bottom to top and from right to left).
The polylines are generated dynamically with an ItemsControl (the one included in the maps namespace) bound to a collection of my own LocationData from the ViewModel, which is converted by an IValueConverter to the map-specific LocationPoints.
Some values that are not accessible from the ViewModel are set in the Loaded event.
The map and the container stretch over the whole screen.
So when the lines begin to disappear and I zoom out via a button in my ApplicationBar
private void ZoomOut_Click(object sender, RoutedEventArgs e)
{
    map1.ZoomLevel -= 1.0;
}
the application exits without an exception...
I have tested it on a real device with and without the debugger, and the debugger only says that it has lost the connection to the device.
Anyone have this or similar problems and hopefully solved it?
Thanks for any help.
PS: My LocationData contains approximately 100-200 points that are split up into 3-7 lines; that can't be too much, can it?
Yes, hundreds of points is too much, but that's the least of your problems. The way you have coded this, you are reconverting and replotting your points every time there is a pan or zoom.
Don't use the type converter. Convert your points once, cache the converted points and bind to the converted points.
Research quadtrees and how they apply to culling your point set in proportion to zoom level.
Apply a clipping rectangle. In my experience, half a degree larger each side of your display region works well.
Study the Bing map event model and redesign your code so that you only cull, clip and plot when map manipulation stops.
Ideally, write your cull, clip and plot logic so that it is async and can be signalled to abort, so that if manipulation restarts before the cull, clip and plot is finished, it can be aborted and restarted.
Using the techniques above I am able to get performance comparable to the built-in map.
The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (From any orientation angle) of the cube. Sounds pretty easy but it is not...
The problem:
How do I accurately relate screen-coordinates (touch point) to world-coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world-axis might seem the logical fix, but problems would arise when I need to zoom or rotate the 3D space. Note: rotating & zooming in and out of the 3D space will change the relationship of the 2D screen coords with the 3D world coords...Also, you'd have to allow for 'distance' in between the viewpoint and objects in 3D space. At first, this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but the task of creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours...And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the farplane).
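In symbols, the unproject step just described is roughly (a sketch using the conventions above):

$$\mathbf{h} = (P\,M)^{-1}\,(x_c,\ y_c,\ z_c,\ 1)^{\mathsf T}, \qquad \mathbf{p}_{\text{world}} = \left(\frac{h_x}{h_w},\ \frac{h_y}{h_w},\ \frac{h_z}{h_w}\right)$$

where $P$ and $M$ are the projection and modelview matrices, $(x_c, y_c)$ is the touch point mapped to $[-1, 1]$ (e.g. $x_c = 2\,x_{px}/W - 1$, $y_c = 1 - 2\,y_{px}/H$ for a $W \times H$ viewport), $z_c$ selects the depth (e.g. $-1$ as above), and the final division by $h_w$ takes the homogeneous result back to 3D coordinates.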
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
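For reference, here is a minimal, engine-agnostic sketch (in TypeScript) of the slab-method ray/box test mentioned above; it is illustrative rather than tied to any particular toolkit:

type Vec3 = { x: number; y: number; z: number };

// Slab method: does the ray origin + t*dir (t >= 0) hit the AABB [min, max]?
function rayIntersectsAABB(origin: Vec3, dir: Vec3, min: Vec3, max: Vec3): boolean {
    let tMin = 0;
    let tMax = Infinity;
    for (const axis of ['x', 'y', 'z'] as const) {
        if (Math.abs(dir[axis]) < 1e-12) {
            // Ray is parallel to this slab: miss if the origin lies outside it.
            if (origin[axis] < min[axis] || origin[axis] > max[axis]) return false;
            continue;
        }
        const invD = 1 / dir[axis];
        let t0 = (min[axis] - origin[axis]) * invD;
        let t1 = (max[axis] - origin[axis]) * invD;
        if (t0 > t1) { const tmp = t0; t0 = t1; t1 = tmp; } // keep t0 <= t1
        tMin = Math.max(tMin, t0);
        tMax = Math.min(tMax, t1);
        if (tMax < tMin) return false; // slab intervals don't overlap: no hit
    }
    return true;
}

Feed it the two unprojected points (origin = near point, dir = far point minus near point) and one box per model or per cube face.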
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read back the color at the touch point, telling you which object was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
float x = ((float)mpos.x / screensize.x) * 2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z), rayCallback);

if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit ";
            //cerr << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for opengl screen to world (for example there’s a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it’s not available on iPhone, so that you have to port it (see this source from the Mesa project). Or maybe there’s already some publicly available source somewhere?