I use the Leaflet plug-in Leaflet.ImageOverlay.Rotated.js, whose L.imageOverlay.rotated(...) layer lets me overlay image fragments at various places on top of the normal map.
It works by taking an image together with the coordinates of its top-left, top-right and bottom-left corners, from which it figures out how to rotate, skew and stretch/squeeze the image into place.
It took me a very long time to figure these coordinates out by hand. For this reason, I'm looking for some sort of "geopositioning mode", perhaps provided by this plug-in, which would simply let me click three times on the map to tell it where these points go. That would be simple for the developers to implement and would help enormously; it's such an obvious feature that I strongly suspect it already exists.
Is there such a "mode"? If not, how am I expected to find the positions without the lengthy trial and error I went through for the first overlay image?
Added: I should also clarify that the image should be shown in this mode so that you can re-adjust the points and watch in real time as the image bends/warps, to get it just right.
You could develop a module for this yourself: pick at least 4 reference points on your raster (overlay) image, click the corresponding 4 points on the tile map, then compare the slopes and distances between the same pairs of points in each coordinate system. You may need to rotate the image and apply an affine transformation.
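To make the affine-transform idea concrete, here is a minimal sketch of the math (Swift here; the same linear algebra ports directly to JavaScript if you want to build such a "geopositioning mode" for Leaflet). The struct and its names are mine, purely for illustration: given three points in the image and the three map points they were clicked onto, it solves for the 2x3 affine matrix mapping one space to the other, which you can then apply to any other image point.

```swift
import simd

// Solve for the affine transform  (x', y') = A * (x, y, 1)
// from three point correspondences p[i] -> q[i].
struct Affine {
    let a: SIMD3<Double>   // row for x':  x' = a.x*x + a.y*y + a.z
    let b: SIMD3<Double>   // row for y':  y' = b.x*x + b.y*y + b.z

    init?(from p: [SIMD2<Double>], to q: [SIMD2<Double>]) {
        precondition(p.count == 3 && q.count == 3)
        // Rows are the source points in homogeneous form (x, y, 1).
        let m = simd_double3x3(rows: [
            SIMD3(p[0].x, p[0].y, 1),
            SIMD3(p[1].x, p[1].y, 1),
            SIMD3(p[2].x, p[2].y, 1),
        ])
        guard m.determinant != 0 else { return nil }  // collinear points: no unique solution
        let inv = m.inverse
        // Solve M*a = qx and M*b = qy for the two rows of the affine matrix.
        a = inv * SIMD3(q[0].x, q[1].x, q[2].x)
        b = inv * SIMD3(q[0].y, q[1].y, q[2].y)
    }

    func apply(_ v: SIMD2<Double>) -> SIMD2<Double> {
        let h = SIMD3(v.x, v.y, 1)
        return SIMD2(simd_dot(a, h), simd_dot(b, h))
    }
}
```

With three clicked correspondences this recovers rotation, scale and shear in one step; for the Leaflet plug-in specifically you would feed the three clicked map positions straight to L.imageOverlay.rotated as its top-left, top-right and bottom-left corners.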
I am trying to draw a box that can help someone understand the dimensions of an item. The problem is that, since I first need to detect a plane and my physical item sits on top of that plane, the box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to detect a plane first, it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also "solve" your original problem: helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit:) You can add an invisible 3D object with the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object, and you can tell it not to draw those parts. This gives the illusion (borrowing from Apple here) that your soda can has the box around it.
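If you are on SceneKit, a minimal sketch of that invisible-occluder idea (assuming iOS 11+, where SCNMaterial exposes colorBufferWriteMask; the cylinder dimensions are placeholder values for a soda can):

```swift
import SceneKit

// Invisible occluder: the cylinder writes to the depth buffer but not the
// colour buffer, so anything behind it (part of the green box frame) is
// culled while the camera feed stays visible.
func makeSodaCanOccluder() -> SCNNode {
    let can = SCNCylinder(radius: 0.033, height: 0.123)  // metres; roughly a 330 ml can
    let material = SCNMaterial()
    material.colorBufferWriteMask = []   // depth only, no colour
    can.materials = [material]
    let node = SCNNode(geometry: can)
    node.renderingOrder = -1             // render before the box frame so occlusion applies
    return node
}
```

Position this node on the detected plane at the can's location, then draw the measurement box as usual; the occluded parts of the box frame simply won't be rendered.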
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (gestures/buttons, your choice) until it roughly matches the can's dimensions. Confirm that you're done, then stop drawing any texture on it at all and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to build a "bounding shape" that (again) captures the real-world object and occludes it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is part of your soda can), building up a "bounding shape" that way (sketched below).
Option X: any combination of 1 - 2 - 3.
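For Option 3, a rough sketch of the touch/hit-test loop, using ARKit 1.x's ARSCNView.hitTest(_:types:); the helper names and the bounding-box step are mine:

```swift
import ARKit

// On each "brush" touch, hit-test against detected feature points and
// collect the world-space positions; the resulting point cloud gives a
// rough bounding volume for the real object.
var brushedPoints: [SIMD3<Float>] = []

func recordBrushPoint(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let hit = sceneView.hitTest(screenPoint, types: .featurePoint).first else { return }
    let p = hit.worldTransform.columns.3
    brushedPoints.append(SIMD3(p.x, p.y, p.z))
}

// Axis-aligned bounding box of the collected points, usable to size an
// invisible occluder like the one sketched earlier.
func boundingBox(of points: [SIMD3<Float>]) -> (min: SIMD3<Float>, max: SIMD3<Float>)? {
    guard let first = points.first else { return nil }
    return points.dropFirst().reduce((first, first)) {
        (simd_min($0.0, $1), simd_max($0.1, $1))
    }
}
```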
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.
I am trying to do a number of things via MATLAB but I am getting a bit lost about which techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger over/undershoots, the coordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording a top-down view of the presentation; I then break the video file down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am working on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to binarise the image, and getting this as a result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to take measurements relative to the defined ROI, e.g. how far a certain point is from it, so I need a clear border for the scanner edges; from my experience in image processing so far, this has been hard to define. I would also like a cleaner image where the finger is outlined and the background (i.e. the scanner light/blocks) is removed.
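For reference, the "how far is a point from the ROI border" measurement is what MATLAB's bwdist computes over the whole mask in one call. To make the underlying idea concrete, here is a brute-force sketch (Swift is used for the worked examples here; the function name is mine) that finds the distance from a query point to the nearest boundary pixel of a binary mask:

```swift
// mask[y][x] is true for ROI pixels. Returns the Euclidean distance from
// (px, py) to the nearest ROI boundary pixel, or nil if the mask is empty.
func distanceToROIBorder(mask: [[Bool]], fromX px: Int, fromY py: Int) -> Double? {
    var best: Double?
    let h = mask.count
    for y in 0..<h {
        let w = mask[y].count
        for x in 0..<w where mask[y][x] {
            // A boundary pixel is an ROI pixel with at least one non-ROI 4-neighbour.
            let up    = y == 0     || !mask[y - 1][x]
            let down  = y == h - 1 || !mask[y + 1][x]
            let left  = x == 0     || !mask[y][x - 1]
            let right = x == w - 1 || !mask[y][x + 1]
            guard up || down || left || right else { continue }
            let d = Double((x - px) * (x - px) + (y - py) * (y - py)).squareRoot()
            if best == nil || d < best! { best = d }
        }
    }
    return best
}
```

In MATLAB itself, bwperim gives you that boundary and bwdist the distance map, so the per-point query becomes a simple array lookup.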
Any help would be appreciated.
Thanks
I am working on an iOS 5 iPhone project where users can choose an image on their device, then trace an object inside the picture (say, trace an apple out of a picture of a fruit basket); the picture then needs to be uploaded with the "tagged" object so it can be pulled down later. Other people will then pull down the image and try to find where the tagged object is in the picture (think "Where's Waldo?").
I have been trying to figure out the best way of tracing the object. Previously, I had the user press the top-left, top-right, bottom-left and bottom-right points around the object to create a square view around it. The info for that view was uploaded and then pulled down for the user to find the object. The downside is that most objects are obviously not squares/rectangles, so I need a free-form shape.
I was thinking of letting the user draw over the object, and then somehow I need to be able to tell what is inside of the trace (for example, inside a circle that they traced). A problem I foresee is making sure the trace they made is closed so I can fill in the shape (which is a whole other problem).
Any advice welcome on the best way of starting this.
Thanks!
UIBezierPath might be a very useful friend here. It allows you to create any shape you need, and it supports both drawing and hit-detection. I recently did an implementation for a storybook where the user could trace out a shape with their finger, freeze it, and then use the shape for tap detection.
The basic idea is this:
When your finger touches the screen, start recording positions. Discard any positions that are too close to the previous one (e.g., only record a point if it is more than some minimum distance from the last recorded point). While doing this, you can draw the UIBezierPath so you can see what you are tracing out; modify the path by adding points to it instead of recreating it every time.* When you lift your finger, close the Bézier path. Quite simple.
Now, this will result in a polygon (i.e., straight edges). If your min-distance is low enough, or if you are using it for hit-detection (as you say), it won't really matter. However, if you want to smooth the path, you have to use the curve-to methods, which slightly complicate things - but should you wish to follow up on this, read up on splines and spline generation from a point series.
*note: otherwise you'll get lag while drawing large shapes because recreating a bezier path from an increasingly large series of points gets expensive. Modifying the existing path makes it much, much, much faster.
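A condensed sketch of that flow (Swift here; the original question is iOS 5-era Objective-C, but the approach is identical, and the class name and min-distance value are mine):

```swift
import UIKit

class TraceView: UIView {
    private let path = UIBezierPath()
    private var lastPoint: CGPoint = .zero
    private let minDistance: CGFloat = 8   // skip points closer than this to the last one

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }
        path.removeAllPoints()
        path.move(to: p)
        lastPoint = p
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }
        // Mutate the existing path instead of rebuilding it (see the footnote above).
        guard hypot(p.x - lastPoint.x, p.y - lastPoint.y) > minDistance else { return }
        path.addLine(to: p)
        lastPoint = p
        setNeedsDisplay()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        path.close()   // close the shape so it can be filled and hit-tested
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.red.setStroke()
        path.stroke()
    }

    // Hit-detection: is a tapped point inside the traced shape?
    func containsTap(at point: CGPoint) -> Bool {
        path.contains(point)
    }
}
```

Closing the path in touchesEnded also answers the "is the trace closed?" worry from the question: UIBezierPath's close() draws the final segment back to the start point for you, so the fill and hit-test always operate on a closed shape.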
Really hope someone can help me as I'm a bit stuck :S
I have a custom map of an event using CATiledLayer, so users can zoom in and scroll around the map. What I would like to do now is add functionality to show the user where they currently are on the map. I know it can be done, as I've seen an app do this before. I'm not sure how to go about it, though; maybe I need to convert lat/lon into pixels, but I'm not sure if that's possible (depending on how big the image is, etc.).
On another site it was suggested that I find the boundaries of the map, after which I can add pins to it, but I'm not sure how to go about that either. Will I need to find every coordinate (lat/lon) within the boundary in order to place the pin at the user's current position?
If anyone can give me any advice or pointers, I'd much appreciate it.
You can use the route-me library by adding your own map source class. A good article that explains how to do it is here http://mobilegeo.wordpress.com/2010/07/07/route-me-native-iphone-mapping-framework/
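If you'd rather roll it yourself and your map artwork is drawn roughly north-up over a small area, linearly interpolating between two known corner coordinates is often good enough. A sketch, where the struct, its names and the corner-calibration approach are mine:

```swift
import CoreLocation
import CoreGraphics

// Convert a GPS coordinate to a pixel position on a custom map image,
// assuming the image is drawn north-up and you know the real-world
// coordinates of its top-left and bottom-right corners (these are the
// "boundaries" the other site mentioned).
struct MapCalibration {
    let topLeft: CLLocationCoordinate2D       // coordinate at pixel (0, 0)
    let bottomRight: CLLocationCoordinate2D   // coordinate at pixel (width, height)
    let imageSize: CGSize                     // full-resolution image size in pixels

    func pixel(for c: CLLocationCoordinate2D) -> CGPoint {
        let fx = (c.longitude - topLeft.longitude) / (bottomRight.longitude - topLeft.longitude)
        let fy = (topLeft.latitude - c.latitude) / (topLeft.latitude - bottomRight.latitude)
        return CGPoint(x: fx * Double(imageSize.width), y: fy * Double(imageSize.height))
    }
}
```

You do not need every coordinate inside the boundary: the two corners define the whole linear mapping. At a given CATiledLayer zoom level, scale the returned pixel point by the current zoom, or add the pin to a view that zooms along with the layer.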
I'm facing a challenge right now in trying to map GPS coordinates onto a map that's an artist's rendition. In particular this is for a ski mountain, so the artist's rendition is a "trail map". The trail map is not accurate: the whole mountain has been squeezed onto one view, and the actual topography of the mountain doesn't conform to the drawing.
I've tried several approaches:
1) Triangulation using known GPS coordinates of the lift stations. This is fairly simple to implement, yet it is not accurate enough, and the algorithm fails if the rendition differs too much from the GPS map.
2) Creating a uniform grid over both the GPS map and the trail map, then mapping cells in the GPS grid to cells in the trail-map grid (sketched below). The downside is that this can be a lot of busy work with no easy UI for doing it.
3) Calculating the vector of each lift (being a straight line), finding the closest lift station to a given GPS point, and estimating the trail-map location using that vector.
I'm considering #2, which is essentially the simplest solution. But if you've found a better way, I'd love to hear it.
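For what it's worth, approach #2 boils down to bilinear interpolation over a hand-authored control grid. A sketch, where the grid layout (axis-aligned in a planar projection of the GPS coordinates) and all names are assumptions:

```swift
import CoreGraphics

// gpsGrid[r][c] and trailGrid[r][c] hold corresponding control points:
// GPS positions projected to planar x/y, and their hand-placed trail-map
// pixel positions. Given a projected GPS point, find its cell and
// bilinearly interpolate within the matching trail-map cell.
func trailMapPoint(for p: CGPoint,
                   gpsGrid: [[CGPoint]],
                   trailGrid: [[CGPoint]]) -> CGPoint? {
    let rows = gpsGrid.count, cols = gpsGrid[0].count
    let origin = gpsGrid[0][0]
    let extent = gpsGrid[rows - 1][cols - 1]
    // Fractional grid coordinates of p (assumes an axis-aligned GPS grid).
    let u = (p.x - origin.x) / (extent.x - origin.x) * CGFloat(cols - 1)
    let v = (p.y - origin.y) / (extent.y - origin.y) * CGFloat(rows - 1)
    guard u >= 0, v >= 0, u <= CGFloat(cols - 1), v <= CGFloat(rows - 1) else { return nil }
    let c = min(Int(u), cols - 2), r = min(Int(v), rows - 2)
    let fu = u - CGFloat(c), fv = v - CGFloat(r)

    // Bilinear blend of the four trail-map corners of that cell.
    func mix(_ a: CGPoint, _ b: CGPoint, _ t: CGFloat) -> CGPoint {
        CGPoint(x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t)
    }
    let top = mix(trailGrid[r][c], trailGrid[r][c + 1], fu)
    let bottom = mix(trailGrid[r + 1][c], trailGrid[r + 1][c + 1], fu)
    return mix(top, bottom, fv)
}
```

The busy work is authoring the trailGrid control points; the finer the grid, the better it absorbs the artist's distortions.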
I'm creating a dice game for the iPhone. I'm using SIO2 as engine, but I think this question is more general OpenGL-related.
Since the iPhone lacks support for anti-aliasing, my dice look kind of edgy. If possible, I'd like to make the edges of the die rounded and smooth instead of sharp. I've found one app, MotionX, that manages to do this, and I think without using anti-aliasing. See the screenshot here. If you look closely at the dice edges, you see there is a smooth transition from the brightly lit top face to the shadowed side face. This looks kind of round from far away.
Does anyone know how to recreate such an effect?
You need to create the dice with slightly rounded edges and corners; that way there won't be a sharp transition between the faces.
If your modelling package can create them, you could use superquadrics for this sort of model; changing the parameters of the equation produces the rounding effect.
See the top-left figure in this image (source: free-online.co.uk).
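Concretely, the rounded-cube member of that family (a superellipsoid) is the implicit surface

$$\left|\frac{x}{A}\right|^{n} + \left|\frac{y}{B}\right|^{n} + \left|\frac{z}{C}\right|^{n} = 1$$

where $A$, $B$, $C$ set the half-extents and $n$ controls the rounding: $n = 2$ gives an ellipsoid, while larger exponents (say $n \approx 6$ to $10$) approach a box with progressively sharper edges. Tessellate that surface to get a die mesh whose edge normals vary gradually, which is exactly what produces the smooth lit-to-shadowed transition in the screenshot.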
"If you look closely at the dice edges, you see there is a smooth transition from the brightly lit top face to the shadowed side face"
Not sure if the iPhone supports this, but you may be able to achieve this effect with a normal map:
http://en.wikipedia.org/wiki/Normal_mapping
Of course, you'll need to truncate the corners to get them sufficiently round that the normal map can take you the rest of the way.