How to correct the middle coordinate between two coordinates - iPhone

I have an app that draws a red line on the Apple map while the user drives a car with an iPhone.
But sometimes, when the driver takes corners too fast, I don't get all the coordinates, so the app connects two points with a straight line (and I get a flying car :)).
Is there any way to make this more precise?
The image shows what happens.
The large error on this image is at 'terminal ave and Clark...'
As the user drives, I store each new coordinate in a local database and draw the route based on that.
But these errors drive me crazy.
Any idea or example of how to fix this error on corners?

I don't think you can do anything truly precise here. The only thing you can do is "guess" the route the driver took.
You can soften this "straight line effect" with some kind of interpolation between two points that are too far away from each other.
In the video game industry, when an autonomous character moves from one point to another, steering behaviors are used to keep it from just going straight forward, so it moves in a smooth, rounded way.
Maybe read about them and see if you can apply them to your project:
Steering behaviours
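For example, here is a minimal Swift sketch of the interpolation idea, assuming your stored points are CLLocationCoordinate2D values (Catmull-Rom is just one concrete choice, not the only one):

```swift
import CoreLocation

// A minimal sketch: given four consecutive recorded samples p0..p3, generate
// points on a curve between p1 and p2, so a sparse corner is rounded instead
// of cut. Interpolating raw lat/lon is fine for the short distances involved.
func catmullRom(_ p0: CLLocationCoordinate2D, _ p1: CLLocationCoordinate2D,
                _ p2: CLLocationCoordinate2D, _ p3: CLLocationCoordinate2D,
                t: Double) -> CLLocationCoordinate2D {
    func interp(_ a: Double, _ b: Double, _ c: Double, _ d: Double) -> Double {
        let t2 = t * t, t3 = t2 * t
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t2
                      + (-a + 3 * b - 3 * c + d) * t3)
    }
    return CLLocationCoordinate2D(
        latitude: interp(p0.latitude, p1.latitude, p2.latitude, p3.latitude),
        longitude: interp(p0.longitude, p1.longitude, p2.longitude, p3.longitude))
}

// Usage: when the gap between p1 and p2 is "too long", draw the polyline
// through a few sampled points, e.g. for t in stride(from: 0.25, to: 1, by: 0.25).
```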


Does Leaflet have a "geopositioning mode"?

I use the Leaflet plug-in "Leaflet.ImageOverlay.Rotated.js" to use its L.imageOverlay.rotated(...) thing in order to overlay certain map pieces in various places on top of the normal map.
It does this by taking an image and having me tell it its top-left, top-right and bottom-left coordinates to figure out how to rotate, tilt and stretch/squeeze it properly.
It took me a very long time to figure these coordinates out by hand. For this reason, I'm looking for some sort of "geopositioning mode", perhaps enabled by this extension, which would simply let me click three times on the map to tell it where these points go. That would be so simple for the developers to do and would help so much. It's such an obvious thing to do that I strongly suspect it's already implemented and ready.
Is there such a "mode"? If not, how am I expected to find the positions without spending so much time and trial-and-error as I did for the first overlay map image?
Added: I should also clarify that the image should be shown in this mode so that you can re-adjust the points and watch in real time as the image bends/warps, to get it just right.
You could develop a module for this problem yourself:
Find at least 4 reference points on the raster (overlay) image.
Click the 4 corresponding points on the tile map.
Then compare the slope and distance between matching pairs of points.
You may need to rotate the image and apply an affine transformation.
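As an illustration of that last step (in Swift rather than JavaScript, purely as a sketch, not Leaflet code): three point correspondences already determine an affine transform exactly, which is why the plugin only asks for three corners, and you can solve for it directly with Cramer's rule:

```swift
import CoreGraphics

// Sketch: recover the affine transform that maps three source points to three
// destination points, via Cramer's rule. Returns nil for collinear points.
func affineTransform(from src: [CGPoint], to dst: [CGPoint]) -> CGAffineTransform? {
    guard src.count == 3, dst.count == 3 else { return nil }
    let det = src[0].x * (src[1].y - src[2].y)
            - src[0].y * (src[1].x - src[2].x)
            + (src[1].x * src[2].y - src[2].x * src[1].y)
    guard abs(det) > CGFloat.ulpOfOne else { return nil }

    // solve a*x + b*y + c = v for (a, b, c), given v at the three source points
    func solve(_ v0: CGFloat, _ v1: CGFloat, _ v2: CGFloat)
        -> (CGFloat, CGFloat, CGFloat) {
        let a = (v0 * (src[1].y - src[2].y) - src[0].y * (v1 - v2)
                 + (v1 * src[2].y - v2 * src[1].y)) / det
        let b = (src[0].x * (v1 - v2) - v0 * (src[1].x - src[2].x)
                 + (src[1].x * v2 - src[2].x * v1)) / det
        let c = (src[0].x * (src[1].y * v2 - src[2].y * v1)
                 - src[0].y * (src[1].x * v2 - src[2].x * v1)
                 + v0 * (src[1].x * src[2].y - src[2].x * src[1].y)) / det
        return (a, b, c)
    }
    let (a, c, tx) = solve(dst[0].x, dst[1].x, dst[2].x)  // x' = a*x + c*y + tx
    let (b, d, ty) = solve(dst[0].y, dst[1].y, dst[2].y)  // y' = b*x + d*y + ty
    return CGAffineTransform(a: a, b: b, c: c, d: d, tx: tx, ty: ty)
}
```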

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that helps someone understand the dimensions of an item, but I keep running into the issue that, since I first need to recognize a plane and then put my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to detect a plane first; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem of helping someone understand the dimensions of an item.
Depending on your choice of render engine (e.g. SceneKit), you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object, and you can tell it not to draw those parts, which gives the illusion (borrowing from Apple here) that your soda can has the box around it.
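With SceneKit, for example, the invisible occluder is just a material that writes depth but no color. A minimal sketch (the box dimensions and placement are placeholders for whatever you measure):

```swift
import SceneKit

// Sketch: an "occluder" that is invisible but still hides things behind it.
let occluderGeometry = SCNBox(width: 0.07, height: 0.12, length: 0.07,
                              chamferRadius: 0)
let occluderMaterial = SCNMaterial()
occluderMaterial.colorBufferWriteMask = []   // draw no color at all...
occluderMaterial.writesToDepthBuffer = true  // ...but still write depth
occluderGeometry.materials = [occluderMaterial]

let occluderNode = SCNNode(geometry: occluderGeometry)
occluderNode.renderingOrder = -1             // render before the visible box frame
// position occluderNode on the detected plane at the soda can's location,
// then add it to the scene: sceneView.scene.rootNode.addChildNode(occluderNode)
```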
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can, then resize it (gestures/buttons, your choice) until it roughly matches the dimensions of the can. Confirm that you're done, and don't draw a texture on it at all; just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to build a "bounding shape" that (again) captures the real-world object and occludes the frame.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, performing a hit test on each touch (hopefully the top/nearest hit is part of your soda can), and build up a "bounding shape" that way.
Option X: any combination of options 1, 2 and 3.
Good luck; there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

Objective-C: drawing a free-form shape on top of an image

I am working on an iOS 5 iPhone project where users can choose an image on their device, then trace an object inside the picture (say, trace an apple out of a picture of a fruit basket), and then the picture needs to be uploaded with the "tagged" object so it can be pulled down later. Other people will then pull down the image and try to find where the tagged object is in the picture (think "Where's Waldo?").
I have been trying to figure out the best way of tracing the object. Before, I had the user press the top-left, top-right, bottom-left, and bottom-right points around the object to create a square view around it. The info for that view was uploaded and then pulled down for the user to find the object. The downside is that objects are obviously not all squares/rectangles, so I need a free-form shape.
I was thinking of letting the user draw over the object and then somehow determining what is inside the trace (for example, inside a circle they traced), but a problem I foresee is making sure the trace they made is closed so I can fill in the shape (which is a whole other problem).
Any advice welcome on the best way of starting this.
Thanks!
UIBezierPath might be a very useful friend here. It allows you to create any shape you need, and it supports both drawing and hit-detection. I recently did an implementation for a storybook where the user could trace out a shape with their finger, freeze it, and then use the shape for tap detection.
The basic idea is this:
When your finger touches the screen, start recording positions. Discard any position that is too close to the previous one (e.g., only record a point if it is more than some minimum distance from the last recorded point). While doing this, you can draw the UIBezierPath so you can see what you are tracing out. Modify the UIBezierPath by adding points to it instead of recreating it every time.* When you lift your finger, close the bezier path. Quite simple.
Now, this will result in a polygon (ie, straight edges). If your min-distance is low enough or if you are using it for hit-detection (as you say), it won't really matter. However, if you want to smooth the path, you have to use the curve-to methods, which slightly complicate it - but should you wish to follow up on this more, read up on splines and spline generation from a point series.
*note: otherwise you'll get lag while drawing large shapes because recreating a bezier path from an increasingly large series of points gets expensive. Modifying the existing path makes it much, much, much faster.
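Here is a minimal Swift sketch of that recording loop (the question is about Objective-C, but the UIBezierPath API is the same; all names are illustrative):

```swift
import UIKit

// A sketch of the recording approach described above.
class TraceView: UIView {
    private let path = UIBezierPath()
    private var lastPoint: CGPoint = .zero
    private let minDistance: CGFloat = 5   // discard points closer than this

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }
        path.removeAllPoints()
        path.move(to: p)
        lastPoint = p
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }
        // only record a point if it is far enough from the last recorded one
        if hypot(p.x - lastPoint.x, p.y - lastPoint.y) > minDistance {
            path.addLine(to: p)   // modify the existing path; don't rebuild it
            lastPoint = p
            setNeedsDisplay()
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        path.close()              // close the shape so it can be filled/hit-tested
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.red.setStroke()
        path.stroke()
    }

    // hit-detection: is a tapped point inside the traced shape?
    func containsTracedPoint(_ point: CGPoint) -> Bool {
        path.contains(point)
    }
}
```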

Compare one image in MATLAB with a database of images and show the most similar

I have a database of images of one person who is using his hands to show various words and phrases in sign language. The background is white, and the only things changing are the shape of the person's hands and their locations. Now, in my MATLAB GUI, I want the user to be able to choose another image of the same person, taken at another time while signing and wearing the same clothes; the program then has to compare this image against the images in the database and show the most similar one. Obviously I can't do a pixel-by-pixel comparison, since the images were taken with a handheld mobile camera and slight movement has been inevitable, so I should try to locate the hands in the images and compare their shapes. I have no idea how to go about this, and I have to say I am new to the Image Processing Toolbox in MATLAB.
Your help is much appreciated
I am doing a PhD in computer vision, and I can tell you that this is an unsolved problem (even in your simple framework, with a white background).
If you are interested, you might read some works about it at MIT:
http://people.csail.mit.edu/rywang/handtracking/
or at Oxford:
http://www.robots.ox.ac.uk/~vgg/research/sign_language/index.html
http://www.robots.ox.ac.uk/~vgg/research/hands/index.html
I disagree with you. Such a project can achieve results quickly.
It only becomes a hard problem once the project has to deal with "real life".
With a single camera and a completely known background, OpenCV provides a simple way to extract a hand shape from an image (in about 20 lines of code). You will find plenty of source code on the web (have a look at calcBackProject).
After that, what you will have to do is play with the shape and search for characteristic points.
Begin with some simple signs (for example, a circle and a V). How would you recognize one from the other?
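To make that concrete, here is a rough sketch (in Swift purely for illustration; in MATLAB, regionprops in the Image Processing Toolbox gives you similar shape statistics): reduce the binary hand mask to a couple of translation-, scale- and rotation-invariant numbers, then rank the database by descriptor distance.

```swift
import Foundation

// Sketch: the first two Hu moment invariants of a binary mask, invariant to
// translation, scale and rotation. mask[y][x] == true means "hand pixel".
func huSignature(_ mask: [[Bool]]) -> (Double, Double) {
    var m00 = 0.0, m10 = 0.0, m01 = 0.0
    for (y, row) in mask.enumerated() {
        for (x, on) in row.enumerated() where on {
            m00 += 1; m10 += Double(x); m01 += Double(y)
        }
    }
    guard m00 > 0 else { return (0, 0) }          // empty mask
    let cx = m10 / m00, cy = m01 / m00            // centroid
    var mu20 = 0.0, mu02 = 0.0, mu11 = 0.0
    for (y, row) in mask.enumerated() {
        for (x, on) in row.enumerated() where on {
            let dx = Double(x) - cx, dy = Double(y) - cy
            mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy
        }
    }
    // normalize central moments by m00^2 for scale invariance (p + q = 2)
    let n20 = mu20 / (m00 * m00), n02 = mu02 / (m00 * m00), n11 = mu11 / (m00 * m00)
    return (n20 + n02,                                   // Hu invariant h1
            (n20 - n02) * (n20 - n02) + 4 * n11 * n11)   // Hu invariant h2
}

// Rank the database: smaller distance = more similar shape.
func distance(_ a: (Double, Double), _ b: (Double, Double)) -> Double {
    hypot(a.0 - b.0, a.1 - b.1)
}
```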
There are thousands of papers on sign language; just read the older ones to get the simple ideas flowing :)

iPhone 2D drawing newbie question

I've been programming for the iPhone for a couple of months now and have 3 apps in the store already.
However, I have not done any kind of graphics programming on the platform.
Given that I'm planning to start my 5th app (the 4th is under Apple's review), I wanted to ask for some pointers on where to get information about this (I've been Googling for a while, but nothing matches what I'm looking for).
I need to create an app where I can "drop" some shapes from a menu (rectangles, circles, squares, and then some complex shapes) onto a main window.
The idea is that the user can drag them around, BUT I want them to "snap" to each other (kind of like in a CAD package, where a circle has snap points on its quadrants that snap to any other geometry entity in the drawing).
So if I had a circle on the left of the screen and a rectangle on the right, and I then moved the circle around, it would stop moving to the right if it hit the rectangle's edges. Not completely stop, but give some sort of "resistance" to the continuity of the movement.
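Roughly, the behavior I'm imagining looks something like this sketch (which I only worked out for the circle-moving-right case; all names and thresholds are made up):

```swift
import CoreGraphics

// Illustrative only: damp a dragged shape's movement as it approaches another
// shape's left edge, then snap the edges together once they are close enough.
let snapDistance: CGFloat = 12   // start resisting inside this gap
let resistance: CGFloat = 0.25   // fraction of the drag that still gets through

func constrainedMove(frame: CGRect, by translation: CGVector,
                     near other: CGRect) -> CGRect {
    var moved = frame.offsetBy(dx: translation.dx, dy: translation.dy)
    let gap = other.minX - moved.maxX   // space left between the two shapes
    if translation.dx > 0 && gap > 0 && gap < snapDistance {
        // inside the snap zone: damp the horizontal movement ("resistance")
        moved.origin.x = frame.origin.x + translation.dx * resistance
        // close enough: snap edge to edge
        if other.minX - moved.maxX < 1 {
            moved.origin.x = other.minX - moved.width
        }
    }
    return moved
}
```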
Also, if I have several overlapping drawings, is there a way to "divide" them (any overlap becomes a shape of its own but is removed from any other shape composing the overlap)?
The reason for this is that I need to calculate the area of the drawing (along with other properties).
I'm thinking of CALayers 1, 2, 3, ..., n stacked on top of each other, each with a drawing (with CGPath?) that may or may not overlap the others.
Then I need to somehow obtain information about the "projection" of all of those on a single CALayer.
I'm clueless here.
Should I look into Quartz2D? is CALayer and CGPath enough for this?
this is not for a game. Just an engineering application I have in mind.
Any help is appreciated.
regards
dh
The iPhone Application Programming Guide has a chapter on drawing.
You might get some ideas on where to start by looking at the appropriate lectures from the iPhone Application Programming course at Stanford. It includes high-quality video lectures (filmed by Apple) over at iTunesU, slides, and example source code.
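For the overlap/area part of your question specifically, one rough approach (just a sketch; it is approximate and resolution-dependent) is to flatten everything yourself: fill every CGPath into a single bitmap context and count the covered pixels:

```swift
import CoreGraphics

// Rough sketch: fill every path into one grayscale bitmap; overlapping regions
// are only counted once, so the covered pixel count approximates the area of
// the union (returned in points², given `scale` pixels per point).
func unionArea(of paths: [CGPath], canvasSize: CGSize, scale: CGFloat = 2) -> CGFloat {
    let width = Int(canvasSize.width * scale)
    let height = Int(canvasSize.height * scale)
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.none.rawValue)
    else { return 0 }

    context.scaleBy(x: scale, y: scale)
    context.setFillColor(gray: 1, alpha: 1)
    for path in paths {
        context.addPath(path)
        context.fillPath()
    }

    guard let data = context.data else { return 0 }
    let pixels = data.bindMemory(to: UInt8.self, capacity: width * height)
    var covered = 0
    for i in 0..<(width * height) where pixels[i] > 0 { covered += 1 }
    return CGFloat(covered) / (scale * scale)
}
```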