Position image above map - swift

This question applies to any item, but in this case I would like to position my image above the map; currently it gets placed below the map.
I am new to Swift 1.2, so I do not know how to put together an example. Any help would be greatly appreciated.

Are you talking about above in X-Y terms or in Z terms? If Z terms, and you are using Interface Builder, you can go to Editor -> Arrange -> Bring Forward, or just move the image relative to the other UI elements in the document outline, as their order there defines their Z-order.
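If you would rather do it in code, here is a minimal sketch (the class and outlet names are hypothetical; substitute your own):

```swift
import UIKit

class MapScreenViewController: UIViewController {
    // Hypothetical outlets; wire up your own map and image views.
    @IBOutlet weak var mapView: UIView!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Subviews later in the subviews array are drawn on top, so this
        // moves the image above the map in Z-order.
        view.bringSubviewToFront(imageView)
    }
}
```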

Related

Matlab: Find pattern in an image given a skeletonized template

I am stuck on a current project:
I have an input picture showing the ground with some shapes on it. I have to find a specific shape with a given template.
I have to use a distance transform followed by skeletonization. My question now is: how can I compare two skeletons? As far as I have noticed and have been told, most methods from the Image Processing Toolbox for template matching don't work here, since they are neither scale- nor rotation-invariant.
Also, some skeletons really show the shapes, while others are just one or two short lines, from which I couldn't identify the shapes if I didn't already know what they should be.
I've used edge detection and region growing on the input, so only the interesting shapes are left.
On the template I used distance transformation and skeletonization.
Really looking forward to some tips.
Greetings :)
You could look into convolutions.
Basically, move your template over your image and see if there is a match, and where: the maximum value of the resulting array is the [x, y] location of your object in the image.
Matlab has a built-in 2D convolution function (conv2) for this.
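As an illustration of the idea, here is a bare-bones sliding-window correlation (written in Swift purely for illustration; in Matlab, conv2 with a 180°-rotated template, or normxcorr2, does the heavy lifting). Note that, like the toolbox methods, this is neither scale- nor rotation-invariant:

```swift
// Naive 2D cross-correlation: slide the template over the image, record the
// response at every valid offset, and return the offset with the maximum
// response as the most likely match location.
func matchTemplate(image: [[Double]], template: [[Double]]) -> (row: Int, col: Int)? {
    guard let iw = image.first?.count, let tw = template.first?.count,
          image.count >= template.count, iw >= tw else { return nil }
    let th = template.count
    var best = -Double.infinity
    var bestPos = (row: 0, col: 0)
    for r in 0...(image.count - th) {
        for c in 0...(iw - tw) {
            var response = 0.0
            for i in 0..<th {
                for j in 0..<tw {
                    response += image[r + i][c + j] * template[i][j]
                }
            }
            if response > best {
                best = response
                bestPos = (row: r, col: c)
            }
        }
    }
    return bestPos
}
```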

Create Diagonal Tile Map in Phaser.js?

I am trying to make a web game, and before committing to one language I need to decide whether to use Phaser.js or EaselJS. Looking at Phaser.js, it seems all of the tile maps are front-on and flat (example). Is there a way to make a diagonal tile map in Phaser.js (example)?
Thanks
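For what it's worth, the usual "diagonal" look is an isometric projection, which is just a coordinate transform over an ordinary grid. A generic sketch (in Swift, engine-independent; the 2:1 diamond ratio and tile sizes are illustrative, not anything Phaser-specific):

```swift
import CoreGraphics

// Generic 2:1 isometric ("diamond") projection of grid coordinates.
// Orthogonal tile maps place tile (col, row) at (col * w, row * h);
// the diagonal look comes from mixing the two axes instead.
func isoPoint(col: Int, row: Int, tileWidth: CGFloat, tileHeight: CGFloat) -> CGPoint {
    // Moving along a column shifts right and down; moving along a row
    // shifts left and down, producing the diagonal layout.
    let x = CGFloat(col - row) * tileWidth / 2
    let y = CGFloat(col + row) * tileHeight / 2
    return CGPoint(x: x, y: y)
}
```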

Create custom map in Leaflet with coordinates

I have a historical city map that I want to display using Leaflet.
I'd like to set the coordinates of this image to reflect the real world, e.g. so that I can click on the image and get the real coordinates.
I guess I could just make it an overlay on a real map, but there must be a better solution: just defining the real-world coordinates of the corners of the image.
For this image, the approximate real-world coordinates are NW: 60.34343, 18.43360 and SE: 60.33761, 18.44819.
My code, so far, is here:
http://stage1876.xn--regrund-80a.se/example3.html
Any ideas how to proceed? It feels like there should be an easy way to do this.
Any help would be so appreciated!
EDIT: The tile-based implementation (so far) is optional. I could go for a single-image map as well.
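For reference, Leaflet's L.imageOverlay(url, bounds) takes an image URL plus a LatLngBounds built from two opposite corners, which is exactly the "define the corners" approach. The pixel-to-coordinate mapping itself is just linear interpolation between the corners; a minimal sketch (in Swift for illustration, assuming a north-up image and that a linear fit is adequate at city scale):

```swift
import CoreGraphics

// Linear pixel -> (lat, lng) mapping using the approximate corner
// coordinates from the question.
struct GeoBounds {
    let nwLat = 60.34343, nwLng = 18.43360   // north-west corner
    let seLat = 60.33761, seLng = 18.44819   // south-east corner

    func coordinate(for p: CGPoint, imageSize: CGSize) -> (lat: Double, lng: Double) {
        let fx = Double(p.x / imageSize.width)    // 0 at left edge, 1 at right
        let fy = Double(p.y / imageSize.height)   // 0 at top edge, 1 at bottom
        let lat = nwLat + (seLat - nwLat) * fy
        let lng = nwLng + (seLng - nwLng) * fx
        return (lat, lng)
    }
}
```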

Not able to calibrate camera view to 3D Model

I am developing an app which uses Lucas-Kanade (LK) for tracking and POSIT for pose estimation. I can obtain the rotation matrix and the projection matrix, and tracking works perfectly, but I am not able to translate the 3D object properly: it does not fit into the place where it should.
Will someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your details it seems that the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (measured from the image center to the top or left edge) and full-angle (measured from bottom to top, or from left to right, respectively). Maybe you just mixed them up, using the full angle instead of the half angle, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the object pose relative to the camera, i.e. a transform in which the camera sits at (0; 0; 0). For almost all cases you need to invert it to get the correct camera-in-world result: {Rᵀ; -Rᵀ·T}.
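A minimal sketch of that inversion (plain Swift arrays for illustration; with OpenCV matrices it amounts to roughly R.t() and -R.t() * tvec):

```swift
// Inverting a rigid transform [R|T]: since R is orthonormal, the inverse
// rotation is simply the transpose, and the inverse translation is -Rᵀ·T.
func invertRigidTransform(R: [[Double]], T: [Double]) -> (R: [[Double]], T: [Double]) {
    var Rt = [[Double]](repeating: [Double](repeating: 0, count: 3), count: 3)
    for i in 0..<3 {
        for j in 0..<3 {
            Rt[i][j] = R[j][i]               // transpose
        }
    }
    var Tinv = [Double](repeating: 0, count: 3)
    for i in 0..<3 {
        for j in 0..<3 {
            Tinv[i] -= Rt[i][j] * T[j]       // -Rᵀ·T
        }
    }
    return (Rt, Tinv)
}
```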

How to determine if iPad user taps within an irregular shaped image?

I've hooked up a UITapGestureRecognizer to a UIImageView containing the image I'd like to display on an iPad screen and am able to consume the user taps just fine. However, my image is that of a hand on a table and I'd like to know if the user has tapped on the hand or on the table part of the image. I can get the x,y coordinates of the user tap with CGPoint tapLocation = [recognizer locationInView:self.view]; but I'm at a loss for how to map that CGPoint to, say, the region of the image that contains the hand vs. the region that contains the table. Everything I've read so far deals with determining if a CGPoint is in a particular rectangular area, but what if you need to determine if that CGPoint is located in the boundaries of a more irregular shape? Is that even possible? Any suggestions or just pointing me in the right direction would be a big help. Thanks!
You could use pointInside:withEvent: to define the hit area programmatically.
To elaborate, you just take the point and evaluate whether it falls in the area you're after with a series of if statements; if it does, return TRUE, and if it doesn't, return FALSE. If this is related to this post, then you could use a circular conditional, comparing the distance from the point to the center of your circle via the Pythagorean theorem.
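In Swift, that circular hit test can live in an override of point(inside:with:) (the Swift spelling of pointInside:withEvent:). A sketch with a hypothetical view class:

```swift
import UIKit

// Hypothetical view that treats only an inscribed, centered circle as
// tappable; taps in the corners fall through.
class CircularHitView: UIImageView {
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = min(bounds.width, bounds.height) / 2
        // Pythagorean theorem: inside if distance to center <= radius.
        let dx = point.x - center.x
        let dy = point.y - center.y
        return dx * dx + dy * dy <= radius * radius
    }
}
```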
Late to the party, but the core tool you want here is a "point in polygon" routine. This is a generic approach, independent of iOS. Google has lots of info, but the general approach is:
1) Define your closed polygon. (It sounds like this might be a bit of work in your case.)
2) Choose any point not equal to your original point. (Yes, any point.)
3) For each edge in the polygon, determine whether the ray from your original point through the second point intersects that edge. This requires a line-segment-intersects-ray routine, also available on the 'tubes.
4) If the number of intersections is odd, the point is inside the polygon; if the count is even, it's outside. (See the sketch below.)
For general geometry-type issues, I highly recommend Paul Bourke: http://local.wasp.uwa.edu.au/~pbourke/geometry/insidepoly/
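In code, the common variant casts a horizontal ray and counts edge crossings (the classic even-odd algorithm discussed on the Bourke page above). A Swift sketch:

```swift
import CoreGraphics

// Even-odd ray casting: cast a horizontal ray from the test point and count
// how many polygon edges it crosses. Odd = inside, even = outside.
func isPoint(_ p: CGPoint, insidePolygon polygon: [CGPoint]) -> Bool {
    guard polygon.count >= 3 else { return false }
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i], b = polygon[j]
        // Edge (a, b) straddles the horizontal line through p...
        if (a.y > p.y) != (b.y > p.y) {
            // ...and the crossing lies to the right of p.
            let xCross = (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x
            if p.x < xCross {
                inside.toggle()
            }
        }
        j = i
    }
    return inside
}
```

Usage with the tap from the question would be something like isPoint(tapLocation, insidePolygon: handOutline), where handOutline is the traced outline of the hand.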
You can use a bounding rectangle that covers most or all of the hand.
If the user is using his finger to tap either the hand or the table, I doubt that you want him or her to be extremely precise with the tap.
An extension of the bounding-rectangle answer: you could define several smaller bounding rectangles that approximate a hand without covering the rest of the screen.
OR
you could use a list of rectangles, one for each of your objects, and put the hand at the end of the list. That way, if you had a tap on button X at the top right of the screen which is technically inside the hand rectangle, button X would be chosen, because its rectangle is found first (see the sketch below).
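A sketch of the ordered-list idea (the names and rectangles are made up for illustration):

```swift
import CoreGraphics

// Ordered hit regions: the first rectangle containing the point wins, so
// small controls like "button X" go before the big hand rectangle.
let hitRegions: [(name: String, rect: CGRect)] = [
    (name: "buttonX", rect: CGRect(x: 700, y: 20, width: 60, height: 60)),
    (name: "hand",    rect: CGRect(x: 200, y: 150, width: 400, height: 500)),
]

func regionName(at point: CGPoint) -> String? {
    // first(where:) returns the earliest matching rectangle in the list.
    return hitRegions.first(where: { $0.rect.contains(point) })?.name
}
```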
Define the shape by a black-and-white bitmap (1 bit per pixel) and check whether the particular bit is set. This would eat a lot of memory if you had a lot of large shapes, but for one bitmap with a hand it should not be a big deal.
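A sketch of the bitmap lookup (how you fill the mask, e.g. offline from the image's alpha channel, is up to you; this only shows the test):

```swift
import CoreGraphics

// Row-major Bool mask the size of the image: true where the hand is.
struct HitMask {
    let width: Int
    let height: Int
    let bits: [Bool]          // bits[y * width + x] == true means "hand"

    func contains(_ p: CGPoint) -> Bool {
        let x = Int(p.x), y = Int(p.y)
        guard x >= 0, x < width, y >= 0, y < height else { return false }
        return bits[y * width + x]
    }
}
```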
Alternatively, define the shape as a polygon; then you need to do a point-in-polygon test. Wikipedia has a wonderful article on this, with links to code: http://en.wikipedia.org/wiki/Point_in_polygon
iPad libraries might already have this implemented. Sorry, I cannot help you there; I am not an iPad developer.