I have a few different OBJ files that I am able to parse and display. This code is based on Jeff LaMarche's "The Start of a WaveFront OBJ File Loader Class". However, I need some means of detecting which coordinates I have selected within a displayed model. Usually there is one model displayed at a time, but sometimes there will be two or more on the screen, and I want to set up an NSNotificationCenter object to notify other sections of code as to which object is "selected". I have also looked at javacom's "OpenGL ES for iPhone : A Simple Tutorial" and would like to model the behavior of my program after his.
This is my current line of logic:
Setup a means to detect where a user has touched the screen
Have those coordinates compared with the current coordinates of an OBJ-based model
If they match, indicate said touch as being within the bounds of the object
The touchable set of coordinates must scale with the model. Currently the model is able to scale, so I will most likely need to be able to follow this scaling.
Also note, I don't need to move the model around on the screen. I just need to detect when it's been touched, whether there is one model or several being displayed.
While this is most likely quite simple, I've been stumped by this for months now. I would really appreciate any light others can shed on this topic.
Use gluUnProject on the touch coordinates to get a vector going from the screen into the world, and then intersect it with your models to see if one of them has been touched. gluUnProject isn't available on iPhone by default, but you can look up implementations of it; http://www.mesa3d.org/ has an open-source implementation.
Read about gluUnProject here: http://web.iiit.ac.in/~vkrishna/data/unproj.html
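To make the idea concrete, here is a minimal Swift sketch of what gluUnProject does and how the resulting ray could be tested against a model. It assumes you keep your own model-view and projection matrices around (column-major, as OpenGL expects), and it uses a bounding-sphere test as a simple stand-in for per-triangle intersection; the function names are mine, not part of any library.

```swift
import simd
import CoreGraphics

// Unproject a screen point at a given depth (0 = near plane, 1 = far plane)
// into world space, given column-major model-view and projection matrices.
func unproject(_ point: CGPoint, depth z: Float,
               modelView: simd_float4x4, projection: simd_float4x4,
               viewport: CGRect) -> SIMD3<Float> {
    // Convert the touch point to normalized device coordinates (-1...1).
    // UIKit's y axis points down, OpenGL's points up, so flip y.
    let ndc = SIMD4<Float>(
        Float((point.x - viewport.minX) / viewport.width) * 2 - 1,
        1 - Float((point.y - viewport.minY) / viewport.height) * 2,
        z * 2 - 1,
        1)
    // Invert the combined transform and divide by w to get world coordinates.
    let world = (projection * modelView).inverse * ndc
    return SIMD3<Float>(world.x, world.y, world.z) / world.w
}

// Build a pick ray from the touch and test it against a model's bounding sphere.
func modelWasTouched(at touch: CGPoint, modelView: simd_float4x4,
                     projection: simd_float4x4, viewport: CGRect,
                     center: SIMD3<Float>, radius: Float) -> Bool {
    let near = unproject(touch, depth: 0, modelView: modelView,
                         projection: projection, viewport: viewport)
    let far  = unproject(touch, depth: 1, modelView: modelView,
                         projection: projection, viewport: viewport)
    let dir = simd_normalize(far - near)
    // Distance from the sphere center to the ray; it's a hit if within the radius.
    let toCenter = center - near
    let projected = simd_dot(toCenter, dir)
    let distanceSquared = simd_length_squared(toCenter) - projected * projected
    return projected >= 0 && distanceSquared <= radius * radius
}
```

Because the model-view matrix is part of the unproject step, any scaling you apply to the model is automatically accounted for, which covers the scaling concern in the question.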
I'm planning to make a domino draw game, but I'm not quite sure how to handle tile placement so that each tile snaps into place against a matching value when a player sets down a domino. Could anyone give me an idea of how to achieve this?
Example image
This question is probably too broad. You really need to make an attempt at implementing a solution yourself and then come back if you run into a specific problem. However, if you really have no idea where to begin, you broadly need to do a few things:
Create a GameObject for each of your dominoes and attach a script to them
which defines their numbers, sets their corresponding texture, etc.
Create an invisible play surface which is made up of a grid representing places where tiles can be put down.
Add code to handle picking up, moving and putting down your dominoes.
In the code that handles moving and/or putting down dominoes:
check whether the grid location where the domino is going to be placed is valid (e.g. adjacent to another domino),
then check the values of the adjacent dominoes on the grid
For adjacent dominoes, check their orientation
depending on the orientation relative to the domino you are placing, check the values at the nearest end or at both ends of that domino
if the adjacent domino values match a value on the domino being moved, allow it to be placed; otherwise, don't allow the placement
In the above example, "placing" a domino would simply mean moving it to a point on the play surface grid in either a vertical or horizontal orientation.
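To make the placement check a bit more concrete, here is a simplified, framework-agnostic sketch. All of the types and names are hypothetical; in Unity this logic would live in a C# script on the domino prefab, and this version deliberately ignores the orientation subtleties mentioned in the list above.

```swift
// A simplified sketch of the placement check described above.
struct GridPoint: Hashable {
    var x: Int, y: Int
    var adjacent: [GridPoint] {
        [GridPoint(x: x + 1, y: y), GridPoint(x: x - 1, y: y),
         GridPoint(x: x, y: y + 1), GridPoint(x: x, y: y - 1)]
    }
}

struct Domino {
    var ends: (Int, Int)          // pip values at each end
    var isHorizontal: Bool
}

struct Board {
    var cells: [GridPoint: Int]   // pip value exposed at each occupied cell

    // A placement is valid if the covered cells are free, at least one
    // neighbouring cell is occupied, and every occupied neighbour's exposed
    // value matches one of the domino's ends.
    func canPlace(_ domino: Domino, at points: [GridPoint]) -> Bool {
        guard points.allSatisfy({ cells[$0] == nil }) else { return false }

        let neighbourValues = points
            .flatMap { $0.adjacent }
            .filter { !points.contains($0) }
            .compactMap { cells[$0] }

        guard !neighbourValues.isEmpty else { return false }
        return neighbourValues.allSatisfy { $0 == domino.ends.0 || $0 == domino.ends.1 }
    }
}
```

"Snapping" then just means converting the dragged tile's position to the nearest grid cell and, if `canPlace` succeeds, moving the tile to that cell's centre.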
This is a very broad overview and there are plenty of gotchas that I haven't covered which may or may not give you trouble.
Edit: You could also do this without using a grid but it would be a little trickier when it comes to finding and inspecting adjacent dominoes.
I am trying to draw a box that can help someone understand the dimensions of an item, but I keep having the issue that since I first need to recognize a plane when I put my physical item on top of the plane, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not that you have to detect a plane first, it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem of helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this 3D object, which is invisible to the user. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
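If you are using SceneKit, a minimal sketch of such an "invisible occluder" looks like this. The function name and box dimensions are placeholders for whatever you measure for the real object, and `colorBufferWriteMask` requires iOS 11 or later:

```swift
import SceneKit

// Build an invisible node that still writes to the depth buffer, so anything
// rendered behind it (e.g. part of your green box frame) is hidden.
func makeOccluder(width: CGFloat, height: CGFloat, length: CGFloat) -> SCNNode {
    let geometry = SCNBox(width: width, height: height, length: length, chamferRadius: 0)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []     // draw no color at all
    material.writesToDepthBuffer = true    // but still occlude what's behind it
    geometry.materials = [material]

    let node = SCNNode(geometry: geometry)
    // Render the occluder before the visible geometry so its depth values
    // are already in place when the box frame is drawn.
    node.renderingOrder = -1
    return node
}
```

Position this node where the real soda can sits (one of the options below describes how you might estimate that), and the frame will appear to wrap around the can.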
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: 1. After detecting the desk surface, place a semi-transparent 3D object over the soda can and then resize it (with gestures or buttons, your choice) until it roughly matches the dimensions of the soda can. 2. Confirm that you're done, then don't draw any texture on it at all and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is part of your soda can) and build up a "bounding shape" that way (see the hit-test sketch after this list).
Option X: any combination of 1 - 2 - 3.
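For option 3, the hit-testing part might look roughly like this; `sceneView` is assumed to be your ARSCNView, and turning the collected points into an actual bounding shape is left out:

```swift
import ARKit

// World positions collected while "brushing" over the real object.
var brushedPoints: [simd_float4] = []

func handleTouch(at point: CGPoint, in sceneView: ARSCNView) {
    // Nearest feature point under the finger; hopefully part of the soda can.
    if let hit = sceneView.hitTest(point, types: .featurePoint).first {
        brushedPoints.append(hit.worldTransform.columns.3)
    }
}
```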
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.
I am working on a iOS 5 iPhone project where users can choose an image on their device, then trace an object inside of the picture (trace an apple out of a picture of a fruit basket), and then the picture needs to be uploaded with the "tagged" object so it can be pulled down later. Other people will then pull down the image and try to find where the tagged object is in the picture (Think "Where's Waldo?").
I have been trying to figure out the best way of tracing the object. Before, I had a user press the top left, top right, bottom left, and bottom right points around the object and create a square view around the object. The info for that view was uploaded and then pulled down for the user to find the object. The downside is that all objects are obviously not squares/rectangles so I need to do a free form shape.
I was thinking of allowing the user to draw over the object, and then somehow I need to be able to tell what is inside of the trace (for example, inside of a circle that they traced). A problem I foresee is making sure the trace they made is closed so I can fill in the shape (which is a whole other problem).
Any advice welcome on the best way of starting this.
Thanks!
UIBezierPath might be a very useful friend here. It allows you to create any shape you need, and it supports both drawing and hit-detection. I recently did an implementation for a storybook where a user could trace out a shape with their finger, freeze it, and then use the shape for tap detection.
The basic idea is this:
When your finger touches the screen, start recording positions. Discard any position that is too close to the previous one (e.g., only record a point if it is farther than some minimum distance from the last recorded point). While doing this, draw the UIBezierPath so you can see what you are tracing out; modify the existing path by adding points to it instead of recreating it every time.* When you lift your finger, close the bezier path. Quite simple.
Now, this will result in a polygon (ie, straight edges). If your min-distance is low enough or if you are using it for hit-detection (as you say), it won't really matter. However, if you want to smooth the path, you have to use the curve-to methods, which slightly complicate it - but should you wish to follow up on this more, read up on splines and spline generation from a point series.
*note: otherwise you'll get lag while drawing large shapes because recreating a bezier path from an increasingly large series of points gets expensive. Modifying the existing path makes it much, much, much faster.
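A minimal sketch of this approach, assuming a UIView subclass that draws the path itself; the 10-point minimum distance is an arbitrary choice:

```swift
import UIKit

class TraceView: UIView {
    private let path = UIBezierPath()
    private var lastPoint: CGPoint = .zero
    private let minDistance: CGFloat = 10

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        path.removeAllPoints()
        path.move(to: point)
        lastPoint = point
        setNeedsDisplay()
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Only record points that are far enough from the last recorded one.
        guard hypot(point.x - lastPoint.x, point.y - lastPoint.y) > minDistance else { return }
        path.addLine(to: point)   // modify the existing path, don't rebuild it
        lastPoint = point
        setNeedsDisplay()
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        path.close()
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.blue.setStroke()
        path.lineWidth = 2
        path.stroke()
    }

    // Later, hit-detection is just a containment test on the closed path.
    func containsTap(at point: CGPoint) -> Bool {
        path.contains(point)
    }
}
```

For the upload step, you could serialize the recorded points (or the path's CGPath) and rebuild the UIBezierPath on the other device for the "find the object" check.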
Really hope someone can help me as I'm a bit stuck :S
I have a custom map of an event that uses CATiledLayer so users can zoom in and scroll around the map. What I would like to do now is add functionality to let users know where they currently are on the map. I know it can be done, as I've seen an app do this before. I'm not sure how to go about doing it, though; maybe I need to convert lat/lon into pixels, but I'm not sure if that's possible (depending on how big the image is, etc.).
On another site it was suggested that I find the boundaries of the map and then add pins to it, but I'm not sure how to go about doing this. Will I need to find every coordinate (lat/lon) within the boundary so I can add a pin where the user currently is?
If anyone can give me any advice or pointers, I'd much appreciate it.
You can use the route-me library by adding your own map source class. A good article that explains how to do it is here: http://mobilegeo.wordpress.com/2010/07/07/route-me-native-iphone-mapping-framework/
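If you'd rather do the conversion by hand, and assuming you know the lat/lon of two opposite corners of the map image and the map is drawn roughly to scale, a simple linear interpolation gives you the pixel position to place the pin at. The corner values and image size below are placeholders; this falls apart for stylized, not-to-scale maps (see the trail-map discussion further down).

```swift
import CoreGraphics

struct MapCalibration {
    // Hypothetical calibration: lat/lon of the image's top-left and bottom-right corners.
    let topLeftLat = 51.512, topLeftLon = -0.135
    let bottomRightLat = 51.506, bottomRightLon = -0.119
    let imageSize = CGSize(width: 4096, height: 3072)   // full-resolution tiled image size

    // Linearly interpolate the user's lat/lon into pixel coordinates on the image.
    func pixelPosition(lat: Double, lon: Double) -> CGPoint {
        let x = (lon - topLeftLon) / (bottomRightLon - topLeftLon) * Double(imageSize.width)
        let y = (lat - topLeftLat) / (bottomRightLat - topLeftLat) * Double(imageSize.height)
        return CGPoint(x: x, y: y)
    }
}
```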
I'm facing a challenge right now in trying to map GPS coordinates onto a map that's an artist's rendition. In particular this is for a ski mountain, so the artist's rendition is a "trail map". The trail map is not accurate in that the whole mountain has been squeezed into a single view, and the actual topography of the mountain doesn't conform to the drawing.
I've tried several approaches:
1) Triangulation using known GPS coordinates of the lift stations. This is fairly simple to implement, yet it is not accurate enough, and the algorithm fails if the rendition differs too much from the GPS map.
2) Creating a uniform grid for both the GPS map and the trail map, then doing a mapping from cells in the GPS map to cells in the trail map. The downside is that this can be a lot of busy work with no easy UI for doing it.
3) Calculating the vector of each lift (each being a straight line), finding the closest lift station to a given GPS point, and calculating the estimated trail-map location using this vector.
I'm considering #2, which is essentially the simplest solution. But if you've found a better way, I'd love to hear it.
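For what it's worth, here is a small sketch of how approach #2 could work once the grid is calibrated: each GPS cell stores the trail-map pixel positions of its four corners (placed by hand), and a point inside the cell is mapped with bilinear interpolation. The type and names are hypothetical.

```swift
import CoreGraphics

struct CellMapping {
    // Trail-map pixel positions of the GPS cell's four corners.
    var topLeft, topRight, bottomLeft, bottomRight: CGPoint

    // u, v are the point's fractional position inside the GPS cell (0...1).
    func trailMapPoint(u: CGFloat, v: CGFloat) -> CGPoint {
        let top = CGPoint(x: topLeft.x + (topRight.x - topLeft.x) * u,
                          y: topLeft.y + (topRight.y - topLeft.y) * u)
        let bottom = CGPoint(x: bottomLeft.x + (bottomRight.x - bottomLeft.x) * u,
                             y: bottomLeft.y + (bottomRight.y - bottomLeft.y) * u)
        return CGPoint(x: top.x + (bottom.x - top.x) * v,
                       y: top.y + (bottom.y - top.y) * v)
    }
}
```

The accuracy then depends entirely on how finely you grid the mountain and how carefully the corners are placed on the rendition.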
Is there any ready-made class or formula somewhere that I can use to control my viewpoint with the accelerometer's/compass's X, Y, Z values?
I want to achieve the same kind of view control that acrossair uses.
I have the pieces in place (an OpenGL space, filtered accelerometer and compass values, and a cubic panoramic view mapped to a cube around my origin).
Can somebody at least suggest where to start?
I've since dug into the problem, so the posts about the steps of the solution can be followed here:
xCode - augmented reality at gotoandplay.freeblog.hu
A brief sketch of the whole process is below:
How to get the transformation matrix from the raw iPhone sensor (accelerometer, magnetometer) data: http://gotoandplay.freeblog.hu/files/2010/06/iPhoneDeviceOrientationTransformationMatrix-thumb.jpg
If all you are looking for is a means of rotating the model view matrix for your scene, you could look at the source code of my Molecules application, or an even simpler cube example that I wrote for my iPhone development class. Both contain code to incrementally rotate the model view matrix in response to touch input, so you would just need to replace the touch input with accelerometer values.
Additionally, Apple's GLGravity sample application does something very similar to what you want.
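On the sensor side, a minimal sketch using Core Motion's fused attitude (accelerometer + gyro + magnetometer) would look like this; feed the resulting matrix into your model-view matrix in place of the touch-based rotation from the examples above. The function name is mine, and depending on your matrix convention you may need the transpose.

```swift
import CoreMotion
import simd

let motionManager = CMMotionManager()

func startTrackingAttitude(onUpdate: @escaping (simd_float4x4) -> Void) {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    // xMagneticNorthZVertical gives a heading-referenced frame (uses the compass).
    motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                           to: .main) { motion, _ in
        guard let m = motion?.attitude.rotationMatrix else { return }
        // Expand the 3x3 rotation into a 4x4 matrix for the model-view transform
        // (column-major here; transpose if your convention differs).
        let rotation = simd_float4x4(
            SIMD4<Float>(Float(m.m11), Float(m.m21), Float(m.m31), 0),
            SIMD4<Float>(Float(m.m12), Float(m.m22), Float(m.m32), 0),
            SIMD4<Float>(Float(m.m13), Float(m.m23), Float(m.m33), 0),
            SIMD4<Float>(0, 0, 0, 1))
        onUpdate(rotation)
    }
}
```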