How to select objects in OpenGL on iPhone without using glUnProject or GL_SELECT? - iphone

I have 3 OpenGL objects which are shown at the same time. If the user touches any one of them, then only that object should be displayed on screen.

Just use gluUnProject to convert your touch point to a point on your near clipping plane and a point on your far clipping plane. Use the ray between those two points in a ray-triangle intersection algorithm. Figure out which triangle is closest; whatever object that triangle is part of is your object.
Another approach is to give each object a unique ID color. Then, whenever the user touches the screen, render using your unique ID colors with no lighting, but don't present the render buffer. Now you can just check the color of the pixel where the user touched and compare it against your list of object color IDs. Quick and easy, and it supports up to 16,581,375 unique objects.
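For illustration, a minimal sketch of that color-ID approach might look like the following; drawSceneWithIDColors is an assumed helper (not part of any real API) that renders every object flat-shaded in its unique ID color, while the GL calls themselves are standard OpenGL ES:

#include <OpenGLES/ES1/gl.h>        // iOS ES1 header; glReadPixels also exists in ES2

void drawSceneWithIDColors();       // assumed helper: unlit, one unique color per object

// Returns the 24-bit ID of the object under the touch point, or 0 for the background.
unsigned int pickObjectAt(int touchX, int touchY, int viewHeight)
{
    drawSceneWithIDColors();
    GLubyte pixel[4] = {0, 0, 0, 0};
    // glReadPixels uses window coordinates with the origin at the bottom-left corner.
    glReadPixels(touchX, viewHeight - touchY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    // Decode R, G and B back into the ID; never present this frame to the screen.
    return ((unsigned int)pixel[0] << 16) | ((unsigned int)pixel[1] << 8) | pixel[2];
}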

You would have to iterate over every object in your scene and check whether each one collides with the ray you computed with gluUnProject.
Depending on whether you want to select a face or a whole object, you can test the ray against bounding volumes (e.g. bounding boxes) of your objects for efficiency.
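As a concrete illustration of that bounding-box test (not part of the original answer), a standard slab test in C++ might look like this; the ray origin/direction and the box extents are assumed to already be in the same (world) space:

#include <algorithm>
#include <cmath>
#include <limits>

// Returns true if the ray origin + t*dir (t >= 0) hits the axis-aligned box [boxMin, boxMax].
bool rayIntersectsAABB(const float origin[3], const float dir[3],
                       const float boxMin[3], const float boxMax[3])
{
    float tNear = 0.0f;
    float tFar  = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis) {
        if (std::fabs(dir[axis]) < 1e-8f) {
            // Ray is parallel to this slab: miss if the origin lies outside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis]) return false;
        } else {
            float t1 = (boxMin[axis] - origin[axis]) / dir[axis];
            float t2 = (boxMax[axis] - origin[axis]) / dir[axis];
            if (t1 > t2) std::swap(t1, t2);
            tNear = std::max(tNear, t1);
            tFar  = std::min(tFar, t2);
            if (tNear > tFar) return false;    // the slab intervals no longer overlap
        }
    }
    return true;                               // the ray passes through all three slabs
}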

Related

Render order according to hierarchy in Unity

I am trying to understand how Unity decides what to draw first in a 2D game. I could just give everything an order in layer, but I have so many objects that it would be much easier if it just drew in the order of the hierarchy. I could write a script that gives every object its index, but I also want to see it in the editor.
So the question is, is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?
From your last screenshot I could see you are using SpriteRenderer.
The short answer to the question "is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?" would be no, there isn't by default*.
Sprite renderers calculate which object is in front of the others in one of two ways:
By using the distance to the camera: this draws the objects closest to the camera on top of the other objects in the same order in layer, as per the docs:
Sprite Sort Point
This property is only available when the Sprite Renderer’s Draw Mode is set to Simple.
In a 2D project, the Main Camera is set to Orthographic Projection mode by default. In this mode, Unity renders Sprites in the order of their distance to the camera, along the direction of the Camera's view.
If you want to keep everything on the same sorting layer/order in layer, you can change the order in which the objects appear by moving one of the two objects further away from the camera (this is probably further down the z axis). For example, if your Cashew is on z = 0 and you place the walnut on z = 1, then the cashew will be drawn on top of the walnut. If the Cashew is on z = 0 and the walnut is on z = -1, then the walnut will be drawn on top (since negative is closer to the camera). If both of the objects are on z = 0 they are equally far away from the camera, so it becomes a coin toss as to which object gets drawn in front, as it does not take the hierarchy into account.
The second way the order can be changed is by creating different sorting layers, and adjusting the order in layer in the sprite renderer component. But you already figured that out.
*However, that doesn't mean it cannot be done, technically...
If you feel adventurous, there is nothing stopping you from making an editor script that automates setting the order in layer for you based on the position in the hierarchy. This script would loop through all the objects in your hierarchy, grab the index of each object in the hierarchy, and assign that index to its Order in Layer.
I don't think Unity has such a feature (https://docs.unity3d.com/Manual/2DSorting.html).
Usually you would define some Sorting Layers:
far background
background
foreground
and assign the Sprite Renderer of each sprite to one of the Sorting Layers.

OpenGL ES 2.0: Touch detection

Hi guys, I am doing some work on iOS that requires the use of OpenGL ES. So now I have a bunch of squares, cubes and triangles on the screen. Some of these geometries might overlap. Any ideas/approaches for touch detection?
Regards
To follow up on the answer already given, squares, cubes and triangles are convex shapes so you can perform ray-object intersection quite easily, even directly from the geometry rather than from the mathematical description of the perfect object.
You're going to need to be able to calculate the distance of a point from a plane and the intersection of a ray with a plane. As a simple test you can implement yourself very quickly, for each polygon on the convex shape work out the intersection between the ray and the polygon's plane. Then check whether that point is behind all the planes defined by the polygons that share an edge with the one you just tested. If so, then the hit is on the surface of the object, though you should be careful about coplanar adjoining polygons and rounding errors.
Once you've found a collision you can easily get the length of the ray to the point of collision. The object with the shortest distance is the one that's in front.
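A rough sketch of that per-face test for a convex solid, assuming you have (or can derive) an outward-facing plane for every polygon; this is only an illustration, not code from the answer:

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // outward-facing plane: dot(n, p) + d == 0 on the face

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true (and the smallest ray parameter tHit) if the ray o + t*dir hits the convex
// solid described by its face planes; a hit point must lie behind every other face plane.
bool rayHitsConvex(const std::vector<Plane>& faces, const Vec3& o, const Vec3& dir, float& tHit)
{
    bool hit = false;
    for (std::size_t i = 0; i < faces.size(); ++i) {
        float denom = dot(faces[i].n, dir);
        if (std::fabs(denom) < 1e-6f) continue;                    // ray parallel to this face
        float t = -(dot(faces[i].n, o) + faces[i].d) / denom;
        if (t < 0.0f) continue;                                    // intersection behind the ray origin
        Vec3 p = { o.x + dir.x * t, o.y + dir.y * t, o.z + dir.z * t };
        bool inside = true;
        for (std::size_t j = 0; j < faces.size() && inside; ++j)
            if (j != i && dot(faces[j].n, p) + faces[j].d > 1e-4f)
                inside = false;                                    // point is in front of another plane
        if (inside && (!hit || t < tHit)) { tHit = t; hit = true; }
    }
    return hit;                                                    // tHit is the distance to the closest hit face
}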
If that's fast enough then great, otherwise you'll probably want to look into partitioning the world or breaking objects down to their silhouettes. Convex objects are really simple: consider all the edges that run between one polygon and the next. If exactly one of those two polygons is front facing, then the edge is part of the silhouette. All the silhouette edges together can be projected to a convex 2D shape on the view plane. You can then test touches by performing a 2D point-in-polygon test against that.
A further common alternative that eliminates most of the maths is picking. You'd render the scene to an invisible buffer with each object appearing as a solid blob in a suitably unique colour. To test for touch, you'd just do a glReadPixels and inspect the colour.
For the purposes of GLU on the iPhone, you can grab SGI's implementation (as used by Mesa). I've used its tessellator in a shipping, production project before.
I had that problem in the past. What I have used is an implementation of gluUnProject that you can find on Google (it uses the inverse of the model-view-projection matrix and the viewport size). This allows you to map the 2D screen coordinates to a 3D vector into the world. Then you can use this vector to intersect with your objects and see which one intersects (or comes really close to doing so).
I do hope there are better ways of doing this, so I look forward to other answers as well!
Once you get the inverse modelview and cast your ray (vector), you still need to know if the ray intersects your geometry. One approach would be to grab the depth (z in the view coordinate system) of the object's center and extend (stretch) your vector just that far. Then see if the vector's "head" ends within the volume of your object or not (you need the object's center and, e.g., its radius if it's a sphere).
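A minimal sketch of that idea in eye space (camera at the origin, looking down the negative z axis); dir is the unprojected touch direction, and objCenter/objRadius are assumed values for the object:

struct Vec3 { float x, y, z; };

// Stretch the touch vector to the object's depth and test whether its head lands inside the sphere.
// objCenter/objRadius: assumed centre and radius of the object, expressed in eye space.
bool touchHitsSphere(const Vec3& dir, const Vec3& objCenter, float objRadius)
{
    float t = objCenter.z / dir.z;                 // both z values are negative in front of the camera
    Vec3 head = { dir.x * t, dir.y * t, dir.z * t };
    float dx = head.x - objCenter.x;
    float dy = head.y - objCenter.y;
    float dz = head.z - objCenter.z;               // ~0 by construction, kept for clarity
    return dx * dx + dy * dy + dz * dz <= objRadius * objRadius;
}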

Best Z ordering method for tile-based, cocos2D-iPhone games?

I want to make isometric, tile-based, iPhone games with cocos2D.
Sprites need to be drawn on-top of other sprites that are "behind" it. I'm looking for the best way to do this.
I'd like to avoid the painter's algorithm because it involves sorting all the sprites every frame which is expensive.
The Z buffer algorithm is supported by the GPU and cocos2D so this is what I'd like to use, but there is a problem. Some sprites, like buildings for example, occupy multiple tiles. Assigning a Z value to such sprites is difficult.
These are the options I've thought of:
1. Comparing two buildings and determining which one is "in-front" is easy. So buildings can be sorted and then assigned a Z value based on the sort order. This wouldn't be any different from the painter's algorithm; the OpenGL ES Z buffer wouldn't be necessary.
2. Assign a Z value to each building based purely on its location on the map (without knowledge of where other buildings are). I'm finding this difficult. I think it is possible, but I haven't been able to come up with a formula yet.
3. Use multiple sprites for images that occupy more than one tile, so all sprites will be exactly the same size. Z orders can then be easily assigned based on what tile the sprite is occupying. The problem with this solution is that it makes the game logic much more complicated. All operations on a single building will have to be repeated for each sprite the building is made up of. I'd like to treat each object as a single entity.
4. Modify the cocos2D code to allow sprites to have multiple Z values at different points. If a sprite can have multiple Z values based on what tile a particular part of the sprite falls on, then calculating a Z value for that section is easy. I won't need to compare the sprite to any other sprites. I believe this is possible by using multiple quads for each sprite. The problem with this is that it is a bit complicated for me, since I am new to OpenGL ES and cocos2D and don't completely understand how all of the internal data structures work. Although it seems like the most elegant solution if a formula cannot be found.
I will up-vote any suggestions or references to helpful resources.
For #2, you can compute the Manhattan distance of the center of the object and use this value as the z-value of that object. It will work as long as you avoid very long objects in your map, like a 5x1 object or worse. But if you really need a long object to be placed in a tiled map, managing the z-order of objects in the map by setting a z-value using a formula is impossible.
To prove this:
1.) Place two 2x2 objects in a map horizontally and leave a unit tile between them.
2.) Place a 3x1 object between them. Let's name the 2x2 objects A and B, and the 3x1 object C.
3.) If you just rotate C (without changing its position), the z-order of A and B interchanges.
- If B is now in front, some objects behind B will be in front of A just because of the rotation of C. And it's costly to know which objects previously behind both A and B will end up in front of A after C's rotation.
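For the common case without long objects, a minimal sketch of the Manhattan-distance assignment from the first paragraph; the tile coordinates of the object's centre are assumed inputs:

// Derive a z value from the Manhattan distance of the object's centre tile; objects further
// "down" the isometric map get a larger value and are drawn in front.
int zOrderForTile(int tileX, int tileY)
{
    return tileX + tileY;
}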

Cocoa Touch - How to check if a non-rectangular object in a UIImageView intersects another object?

Say I have a UIImageView that contains an image of an object that is not rectangular, i.e. a round ball. How can I check if another UIImageView (rectangular or not) intersects, or contains a point in, that object (not its frame)?
Basic example:
I have two balls rolling around on the screen, and I want to check for collision. But I don't want to check if their rects intersect each other, since the balls are not rectangular.
I think if you have a limited set of possible shapes then it is better to perform the check for each possible pair of object shapes rather than use some generic algorithm. For example, two circles intersect if the distance between their centers is less than the sum of their radii, etc.
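For example, a minimal circle-vs-circle check (centres and radii in screen points) could look like this:

// Two circles intersect when the distance between their centres is no greater than the
// sum of their radii; comparing squared values avoids the square root.
bool circlesIntersect(float x1, float y1, float r1, float x2, float y2, float r2)
{
    float dx = x2 - x1;
    float dy = y2 - y1;
    float rSum = r1 + r2;
    return dx * dx + dy * dy <= rSum * rSum;
}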

Screen-to-World coordinate conversion in OpenGL ES: an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (CUBE) rendered in an EAGLView and I want to be able to detect when I am touching the center of a given face (from any orientation angle) of the cube. Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems would arise when I need to zoom or rotate the 3D space. Note: rotating & zooming in and out of the 3D space will change the relationship of the 2D screen coords with the 3D world coords... Also, you'd have to allow for 'distance' between the viewpoint and objects in 3D space. At first, this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but the task of creating such a framework would require some serious design and would likely take 'time' to do -- NOT something that can be one-manned in 4 hours...And 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4], GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
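Put together, the usage described above looks roughly like this (desktop-GL style, matching the PDF; on OpenGL ES you would keep your own copies of the matrices instead of querying them, and touchX/touchY are assumed touch coordinates with the origin at the top-left):

#include <GL/glu.h>   // desktop GLU; on the iPhone use a ported gluUnProject (see the links in this thread)

// Builds the pick ray through the given touch point as a near-plane and far-plane point.
void touchRay(double touchX, double touchY, GLdouble nearPt[3], GLdouble farPt[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winX = touchX;                  // touchX/touchY: assumed input, origin top-left
    GLdouble winY = viewport[3] - touchY;    // flip Y: GL window coordinates start at the bottom-left

    gluUnProject(winX, winY, 0.0, model, proj, viewport, &nearPt[0], &nearPt[1], &nearPt[2]); // near plane
    gluUnProject(winX, winY, 1.0, model, proj, viewport, &farPt[0],  &farPt[1],  &farPt[2]);  // far plane
    // The segment from nearPt to farPt is the pick ray in world coordinates.
}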
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
You need to have the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x,touch_y,-1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the farplane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read the color at the touch point from there, which tells you which object was touched.
Here's the source for a picking cursor I wrote for a little project using Bullet physics:
// Convert the touch/mouse position to normalized device coordinates (-1..1, Y flipped).
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;

// Unproject the touch point on the far plane; the w divide yields the world-space position.
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;

// Ray origin: the view matrix's translation column, nudged a tiny step towards the far point.
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;

// Cast the ray p1 -> p2 through the Bullet world and keep the closest hit.
btCollisionWorld::ClosestRayResultCallback rayCallback(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z), rayCallback);

if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        // The static world geometry was hit: clear any highlight.
        renderer->setHighlight(0);
    }
    else if (body)
    {
        // Otherwise highlight the entity stored in the body's user pointer.
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit ";
            //cerr << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for "opengl screen to world" (for example, there's a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it's not available on iPhone, so you have to port it (see this source from the Mesa project). Or maybe there's already some publicly available source somewhere?