Lightning effect in OpenGL ES - iPhone

Is there a way to create a lightning effect on the iPhone using OpenGL ES (like this app)?
Right now I have modified the GLPaint sample to draw random points around a line (between two points that the user touches) and then connect them, but the result is a zigzag line that constantly jumps around and lags horribly on the actual device.

You'll probably just want to make a triangle strip from the center of the screen to the point being touched, then apply a drawn lightning texture to the resulting polygon.
You can animate the texture in order to get the jumping lightning effect.
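A minimal sketch of that idea, assuming an OpenGL ES 1.x context, an orthographic projection in screen units, and an already-loaded lightning texture with GL_REPEAT wrapping (the names lightningTex, halfWidth, etc. are placeholders):

#include <math.h>
#include <OpenGLES/ES1/gl.h>

// Draw a textured quad (two-triangle strip) from the screen center (cx, cy)
// to the touch point (tx, ty). Scrolling the texture's t coordinate over
// time makes the bolt appear to flicker and jump.
void drawLightningStrip(float cx, float cy, float tx, float ty,
                        GLuint lightningTex, float time)
{
    float dx = tx - cx, dy = ty - cy;
    float len = sqrtf(dx * dx + dy * dy);
    if (len < 1.0f) return;

    float halfWidth = 16.0f;              // thickness of the bolt quad in pixels
    float px = -dy / len * halfWidth;     // perpendicular of the segment, scaled
    float py =  dx / len * halfWidth;

    GLfloat verts[] = {
        cx - px, cy - py,
        cx + px, cy + py,
        tx - px, ty - py,
        tx + px, ty + py,
    };
    float scroll = fmodf(time * 4.0f, 1.0f);   // animate by scrolling texcoords
    GLfloat texCoords[] = {
        0.0f, scroll,
        1.0f, scroll,
        0.0f, 1.0f + scroll,
        1.0f, 1.0f + scroll,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightningTex);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_TEXTURE_2D);
}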

A simple way to create a lightning effect is to compute the lightning path with a 2D Perlin noise function, render it to a glow buffer, blur that buffer with a Gaussian blur shader, and merge the result with the scene. You can make the lightning move by computing two paths (start and end) with an identical number of path nodes and moving each node of the start path successively towards the corresponding node of the end path. Once the end path has been reached, it becomes the start path and a new end path is computed.
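A rough C++ sketch of just the path-interpolation part; the Node/LightningPath types and the plain rand() jitter are placeholders (the answer suggests Perlin noise for generating the paths), and start and end are assumed to hold the same number of nodes:

#include <cstdlib>
#include <vector>

struct Node { float x, y; };

struct LightningPath {
    std::vector<Node> start, end, current;
    float t = 0.0f;                       // 0..1 interpolation factor

    void update(float dt) {
        t += dt;
        if (t >= 1.0f) {                  // end reached: it becomes the start
            start = end;
            regenerateEnd();
            t = 0.0f;
        }
        current.resize(start.size());
        for (size_t i = 0; i < start.size(); ++i) {
            current[i].x = start[i].x + (end[i].x - start[i].x) * t;
            current[i].y = start[i].y + (end[i].y - start[i].y) * t;
        }
    }

    void regenerateEnd() {
        // Jitter each node of the new end path; swap in Perlin noise here.
        for (Node& n : end) {
            n.x += (rand() / (float)RAND_MAX - 0.5f) * 20.0f;
            n.y += (rand() / (float)RAND_MAX - 0.5f) * 20.0f;
        }
    }
};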

Related

How do I check whether a line between two vector points falls within a particular area or set of vector coordinates?

Note: I am coding in Unity, using C# scripts. I cannot use trigger hit detection with raycasting because there are a lot of trigger colliders between the source and the target which detect touches and the like. So the ray hits the other triggers before even reaching its target, which is not desirable.
What I want to accomplish, basically, is to return a boolean value indicating whether a line between two vector points crosses or intersects a particular set of vector coordinates or an area.
For example: detecting a laser entering a patch of fog along its path while shooting at its target. The fog is a trigger-collider-based game object.
Edit: Another example would be to check whether a line crosses a 2D box area in a 2D graph. Keep in mind, I cannot use collision detection or a raycast hit here.
There is no need for code, just explain the concept of how it could be accomplished. Though a code snippet is also welcome. Thank you!
[...] So the ray hits the other triggers before even reaching its target, which is not desirable.
What about putting those colliders on separate layers? You can specify a LayerMask for raycasts.
You could consider the edges of the 2D box as four line segments and check for intersection of your line with each of these edges using a standard segment-segment intersection test (a sketch follows below). If, however, the line is small enough to fit entirely inside the box and you want to treat that as a valid intersection, also check whether the two endpoints of the line are inside the bounds of the box.
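A sketch of that test, written here in C++ (it ports directly to Unity C# with Vector2). The strict inequalities skip exactly-collinear touching cases for brevity:

struct Vec2 { float x, y; };

// Cross product of (a - o) and (b - o): the orientation of b relative to o->a.
static float cross(Vec2 o, Vec2 a, Vec2 b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Proper intersection of segments p1-p2 and q1-q2 (collinear overlaps ignored).
static bool segmentsIntersect(Vec2 p1, Vec2 p2, Vec2 q1, Vec2 q2) {
    float d1 = cross(q1, q2, p1);
    float d2 = cross(q1, q2, p2);
    float d3 = cross(p1, p2, q1);
    float d4 = cross(p1, p2, q2);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

static bool pointInBox(Vec2 p, Vec2 boxMin, Vec2 boxMax) {
    return p.x >= boxMin.x && p.x <= boxMax.x &&
           p.y >= boxMin.y && p.y <= boxMax.y;
}

// True if the segment a-b touches the axis-aligned box in any way.
bool segmentHitsBox(Vec2 a, Vec2 b, Vec2 boxMin, Vec2 boxMax) {
    Vec2 c1{boxMin.x, boxMin.y}, c2{boxMax.x, boxMin.y};
    Vec2 c3{boxMax.x, boxMax.y}, c4{boxMin.x, boxMax.y};
    if (pointInBox(a, boxMin, boxMax) || pointInBox(b, boxMin, boxMax))
        return true;                                  // segment (partly) inside
    return segmentsIntersect(a, b, c1, c2) ||         // bottom edge
           segmentsIntersect(a, b, c2, c3) ||         // right edge
           segmentsIntersect(a, b, c3, c4) ||         // top edge
           segmentsIntersect(a, b, c4, c1);           // left edge
}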

Find the angle of the face under the mouse pointer in Unity 3D

I have a projector component and I need to find the angle at which the projected texture falls, so I can exclude projection onto vertical faces.
My projector sits under the mouse pointer and works fine when it is over a horizontal face:
I would like the projector to switch off on vertical faces to avoid this bad effect:
If possible, I would like to do it in the shader code, so that the projected image never appears on a vertical face even when the cursor is located at the corner of a horizontal face and part of the projection "spills over" onto a vertical face.
I found this solution in C#:
RaycastHit hitInfo;
// MouseRay: the ray under the cursor, e.g. Camera.main.ScreenPointToRay(Input.mousePosition)
if (Physics.Raycast(MouseRay, out hitInfo)) {
    if (hitInfo.normal.y > 0) {
        // draw
    } else {
        // don't draw
    }
}
But it only works on curved surfaces and not, for example, on the faces of a cube.
How can I do this properly?
Normally you would use an image on a quad with TGA transparency, which rotates itself to match the face that the middle of the object is aligned to, using a ray to find the vertex and taking its normal.
Other ways of doing it would be quite tricky, perhaps using decals... If you did it in a shader it would take a long time; it's a case of the problem not being worth prioritising over fast development. Technically you can project a volumetric texture on top of whatever object you are using; that way you can add your barred circle, projected from a point in space towards the object, as a mathematical formula. It takes a while to do. Check out volumetric textures; I have written some, and in your case you would need to send the mouse position to the texture, plus the maths to add the transparent zone and the red zone to the texture. It takes all day.
It's fine to have a flat circle that flips around when you move the pointer onto a different face; it will just look like a physical card, and it's much easier to code: ten minutes instead of many hours.

Fill a touch-drawn path of CGPoints using OpenGL or CoreGraphics

I have an NSArray of points that make up a path. I can detect when it self-intersects. When this happens, I try to fill the path.
First I used CoreGraphics, now I'm using OpenGL to draw a triangle array. It doesn't work well, as you can see in the image.
How do I fill only the circular area while leaving the "tail" alone? I was thinking of a reverse flood fill but don't think CG has any API functions for this...
Maybe instead of actually drawing the path you can just approximate the diameter of the path and draw a circle with your approximation.
Here is some code to detect a circle gesture on the iPhone:
http://www.mobileorchard.com/iphone-circle-gesture-detection/
Record all of the points in a doubly-linked list. When it comes time to fill, walk the list from the start and find the point that's closest to the end. Then, lineto that point, then lineto each point in reverse order, stopping with the second point in the list. The fill will implicitly close the path, which will jump from where you left off (the second point) back to the start (first) point.
This is just off the top of my head; you can play with a couple of variations on this to see what works best. You might record the closest previous node in each node, but this could get expensive for many nodes.
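For what it's worth, a literal sketch of that walk using the CoreGraphics C API (compiles as C++). The pts vector and the guard that keeps the closest-point search away from the tail are my own additions; treat it as illustrative only:

#include <CoreGraphics/CoreGraphics.h>
#include <cmath>
#include <vector>

// Build one closed, fillable path from the recorded touch points, following
// the walk described above: append the drawn path, line back to the recorded
// point closest to the end, then retrace in reverse down to the second point.
CGPathRef buildClosedFillPath(const std::vector<CGPoint>& pts)
{
    if (pts.size() < 3) return NULL;

    // Walk from the start and find the point closest to the last point.
    // Only search the first three quarters so the search does not trivially
    // pick the points immediately before the end (a crude guard, not from
    // the answer above).
    size_t closest = 0;
    CGFloat best = INFINITY;
    const CGPoint last = pts.back();
    for (size_t i = 0; i < pts.size() * 3 / 4; ++i) {
        CGFloat d = hypot(pts[i].x - last.x, pts[i].y - last.y);
        if (d < best) { best = d; closest = i; }
    }

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, pts[0].x, pts[0].y);
    for (size_t i = 1; i < pts.size(); ++i)                // the drawn path itself
        CGPathAddLineToPoint(path, NULL, pts[i].x, pts[i].y);
    CGPathAddLineToPoint(path, NULL, pts[closest].x, pts[closest].y);
    for (size_t i = closest; i-- > 1; )                    // retrace in reverse
        CGPathAddLineToPoint(path, NULL, pts[i].x, pts[i].y);
    CGPathCloseSubpath(path);      // the fill implicitly closes back to pts[0]
    return path;                   // caller releases with CGPathRelease
}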

Is screen-to-world coordinate conversion in OpenGL ES an easy task?

The Screen-to-world problem on the iPhone
I have a 3D model (a cube) rendered in an EAGLView, and I want to be able to detect when I am touching the center of a given face of the cube (from any orientation angle). Sounds pretty easy, but it is not...
The problem:
How do I accurately relate screen coordinates (the touch point) to world coordinates (a location in OpenGL 3D space)? Sure, converting a given point into a 'percentage' of the screen/world axis might seem the logical fix, but problems arise when I need to zoom or rotate the 3D space. Note: rotating and zooming in and out of the 3D space will change the relationship between the 2D screen coords and the 3D world coords... Also, you have to allow for the 'distance' between the viewpoint and objects in 3D space. At first this might seem like an 'easy task', but that changes when you actually examine the requirements. And I've found no examples of people doing this on the iPhone. How is this normally done?
An 'easy' task?:
Sure, one might undertake the task of writing an API to act as a go-between between screen and world, but creating such a framework would require some serious design and would likely take time to do -- NOT something that can be one-manned in 4 hours... and 4 hours happens to be my deadline.
The question:
What are some of the simplest ways to know if I touched specific locations in 3D space in the iPhone OpenGL ES world?
You can now find gluUnProject in http://code.google.com/p/iphone-glu/. I've no association with the iphone-glu project and haven't tried it yet myself, just wanted to share the link.
How would you use such a function? This PDF mentions that:
The Utility Library routine gluUnProject() performs this reversal of the transformations. Given the three-dimensional window coordinates for a location and all the transformations that affected them, gluUnProject() returns the world coordinates from where it originated.
int gluUnProject(GLdouble winx, GLdouble winy, GLdouble winz,
                 const GLdouble modelMatrix[16], const GLdouble projMatrix[16],
                 const GLint viewport[4],
                 GLdouble *objx, GLdouble *objy, GLdouble *objz);
Map the specified window coordinates (winx, winy, winz) into object coordinates, using transformations defined by a modelview matrix (modelMatrix), projection matrix (projMatrix), and viewport (viewport). The resulting object coordinates are returned in objx, objy, and objz. The function returns GL_TRUE, indicating success, or GL_FALSE, indicating failure (such as a noninvertible matrix). This operation does not attempt to clip the coordinates to the viewport or eliminate depth values that fall outside of glDepthRange().
There are inherent difficulties in trying to reverse the transformation process. A two-dimensional screen location could have originated from anywhere on an entire line in three-dimensional space. To disambiguate the result, gluUnProject() requires that a window depth coordinate (winz) be provided and that winz be specified in terms of glDepthRange(). For the default values of glDepthRange(), winz at 0.0 will request the world coordinates of the transformed point at the near clipping plane, while winz at 1.0 will request the point at the far clipping plane.
Example 3-8 (again, see the PDF) demonstrates gluUnProject() by reading the mouse position and determining the three-dimensional points at the near and far clipping planes from which it was transformed. The computed world coordinates are printed to standard output, but the rendered window itself is just black.
In terms of performance, I found this quickly via Google as an example of what you might not want to do using gluUnProject, with a link to what might lead to a better alternative. I have absolutely no idea how applicable it is to the iPhone, as I'm still a newb with OpenGL ES. Ask me again in a month. ;-)
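To make the usage concrete, here is a small sketch of the near/far-plane pattern described in that quote. It assumes you have a GLU port available (the header name, and the fact that you already have the matrices and viewport on hand, are assumptions):

#include "glu.h"   // assumed: header from a GLU port such as the iphone-glu project above

// Turn a touch point into a pick ray by unprojecting it at the near plane
// (winz = 0.0) and the far plane (winz = 1.0).
void touchToRay(float touchX, float touchY,
                const GLdouble modelview[16], const GLdouble projection[16],
                const GLint viewport[4],
                GLdouble rayNear[3], GLdouble rayFar[3])
{
    // OpenGL window coordinates have a bottom-left origin, so flip y.
    GLdouble winX = touchX;
    GLdouble winY = viewport[3] - touchY;

    gluUnProject(winX, winY, 0.0, modelview, projection, viewport,
                 &rayNear[0], &rayNear[1], &rayNear[2]);
    gluUnProject(winX, winY, 1.0, modelview, projection, viewport,
                 &rayFar[0], &rayFar[1], &rayFar[2]);
    // The segment rayNear -> rayFar can now be tested against the cube's
    // faces or a bounding box to decide what was touched.
}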
You need to have the OpenGL projection and modelview matrices. Multiply them to get the modelview-projection matrix. Invert this matrix to get a matrix that transforms clip-space coordinates into world coordinates. Transform your touch point so it corresponds to clip coordinates: the center of the screen should be zero, while the edges should be +1/-1 for X and Y respectively.
Construct two points, one at (0,0,0) and one at (touch_x, touch_y, -1), and transform both by the inverse modelview-projection matrix.
Do the inverse of a perspective divide.
You should get two points describing a line from the center of the camera into "the far distance" (the farplane).
Do picking based on simplified bounding boxes of your models. You should be able to find ray/box intersection algorithms aplenty on the web.
Another solution is to paint each of the models in a slightly different color into an offscreen buffer and read the color at the touch point from there, telling you which object was touched.
Here's source for a cursor I wrote for a little project using bullet physics:
// Convert the mouse position to normalized device coordinates (-1..1).
float x = ((float)mpos.x / screensize.x) *  2.0f - 1.0f;
float y = ((float)mpos.y / screensize.y) * -2.0f + 1.0f;
// Unproject the far point and undo the perspective divide.
p2 = renderer->camera.unProject(vec4(x, y, 1.0f, 1));
p2 /= p2.w;
// Ray origin: the camera position, nudged slightly towards the far point.
vec4 pos = activecam.GetView().col_t;
p1 = pos + (((vec3)p2 - (vec3)pos) / 2048.0f * 0.1f);
p1.w = 1.0f;
// Cast the ray through the physics world and highlight the closest hit.
btCollisionWorld::ClosestRayResultCallback rayCallback(
    btVector3(p1.x, p1.y, p1.z), btVector3(p2.x, p2.y, p2.z));
game.dynamicsWorld->rayTest(btVector3(p1.x, p1.y, p1.z),
                            btVector3(p2.x, p2.y, p2.z), rayCallback);
if (rayCallback.hasHit())
{
    btRigidBody* body = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (body == game.worldBody)
    {
        renderer->setHighlight(0);
    }
    else if (body)
    {
        Entity* ent = (Entity*)body->getUserPointer();
        if (ent)
        {
            renderer->setHighlight(dynamic_cast<ModelEntity*>(ent));
            //cerr << "hit ";
            //cerr << ent->getName() << endl;
        }
    }
}
Imagine a line that extends from the viewer's eye through the screen touch point into your 3D model space. If that line intersects any of the cube's faces, then the user has touched the cube.
Two solutions present themselves. Both of them should achieve the end goal, albeit by a different means: rather than answering "what world coordinate is under the mouse?", they answer the question "what object is rendered under the mouse?".
One is to draw a simplified version of your model to an off-screen buffer, rendering the center of each face using a distinct color (and adjusting the lighting so color is preserved identically). You can then detect those colors in the buffer (e.g. pixmap), and map mouse locations to them.
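A sketch of that first approach in ES 1.x terms; drawFace() and the face count stand in for your own geometry code, and the picking pass should go into the back buffer (or an offscreen framebuffer) and be read back before presenting:

#include <OpenGLES/ES1/gl.h>

extern void drawFace(int index);   // assumed: draws face 'index' with no color or texture of its own

// Render each face flat-shaded in a unique color, then read back the pixel
// under the touch point to find out which face (if any) is there.
int pickFace(int touchX, int touchY, int viewportHeight, int faceCount)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < faceCount; ++i) {
        glColor4ub((GLubyte)(i + 1), 0, 0, 255);   // encode the face index in red
        drawFace(i);
    }

    GLubyte pixel[4];
    glReadPixels(touchX, viewportHeight - touchY, 1, 1,   // flip y: bottom-left origin
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    return (int)pixel[0] - 1;                             // -1 means nothing was hit
}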
The other is to use OpenGL picking. There's a decent-looking tutorial here. The basic idea is to put OpenGL in select mode, restrict the viewport to a small (perhaps 3x3 or 5x5) window around the point of interest, and then render the scene (or a simplified version of it) using OpenGL "names" (integer identifiers) to identify the components making up each face. At the end of this process, OpenGL can give you a list of the names that were rendered in the selection viewport. Mapping these identifiers back to original objects will let you determine what object is under the mouse cursor.
Google for opengl screen to world (for example, there’s a thread where somebody wants to do exactly what you are looking for on GameDev.net). There is a gluUnProject function that does precisely this, but it’s not available on the iPhone, so you have to port it (see this source from the Mesa project). Or maybe there’s already some publicly available source somewhere?

How can I render reflections in OpenGL ES on the iPhone without a stencil buffer?

I'm looking for an alternative technique for rendering reflections in OpenGL ES on the iPhone. Usually I would do this by using the stencil buffer to mark where the reflection can be seen (the reflective surface) and then render the reversed image only in those pixels. Thus when the reflected object moves off the surface its reflection is no longer seen. However, since the iPhone's implementation doesn't support the stencil buffer I can't determine how to hide the portions of the reflection that fall outside of the surface.
To clarify, the issue isn't rendering the reflections themselves, but hiding them when they wouldn't be visible.
Any ideas?
Render the reflected scene first; copy out to a texture using glCopyTexImage2D; clear the framebuffer; draw the scene proper, applying the copied texture to the reflective surface.
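A sketch of that sequence in ES 1.x calls; drawReflectedScene(), drawScene() and drawReflectiveSurface() stand in for your own rendering code, and texSize is assumed to be a power of two that fits inside the viewport:

#include <OpenGLES/ES1/gl.h>

extern void drawReflectedScene();     // assumed: scene mirrored about the reflective plane
extern void drawScene();              // assumed: the scene proper
extern void drawReflectiveSurface();  // assumed: the reflective surface, with texcoords

void renderWithReflection(GLuint reflectionTex, GLint texSize)
{
    // 1. Draw the scene mirrored about the reflective plane.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawReflectedScene();

    // 2. Copy what was just rendered into the reflection texture.
    glBindTexture(GL_TEXTURE_2D, reflectionTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, texSize, texSize, 0);

    // 3. Clear and draw the scene proper.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();

    // 4. Draw the reflective surface textured with the copied image, so the
    //    reflection is only ever visible where the surface itself is drawn.
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, reflectionTex);
    drawReflectiveSurface();
    glDisable(GL_TEXTURE_2D);
}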
I don't have an answer for reflections, but here's how I'm doing shadows without the stencil buffer, perhaps it will give you an idea:
I perform basic front-face/back-face determination of the mesh from the point of view of the light source. I then get a list of all edges that connect a front triangle to a back triangle. I treat this edge list as a line "loop". I project the vertices of this loop along the object-light ray until they intersect the ground. These intersection points are then used to calculate a 2D polygon on the same plane as the ground. I then use a tessellation algorithm to turn that polygon into triangles. (This works fine as long as your light sources or objects don't move too often.)
Once I have the triangles, I render them with a slight offset such that the depth buffer will allow the shadow to pass. Alternatively you can use a decaling algorithm such as the one in the Red Book.
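For reference, the projection step in that description (pushing a silhouette vertex along the ray from the light onto a ground plane y = groundY) boils down to a one-line ray/plane solve. A quick C++ sketch, assuming a point light above the ground plane:

struct Vec3 { float x, y, z; };

// Project vertex v along the ray from lightPos through v onto the plane y = groundY.
Vec3 projectOntoGround(const Vec3& lightPos, const Vec3& v, float groundY)
{
    // Parametrize the ray p(t) = lightPos + t * (v - lightPos) and solve
    // for the t at which p(t).y == groundY.
    float t = (groundY - lightPos.y) / (v.y - lightPos.y);
    Vec3 p;
    p.x = lightPos.x + t * (v.x - lightPos.x);
    p.y = groundY;
    p.z = lightPos.z + t * (v.z - lightPos.z);
    return p;
}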