Drawing sprite in OpenGL ES that scales with distance - iphone

I'm building a game in OpenGL ES 1 that involves a terrain map shown in perspective. I want to draw some sprites on the map that scale with distance. I'm able to draw sprites, but they're always the same size no matter how far away they are from the camera.
I believe I could dynamically calculate the size based on the distance from the camera, the viewport width, etc., but I'd much prefer having the size calculated automatically.
Here's my code:
GLfloat quadratic[] = { 1.0f, 0.0f, 0.0f };  // {constant, linear, quadratic} distance attenuation coefficients
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, quadratic);
glPointSize(40.0f);
glPointParameterf(GL_POINT_SIZE_MAX, maxSize);
glPointParameterf(GL_POINT_SIZE_MIN, 1.0f);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnable(GL_POINT_SPRITE_OES);

GLfloat point_array[] =
{
    territoryOrigin.x, territoryOrigin.y, 10.0f,
};
glVertexPointer(3, GL_FLOAT, 0, point_array);
glDrawArrays(GL_POINTS, 0, 1);

glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_FALSE);
glDisable(GL_POINT_SPRITE_OES);

Ok, I figured this out. Basically, I draw quads that act like 'pop-up' cutouts, where the angle of the pop-up is determined by the current viewing rotation. Then I disable the depth test while drawing them so they don't cut into the 3D terrain they're drawn on. The benefit of this approach is that I don't need to calculate a scale value - scaling is taken care of automatically because I'm drawing regular quads in a perspective viewport.
First, I need to determine the viewing angle on a scale from 0 (overhead) to pi/2 (ground-level). I do that with this equation:
viewingAngle = (currentRotation / 90.0) * M_PI_2;
currentRotation is simply the angle I'm using in glRotatef. Given the viewing angle, I can calculate the vertical height and depth of the 'pop-up' edge of the quad. Basically, it's simple trigonometry from here. Imagine looking at the pop-up cutout from the side. It has a fixed base, and it has an edge that raises from a horizontal position to a vertical position. This edge traces the outline of a circle quadrant. And at any given point, it forms a right triangle you can use to calculate position values.
If you know the angle (as seen in the snippet above) and the hypotenuse (which in this case is the y-height of the pop-up image texture as if it were lying flat), then you can solve for the opposite side of the triangle by multiplying the sine of the angle by the hypotenuse. This value corresponds to the depth at which the pop-up edge must be lifted off the ground. Since sin(angle) = opposite/hypotenuse, I can solve for 'opposite' as such:
popUpValueZ = sinf(viewingAngle) * imageHeight;
Next, I needed to calculate the y-size of the pop-up image. In the imaginary triangle above, this corresponds to the side adjacent to the pop-up angle. As such, cosine is used to calculate its value:
popUpValueY = cosf(viewingAngle) * imageHeight;
Now I can use popUpValueY and popUpValueZ to determine my vertices. They will act as the height and depth of my quad, respectively. As the viewing angle gets lower to the ground, the Z value increases off the ground and the Y value gets shorter and shorter, so that it begins to resemble a vertical plane instead of a horizontal one.
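For reference, here's roughly how those two values could turn into quad vertices. This is only a sketch, assuming the terrain lies in the XY plane with Z pointing up; spriteX, spriteY, groundZ and imageWidth are placeholder names, and texturing is omitted:
// Hypothetical sketch of building the pop-up quad from popUpValueY and popUpValueZ.
GLfloat halfW = imageWidth / 2.0f;
GLfloat quad[] = {
    // bottom edge stays on the ground
    spriteX - halfW, spriteY,               groundZ,
    spriteX + halfW, spriteY,               groundZ,
    // top edge tilts back by popUpValueY and lifts up by popUpValueZ
    spriteX - halfW, spriteY + popUpValueY, groundZ + popUpValueZ,
    spriteX + halfW, spriteY + popUpValueY, groundZ + popUpValueZ,
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);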
The other thing I had to do:
glDisable(GL_DEPTH_TEST);
I found that these pop-up 'pseudo-sprites' were fighting with the 3D terrain, so I simply disabled the depth test before drawing them. This way, they scale exactly as they should based on their position within the perspective viewport, but they always appear on top of anything drawn earlier than them. In my particular case I do want these sprites to be occluded by terrain drawn in front of them, so I just re-enabled depth testing when drawing the closer terrain.
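The resulting draw order looks roughly like this (the draw functions are placeholders, not from my actual code):
// Sketch of the draw order described above; drawFarTerrain() etc. are hypothetical.
drawFarTerrain();               // depth test on as usual
glDisable(GL_DEPTH_TEST);
drawPopUpSprites();             // pop-up quads won't cut into the terrain beneath them
glEnable(GL_DEPTH_TEST);
drawNearTerrain();              // terrain in front can still occlude the sprites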

Related

Cull off parts above the mesh

So, I want to make a scene like this Sphere Scene.
I have a randomly generated mesh as the ground, plus a sphere. But I don't know how to cull away the sphere's geometry that lies above the mesh. I tried using the stencil buffer and a height map. With the stencil, the ground renders in front, but the part of the sphere above the ground is still rendered. Using a height map to decide whether something should be rendered (I compared the height map against worldPos) is problematic, because the texture is mapped over the whole sphere rather than projected onto it. Can you help? Is there a shader technique to cull away everything above the mesh?
I did something similar for an Asteroids demo a few years ago. Whenever an asteroid was hit, I used a height map - really, just a noise map - to offset half of the vertices on the asteroid model to give it a broken-in-half look. For the other half, I just duplicated the asteroid model and offset the other half using the same noise map. The effect is that the two "halves" matched perfectly.
Here's what I'd try:
Your sphere model should be a complete sphere.
You'll need a height map for the terrain.
In your sphere's vertex shader, for any vertex north of the equator:
Sample the height map.
Set the vertex's Y coordinate to the height from the height map. This will effectively flatten the top of the sphere, and then offset it based on your height map. You will likely have to scale the height value here to get something rational.
Transform the new x,y,z as usual.
Note that you are not texturing the sphere. You're modifying the geometry. This needs to happen in the geometry part of the pipeline, not in the fragment shader.
The other thing you'll need to consider is how to add the debris - rocks, etc. - so that it matches the geometry offset on the sphere. Since you've got a height map, that should be straightforward.
To start with, I'd just get your vertex shader to flatten the top half of the sphere. Once that works, add in the height map.
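A rough GLSL sketch of that vertex-shader idea (the uniform/attribute names, the 0..1 height-map mapping, and the height scale are all assumptions you'd adapt to your scene):
// Hypothetical vertex shader sketch: clamp the top half of the sphere to the
// terrain height sampled from a height map. Assumes Y is up, the sphere is
// centered at the origin, and the GPU supports vertex texture fetch.
uniform mat4 u_mvpMatrix;
uniform sampler2D u_heightMap;   // terrain height map
uniform float u_heightScale;     // scales raw height values into world units

attribute vec3 a_position;       // sphere vertex in model space

void main() {
    vec3 pos = a_position;
    if (pos.y > 0.0) {
        // Map the vertex's XZ position into the height map's 0..1 range
        // (assumes the terrain covers -1..1 in X and Z; adjust for your map).
        vec2 uv = pos.xz * 0.5 + 0.5;
        float h = texture2D(u_heightMap, uv).r * u_heightScale;
        pos.y = h;               // flatten the top of the sphere onto the terrain
    }
    gl_Position = u_mvpMatrix * vec4(pos, 1.0);
}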
For this to look convincing, you'll need a fairly high-resolution sphere and height map. To cut down on geometry, you could use a plane for the terrain and a hemisphere for the bottom part. Just discard any fragment for the plane that is not within the spherical volume you're interested in. (You could also use a circular "plane" rather than a rectangular plane, but getting the vertices to line up with the sphere and filling in holes at the border can be tricky.)
As I realised, there's no standard way to cull it without artifacts. The only way to do it is with ray-marching rendering.

How to resize a window in openGL

I'm making a game in OpenGL. I have a viewport that is multiplied by a transformation matrix using glOrthof(). I'm almost done with the game, but I've made a last minute decision to scale everything down a little bit to increase visibility. I have included a diagram depicting how my screen is currently set up (the black box) and how I would like to scale it (the red box).
Given the width and height of the black box, and x and y in the diagram, would it be possible to adjust the viewport, or perhaps do some sort of matrix multiplication to increase the window size?
I don't want to actually scale the game, I just want to increase the size of the window (which I guess will ultimately scale the game, but I want to preserve the relative scale).
Right now, this is how I'm setting up my view:
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-backingWidth/2.0, backingWidth/2.0, -backingHeight/2.0, backingHeight/2.0, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
where backingWidth and backingHeight are the width and height of the black box, respectively.
I'm pretty new to OpenGL, so all help is appreciated.
If you want to see more area in the same viewport size, you can just increase the values given to glOrthof.
If you double the top/left/bottom/right values given to glOrthof, you will see twice as much in each direction, and everything will appear half its original size in each direction, because you're fitting twice as much content into the same number of pixels.
You can multiply glOrthof by any scale factor you want.
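For example, with the setup from the question, a minimal sketch (scale is a factor you choose; 2.0 shows twice as much in each direction):
// Hypothetical sketch: zoom out by `scale` while keeping the view centered.
GLfloat scale = 2.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-scale * backingWidth / 2.0f,  scale * backingWidth / 2.0f,
         -scale * backingHeight / 2.0f, scale * backingHeight / 2.0f,
         -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);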
==EDIT==
Sorry, I noticed that you only want to extend the view in X, to the right, and not scale about the center. In that case, leave the left value the same and just add more to the right value of glOrthof.
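Continuing the sketch above, assuming extraWidth is how much more world space you want to see on the right (a made-up name corresponding to the x in your diagram):
// Keep the left, bottom and top edges fixed; only push the right edge out.
glOrthof(-backingWidth / 2.0f, backingWidth / 2.0f + extraWidth,
         -backingHeight / 2.0f, backingHeight / 2.0f,
         -1.0f, 1.0f);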

Problem with glTranslatef

I use the glTranslate command to shift the position of a sprite that I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the image. The issue is a bit subtle: instead of thinking of the screen and the texture as arrays of points, think of them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared, and scaled). For things to look crisp, the grids must align perfectly.
The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels; they hit the outer edges of the texture sheet. So you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must be aligned with pixel edges, not pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) doesn't address the lower-left pixel, but the lower-left corner of the viewport
// (width,height) similarly addresses the upper-right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture (i.e. its vertex extents match the texture size in pixels) and the vertex positions are integers.
Now the interesting part: what if those conditions aren't met? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off-grid they are (in layman's terms; the actual implementation looks slightly different).
So, to solve your issue: draw sprites with a projection/modelview that matches the viewport, use only integer values for the vertex coordinates, and make the texture cover the whole quad. If you're using only part of the texture coordinate range, things get even more interesting, since texture coordinates address the texture grid, not the texel centers.
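As a rough sketch of that advice (assuming the ortho setup above and a sprite texture of texW x texH pixels; the variable names are made up):
// Hypothetical sketch: draw a sprite 1:1 with screen pixels.
// The position is snapped to whole pixels so the texel grid matches the pixel grid.
GLfloat x = roundf(spriteX), y = roundf(spriteY);
GLfloat verts[] = {
    x,        y,
    x + texW, y,
    x,        y + texH,
    x + texW, y + texH,
};
GLfloat texcoords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);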
I would recommend looking at your modelview matrix setup and making sure that glLoadIdentity() is being called, to ensure that the matrix stack is clean before applying the transform.

How to set up a user Quartz2D coordinate system with scaling that avoids fuzzy drawing?

This topic has been scratched once or twice, but I am still puzzled. And Google was not friendly either.
Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floorplans using real-life coordinate, e.g. feet.
So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 10-inch box for example), I get a 60x60 pixels rectangle.
It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why. However, I'm not sure I understood that reason why, and moreover, I don't know how to fix it. Here is my code:
I set my coordinate system in my awakeFromNib custom view method:
- (void) awakeFromNib {
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    self.transform = scale;
}
And here is my draw routine:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect r = CGRectMake(10., 10., 10., 10.);
    CGFloat lineWidth = 1.0;
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}
The square I get is scaled just fine, but totally fuzzy. Playing with lineWidth doesn't help: when lineWidth is set smaller, it gets lighter, but not crisper.
So is there a way to set up a view to have a scaled coordinate system, so that I can use my domain coordinates? Or should I go back and implementing scaling in my drawing routines?
Note that this issue doesn't occur for translation or rotation.
Thanks
the [stroked] rectangle I get is quite fuzzy.
Usually, this is because you plotted the rectangle on whole-number co-ordinates and your line width is 1.
In PostScript (and thus in its descendants: AppKit, PDF, and Quartz), drawing units default to points, 1 point being exactly 1/72 inch. The Mac and iPhone currently* treat every such point as 1 pixel, regardless of the actual resolution of the screen(s), so, in a practical sense, points (by default, on the Mac and iPhone) are equal to pixels.
In PostScript and its descendants, integral co-ordinates run between points. 0, 0, for example, is the lower-left corner of the lower-left point. 1, 0 is the lower-right corner of that same point (and the lower-left corner of the next point to the right).
A stroke is centered on the path you're stroking. Thus, half will be inside the path, half outside.
In the (conceptually) 72-dpi world of the Mac, these two facts combine to produce a problem. If 1 pt is equal to 1 pixel, and you apply a 1-pt stroke between two pixels, then half of the stroke will hit each of those pixels.
Quartz, at least, will render this by painting the current color into both pixels at one-half of the color's alpha. It determines this by how much of the pixel is covered by the conceptual stroke; if you used a 1.5-pt line width, half of that is 0.75 pt, which is three-quarters of each 1-pt pixel, so the color will be rendered at 0.75 alpha. This, of course, goes to the natural conclusion: If you use a 2-pt line width, each pixel is completely covered, so the alpha will be 1. That's why you can see this effect with a 1-pt stroke and not a 2-pt stroke.
There are several workarounds:
Half-point translation: Exactly what it says on the box, you translate up and right by half a point, compensating for the aforementioned 1-pt-cut-in-half division (see the sketch after this list).
This works in simple cases, but flakes out when you involve any other co-ordinate transformations except whole-point translations. That is to say, you can translate by 30, 20 and it'll still work, but if you translate by 33+1/3, 25.252525…, or if you scale or rotate at all, your half-point translation will be useless.
Inner stroke: Clip first, then double the line width (because you're only going to draw half of it), then stroke.
This can require gstate juggling if you have a lot of other drawing to do, since you don't want that clipping path affecting your other drawing.
Outer stroke: Essentially the same as an inner stroke, except that you reverse the path before clipping.
Can be better (less gstate juggling) than an inner stroke if you're sure that the paths you want to stroke won't overlap. On the other hand, if you also want to fill the path, the gstate juggling returns.
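A minimal sketch of the first workaround (half-point translation), assuming a plain CGContext with no other transforms applied:
// Hypothetical sketch: offset by half a point so a 1-pt stroke on whole-number
// coordinates lands on pixel centers instead of straddling two pixels.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.5, 0.5);
CGContextStrokeRectWithWidth(context, CGRectMake(10.0, 10.0, 10.0, 10.0), 1.0);
CGContextRestoreGState(context);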
*This won't last forever. Apple's been dropping hints for some time that they're going to change at least the Mac's drawing resolution at some point. The API foundation for such a change is pretty much all there now; it's all a matter of Apple throwing the switch.
Well, as often happens, explaining the issue led me to a solution.
The problem is that the view's transform property is applied after the view has been drawn into a bitmap buffer. The scaling transform has to be applied before drawing, i.e. in the drawRect: method. So scratch the awakeFromNib I gave above; here is a corrected drawRect::
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Scale the CTM before drawing, so one user-space unit maps to 6 pixels.
    CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
    CGContextConcatCTM(context, scale);

    CGRect r = CGRectMake(10., 10., 10., 10.);
    CGFloat lineWidth = 0.1;   // 0.1 unit x 6 = 0.6 point on screen
    CGContextStrokeRectWithWidth(context, r, lineWidth);
}

OpenGL ES - how to keep some object at a fixed size?

I'm working on a little game in OpenGL ES.
In the background, there is a world/map. The map is just a large texture.
Zoom/pinch/pan is used to move around. And I'm using glOrthof (left, right, bottom, top, zNear, zFar) to implement the zoom/pinch.
When I zoom in, the sprites on top of the map are also zoomed in. But I would like to have some sprites stay at a fixed size.
I could probably calculate a scale factor, depending on the parameters to glOrthof, but there must be a more natural and straightforward way of doing that, instead of scaling the sprites down when I zoom in.
If I add some text or some GUI elements on top of the map, they should definitely have a fixed size.
Is there a solution to do this, or do I have to leave fixed values in glOrthof and implement zoom/pinch in another way?
EDIT: To be more clear: I want sprites that are anchored to the map as it zooms in/out, but stay at the same size on screen.
I have some elements that are like the pins on the iPhone's map application. When you zoom, the pins stay the same size, but move around on the screen to stay on the same spot on the map. That is mainly what I want a solution for.
Solutions for this already came below, thanks!
First call glOrthof with the settings you have, then draw the things that scale. Then make another call to glOrthof with different settings (after glLoadIdentity probably), and then draw the things that should not be scaled.
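A minimal sketch of that two-pass idea (the draw functions and zoom variables are placeholders):
// Pass 1: zoomed projection for the map and anything that should scale with it.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(zoomLeft, zoomRight, zoomBottom, zoomTop, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
drawMapAndScaledSprites();

// Pass 2: fixed, screen-space projection for GUI/text that must not scale.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, screenWidth, 0.0f, screenHeight, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
drawFixedSizeOverlay();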
You can use something like this to draw fixed-size elements at a given 3D position, keeping the current projection settings (note that glRasterPos, glBitmap and glDrawPixels are desktop OpenGL calls, not available in OpenGL ES):
// go to the correct 3D position; the raster position is transformed by the
// current modelview/projection, but what's drawn afterwards is in pixels
double v[3] = { x, y, z };
glRasterPos3dv( v );
// offset the raster position (in pixels) so the sprite is centered on the point
glBitmap( 0, 0, 0, 0, -center_pix_x, -center_pix_y, NULL );
// and draw the pixels (glDrawPixels reads client memory, so use the UNPACK parameters)
glPixelStorei( GL_UNPACK_LSB_FIRST, GL_TRUE );
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glDrawPixels( img_width, img_height, GL_RGBA, GL_UNSIGNED_BYTE, img_data_ptr );
center_pix_x and center_pix_y are the coordinates of the reference point in the sprite that will be aligned with the 3D point.
Found one solution in this thread:
Drawing "point-like" shapes in OpenGL, indifferent to zoom
Point sprites... Apple's GLPaint example also uses this.
Quite simple to use. Uses the current texture.
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(40.0f);                              // size in pixels, independent of zoom
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);   // one 2D point per sprite
glDrawArrays(GL_POINTS, 0, 4);
These will move when the map moves, but do not change size.
Edit: A small tip: Remember that the point coordinate is the middle of the texture, not a corner or anything. I struggled a bit with my sprites apparently "moving", because I used only the 35x35 upper left pixels in a 64x64 texture. Move your graphics to the middle of the texture and you'll be fine.