How to resize a window in OpenGL - iPhone

I'm making a game in OpenGL. I have a viewport that is multiplied by a transformation matrix using glOrthof(). I'm almost done with the game, but I've made a last minute decision to scale everything down a little bit to increase visibility. I have included a diagram depicting how my screen is currently set up (the black box) and how I would like to scale it (the red box).
Given the width and height of the black box, and x and y in the diagram, is it possible to adjust the viewport, or perhaps apply some matrix multiplication, to increase the window size?
I don't want to actually scale the game, I just want to increase the size of the window (which I guess will ultimately scale the game, but I want to preserve the relative scale).
Right now, this is how I'm setting up my view:
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(-backingWidth/2.0, backingWidth/2.0, -backingHeight/2.0, backingHeight/2.0, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
where backingWidth and backingHeight are the width and height of the black box, respectively.
I'm pretty new to OpenGL, so all help is appreciated.

If you want to see more area in the same viewport size, you can just increase the values given to glOrthof.
If you make the left/right/bottom/top values passed to glOrthof twice as large, you will see twice as much in each direction, and everything will appear half its original size in each direction, because you're putting twice as much content into the same number of pixels.
You can multiply glOrthof by any scale factor you want.
==EDIT==
Sorry, I noticed that you want to extend only along X to the right, not scale about the center. In that case, leave the left value the same and just add more to the right value passed to glOrthof.

Related

Drawing sprite in OpenGL ES that scales with distance

I'm building a game in OpenGL ES 1 that involves a terrain map shown in perspective. I want to draw some sprites on the map that scale with distance. I'm able to draw sprites, but they're always the same size no matter how far away they are from the camera.
I believe I could dynamically calculate the size based on the distance from the camera, the viewport width, etc., but I'd much prefer having the size calculated automatically.
Here's my code:
GLfloat quadratic[] = { 1.0f, 0.0f, 0.0f };
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, quadratic);
glPointSize(40);
glPointParameterf(GL_POINT_SIZE_MAX, maxSize);
glPointParameterf(GL_POINT_SIZE_MIN, 1.0f);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnable(GL_POINT_SPRITE_OES);
GLfloat point_array[] =
{
territoryOrigin.x, territoryOrigin.y, 10.0,
};
glVertexPointer(3, GL_FLOAT, 0, point_array);
glDrawArrays(GL_POINTS, 0, 1);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_FALSE);
glDisable(GL_POINT_SPRITE_OES);
Ok, I figured this out. Basically I drew quads that act like 'pop-up' cutouts where the angle of the pop-up is determined by the current viewing rotation. Then I disable the depth test when performing the drawing so they don't cut into the 3D terrain they're drawn on. The benefit of this approach is I don't need to calculate a scale value - it's taken care of because I'm drawing regular quads in a perspective viewport.
First, I need to determine the viewing angle on a scale from 0 (overhead) to pi/2 (ground-level). I do that with this equation:
viewingAngle = (currentRotation / 90.0) * M_PI_2;
currentRotation is simply the angle I'm using in glRotatef. Given the viewing angle, I can calculate the vertical height and depth of the 'pop-up' edge of the quad. Basically, it's simple trigonometry from here. Imagine looking at the pop-up cutout from the side. It has a fixed base, and it has an edge that raises from a horizontal position to a vertical position. This edge traces the outline of a circle quadrant. And at any given point, it forms a right triangle you can use to calculate position values.
If you know the angle (as seen in the snippet above) and the hypotenuse (which in this case is the y-height of the pop-up image texture as if it were lying flat), then you can solve for the opposite side of the triangle by multiplying the sine of the angle by the hypotenuse. This value corresponds to the depth at which the pop-up edge must be lifted off the ground. Since sin(angle) = opposite/hypotenuse, I can solve for 'opposite' as follows:
popUpValueZ = sinf(viewingAngle) * imageHeight;
Next, I needed to calculate the y-size of the pop-up image. In the imaginary triangle above, this corresponds to the side adjacent to the pop-up angle. As such, cosine is used to calculate its value:
popUpValueY = cosf(viewingAngle) * imageHeight;
Now I can use popUpValueY and popUpValueZ to determine my vertices. They will act as the height and depth of my quad, respectively. As the viewing angle gets lower to the ground, the Z value increases off the ground and the Y value gets shorter and shorter, so that it begins to resemble a vertical plane instead of a horizontal one.
The other thing I had to do:
glDisable(GL_DEPTH_TEST);
I found that these pop-up 'pseudo-sprites' were fighting with the 3D terrain, so I simply disabled the depth test before drawing them. This way, they scale exactly as they should based on their position within the perspective viewport, but they always appear on top of anything drawn earlier than them. In my particular case I do want these sprites to be occluded by terrain drawn in front of it, so I just re-enabled depth testing when drawing the closer terrain.

With a 2D iPhone OpenGL ES 1.1 app, how do I get my depth buffer working for textures?

I'm making a 2D videogame. Right now I don't have that many sprites and one texture with no depth buffer works fine. But when I expand to multiple textures I want to use a depth buffer so that I don't have to make multiple passes over the same texture and so that I don't have to organize my textures with respect to any depth constraints.
When I try to get the depth buffer working I can only get a blank screen with the correct clear color. I'm going to explain my working setup without the depth buffer and list questions I have for upgrading to the depth buffer:
Right now my vertices only have position(x,y) and texture(x,y) coords. There is nothing else. No lighting, no normals, no color, etc. Is it correct that the only upgrade I have to make here is to add a z coord to my position?
Right now I am using:
glOrthof(-2, 2, -3, 3, -1, 1);
this works with no depth buffer. But when I add the depth buffer I think I need to change the near and far values. What should I change them to?
Right now for my glTexImage2D() I am using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x, size.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
when I add the depth buffer do I have to change any of those arguments?
With my call to glClearDepthf(), should I be using one of the near or far values that I use in my call to glOrthof()? Which one?
Since you're working with 2D and ortho, I find it helps to have a viewport whose coordinates match your resolution; it keeps things more readable:
CGRect rect = self.view.bounds;
if (ORTHO) {
if (highRes && (retina == 1)) {
glOrthof(0.0, rect.size.width/2, 0.0 , rect.size.height/2, -1, 1000.0);
} else {
glOrthof(0.0, rect.size.width, 0.0 , rect.size.height, -1, 1000.0);
}
glViewport(0, 0, rect.size.width*retina, rect.size.height*retina);
}
Notice that I always use 320x480 coordinates even on retina; this way I can use the same coordinates for both resolutions, and a .5 offset gives pixel-perfect results on retina. But you can go the other way.
Regarding depth I use a -1 to 1000 depth, so I can draw up to -1000 Z.
Make sure you're binding the depth buffer correctly, something like this:
// Need a depth buffer
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, framebufferWidth, framebufferHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
Or your problem may be as simple as using a depth that's behind your camera and lights, or beyond your buffer's range. Try a depth between 0 and -1 (-0.5, for example); with my glOrthof you can go up to -1000.
EDIT
The near and far values in glOrthof specify distances, not coordinates, which can be confusing when specifying depth values.
When you specify 1000 for the far parameter, what we are actually saying is that the far clipping plane is 1000 units distant from the viewer; the same goes for the near parameter. Unfortunately, specifying a clipping plane behind the viewer takes negative values, which adds to the confusion.
So when it comes to drawing time, we have a clipping plane 1000 units from the viewer in front (far, or into the screen). In terms of coordinates, Z is negative below the viewing plane (into the screen), so our actual drawing world lies between Z = 1 and Z = -1000, with -1000 being the farthest we can go with these parameters.
If you aren't going to use an existing library such as Cocos2D, then you will have to write a manager to handle the depth ordering yourself, based on either:
The order in which sprites were added to the screen
A user-customised Z value, so you can swap them around as needed

Problem with glTranslatef

I use the glTranslatef command to shift the position of a sprite that I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the result. The issue is a bit subtle: instead of seeing the screen and texture as arrays of points, see them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared, or scaled). To make things look crisp, the grids must align perfectly. The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels but the outer edges of the texture sheet, so you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen, where the vertex positions must be aligned with the edges of the screen pixels, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) don't address the lower left pixel but the lower left edge of this
// (width,height) similarly addresses the upper right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture (i.e., its vertex positions match the texture coordinates) and the vertex positions are integers.
Now the interesting part: what if they don't meet those conditions? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off-grid they are (in layman's terms; the actual implementation looks slightly different).
So how to solve your issue: draw sprites in a projection/modelview that matches the viewport, use only integer values for the vertex coordinates, and make your texture cover the whole picture. If you use only part of the texture coordinate range, things get even more interesting, since those coordinates address the texture grid, not the texel centers.
I would recommend looking at your modelview matrix declaration and be sure that glLoadIdentity() is being called to ensure that the matrix stack is clean before applying the transform.

How to set up a user Quartz2D coordinate system with scaling that avoids fuzzy drawing?

This topic has been scratched once or twice, but I am still puzzled. And Google was not friendly either.
Since Quartz allows for arbitrary coordinate systems using affine transforms, I want to be able to draw things such as floorplans using real-life coordinate, e.g. feet.
So basically, for the sake of an example, I want to scale the view so that when I draw a 10x10 rectangle (think a 10-inch box for example), I get a 60x60 pixels rectangle.
It works, except the rectangle I get is quite fuzzy. Another question here got an answer that explains why. However, I'm not sure I understood that reason why, and moreover, I don't know how to fix it. Here is my code:
I set my coordinate system in my awakeFromNib custom view method:
- (void) awakeFromNib {
CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
self.transform = scale;
}
And here is my draw routine:
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect r = CGRectMake(10., 10., 10., 10.);
CGFloat lineWidth = 1.0;
CGContextStrokeRectWithWidth(context, r, lineWidth);
}
The square I get is scaled just fine, but totally fuzzy. Playing with lineWidth doesn't help: when lineWidth is set smaller, it gets lighter, but not crisper.
So is there a way to set up a view to have a scaled coordinate system, so that I can use my domain coordinates? Or should I go back and implementing scaling in my drawing routines?
Note that this issue doesn't occur for translation or rotation.
Thanks
the [stroked] rectangle I get is quite fuzzy.
Usually, this is because you plotted the rectangle on whole-number co-ordinates and your line width is 1.
In PostScript (and thus in its descendants: AppKit, PDF, and Quartz), drawing units default to points, 1 point being exactly 1/72 inch. The Mac and iPhone currently* treat every such point as 1 pixel, regardless of the actual resolution of the screen(s), so, in a practical sense, points (by default, on the Mac and iPhone) are equal to pixels.
In PostScript and its descendants, integral co-ordinates run between points. 0, 0, for example, is the lower-left corner of the lower-left point. 1, 0 is the lower-right corner of that same point (and the lower-left corner of the next point to the right).
A stroke is centered on the path you're stroking. Thus, half will be inside the path, half outside.
In the (conceptually) 72-dpi world of the Mac, these two facts combine to produce a problem. If 1 pt is equal to 1 pixel, and you apply a 1-pt stroke between two pixels, then half of the stroke will hit each of those pixels.
Quartz, at least, will render this by painting the current color into both pixels at one-half of the color's alpha. It determines this by how much of the pixel is covered by the conceptual stroke; if you used a 1.5-pt line width, half of that is 0.75 pt, which is three-quarters of each 1-pt pixel, so the color will be rendered at 0.75 alpha. This, of course, goes to the natural conclusion: If you use a 2-pt line width, each pixel is completely covered, so the alpha will be 1. That's why you can see this effect with a 1-pt stroke and not a 2-pt stroke.
There are several workarounds:
Half-point translation: Exactly what it says on the box, you translate up and right by half a point, compensating for the aforementioned 1-pt-cut-in-half division.
This works in simple cases, but flakes out when you involve any other co-ordinate transformations except whole-point translations. That is to say, you can translate by 30, 20 and it'll still work, but if you translate by 33+1/3, 25.252525…, or if you scale or rotate at all, your half-point translation will be useless.
Inner stroke: Clip first, then double the line width (because you're only going to draw half of it), then stroke.
This can require gstate juggling if you have a lot of other drawing to do, since you don't want that clipping path affecting your other drawing.
Outer stroke: Essentially the same as an inner stroke, except that you reverse the path before clipping.
Can be better (less gstate juggling) than an inner stroke if you're sure that the paths you want to stroke won't overlap. On the other hand, if you also want to fill the path, the gstate juggling returns.
*This won't last forever. Apple's been dropping hints for some time that they're going to change at least the Mac's drawing resolution at some point. The API foundation for such a change is pretty much all there now; it's all a matter of Apple throwing the switch.
Well, as often happens, explaining the issue led me to a solution.
The problem is that the view's transform property is applied after the view has been drawn into a bitmap buffer. The scaling transform has to be applied before drawing, i.e., in the drawRect: method. So scratch the awakeFromNib I gave above; here is a correct drawRect:
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform scale = CGAffineTransformMakeScale(6.0, 6.0);
CGContextConcatCTM(context, scale);
CGRect r = CGRectMake(10., 10., 10., 10.);
CGFloat lineWidth = 0.1;
CGContextStrokeRectWithWidth(context, r, lineWidth);
}

OpenGL ES - how to keep some object at a fixed size?

I'm working on a little game in OpenGL ES.
In the background, there is a world/map. The map is just a large texture.
Zoom/pinch/pan is used to move around. And I'm using glOrthof (left, right, bottom, top, zNear, zFar) to implement the zoom/pinch.
When I zoom in, the sprites on top of the map is also zoomed in. But I would like to have some sprites stay at a fixed size.
I could probably calculate a scale factor, depending on the parameters to glOrthof, but there must be a more natural and straightforward way of doing that, instead of scaling the sprites down when I zoom in.
If I add some text or some GUI elements on top of the map, they should definitely have a fixed size.
Is there a solution to do this, or do I have to leave fixed values in glOrthof and implement zoom/pinch in another way?
EDIT: To be more clear: I want sprites that zoom in/out along with the map, but stay at the same size.
I have some elements that are like the pins on the iPhone's map application. When you zoom, the pins stay the same size, but move around on the screen to stay on the same spot on the map. That is mainly what I want a solution for.
Solutions for this already came below, thanks!
First call glOrthof with the settings you have, then draw the things that scale. Then make another call to glOrthof with different settings (after glLoadIdentity probably), and then draw the things that should not be scaled.
you can use something like this to draw fixed-size elements at a given 3D position, keeping the current projection settings (note: glRasterPos and glDrawPixels are desktop OpenGL calls and are not available in OpenGL ES):
// go to correct coordinates
double v[3] = { x , y , z };
glRasterPos3dv( v );
glBitmap( 0 , 0 , 0 , 0 , -center_pix_x , -center_pix_y , NULL );
// and draw pixels
glPixelStorei( GL_UNPACK_LSB_FIRST , true );
glPixelStorei( GL_UNPACK_ALIGNMENT , 1 );
glDrawPixels( img_width , img_height , GL_RGBA , GL_UNSIGNED_BYTE , img_data_ptr );
center_pix_x and center_pix_y are the coordinates of the reference point in the sprite that will match the 3D point.
Found one solution in this thread:
Drawing "point-like" shapes in OpenGL, indifferent to zoom
Point sprites... Apple's GLPaint example also uses this.
Quite simple to use. Uses the current texture.
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(40.0f);
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, 4);
These will move when the map moves, but do not change size.
Edit: a small tip: remember that the point coordinate is the middle of the texture, not a corner. I struggled a bit with my sprites apparently "moving", because I used only the upper-left 35x35 pixels of a 64x64 texture. Move your graphics to the middle of the texture and you'll be fine.