Problem with glTranslatef - iphone

I use the glTranslate command to shift the position of a sprite which I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?

You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the image. The issue is a bit subtle: instead of seeing the screen and the texture as arrays of points, see them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared and scaled). To make things look crisp, the grids must align perfectly. The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels but the outer edges of the texture sheet, so you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must be aligned with the pixel edges, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) doesn't address the lower left pixel but the lower left edge of the viewport
// (width,height) similarly addresses the upper right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture (i.e. its vertex positions match the texel grid) and the vertex positions are integers.
Now the interesting part: what if those conditions aren't met? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some pixel rows or columns are simply missing. In GL_LINEAR filtering mode neighbouring texels are interpolated, with the interpolation factor determined by how far off-grid they are (in layman's terms; the actual implementation looks slightly different).
So how to solve your issue: draw sprites in a projection/modelview that matches the viewport, use only integer coordinates for the vertex positions, and make your texture cover the whole quad. If you're using only a part of the texture coordinate range, things get even more interesting, since the coordinates then address the texture grid, not the texel centers.

I would recommend looking at your modelview matrix declaration and be sure that glLoadIdentity() is being called to ensure that the matrix stack is clean before applying the transform.

Related

Why is my Unity plane seemingly 10 times too big

I'm a relative Unity noob. I have a fairly simple scene. Currently in the following you will see a plane (object WorldTilemapGfx) and 2 sprites (Tile C: 0 R: 0, and Tile C: 1 R: 0).
In the following picture you see I've selected one of the sprites. Its scale is 1 x 1, and it's at position 1, 0.
Now I select the other sprite.
So far the positions and sizes seem ok.
Now if I select the game object with a "plane" mesh it shows in the inspector as scale 2, 1. This is the scale I expect since it is supposed to be as wide as two of the tiles above, and as high as only 1 of them.
However, it's visually 10 times too big.
If I increase the X scale of one of my tiles by 10, then the relative sizes between tile and plane look OK.
Also the image used for my tile is 256 x 256.
Can someone suggest what I am missing? Thanks.
See Unity Mesh Primitives
Plane
This is a flat square with edges ten units long oriented in the XZ plane of the local coordinate space. It is textured so that the whole image appears exactly once within the square. A plane is useful for most kinds of flat surface, such as floors and walls. A surface is also needed sometimes for showing images or movies in GUI and special effects. Although a plane can be used for things like this, the simpler quad primitive is often a more natural fit to the task.
whereas
Quad
The quad primitive resembles the plane but its edges are only one unit long and the surface is oriented in the XY plane of the local coordinate space. Also, a quad is divided into just two triangles whereas the plane contains two hundred. A quad is useful in cases where a scene object must be used simply as a display screen for an image or movie. Simple GUI and information displays can be implemented with quads, as can particles, sprites
and “impostor” images that substitute for solid objects viewed at a distance.
OK, confirmed: using a Quad gave me the scale I expected. I now understand that the underlying Plane mesh is actually 10 x 10 units in size.
https://forum.unity.com/threads/really-dumb-question-scale-of-plane-compared-to-cube.33835/#:~:text=aNTeNNa%20trEE%20said%3A-,The%20plane%20is%20a%2010x10%20unit%20mesh.,a%20quick%20floor%20or%20wall.

Cull off parts above the mesh

So, I want to make a scene like this Sphere Scene.
Now I have a randomly generated mesh as the ground, and a sphere. But I don't know how to cull away the sphere's geometry above the mesh. I tried using the stencil buffer and a heightmap. With the stencil, the ground rendered in front, but the sphere above the ground was still rendered. Using a heightmap to decide whether to render (I compared the heightmap against worldPos) is problematic, because the texture is superimposed over the whole sphere rather than projected onto it. Can you help? Is there a shader technique to cull away everything above the mesh?
I did something similar for an Asteroids demo a few years ago. Whenever an asteroid was hit, I used a height map - really, just a noise map - to offset half of the vertices on the asteroid model to give it a broken-in-half look. For the other half, I just duplicated the asteroid model and offset the other half using the same noise map. The effect is that the two "halves" matched perfectly.
Here's what I'd try:
Your sphere model should be a complete sphere.
You'll need a height map for the terrain.
In your sphere's vertex shader, for any vertex north of the equator:
Sample the height map.
Set the vertex's Y coordinate to the height from the height map. This will effectively flatten the top of the sphere and then offset it based on your height map. You will likely have to scale the height value here to get something reasonable.
Transform the new x,y,z as usual.
Note that you are not texturing the sphere. You're modifying the geometry. This needs to happen in the geometry part of the pipeline, not in the fragment shader.
The other thing you'll need to consider is how to add the debris - rocks, etc. - so that it matches the geometry offset on the sphere. Since you've got a height map, that should be straightforward.
To start with, I'd just get your vertex shader to flatten the top half of the sphere. Once that works, add in the height map.
For this to look convincing, you'll need a fairly high-resolution sphere and height map. To cut down on geometry, you could use a plane for the terrain and a hemisphere for the bottom part. Just discard any fragment for the plane that is not within the spherical volume you're interested in. (You could also use a circular "plane" rather than a rectangular plane, but getting the vertices to line up with the sphere and filling in holes at the border can be tricky.)
As I realised, there's no standard way to cull it without artifacts; the only way it can be done is with raymarched rendering.

HLSL lighting based on texture pixels instead of screen

In HLSL, how can I calculate lighting based on pixels of a texture, instead of pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers but I'm not sure how I can determine at all times if a fragment is a part of a pixel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear as artifacts (diagonal lines) in the output:
Note: The reason it almost looks correct is because of the normal map, which causes some adjacent pixels to have normals that are angled just enough to light some pixels and not others.

Drawing sprite in OpenGL ES that scales with distance

I'm building a game in OpenGL ES 1 that involves a terrain map shown in perspective. I want to draw some sprites on the map that scale with distance. I'm able to draw sprites, but they're always the same size no matter how far away they are from the camera.
I believe I could dynamically calculate the size based on the distance from the camera, the viewport width, etc., but I'd much prefer having the size calculated automatically.
Here's my code:
GLfloat quadratic[] = { 1.0f, 0.0f, 0.0f };
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, quadratic);
glPointSize(40);
glPointParameterf(GL_POINT_SIZE_MAX, maxSize);
glPointParameterf(GL_POINT_SIZE_MIN, 1.0f);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glEnable(GL_POINT_SPRITE_OES);
GLfloat point_array[] =
{
territoryOrigin.x, territoryOrigin.y, 10.0,
};
glVertexPointer(3, GL_FLOAT, 0, point_array);
glDrawArrays(GL_POINTS, 0, 1);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_FALSE);
glDisable(GL_POINT_SPRITE_OES);
Ok, I figured this out. Basically I drew quads that act like 'pop-up' cutouts where the angle of the pop-up is determined by the current viewing rotation. Then I disable the depth test when performing the drawing so they don't cut into the 3D terrain they're drawn on. The benefit of this approach is I don't need to calculate a scale value - it's taken care of because I'm drawing regular quads in a perspective viewport.
First, I need to determine the viewing angle on a scale from 0 (overhead) to pi/2 (ground-level). I do that with this equation:
viewingAngle = (currentRotation / 90.0) * M_PI_2;
currentRotation is simply the angle I'm using in glRotatef. Given the viewing angle, I can calculate the vertical height and depth of the 'pop-up' edge of the quad. Basically, it's simple trigonometry from here. Imagine looking at the pop-up cutout from the side. It has a fixed base, and an edge that rises from a horizontal position to a vertical one. This edge traces the outline of a circle quadrant, and at any given point it forms a right triangle you can use to calculate position values.
If you know the angle (as seen in the snippet above) and the hypotenuse (which in this case is the y-height of the pop-up image texture as if it were laying flat), then you can solve for the opposite side of the triangle by multiplying the sine of the angle by the hypotenuse. This value corresponds to the depth at which the pop-up edge must be lifted off the ground. Since sin(angle) = opposite/hypotenuse, I can solve for 'opposite' as such:
popUpValueZ = sinf(viewingAngle) * imageHeight;
Next, I needed to calculate the y-size of the pop-up image. In the imaginary triangle above, this corresponds to the side adjacent to the pop-up angle. As such, cosine is used to calculate its value:
popUpValueY = cosf(viewingAngle) * imageHeight;
Now I can use popUpValueY and popUpValueZ to determine my vertices. They will act as the height and depth of my quad, respectively. As the viewing angle gets lower to the ground, the Z value increases off the ground and the Y value gets shorter and shorter, so that it begins to resemble a vertical plane instead of a horizontal one.
The other thing I had to do:
glDisable(GL_DEPTH_TEST);
I found that these pop-up 'pseudo-sprites' were fighting with the 3D terrain, so I simply disabled the depth test before drawing them. This way, they scale exactly as they should based on their position within the perspective viewport, but they always appear on top of anything drawn earlier than them. In my particular case I do want these sprites to be occluded by terrain drawn in front of it, so I just re-enabled depth testing when drawing the closer terrain.

OpenGL ES - how to keep some object at a fixed size?

I'm working on a little game in OpenGL ES.
In the background, there is a world/map. The map is just a large texture.
Zoom/pinch/pan is used to move around. And I'm using glOrthof (left, right, bottom, top, zNear, zFar) to implement the zoom/pinch.
When I zoom in, the sprites on top of the map is also zoomed in. But I would like to have some sprites stay at a fixed size.
I could probably calculate a scale factor, depending on the parameters to glOrthof, but there must be a more natural and straightforward way of doing that, instead of scaling the sprites down when I zoom in.
If I add some text or some GUI elements on top of the map, they should definitely have a fixed size.
Is there a solution to do this, or do I have to leave fixed values in glOrthof and implement zoom/pinch in another way?
EDIT: To be more clear: I want sprites that move along with the map when zooming/panning, but stay at the same size.
I have some elements that are like the pins on the iPhone's map application. When you zoom, the pins stay the same size, but move around on the screen to stay on the same spot on the map. That is mainly what I want a solution for.
Solutions for this already came below, thanks!
First call glOrthof with the settings you have, then draw the things that scale. Then make another call to glOrthof with different settings (after glLoadIdentity probably), and then draw the things that should not be scaled.
You can use something like this to draw fixed-size elements at a given 3D position, keeping the current projection settings (note this uses desktop OpenGL calls; glRasterPos, glBitmap and glDrawPixels are not available in OpenGL ES):
// go to correct coordinates
double v[3] = { x , y , z };
glRasterPos3dv( v );
glBitmap( 0 , 0 , 0 , 0 , -center_pix_x , -center_pix_y , NULL );
// and draw pixels
// glDrawPixels reads client memory, so the UNPACK (not PACK) parameters apply
glPixelStorei( GL_UNPACK_LSB_FIRST , GL_TRUE );
glPixelStorei( GL_UNPACK_ALIGNMENT , 1 );
glDrawPixels( img_width , img_height , GL_RGBA , GL_UNSIGNED_BYTE , img_data_ptr );
center_pix are the coordinates of the reference point in the sprite that will match the 3D point.
Found one solution in this thread:
Drawing "point-like" shapes in OpenGL, indifferent to zoom
Point sprites... Apple's GLPaint example also uses this.
Quite simple to use. Uses the current texture.
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(40.0f);
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, 4);
These will move when the map moves, but do not change size.
Edit: A small tip: Remember that the point coordinate is the middle of the texture, not a corner or anything. I struggled a bit with my sprites apparently "moving", because I used only the 35x35 upper left pixels in a 64x64 texture. Move your graphics to the middle of the texture and you'll be fine.