Translate 3D object center coordinates to 2D visible viewport coordinates - iPhone

I have loaded a Wavefront object in iPhone OpenGL.
It can be rotated around the x/y axes, panned around, and zoomed in/out.
My task is this: when the object is tapped, highlight its 2D center coordinates on screen, for example like this (imagine that + is at the center of the visible object):
When loading the OpenGL object I store its:
object center position in world,
x,y,z position offset,
x,y,z rotation,
zoom scale.
When the user taps on the screen, I can distinguish which object was tapped. But since the user can tap anywhere on the object, the tapped point is not the center.
When the user touches an object, I want to find the corresponding object's approximate visible center coordinates.
How can I do that?
Most of the code I could find on Google translates 3D coordinates to 2D, but without rotation.
Some variables in code:
Vertex3D centerPosition;
Vertex3D currentPosition;
Rotation3D currentRotation;
//centerPosition.x, centerPosition.y, centerPosition.z
//currentPosition.x, currentPosition.y, currentPosition.z
//currentRotation.x, currentRotation.y, currentRotation.z
Thank you in advance.
(To find out which object was tapped, I re-color each object in a different color, so the color under the tap tells me which object it was.)
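Roughly, that picking pass looks like this (a sketch only; objectCount, drawObject() and viewportHeight are placeholder names, not from the code below):
// picking pass: flat colors, no lighting or texturing
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < objectCount; ++i) {
    // encode the object index in the red channel; 0 is reserved for "no object"
    glColor4f((i + 1) / 255.0f, 0.0f, 0.0f, 1.0f);
    drawObject(i);
}
// read back the pixel under the tap; GL window coords have a bottom-left origin
GLubyte pixel[4];
glReadPixels(tapX, viewportHeight - tapY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
int tappedObject = (int)pixel[0] - 1; // -1 means the background was tapped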
The object's drawSelf function:
// Save the current transformation by pushing it on the stack
glPushMatrix();
// Load the identity matrix to restore to origin
glLoadIdentity();
// Translate to the current position
glTranslatef(currentPosition.x, currentPosition.y, currentPosition.z);
// Rotate to the current rotation
glRotatef(currentRotation.x, 1.0, 0.0, 0.0);
glRotatef(currentRotation.y, 0.0, 1.0, 0.0);
glRotatef(currentRotation.z, 0.0, 0.0, 1.0);
// Enable and load the vertex array
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, vertexNormals);
if (textureCoords != NULL)
{
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(valuesPerCoord, GL_FLOAT, 0, textureCoords);
}
// Loop through each group
for (OpenGLWaveFrontGroup *group in groups)
{
    if (textureCoords != NULL && group.material.texture != nil)
        [group.material.texture bind];
    // Set color and materials based on the group's material
    Color3D ambient = group.material.ambient;
    glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (const GLfloat *)&ambient);
    Color3D diffuse = group.material.diffuse;
    glColor4f(diffuse.red, diffuse.green, diffuse.blue, diffuse.alpha);
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, (const GLfloat *)&diffuse);
    Color3D specular = group.material.specular;
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, (const GLfloat *)&specular);
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, group.material.shininess);
    glDrawElements(GL_TRIANGLES, 3*group.numberOfFaces, GL_UNSIGNED_SHORT, &(group.faces[0]));
}
if (textureCoords != NULL)
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
// Restore the current transformation by popping it off
glPopMatrix();

OK, as I said, you'll need to apply the same transformations to the object center that are applied to the object's vertices by the graphics pipeline; only this time, the graphics pipeline won't help you - you'll have to do it yourself. And it involves some matrix calculations, so I'd suggest getting a good maths library like the OpenGL Mathematics (GLM) library, which has the advantage that function names etc. are extremely similar to OpenGL's.
step 1: transform the center from object coordinates to modelview coordinates
in your code, you set up your 4x4 modelview matrix like this:
// Load the identity matrix to restore to origin
glLoadIdentity();
// Translate to the current position
glTranslatef(currentPosition.x, currentPosition.y, currentPosition.z);
// Rotate to the current rotation
glRotatef(currentRotation.x, 1.0, 0.0, 0.0);
glRotatef(currentRotation.y, 0.0, 1.0, 0.0);
glRotatef(currentRotation.z, 0.0, 0.0, 1.0);
You need to multiply that matrix with the object center, and OpenGL does not help you with that, since it's not a maths library itself. If you use GLM, there are functions like rotate(), translate() etc. that work similarly to glRotatef() and glTranslatef(), and you can use them to build your modelview matrix. Also, since the matrix is 4x4, you'll have to append 1.f as the 4th component of the object center (called the 'w-component'), otherwise you can't multiply it with a 4x4 matrix.
Alternatively, you could query the current value of the modelview matrix directly from OpenGL:
GLfloat matrix[16];
glGetFloatv (GL_MODELVIEW_MATRIX, matrix);
but then you'll have to write your own code for the multiplication...
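For reference, a minimal sketch of that multiplication, using the matrix array fetched above (OpenGL stores matrices in column-major order):
GLfloat center[4] = { centerPosition.x, centerPosition.y, centerPosition.z, 1.0f }; // w = 1
GLfloat eye[4];
for (int row = 0; row < 4; ++row) {
    eye[row] = matrix[0*4 + row] * center[0]
             + matrix[1*4 + row] * center[1]
             + matrix[2*4 + row] * center[2]
             + matrix[3*4 + row] * center[3];
}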
step 2: go from modelview coordinates to clip coordinates
From what you posted, I can't tell whether you ever change the projection matrix (is there a glMatrixMode(GL_PROJECTION) somewhere?) - if you never touch the projection matrix, you can omit this step; otherwise you'll now need to multiply the transformed object center by the projection matrix as well.
step 3: perspective division
Divide all 4 components of the object center by the 4th; then throw away the 4th component, keeping only xyz.
If you omitted step 2, you can also omit the division.
step 4: map the object center coordinates to window coordinates
The object center is now defined in normalized device coordinates, with the x and y components in the range [-1.f, 1.f]. The last step is mapping them to your viewport, i.e. to pixel positions. The z-component does not really matter to you here, so let's ignore z and call the x and y components obj_x and obj_y, respectively.
The viewport dimensions should be set somewhere in your code with glViewport(viewport_x, viewport_y, width, height). From those function arguments, you can then calculate the pixel position for the center like this:
pixel_x = width/2 * obj_x + viewport_x + width/2;
pixel_y = height/2 * obj_y + viewport_y + height/2;
and that's basically it.
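Putting all four steps together, a gluProject-style sketch might look like this (untested; multiply() is just the column-major matrix-times-vector loop from step 1, shown here as a helper):
static void multiply(const GLfloat m[16], const GLfloat v[4], GLfloat out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1] + m[2*4+row]*v[2] + m[3*4+row]*v[3];
}

GLfloat modelview[16], projection[16];
GLint viewport[4];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);      // {viewport_x, viewport_y, width, height}

GLfloat obj[4] = { centerPosition.x, centerPosition.y, centerPosition.z, 1.0f };
GLfloat eye[4], clip[4];
multiply(modelview, obj, eye);             // step 1: object -> eye coordinates
multiply(projection, eye, clip);           // step 2: eye -> clip coordinates

GLfloat ndc_x = clip[0] / clip[3];         // step 3: perspective division
GLfloat ndc_y = clip[1] / clip[3];

// step 4: NDC -> window coordinates
GLfloat pixel_x = viewport[2]/2.0f * ndc_x + viewport[0] + viewport[2]/2.0f;
GLfloat pixel_y = viewport[3]/2.0f * ndc_y + viewport[1] + viewport[3]/2.0f;
// note: GL window coordinates have a bottom-left origin; flip y for UIKit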

Related

Cannot display QImage correctly on QQuickPaintedItem providing world transform matrix

In a Qt Quick project, I derived a custom class from QQuickPaintedItem and mapped the screen coordinate system to a Cartesian coordinate system by providing a transform matrix. Now I want to display a PNG on the custom item with QPainter::drawImage; however, the y coordinate of the image is inverted. How can I fix this? Thanks!
Below is the code snippet:
void DrawArea::paint(QPainter *painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);
    QTransform transform;
    transform.setMatrix(800.0/10.0, 0.0, 0.0,
                        0.0, -600.0/10.0, 0.0,
                        400, 300, 1.0);
    painter->setWorldTransform(transform);
    painter->drawImage(QRectF(0, 0, 3, 3), m_image,
                       QRectF(0, 0, m_image.width(), m_image.height()));
}
The window size is 800x600, and the Cartesian coordinates run from -5 to 5 on both x and y.
The y coordinate is inverted because of -600.0/10.0. If I remove the minus sign and use 600.0/10.0, the image is displayed correctly, but then it extends below the y=0 axis of the Cartesian coordinate system.
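One way to fix it (a sketch, assuming the image should occupy world y in [0, 3]) is to keep the Cartesian world transform for your geometry, but locally cancel the y flip just while drawing the image:
painter->save();
painter->translate(0, 3);  // assumed: move the origin to the image's top edge in world coords
painter->scale(1, -1);     // cancel the world transform's y flip for the image only
painter->drawImage(QRectF(0, 0, 3, 3), m_image,
                   QRectF(0, 0, m_image.width(), m_image.height()));
painter->restore();
Alternatively, keeping the -600.0/10.0 scale and drawing m_image.mirrored() instead should give the same visual result, since the transform's flip and the pre-mirrored image cancel out.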

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
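In matrix form (reconstructed here to match the shader code below; w and h are the view's width and height, and positions are column vectors (x, y, 0, 1)):

clip = | 2/w    0    0   -1 |   | x |
       |  0   -2/h   0    1 | * | y |
       |  0     0    1    0 |   | 0 |
       |  0     0    0    1 |   | 1 |

so clipX = 2x/w - 1 and clipY = 1 - 2y/h.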
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
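Working these through (w and h again being the view width and height):

upper-left (0, 0): clipX = 2*0/w - 1 = -1, clipY = 1 - 2*0/h = +1, giving (-1, +1)
lower-right (w, h): clipX = 2*w/w - 1 = +1, clipY = 1 - 2*h/h = -1, giving (+1, -1)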
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1 / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
I translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}

Why do vertices of a quad and the localScale of the quad not match in Unity?

I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible? The vertices make a square, but the localScale does not.
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a multiplier applied to your mesh's size along the given directions (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your localSpace coordinates will then be multiplied by this scale.
Say a localSpace coordinate is (1, 0, 2), the scale however, is (3, 1, 3). Meaning that the result is (1*3, 0*1, 2*3).
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that allows you to change the worldSpace coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from localSpace to worldSpace based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my space coordinates despite them having other positions in the world?
If I print the vertices of a quad using the script below, I get results which have 3 coordinates and can be multiplied as such by localScale.
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There also exists something called world space. You can convert between localSpace and worldSpace.
localSpace is the object's mesh vertices in relation to the object itself, while worldSpace is the object's location in the Unity scene.
Then you get the results as seen below: first the localSpace coordinates as in the first result, then the worldSpace coordinates converted from those local coordinates.
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree, in which each parent's Transform matrix (position, rotation, scale; rotation is actually a quaternion, but let's assume it's Euler angles for simplicity so the math works) is applied to its children. By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what is described by your vertices) parented to a GameObject, and that GameObject's Transform has a non-one localScale, all the vertices in the mesh get multiplied by that value, and all the positions are added.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, etc.
To answer your question: the global positions of your vertices are different from those contained in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not scale, and this is also the reason why non-uniform scaling of objects which contain rotated children can sometimes give very strange results. Transforms stack.

Drawing multiple moving objects

I'm currently working on an iOS game where, long story short, I need to draw a lot of moving cubes - an approximate maximum of 200 per frame. Emphasis on moving, because yes, I have indeed Googled away for hours on this topic and have yet to find a suitable solution for fast, efficient drawing of multiple objects whose positions update every frame.
Through my endless research on this subject, most answers seem to mention VBOs; however, I'm not sure that would suit my case, where the position of every object changes every frame.
I'm using OpenGL 1 at the moment. I have working code, and on generation 3/4+ devices (the ones which support OpenGL 2, ha) it runs at a reasonable framerate; however, when testing on my (old, yes) 2nd-gen iPod touch, it is very sluggish and essentially unplayable.
My code comprises a static array of vertices for a 'cube' and an array containing the position and colour of every cube. My game logic loop updates the position of every cube in the array. At the moment I'm looping through the cube array, calling glTranslatef and glDrawArrays for every cube. From what I've read this is very inefficient, but I'm completely confused as to how you would optimise it. Any ideas?
(maybe I shouldn't be aiming for old, discontinued iOS devices but given my belief that my code is incredibly inefficient, I figure it'll help my future endeavours regardless if I find a way to address this)
For such simple objects I would make one big VBO of, say, 200 objects * NrVerticesPerCube, with all the data interleaved: Vertex, Normal, UV, Vertex, Normal, UV, etc.
I do something similar in a keyframe animation of a beaver in my game, I start with something like this:
glGenBuffers(1, &vboObjects[vboGroupBeaver]);
glBindBuffer(GL_ARRAY_BUFFER, vboObjects[vboGroupBeaver]);
glBufferData(GL_ARRAY_BUFFER, beaverVerts*8*sizeof(GLfloat), 0, GL_STATIC_DRAW);
vbo_buffer = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
NSString *path;
path = [[NSBundle mainBundle] pathForResource:@"beaver01" ofType:@"bin"];
NSFileHandle *model = [NSFileHandle fileHandleForReadingAtPath:path];
float vertice[8];
int counter = 0;
while (read([model fileDescriptor], &vertice, 8*sizeof(float))) {
    memcpy(vbo_buffer, vertice, 8*sizeof(GLfloat));
    vbo_buffer += 8*sizeof(GLfloat);
    counter++;
}
glUnmapBufferOES(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This creates my VBO with the correct size (in this case 8 * sizeof(GLfloat), which is 3 verts, 3 normals and 2 UVs) and copies the first keyframe into the buffer. You could do the same with your initial object positions, or just leave that and compute them later...
Then in each frame I interpolate between 2 keyframes for each vertex of my beaver and make just one draw call; this is very fast for the 4029 vertices my beaver has, and runs at 60 FPS on my iPhone 3G.
For you, doing only glTranslates, it would be even simpler: just add the x, y, z values to each vertex of each cube.
You would update it like this:
glBindBuffer(GL_ARRAY_BUFFER, vboObjects[vboGroupBeaver]);
GLvoid* vbo_buffer = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
Bind the VBO and map it to a buffer variable.
Calculate the values you want in a temporary variable.
memcpy(vbo_buffer, currentVert, 6*sizeof(GLfloat)); // copy the recalculated data
vbo_buffer += 8*sizeof(GLfloat); // advance the buffer by one full vertex (8 floats)
Copy it and advance the buffer to the next object; repeat until all objects are updated...
You could also do all the updates in a separate array and copy the whole array, but then you would be copying extra info that usually doesn't change (normals and UV). Or you could use non-interleaved data and copy that...
glUnmapBufferOES(GL_ARRAY_BUFFER);
Unmap the VBO buffer
glVertexPointer(3, GL_FLOAT, 8*sizeof(GLfloat), (GLvoid*)((char*)NULL));
glNormalPointer(GL_FLOAT, 8*sizeof(GLfloat), (GLvoid*)((char*)NULL+3*sizeof(GLfloat)));
glTexCoordPointer(2, GL_FLOAT,8*sizeof(GLfloat), (GLvoid*)((char*)NULL+6*sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, beaverVerts);
Set up your draw call, and draw it all...
If you need to rotate objects and not just gltranslate them, you will need to add some matrix multiplications along the way...
EDIT:
OK, doing a glTranslate by hand is actually very easy (rotation etc. is a bit trickier).
I'm using an interleaved plane drawn with GL_TRIANGLE_STRIP instead of triangles, but the principle is the same.
float beltInter[] = {
0.0, 0.0, 0.0, // vertices[0]
0.0, 0.0, 1.0, // Normals [0]
6.0, 1.0, // UV [0]
0.0, 480, 0.0, // vertices[1]
0.0, 0.0, 1.0, // Normals [1]
0.0, 1.0, // UV [1]
320.0, 0.0, 0.0, // vertices[2]
0.0, 0.0, 1.0, // Normals [2]
6.0, 0.0, // UV [2]
320.0, 480, 0.0, // vertices[3]
0.0, 0.0, 1.0, // Normals [3]
0.0, 0.0 // UV [3]
};
So this is an interleaved vertex array: you've got vertex, then normals, then UV. If you're not using textures, substitute color for the UVs.
The easiest way is to have an array with all the objects inside (made easy if all your objects are the same size) and do the position updates after the draw (instead of in the middle of the OpenGL frame). Better still, make a separate thread and create 2 VBOs, updating one of them while drawing from the other, something like this:
Thread 1 OpenGL DrawFrom VBO0
Thread 2 Game Updates, update positions on internal array and copy to VBO1, set Var saying VBO1 yes ready (so thread 1 only changes from drawing to VBO1 when all the updates are done).
Thread 1 OpenGL DrawFrom VBO1
Thread 2 Game update, same thing but update VBO0
continue with same logic
This is called double buffering, and you use it to guarantee stability; without it, sometimes your game logic will be updating the VBO while the graphics card needs it, and the graphics card will have to wait, resulting in lower FPS.
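A minimal sketch of that double-buffering idea (illustrative names; a real implementation needs care that the update thread never touches the VBO currently being drawn):
#include <atomic>

GLuint vbos[2];                        // two VBOs, filled alternately
int drawIndex = 0;                     // the VBO the render thread draws from
std::atomic<bool> updateReady(false);  // set when the other VBO has fresh data

// render thread, once per frame:
if (updateReady.exchange(false))
    drawIndex = 1 - drawIndex;         // swap to the freshly updated VBO
glBindBuffer(GL_ARRAY_BUFFER, vbos[drawIndex]);
// ... set the gl*Pointer()s and glDrawArrays() as usual ...

// update thread: write new positions into vbos[1 - drawIndex], then:
updateReady.store(true);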
Anyway, back on topic
To do the equivalent of glTranslatef(10, 20, 30), just do:
int maxVertices = 4;
float x = 10;
float y = 20;
float z = 30;
int counter = 0;
int stride = 8; // 8 = 3 vertex + 3 normal + 2 UV; use 3 x color or 4 x color depending on your needs
glBindBuffer(GL_ARRAY_BUFFER, vboObjects[myObjects]);
GLvoid* vbo_buffer = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
while (counter < (maxVertices*stride)) {
    beltInter[counter] += x; // just add the offset to the corresponding component
    beltInter[counter+1] += y;
    beltInter[counter+2] += z;
    memcpy(vbo_buffer, &beltInter[counter], 3*sizeof(GLfloat)); // only copy what changed (the vertex position); if you update all the data, do a single memcpy at the end instead
    vbo_buffer += stride*sizeof(GLfloat); // advance the buffer by one vertex
    counter += stride; // only the vertex is updated, but you could update everything
}
glUnmapBufferOES(GL_ARRAY_BUFFER);
glVertexPointer(3, GL_FLOAT, stride*sizeof(GLfloat), (GLvoid*)((char*)NULL));
glNormalPointer(GL_FLOAT, stride*sizeof(GLfloat), (GLvoid*)((char*)NULL+3*sizeof(GLfloat)));
glTexCoordPointer(2, GL_FLOAT,stride*sizeof(GLfloat), (GLvoid*)((char*)NULL+6*sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, maxVertices);
Of course the update values don't have to be the same for all the objects; in fact, using a base array like this, you can update all the info as you go along and just have a routine copy it to the VBO when needed.
All this was written from memory on the fly, so there may be dragons :-)
Hope that helps.
You could optimise quite a bit by sticking all the coords for all your cubes in a single array, and drawing it with a single glDrawArrays call.
I'm not sure why you'd want to split up the cubes into separate arrays, except maybe because it makes your data structure more elegant/object oriented, but that's the first place I'd look at making an improvement.
Dump the cube coordinates in one big array and give each cube object an index into that array, so that you can still keep your update logic fairly compartmentalised (as in, cube n owns the coordinates in the range x to y and is responsible for updating them). But when you actually draw, you run glDrawArrays directly on the centralised coord array instead of looping through the cube objects and rendering them individually; see the sketch below.
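For example (a sketch with illustrative names: cubeVertices is the static 36-vertex unit-cube array, cubes[] holds the per-cube positions):
enum { kCubes = 200, kVertsPerCube = 36 };      // 12 triangles per cube
GLfloat allCoords[kCubes * kVertsPerCube * 3];  // one shared coord array

for (int i = 0; i < numCubes; ++i) {
    GLfloat *dst = allCoords + i * kVertsPerCube * 3;
    for (int v = 0; v < kVertsPerCube; ++v) {
        dst[v*3 + 0] = cubeVertices[v*3 + 0] + cubes[i].x;  // translate by hand
        dst[v*3 + 1] = cubeVertices[v*3 + 1] + cubes[i].y;  // instead of glTranslatef
        dst[v*3 + 2] = cubeVertices[v*3 + 2] + cubes[i].z;
    }
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, allCoords);
glDrawArrays(GL_TRIANGLES, 0, numCubes * kVertsPerCube);  // one draw call for everything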

OpenGL ES 1.1 2D Ring with Texture iPhone

I would appreciate some help with the following. I'm trying to render a ring shape on top of another object in OpenGL ES 1.1 for an iPhone game. The ring is essentially the difference between two circles.
I have a graphic prepared for the ring itself, which is transparent in the centre.
I had hoped to just create a circle, and apply the texture to that. The texture is a picture of the ring that occupies the full size of the texture (i.e. the outside of the ring touches the four sides of the texture). The centre of the ring is transparent in the graphic being used.
It needs to be transparent in the centre to let the object underneath show through. The ring is rendering correctly, but is a solid black mass in the centre, not transparent. I'd appreciate any help to solve this.
Code that I'm using to render the circle is as follows (not optimised at all: I will move the coords into proper buffers etc. later, but I have written it this way just to try and get it working...)
if (!m_circleEffects.empty())
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    int segments = 360;
    for (int i = 0; i < m_circleEffects.size(); i++)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(m_circleEffects[i].position.x, m_circleEffects[i].position.y, 0);
        glBindTexture(GL_TEXTURE_2D, m_Texture);
        float radius = 1.764706;
        GLfloat circlePoints[segments * 3];
        GLfloat textureCoords[segments * 2];
        int circCount = 0; // start at 0, or the first vertex is skipped and the arrays overflow
        int texCount = 0;
        for (GLfloat angle = 0; angle < 360.0f; angle += (360.0f / segments))
        {
            GLfloat pos1 = cosf(angle * M_PI / 180);
            GLfloat pos2 = sinf(angle * M_PI / 180);
            circlePoints[circCount] = pos1 * radius;
            circlePoints[circCount+1] = pos2 * radius;
            circlePoints[circCount+2] = (float)z + 5.0f;
            circCount += 3;
            textureCoords[texCount] = pos1 * 0.5 + 0.5;
            textureCoords[texCount+1] = pos2 * 0.5 + 0.5;
            texCount += 2;
        }
        glVertexPointer(3, GL_FLOAT, 0, circlePoints);
        glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
        glDrawArrays(GL_TRIANGLE_FAN, 0, segments);
    }
    m_circleEffects.clear();
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I've been experimenting with trying to create a ring rather than a circle, but I haven't been able to get it right yet.
I guess the best approach is actually to create not a circle but a ring, and then get the equivalent texture coordinates as well. I'm still experimenting with the width of the ring, but it is likely that the ring's width is about 1/4 of the total circle.
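If you do go the geometry route, a ring can be built as a GL_TRIANGLE_STRIP of alternating outer/inner vertices, with the texture coordinates scaled the same way. A sketch (untested; the radii are assumed values):
const int segments = 64;
const float outerRadius = 1.764706f;            // same as the circle above
const float innerRadius = outerRadius * 0.75f;  // assumed ring thickness
GLfloat ringVerts[(segments + 1) * 2 * 3];
GLfloat ringTex[(segments + 1) * 2 * 2];
for (int s = 0; s <= segments; ++s) {
    float a = s * 2.0f * M_PI / segments;
    float c = cosf(a), sn = sinf(a);
    // outer vertex, then inner vertex, alternating around the ring
    ringVerts[s*6 + 0] = c * outerRadius;
    ringVerts[s*6 + 1] = sn * outerRadius;
    ringVerts[s*6 + 2] = 0.0f;
    ringVerts[s*6 + 3] = c * innerRadius;
    ringVerts[s*6 + 4] = sn * innerRadius;
    ringVerts[s*6 + 5] = 0.0f;
    // texture coords: same unit-circle-to-[0,1] mapping as the disc version
    ringTex[s*4 + 0] = c * 0.5f + 0.5f;
    ringTex[s*4 + 1] = sn * 0.5f + 0.5f;
    ringTex[s*4 + 2] = c * (innerRadius / outerRadius) * 0.5f + 0.5f;
    ringTex[s*4 + 3] = sn * (innerRadius / outerRadius) * 0.5f + 0.5f;
}
glVertexPointer(3, GL_FLOAT, 0, ringVerts);
glTexCoordPointer(2, GL_FLOAT, 0, ringTex);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (segments + 1) * 2);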
Still a noob at OpenGL and trying to wrap my head around it. Thanks in advance for any pointers / snippets that might help.
Thanks.
What you need to do is use alpha blending, which blends colors into each other based on their alpha values (which you say are zero in the texture center, meaning transparent). So you have to enable blending by:
glEnable(GL_BLEND);
and set the standard blending functions for using a color's alpha component as opacity:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
But always keep in mind that in order to see the transparent object correctly blended over the object behind it, you need to render your objects in back-to-front order.
But if you only use the alpha as an object/no-object indicator (only values of either 0 or 1) and don't need partially transparent colors (like glass, for example), you don't need to sort your objects. In this case you should use the alpha test to discard fragments based on their alpha values, so that they don't pollute the depth buffer and prevent the object lying behind from being rendered. An alpha test set with
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
will only render fragments (~pixels) that have an alpha of more than 0.5 and will completely discard all other fragments. If you only have alpha values of 0 (no object) or 1 (object), this is exactly what you need, and in this case you don't actually need to enable blending or even sort your objects back to front.