OpenGL ES orthographic projection matrix not working - Swift

So my goal is simple. I am trying to get my coordinate space set up so that the origin is at the bottom left of the screen and the top-right corner is at (screen.width, screen.height).
Also, this is a COMPLETELY 2D engine, so no 3D stuff is needed. I just need those coordinates to work.
Right now I am trying to plot a couple of points on the screen, mostly at places like (0, 0), (width, height), (width / 2, height / 2), etc., so I can see if things are working right.
Unfortunately, my efforts to get this going are in vain; instead of having multiple points, I have one in the dead center of the device (obviously they are all overlapping).
So here is my code. What exactly am I doing wrong?
Vertex Shader
uniform vec4 color;
uniform float pointSize;
uniform mat4 orthoMatrix;
attribute vec3 position;
varying vec4 outColor;
varying vec3 center;
void main() {
    center = position;
    outColor = color;
    gl_PointSize = pointSize;
    gl_Position = vec4(position, 1) * orthoMatrix;
}
And here is how I make the matrix. I am using GLKit, so it should theoretically be making the orthographic matrix for me. However, if you have a custom function you think would do this better, that is fine! I can use it too.
var width:Int32 = 0
var height:Int32 = 0
var matrix:[GLfloat] = []
func onload()
{
    width = Int32(self.view.bounds.size.width)
    height = Int32(self.view.bounds.size.height)
    glViewport(0, 0, GLsizei(height), GLsizei(width))
    matrix = glkitmatrixtoarray(GLKMatrix4MakeOrtho(0, GLfloat(width), 0, GLfloat(height), -1, 1))
}
func glkitmatrixtoarray(mat: GLKMatrix4) -> [GLfloat]
{
    var buildme:[GLfloat] = []
    buildme.append(mat.m.0)
    buildme.append(mat.m.1)
    buildme.append(mat.m.3)
    buildme.append(mat.m.4)
    buildme.append(mat.m.5)
    buildme.append(mat.m.6)
    buildme.append(mat.m.7)
    buildme.append(mat.m.8)
    buildme.append(mat.m.9)
    buildme.append(mat.m.10)
    buildme.append(mat.m.11)
    buildme.append(mat.m.12)
    buildme.append(mat.m.13)
    buildme.append(mat.m.15)
    return buildme
}
Passing it over to the shader
func draw()
{
    //Setting up shader for use
    let loc3 = glGetUniformLocation(program, "orthoMatrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), &matrix[0])
    }
    //Passing points and extra data
}
Note: if you remove the multiplication with the matrix in the vertex shader, the points show up, though obviously most of them are off screen because of OpenGL's default clip space.
Also: I have tried using this function rather than GLKit's method, with the same results. Perhaps I am not passing the right things into the matrix-making function, or maybe I'm not getting it to the shader properly.
EDIT: I have uploaded the project file in case you want to see how everything fits together.

OK, I finally figured this out! What I did:
1. I had miscounted when turning the GLKit matrix into an array (the function above skips m.2 and m.14).
2. When passing the matrix as a uniform, you actually want the address of the whole array, not just the first element.
3. GL_FALSE is not a proper argument when passing the matrix to the shader here: with the vec4(position, 1) * orthoMatrix order in the vertex shader and GLKit's column-major storage, the transpose flag has to be GLboolean(GL_TRUE).
Thank you, reto matic.
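
For reference, a minimal sketch (my own suggestion, not the poster's code) of a conversion that cannot miscount, because it copies all 16 floats of the GLKMatrix4 in storage order:

import GLKit

func glkitMatrixToArray(_ mat: GLKMatrix4) -> [GLfloat] {
    // Copy the whole 16-float struct in storage order; nothing to miscount.
    var m = mat
    return withUnsafeBytes(of: &m) { raw in
        Array(raw.bindMemory(to: GLfloat.self))
    }
}

// Upload: hand over the whole array, keeping the transpose flag that matches
// the vec4(position, 1) * orthoMatrix order used in the vertex shader:
// glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), matrix)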

Related

Do two floats in a compute shader being added or subtracted not give the same value 100% of the time?

I have a function I call to generate some randomness in my HLSL compute shader code:
float rand3dTo1d(float3 value, float3 dotDir = float3(12.9898, 78.233, 37.719)){
    //make value smaller to avoid artefacts
    float3 smallValue = sin(value);
    //get scalar value from 3d vector
    float random = dot(smallValue, dotDir);
    //make value more random by making it bigger and then taking the fractional part
    random = frac(sin(random) * 43758.5453);
    return random;
}
If I pass in an incoming vector's location, all is fine, but if I try to pass the center point of three vectors, computed with this function, into the randomness:
float3 GetTriangleCenter3d(float3 a, float3 b, float3 c) {
    return (a + b + c) / 3.0;
}
Then occasionally SOME of my points are not the same from frame to frame (shown by the color I paint the triangles with using this code); I get flickering of color.
float3 color = lerp(_ColorFrom, _ColorTo, rand1d);
I am at a total loss. I was able to at least get consistent results by using the thread id as the seed for the randomness, but not being able to use the center point of the triangle is really weird to me, and I have no idea what I am doing wrong or what I am missing. Any help would be great.
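
Worth noting: floating-point addition is not associative, so if the GPU evaluates a + b + c in a different order from frame to frame (for example, because the vertices arrive in a different order), the result can differ in the last bit, and frac(sin(x * 43758.5453)) amplifies that tiny difference into a completely different color. A small Swift sketch of the underlying effect (arbitrary values, purely illustrative):

let a: Float = 0.1, b: Float = 0.2, c: Float = 0.3

// The same mathematical sum, evaluated in two different orders.
let sum1 = (a + b) + c
let sum2 = a + (b + c)

// The two results can differ in the last ULP; a hash like
// frac(sin(x * 43758.5453)) turns that tiny gap into a visibly
// different "random" value.
print(sum1 == sum2, abs(sum1 - sum2))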

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen, and those points will be used as the vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation, where w and h are the view's width and height:

clipX = 2 * x / w - 1
clipY = 1 - 2 * y / h
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others): the upper-left corner (0, 0) maps to (-1, 1), and the lower-right corner (w, h) maps to (1, -1).
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1 / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
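For example (a sketch; renderEncoder, view, and buffer index 1 are assumptions to be matched to your own pipeline):

var viewSize = simd_float2(Float(view.bounds.width), Float(view.bounds.height))
// Tiny per-frame constant data, so setVertexBytes is simpler than
// managing a dedicated MTLBuffer.
renderEncoder.setVertexBytes(&viewSize, length: MemoryLayout<simd_float2>.stride, index: 1)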
Translated Thompsonmachine's code to Swift, using SIMD values, which is what I need to pass to shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}
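
Usage might look like this (hypothetical names; metalView stands for whatever MTKView receives the touches):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let location = touch.location(in: metalView)
    // Convert the UIKit point to clip space before handing it to the shader.
    let clip = convertToMetalCoordinates(point: location, viewSize: metalView.bounds.size)
    print(clip)
}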

Gradient not rendering correctly on a Unity3D GUI object

I am attempting to apply a gradient effect on a Unity3D (5.2) GUI object, but it's as if one of the gradient color keys is being completely ignored. I have tried both instantiating a new gradient field and declaring a gradient field public and editing its keys in the editor, but the effect remains the same.
I'm beginning to think that I am not supposed to use Gradients in a BaseMeshEffect in the way I am using them. If I only have 2 keys, the colors render properly. Where am I wrong?
Here is a code sample followed by a screenshot.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class GradientUI : BaseMeshEffect
{
    [SerializeField]
    public Gradient Grad;

    public override void ModifyMesh(VertexHelper vh)
    {
        if (!IsActive())
        {
            return;
        }
        List<UIVertex> vertexList = new List<UIVertex>();
        vh.GetUIVertexStream(vertexList);
        ModifyVertices(vertexList);
        vh.Clear();
        vh.AddUIVertexTriangleStream(vertexList);
    }

    void ModifyVertices(List<UIVertex> vertexList)
    {
        int count = vertexList.Count;
        float bottomY = vertexList[0].position.y;
        float topY = vertexList[0].position.y;
        for (int i = 1; i < count; i++)
        {
            float y = vertexList[i].position.y;
            if (y > topY)
            {
                topY = y;
            }
            else if (y < bottomY)
            {
                bottomY = y;
            }
        }
        float uiElementHeight = topY - bottomY;
        for (int i = 0; i < count; i++)
        {
            UIVertex uiVertex = vertexList[i];
            float percentage = (uiVertex.position.y - bottomY) / uiElementHeight;
            // Debug.Log(percentage);
            Color col = Grad.Evaluate(percentage);
            uiVertex.color = col;
            vertexList[i] = uiVertex;
            Debug.Log(uiVertex.position);
        }
    }
}
[Screenshot omitted]
Your script is actually OK; there is no problem with it. The problem here is that UI elements simply don't have enough geometry for you to actually see the whole gradient.
Let me explain. In a nutshell, each UI element is actually a mesh made of several 3D triangles, each one rotated to face the camera head-on so that it looks 2D. Your effect works by assigning a color value to each vertex of those triangles; the color values in the interior of each triangle are interpolated based on the proximity to each of the colored vertices.
If you look at a UI element in wireframe, you will see that its geometry is very simple. This, for example, is how a sliced image looks: [wireframe screenshot omitted]
As you can see, all of its vertices are concentrated at the corners, and there are no vertices in the middle. So, let's assume your gradient has 2 keys, WHITE=>RED. The upper vertices get WHITE or something close to WHITE, and the bottom vertices get RED or something close to RED. This works OK for 2 keys.
Now assume you have 3 keys, WHITE=>BLUE=>RED. The upper vertices get WHITE, the bottom vertices get RED, and BLUE is supposed to land somewhere in the middle, but there is no vertex in the middle for it to be assigned to. So you still get a WHITE-to-RED gradient.
Now, what you can do:
1) Add more geometry programmatically, for example by simply subdividing the whole mesh. This may help you: http://answers.unity3d.com/questions/259127/does-anyone-have-any-code-to-subdivide-a-mesh-and.html. Note that in this case, the more keys your gradient has, the more subdivisions are required.
2) Use a texture that looks like your gradient.

OpenGL ES transparency not working, instead things just blend with the background

So I have a simple simulation set up on my phone. The goal is to have circles of red, white, and blue appear on the screen with various transparencies. I have most of that working, except for one thing: while transparency sort of works, the only blending happens with the black background. As a result, the circle in the center appears dark red instead of showing the white circles under it. What am I doing wrong?
Note: I am working with an orthographic 2D projection matrix. All of the objects' z positions are the same, and they are rendered in a specific order.
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
//Note: some of these things aren't compatible with OpenGL ES, but they can't hurt, right?
Here is the fragment shader:
precision mediump float;
varying vec4 outColor;
varying vec3 center;
varying float o_width;
varying float o_height;
varying float o_pointSize;
void main()
{
    vec4 fc = gl_FragCoord;
    vec3 fp = vec3(fc);
    vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
    if (dot(circCoord, circCoord) > 1.0) {
        discard;
    }
    gl_FragColor = outColor;
}
Here is how I pass each circle to the shader:
func drawParticle(part: Particle, color_loc: GLint, size_loc: GLint)
{
    //print("Drawing: ", part)
    let p = part.position
    let c = part.color
    glUniform4f(color_loc, GLfloat(c.h), GLfloat(c.s), GLfloat(c.v), GLfloat(c.a))
    glUniform1f(size_loc, GLfloat(part.size))
    glVertexAttribPointer(0, GLint(3), GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, [p.x, p.y, p.z]);
    glEnableVertexAttribArray(0);
    glDrawArrays(GLenum(GL_POINTS), 0, GLint(1));
}
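(An aside on the snippet above, independent of the blending problem: in Swift, the [p.x, p.y, p.z] array literal passed to glVertexAttribPointer is only guaranteed to be valid for the duration of that one call, while GL dereferences the pointer later in glDrawArrays. A sketch of a safer pattern, keeping the pointer alive across all three calls:)

let vertex: [GLfloat] = [p.x, p.y, p.z]
vertex.withUnsafeBufferPointer { buf in
    glVertexAttribPointer(0, 3, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, buf.baseAddress)
    glEnableVertexAttribArray(0)
    glDrawArrays(GLenum(GL_POINTS), 0, 1)
}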
Here is how I set it so transparency works:
glEnable(GLenum(GL_DEPTH_TEST))
glEnable(GLenum(GL_POINT_SIZE));
glEnable(GLenum(GL_BLEND))
glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
glEnable(GLenum(GL_POINT_SMOOTH))
And that's not how transparency works. OpenGL is not a scene graph; it just draws geometry in the order you specify. If the first things you draw are the red circles, they will blend with the background. Once things get drawn that are "behind" the red circles, the "occluded" parts will simply be discarded due to the depth test. There is no way for OpenGL (or any other depth-test-based algorithm) to automatically sort the different depth layers and blend them appropriately.
What you're trying to do there is order-independent transparency, a problem that is still an active area of research with no cheap general solution.
For what you want to achieve you'll have to:
sort your geometry far to near and draw in that order
disable the depth test while rendering
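
A minimal sketch of that order, reusing names from the question (particles, colorLoc and sizeLoc are assumed to exist; flip the sort if your projection treats smaller z as nearer):

// Transparent pass: depth test off, draw far-to-near so nearer
// circles blend over the ones behind them.
glDisable(GLenum(GL_DEPTH_TEST))
let farToNear = particles.sorted { $0.position.z < $1.position.z }
for part in farToNear {
    drawParticle(part: part, color_loc: colorLoc, size_loc: sizeLoc)
}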

2D lighting from multiple point sources on GLSL ES 2.0 in iPhone

As I'm a complete noob with shaders, I've got some problems while trying to get a 2D lighting system to work; it basically covers the screen with a black 2D texture that has transparent holes where the lit areas are.
As I'm using only one texture, I guess that I must do this in the fragment shader, right?
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
// Texture, coordinates and size
uniform sampler2D u_texture;
varying vec2 v_texCoord;
uniform vec2 textureSize;
uniform int lightCount;
struct LightSource
{
    vec2 position;
    float radius;
    float strength;
};
uniform LightSource lights[10];
void main()
{
    float alpha = 1.0;
    vec2 pos = vec2(v_texCoord.x * textureSize.x, v_texCoord.y * textureSize.y);
    int i;
    for (i = 0; i < lightCount; i++)
    {
        LightSource source = lights[i];
        float distance = distance(source.position, pos);
        if (distance < source.radius)
        {
            alpha -= mix(source.strength, 0.0, distance/source.radius);
        }
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, alpha);
}
The problem is that the performance is really terrible (it cannot run at 60 fps with 2 lights and nothing else on screen). Any suggestions to make it better, or even different ways to approach this problem?
By the way, I'm doing this from cocos2d-x, so if anyone has an idea that uses cocos2d elements it will be welcome as well :)
I totally agree with Tim. If you want to improve the total speed, you have to avoid for loops. I recommend that, if the lights array size is always ten, you swap the loop statement with ten copies of the loop content. Be aware that any variable you declare inside the loop body is conceptually created and destroyed on every iteration! So it's a good idea to unroll the loop into its ten parts (ugly, but it's an old-school trick ;))))
Besides, I also recommend testing the statements one by one to see which instruction is messing things up (you can't print from a fragment shader, so comment parts out and watch the frame rate). I bet that the mix operation is the culprit. I don't know anything about cocos2d, but is it possible to make a single call to mix at the end of the process, with a summation of the distances and strengths? It seems that at some point there's a pretty float-consuming, annoying operation.
Two things I would try (not guaranteed to help):
Remove the for loop and just hardcode in two lights. For loops can be expensive if they are not handled well by the driver, and it would be good to know whether that is what is slowing you down.
If statements can be expensive, and I don't think this is a good application of mix (you're computing a*(1-c) + 0.0*c, and the second half of that term is pointless). I might try replacing this if statement:
if (distance < source.radius)
{
    alpha -= mix(source.strength, 0.0, distance/source.radius);
}
With this single line:
alpha -= (1.0-min(distance/source.radius, 1.0)) * source.strength;