Drawing a cube in OpenGL ES 1 for the iPhone

Hello friendly computer people,
I've been studying OpenGL with the book iPhone 3D Programming from O'Reilly. Below I've posted an example from the text which shows how to draw a cone. I'm still trying to wrap my head around it, which is a bit difficult since I'm not super familiar with C++.
Anyway, what I would like to do is draw a cube. Could anyone suggest the best way to replace the following code with one that would draw a simple cube?
const float coneRadius = 0.5f;
const float coneHeight = 1.866f;
const int coneSlices = 40;
{
    // Allocate space for the cone vertices.
    m_cone.resize((coneSlices + 1) * 2);

    // Initialize the vertices of the triangle strip.
    vector<Vertex>::iterator vertex = m_cone.begin();
    const float dtheta = TwoPi / coneSlices;
    for (float theta = 0; vertex != m_cone.end(); theta += dtheta) {
        // Grayscale gradient
        float brightness = abs(sin(theta));
        vec4 color(brightness, brightness, brightness, 1);

        // Apex vertex
        vertex->Position = vec3(0, 1, 0);
        vertex->Color = color;
        vertex++;

        // Rim vertex
        vertex->Position.x = coneRadius * cos(theta);
        vertex->Position.y = 1 - coneHeight;
        vertex->Position.z = coneRadius * sin(theta);
        vertex->Color = color;
        vertex++;
    }
}
Thanks for all the help.

If all you want is an OpenGL ES 1.1 cube, I created such a sample application (that has texture and lets you rotate it using your finger) that you can grab the code for here. I generated this sample for the OpenGL ES session of my course on iTunes U (I've since fixed the broken texture rendering you see in that class video).
The author is demonstrating how to build a generic 3-D engine in C++ in the book, so his code is a little more involved than mine. In this part of the code, he's looping through an angle from 0 to 2 * pi in a number of steps corresponding to coneSlices. You could replace his loop with a series of manual vertex additions corresponding to the vertices I have in my sample application in order to draw a cube instead of his cone. You'd also need to remove the code he has elsewhere for drawing the circular base of the cone.

In OpenGL ES 1 you would typically draw a cube using glVertexPointer to submit geometry and glDrawArrays to draw it. See these tutorials:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
OpenGL ES is a C-based library.
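To make that concrete, here is a minimal, untested sketch of a unit cube drawn that way. The eight corner positions are listed once and reused through a small index list, so it uses glDrawElements (a sibling of glDrawArrays that takes indices); the function and array names are mine, and all color, normal, and matrix setup is omitted.

#include <OpenGLES/ES1/gl.h>

// Eight corners of a unit cube centered at the origin (x, y, z per vertex).
static const GLfloat cubeVertices[] = {
    -0.5f, -0.5f, -0.5f,   // 0: left,  bottom, back
     0.5f, -0.5f, -0.5f,   // 1: right, bottom, back
     0.5f,  0.5f, -0.5f,   // 2: right, top,    back
    -0.5f,  0.5f, -0.5f,   // 3: left,  top,    back
    -0.5f, -0.5f,  0.5f,   // 4: left,  bottom, front
     0.5f, -0.5f,  0.5f,   // 5: right, bottom, front
     0.5f,  0.5f,  0.5f,   // 6: right, top,    front
    -0.5f,  0.5f,  0.5f    // 7: left,  top,    front
};

// Two triangles per face, wound counter-clockwise when seen from outside.
static const GLubyte cubeIndices[] = {
    4, 5, 6,  4, 6, 7,   // front  (+z)
    1, 0, 3,  1, 3, 2,   // back   (-z)
    0, 4, 7,  0, 7, 3,   // left   (-x)
    5, 1, 2,  5, 2, 6,   // right  (+x)
    7, 6, 2,  7, 2, 3,   // top    (+y)
    0, 1, 5,  0, 5, 4    // bottom (-y)
};

void drawCube(void) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, cubeVertices);                    // 3 floats per vertex
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, cubeIndices);  // 12 triangles
    glDisableClientState(GL_VERTEX_ARRAY);
}

Without per-face colors or lighting the cube will render as a flat silhouette, so in practice you would also supply a color array via glColorPointer (or normals and a light) and rotate it, as the linked tutorials do.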

Related

Unity - Tinting mesh UVs

I followed the lovely tutorial by Sebastian Lague (link to tutorial). I applied it to my own scenario, where I want to generate landmass, and ended up with a cool result:
As you can see in the image, there is a grid; this is simply a texture that is repeated (tiled) a number of times and applied to the generated mesh. The code for that looks like this:
Vector2[] uvs = new Vector2[vertices.Count];
for (int i = 0; i < vertices.Count; i++)
{
    float percentX = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].x) * tileAmount;
    float percentY = Mathf.InverseLerp(-map.GetLength(0) / 2 * squareSize, map.GetLength(0) / 2 * squareSize, vertices[i].z) * tileAmount;
    uvs[i] = new Vector2(percentX, percentY);
}
mesh.uv = uvs;
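Restating what the loop computes: Mathf.InverseLerp(a, b, v) returns the clamped fraction

$$\operatorname{InverseLerp}(a, b, v) = \operatorname{clamp}_{[0,1]}\!\left(\frac{v - a}{b - a}\right),$$

and here a and b are the map's half-extents, roughly ±(map.GetLength(0) / 2) · squareSize, so each vertex's x and z coordinates are mapped linearly onto [0, tileAmount] in UV space; in other words, the texture repeats tileAmount times across the mesh.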
I am wondering if there is any way to tint each tile a different shade during this process, either in this script or using a shader.
Vertex colors
They will be automatically interpolated for smooth gradients. If you don't want that, you'll have to build the mesh so that each square has separate vertices, not shared with the neighboring squares.

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation:
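Written out with column vectors (and with w and h standing for the view's width and height in points), that transformation amounts to:

$$\begin{bmatrix} x_{\text{clip}} \\ y_{\text{clip}} \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{2}{w} & 0 & 0 & -1 \\ 0 & -\frac{2}{h} & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 0 \\ 1 \end{bmatrix}$$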
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others):
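Using the formulas above, the upper-left corner $(0, 0)$ maps to

$$\left(\frac{2 \cdot 0}{w} - 1,\; -\frac{2 \cdot 0}{h} + 1\right) = (-1,\; 1),$$

and the lower-right corner $(w, h)$ maps to

$$\left(\frac{2w}{w} - 1,\; -\frac{2h}{h} + 1\right) = (1,\; -1),$$

i.e. the top-left and bottom-right of clip space, respectively.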
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
float2 inverseViewSize = 1 / viewSize;
float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
I translated Thompsonmachine's code to Swift, using SIMD values, which is what I needed to pass to my shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
return simd_float2(clipX, clipY)
}

Microsoft Kinect V2 + Unity 3D Depth = Warping

I've been working on a scene in Unity3D where I have the Kinect V2 depth information coming in at 512 x 424, and I'm converting that in real time to a mesh that is also 512 x 424. So there is a 1:1 ratio of pixel data (depth) and vertices (mesh).
My end goal is to make the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' with the Depth.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same case for the Kinect Unity demo supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices in this case) is mapped out in a 1 by 1 fashion.
I'm not sure if it's because I need to process the data from the DepthFrame before rendering it to the scene? Or if there's some additional step I've missed to get the true representation of my room? Because it looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially since you don't know the layout of the room I'm sat in :/ Here's a side view too. Can you see the warping on the left? Use the green lines as a reference; these are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple to be honest. I'm just grabbing the depth data straight from the Kinect SDK, and placing it into a point cloud mesh on the Z axis.
// Called on application start
void Start() {
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

// Called once per frame
void Update() {
    if (_Reader != null) {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}

// Update the point cloud in the scene
void UpdateScene() {
    const float depthAdjust = 0.1f;  // scale the raw depth value before using it as the Z coordinate
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(points[index].x, points[index].y, _Data[index] * depthAdjust);
            points[index] = new_pos;
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly: I need to use the CoordinateMapper to map my DepthFrame into CameraSpace.
Currently, my code assumes an orthogonal depth instead of using a perspective depth camera. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
// _CameraSpace is declared as a class-level field: CameraSpacePoint[] _CameraSpace;

// Called once per frame
void Update() {
    if (_Reader != null) {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);
        UpdateScene();
    }
}

// Update the point cloud in the scene using the mapped camera-space points
void UpdateScene() {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(_CameraSpace[index].X, _CameraSpace[index].Y, _CameraSpace[index].Z);
            points[index] = new_pos;
        }
    }
}

OpenGL ES orthographic projection matrix not working

So my goal is simple. I am trying to get my coordinate space set up, so that the origin is at the bottom left of the screen, and the top right coordinates are (screen.width, screen.height).
Also, this is a COMPLETELY 2D engine, so no 3D stuff is needed. I just need those coordinates to work.
Right now I am trying to plot a couple of points on the screen, mostly at places like (0, 0), (width, height), (width / 2, height / 2), etc., so I can see if things are working right.
Unfortunately, right now my efforts to get this going are in vain; instead of having multiple points, I have one in the dead center of the device (obviously they are all overlapping).
So here is my code; what exactly am I doing wrong?
Vertex Shader
uniform vec4 color;
uniform float pointSize;
uniform mat4 orthoMatrix;

attribute vec3 position;

varying vec4 outColor;
varying vec3 center;

void main() {
    center = position;
    outColor = color;
    gl_PointSize = pointSize;
    gl_Position = vec4(position, 1) * orthoMatrix;
}
And here is how I make the matrix. I am using GLKit, so it is theoretically making the orthographic matrix for me. However, if you have a custom function you think would do this better, that is fine! I can use it too.
var width:Int32 = 0
var height:Int32 = 0
var matrix:[GLfloat] = []
func onload()
{
    width = Int32(self.view.bounds.size.width)
    height = Int32(self.view.bounds.size.height)
    glViewport(0, 0, GLsizei(height), GLsizei(width))
    matrix = glkitmatrixtoarray(GLKMatrix4MakeOrtho(0, GLfloat(width), 0, GLfloat(height), -1, 1))
}

func glkitmatrixtoarray(mat: GLKMatrix4) -> [GLfloat]
{
    var buildme: [GLfloat] = []
    buildme.append(mat.m.0)
    buildme.append(mat.m.1)
    buildme.append(mat.m.3)
    buildme.append(mat.m.4)
    buildme.append(mat.m.5)
    buildme.append(mat.m.6)
    buildme.append(mat.m.7)
    buildme.append(mat.m.8)
    buildme.append(mat.m.9)
    buildme.append(mat.m.10)
    buildme.append(mat.m.11)
    buildme.append(mat.m.12)
    buildme.append(mat.m.13)
    buildme.append(mat.m.15)
    return buildme
}
Passing it over to the shader
func draw()
{
    // Setting up shader for use
    let loc3 = glGetUniformLocation(program, "orthoMatrix")
    if (loc3 != -1)
    {
        //glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), &matrix[0])
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_TRUE), &matrix[0])
    }
    // Passing points and extra data
}
Note: If you remove the multiplication with the matrix in the vertex shader, the points show up; however, obviously most of them are off-screen because of OpenGL's default coordinate system.
Also: I have tried using this function rather than GLKit's method. Same results. Perhaps I am not passing the right things into the matrix-making function, or maybe I'm not getting it to the shader properly.
EDIT: I have put up the project file in case you want to see how everything goes.
OK, I finally figured this out! What I did:
1. I miscounted when turning the GLKit matrix into an array.
2. When passing the matrix as a uniform, you actually want the address of the whole array, not just the first element.
3. GL_TRUE is not a valid transpose argument when passing the matrix to the shader; OpenGL ES requires GL_FALSE.
Thank you, reto matic

How to draw anti-aliased circle with iPhone OpenGL ES

There are three main ways I know of to draw a simple circle in OpenGL ES, as provided by the iPhone. They are all based on a simple algorithm (the VBO version is below).
void circleBufferData(GLenum target, float radius, GLsizei count, GLenum usage) {
    const int segments = count - 2;
    const float coefficient = 2.0f * (float) M_PI / segments;
    float *vertices = new float[2 * (segments + 2)];
    vertices[0] = 0;
    vertices[1] = 0;
    for (int i = 0; i <= segments; ++i) {
        float radians = i * coefficient;
        float j = radius * cosf(radians);
        float k = radius * sinf(radians);
        vertices[(i + 1) * 2] = j;
        vertices[(i + 1) * 2 + 1] = k;
    }
    glBufferData(target, sizeof(float) * 2 * (segments + 2), vertices, usage);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    delete[] vertices;
}
The three ways that I know of to draw a simple circle are: using glDrawArrays from an array of vertices held by the application; using glDrawArrays from a vertex buffer; and drawing to a texture on initialization and drawing the texture when rendering is requested. The first two methods I know fairly well (though I have not been able to get anti-aliasing to work). What code is involved for the last option (I am very new to OpenGL as a whole, so a detailed explanation would be very helpful)? Which is most efficient?
Antialiasing in the iOS OpenGL ES implementation is severely limited. You won't be able to draw antialiased circles using traditional methods.
However, if the circles you're drawing aren't that large, and are filled, you could take a look at using GL_POINT_SMOOTH. It's what I used for my game, Pizarro, which involves a lot of circles. Here's a detailed writeup of my experience with drawing antialiased circles on the iOS:
http://sveinbjorn.org/drawing_antialiased_circles_opengl_iphone
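For reference, here is a rough sketch of that point-smoothing approach in OpenGL ES 1.1. The function and variable names are mine, the matrices are assumed to be already set up, and whether the edge actually comes out antialiased (and how large the point may be) depends on the hardware's point-size limits, so treat it as a starting point rather than a drop-in solution.

#include <OpenGLES/ES1/gl.h>

// Draw a filled circle of the given diameter (in pixels) centered at (x, y).
void drawSmoothCircle(GLfloat x, GLfloat y, GLfloat diameter) {
    const GLfloat center[2] = { x, y };

    glEnable(GL_POINT_SMOOTH);                          // round, antialiased points (where supported)
    glEnable(GL_BLEND);                                 // blending is needed for the soft edge
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glPointSize(diameter);                              // clamped to the implementation's point-size range

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, center);
    glDrawArrays(GL_POINTS, 0, 1);                      // one point = one filled circle
    glDisableClientState(GL_VERTEX_ARRAY);
}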