iPhone 4S, OpenGL ES seems too slow. What's wrong?

I draw 2560 very slim polygons per frame on an iPhone 4S using OpenGL ES. The problem is that I'm getting framerates of around 30, which is not smooth enough for my taste. I think it should be faster than that.
Is that right?
Please help me find out what can be improved.
UPDATE: I do the rendering on the main thread. Are there any recommendations on which thread to perform the rendering operations?
A bit of background:
I'm trying to make a smoothly scrolling (target is 60 FPS) waveform of size 320x200 in iPhone view coordinates, so 640x400 pixels on a retina display.
My test device is an iPhone 4S. With iOS 6 and 6.1, I could achieve this easily with normal UIKit drawing operations. However, since I updated the device to iOS 7, it got much slower, so I decided to use OpenGL ES, because I have read many times that it allows faster 2D drawing.
I implemented drawing the waveform with OpenGL ES 2.0, but now it's only slightly faster on the device than with UIKit. And as with UIKit, the speed depends heavily on the number of pixels being drawn to, which makes me wonder what's going on.
The waveform is composed of bars/rectangles, each exactly 1 pixel wide. I draw two bars per pixel column, and each bar consists of two polygons, which means I draw 1280 bars, or 2560 polygons, per frame. The polygons are extremely slim; each is at most 1 pixel wide. I think this should be no problem to draw at 60 FPS with OpenGL ES.
I draw one bar like this:
- (void) glFillRect: (Float32)x0 : (Float32)y0 : (Float32)x1 : (Float32)y1 {
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    GLfloat vertices[8];
    glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    GLfloat* vp = vertices;
    *vp++ = x0; *vp++ = y0;
    *vp++ = x1; *vp++ = y0;
    *vp++ = x0; *vp++ = y1;
    *vp++ = x1; *vp++ = y1;
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
}
The code calling the above method is below. _maxDrawing and _avgDrawing are my effects, which are set up like this at app startup time:
_maxDrawing = [[GLKBaseEffect alloc] init];
_maxDrawing.useConstantColor = GL_TRUE;
_maxDrawing.constantColor = GLKVector4Make(0.075f, 0.1f, 0.25f, 1.0f);
I later adjust the projection matrix so that my OpenGL ES drawing coordinates line up with the view coordinates of my view, which, as far as I know, is the standard approach for 2D drawing.
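That projection setup might look roughly like this (a sketch for reference only; the ortho bounds here are an assumption, not the code from the question):
    // Map GL coordinates 1:1 onto the view's point coordinates,
    // flipping y so the origin is at the top left, as in UIKit.
    _maxDrawing.transform.projectionMatrix =
        GLKMatrix4MakeOrtho(0, self.bounds.size.width,
                            self.bounds.size.height, 0, -1, 1);
    _avgDrawing.transform.projectionMatrix = _maxDrawing.transform.projectionMatrix;
The calling code then looks like this: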
[_maxDrawing prepareToDraw];
x_Cu = [self transformViewXToWaveformX:rect.origin.x];
for (Float32 x_Vu = rect.origin.x; x_Vu < viewEndX_Vu; x_Vu += onePixelInViewUnits) {
    x_Cu += onePixelInContentUnits;
    if (x_Cu < 0 || x_Cu >= waveformEndX_Cu) {
        continue;
    }
    SInt64 frameIdx = (SInt64) x_Cu;
    CBWaveformElement element = [self.dataSource getElementContainingFrame:frameIdx];
    prevMax = curMax;
    curMax = futureMax;
    futureMax = element.max;
    smoothMax = prevMax * 0.25 + curMax * 0.5 + futureMax * 0.25;
    if (smoothMax < curMax)
        smoothMax = curMax;
    Float32 barHeightHalf = smoothMax * heightScaleHalf;
    Float32 barY0 = viewHeightHalf - barHeightHalf;
    Float32 barY1 = viewHeightHalf + barHeightHalf;
    [self glFillRect: x_Vu : barY0 : x_Vu + onePixelInViewUnits : barY1];
}
[_avgDrawing prepareToDraw];
x_Cu = [self transformViewXToWaveformX:rect.origin.x];
for (Float32 x_Vu = rect.origin.x; x_Vu < viewEndX_Vu; x_Vu += onePixelInViewUnits) {
    x_Cu += onePixelInContentUnits;
    if (x_Cu < 0 || x_Cu >= waveformEndX_Cu) {
        continue;
    }
    SInt64 frameIdx = (SInt64) x_Cu;
    CBWaveformElement element = [self.dataSource getElementContainingFrame:frameIdx];
    Float32 barHeightHalf = element.avg * heightScaleHalf;
    Float32 barY0 = viewHeightHalf - barHeightHalf;
    Float32 barY1 = viewHeightHalf + barHeightHalf;
    [self glFillRect: x_Vu : barY0 : x_Vu + onePixelInViewUnits : barY1];
}
When I take out all the OpenGL calls, the execution time for one frame is around 1 ms, which means the remaining logic could theoretically run at 1000 FPS. All the other time (around 33 ms) is spent drawing.

Per Daniel's request, I'm posting this as an answer to close the question out.
In the above code, it appears that you're issuing a separate glDrawArrays() call for each box. This incurs a significant amount of overhead when there are a lot of boxes.
A more efficient approach would be to use a VBO (probably a dynamically updated one) containing all the vertices of your scene, or at least a larger group of the boxes, and to draw all of those with a single call.
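For illustration, here is a minimal sketch of that batching idea (the buffer size and helper names are mine, not from the question; it uses client-side arrays, but the same layout can be uploaded into a GL_DYNAMIC_DRAW VBO):
    // Accumulate all bar quads in one array and draw them with one call.
    // Two independent triangles (6 vertices) per bar, so no strip restarts.
    static GLfloat sBatchVertices[1280 * 6 * 2]; // 1280 bars * 6 vertices * x,y
    static GLsizei sBatchVertexCount = 0;

    static void batchAddRect(GLfloat x0, GLfloat y0, GLfloat x1, GLfloat y1) {
        GLfloat *vp = sBatchVertices + sBatchVertexCount * 2;
        *vp++ = x0; *vp++ = y0;  // triangle 1
        *vp++ = x1; *vp++ = y0;
        *vp++ = x0; *vp++ = y1;
        *vp++ = x1; *vp++ = y0;  // triangle 2
        *vp++ = x1; *vp++ = y1;
        *vp++ = x0; *vp++ = y1;
        sBatchVertexCount += 6;
    }

    static void batchFlush(void) {
        glEnableVertexAttribArray(GLKVertexAttribPosition);
        glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, sBatchVertices);
        glDrawArrays(GL_TRIANGLES, 0, sBatchVertexCount);
        glDisableVertexAttribArray(GLKVertexAttribPosition);
        sBatchVertexCount = 0;
    }
Each glFillRect: call then becomes a batchAddRect(), with a single batchFlush() after each of the two loops (the color comes from the effect, so each effect's bars get their own flush).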
As rickster points out, iOS 7 adds some nice support for instancing, which could also be a help here.
Regarding whether or not to render on a background thread: in my experience, I've usually seen significant performance boosts (10-40%, particularly on the multicore devices) when rendering my OpenGL ES scene on a background thread. Using a serial GCD queue, it's also pretty easy to do that in a safe manner.
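A rough sketch of that pattern (the queue name and context variable are placeholders; the important constraint is that all GL calls for a given EAGLContext happen on this one serial queue):
    // Create once, e.g. in -init:
    dispatch_queue_t renderQueue =
        dispatch_queue_create("com.example.openglrender", DISPATCH_QUEUE_SERIAL);

    // Then, per frame:
    dispatch_async(renderQueue, ^{
        [EAGLContext setCurrentContext:glContext];
        // ... issue all GL drawing calls for the frame here ...
        [glContext presentRenderbuffer:GL_RENDERBUFFER];
    });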

Related

OpenGL ES 1.1 2D Ring with Texture iPhone

I would appreciate some help with the following. I'm trying to render a ring shape on top of another object in OpenGL ES 1.1 for an iPhone game. The ring is essentially the difference between two circles.
I have a graphic prepared for the ring itself, which is transparent in the centre.
I had hoped to just create a circle, and apply the texture to that. The texture is a picture of the ring that occupies the full size of the texture (i.e. the outside of the ring touches the four sides of the texture). The centre of the ring is transparent in the graphic being used.
It needs to be transparent in the centre to let the object underneath show through. The ring renders correctly, but the centre is a solid black mass rather than transparent. I'd appreciate any help in solving this.
The code that I'm using to render the circle is as follows (not optimised at all: I will move the coords into proper buffers etc. later, but I have written it this way just to try and get it working...):
if (!m_circleEffects.empty())
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    int segments = 360;
    for (int i = 0; i < m_circleEffects.size(); i++)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(m_circleEffects[i].position.x, m_circleEffects[i].position.y, 0);
        glBindTexture(GL_TEXTURE_2D, m_Texture);
        float radius = 1.764706;
        GLfloat circlePoints[segments * 3];
        GLfloat textureCoords[segments * 2];
        int circCount = 0; // start at 0: starting at 3 left the first vertex
        int texCount = 0;  // uninitialized and overran both arrays at the end
        for (GLfloat angle = 0; angle < 360.0f; angle += (360.0f / segments))
        {
            GLfloat pos1 = cosf(angle * M_PI / 180);
            GLfloat pos2 = sinf(angle * M_PI / 180);
            circlePoints[circCount] = pos1 * radius;
            circlePoints[circCount+1] = pos2 * radius;
            circlePoints[circCount+2] = (float)z + 5.0f; // z is defined elsewhere
            circCount += 3;
            textureCoords[texCount] = pos1 * 0.5 + 0.5;
            textureCoords[texCount+1] = pos2 * 0.5 + 0.5;
            texCount += 2;
        }
        glVertexPointer(3, GL_FLOAT, 0, circlePoints);
        glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
        glDrawArrays(GL_TRIANGLE_FAN, 0, segments);
    }
    m_circleEffects.clear();
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I've been experimenting with trying to create a ring rather than a circle, but I haven't been able to get this right yet.
I guess that the best approach is actually to create not a circle but a ring, and then work out the equivalent texture coordinates for it. I'm still experimenting with the width of the ring, but it is likely that the ring's width will be about 1/4 of the total circle's radius.
Still a noob at OpenGL and trying to wrap my head around it. Thanks in advance for any pointers / snippets that might help.
Thanks.
What you need to do is use alpha blending, which blends colors into each other based on their alpha values (which you say are zero in the texture's centre, meaning transparent). So you have to enable blending:
glEnable(GL_BLEND);
and set the standard blending functions for using a color's alpha component as opacity:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
But always keep in mind that in order to see a transparent object correctly blended over the object behind it, you need to render your objects in back-to-front order.
However, if you only use alpha as an object/no-object indicator (values of either 0 or 1) and don't need partially transparent colors (like glass, for example), you don't need to sort your objects. In this case you should use the alpha test to discard fragments based on their alpha values, so that they don't pollute the depth buffer and prevent the object lying behind from being rendered. An alpha test set with
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
will only render fragments (~pixels) whose alpha is greater than 0.5 and will completely discard all other fragments. If you only have alpha values of 0 (no object) or 1 (object), this is exactly what you need, and in this case you don't actually need to enable blending or sort your objects back to front.
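Put together for the ring case above, the state setup might look like this (a sketch; only the ES 1.1 state calls are shown, the draw itself is unchanged):
    glEnable(GL_ALPHA_TEST);        // discard fully transparent fragments
    glAlphaFunc(GL_GREATER, 0.5f);  // keep only fragments with alpha > 0.5
    // ... bind the ring texture and draw it as before ...
    glDisable(GL_ALPHA_TEST);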

How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?

I basically want to take two images taken from the camera on the iPhone or iPad 2 and compare them to each other to see if they are pretty much the same. Obviously, due to lighting etc., the images will never be EXACTLY the same, so I would like to check for around 90% similarity.
All the other questions like this that I saw on here were either not for iOS or were about locating objects in images. I just want to see if two images are similar.
Thank you.
As a quick, simple algorithm, I'd suggest iterating through about 1% of the pixels in each image and either comparing them directly against each other or keeping a running average and then comparing the two average color values at the end.
You can look at this answer for an idea of how to determine the color of a pixel at a given position in an image. You may want to optimize it somewhat to better suit your use-case (repeatedly querying the same image), but it should provide a good starting point.
Then you can use an algorithm roughly like:
float numDifferences = 0.0f;
float totalCompares = width * height / 100.0f;
for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int *img1RGB = [image1 getRGBForX:xCoord andY:yCoord];
        int *img2RGB = [image2 getRGBForX:xCoord andY:yCoord];
        if (abs(img1RGB[0] - img2RGB[0]) > 25 || abs(img1RGB[1] - img2RGB[1]) > 25 || abs(img1RGB[2] - img2RGB[2]) > 25) {
            // one or more color components differs by 10% or more (25/255)
            numDifferences++;
        }
    }
}
if (numDifferences / totalCompares <= 0.1f) {
    // at least 90% of the sampled pixels match
}
else {
    // less than 90% of the sampled pixels match
}
Based on aroth's idea, here is my full implementation. It checks whether some random pixels are the same. For what I needed, it works flawlessly.
- (bool)isTheImage:(UIImage *)image1 apparentlyEqualToImage:(UIImage *)image2 accordingToRandomPixelsPer1:(float)pixelsPer1
{
    if (!CGSizeEqualToSize(image1.size, image2.size))
    {
        return false;
    }
    int pixelsWidth = CGImageGetWidth(image1.CGImage);
    int pixelsHeight = CGImageGetHeight(image1.CGImage);
    int pixelsToCompare = pixelsWidth * pixelsHeight * pixelsPer1;
    // Two 1x1 bitmap contexts; each sampled pixel gets rendered into them.
    // Create the color space once and release it, so it doesn't leak per call.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint32_t pixel1;
    CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    uint32_t pixel2;
    CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);
    bool isEqual = true;
    for (int i = 0; i < pixelsToCompare; i++)
    {
        // Offset the draw so the pixel at (pixelX, pixelY) lands in the 1x1 context.
        int pixelX = arc4random() % pixelsWidth;
        int pixelY = arc4random() % pixelsHeight;
        CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image1.CGImage);
        CGContextDrawImage(context2, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image2.CGImage);
        if (pixel1 != pixel2)
        {
            isEqual = false;
            break;
        }
    }
    CGContextRelease(context1);
    CGContextRelease(context2);
    return isEqual;
}
Usage:
[self isTheImage:image1
    apparentlyEqualToImage:image2
    accordingToRandomPixelsPer1:0.001]; // use a value between 0.0001 and 0.005
According to my performance tests, 0.005 (0.5% of the pixels) is the maximum value you should use. If you need more precision, just compare the whole images using this. 0.001 seems to be a safe and well-performing value. For large images (between 0.5 and 2 megapixels), I'm using 0.0001 (0.01%); it works great, is incredibly fast, and has never made a mistake for me.
But of course the mistake ratio will depend on the type of images you are using. I'm using UIWebView screenshots, and 0.0001 performs well for those, but you can probably use much less if you are comparing real photographs (in fact, even comparing a single random pixel may do). If you are dealing with very similar computer-designed images, you definitely need more precision.
Note: I'm always comparing ARGB images without taking the alpha channel into account. You may need to adapt the code if that's not exactly your case.

Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)?

Folks,
While coding up a few dials and sliders (e.g. like a big volume button one can rotate around), I found the standard CGContextAddArc(), used like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE-KR)+8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    // ... some more colour/width/etc. settings
    // ...
    CGContextAddArc(ctx, dx, dy, radius, 0, 2*M_PI, 0);
to be unbelievably slow.
On an iPad, with just a handful of filled/stroked circles, I got fewer than some 10 clean [self setNeedsDisplay] updates per second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the simulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc() which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)
    // translating/scaling would be more efficient, etc.
    //
    float x = ox + radius;
    float y = oy;
    // stupid hack - should just do a quadrant and mirror it twice.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    };
    CGContextClosePath(ctx);
};
The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. Simply moving, rotating, or scaling a view does not trigger an expensive redraw: the content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
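A minimal sketch of that idea (knobView is a placeholder name for whichever view's drawRect: renders the dial):
    // drawRect: runs once to rasterize the dial; subsequent rotations
    // only re-composite the cached layer contents on the GPU.
    - (void)setVolumeAngle:(CGFloat)radians {
        self.knobView.transform = CGAffineTransformMakeRotation(radians);
    }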
Issue partly (mostly) resolved.
Extensive benchmarking does show that AddArc is indeed slow compared to drawing a complete circle with a vector/straight-line path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Béziers.
BUT:
The code posted above did not compile as one would read it; M_PI was not the 3.14... actually expected, but had been set to (3.14... * (EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (scaled by x100).
Hence it specified the end-arc angle as a double a factor of 256 too large.
And it was the latter that made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round...).
So the issue is now understood (and I will keep an optimized/benchmarked version).
Thanks for the help!

Drawing a cube in OpenGL ES 1 for the iPhone

Hello friendly computer people,
I've been studying OpenGL with the O'Reilly book iPhone 3D Programming. Below I've posted an example from the text which shows how to draw a cone. I'm still trying to wrap my head around it, which is a bit difficult since I'm not super familiar with C++.
Anyway, what I would like to do is draw a cube. Could anyone suggest the best way to replace the following code with one that would draw a simple cube?
const float coneRadius = 0.5f;
const float coneHeight = 1.866f;
const int coneSlices = 40;
{
    // Allocate space for the cone vertices.
    m_cone.resize((coneSlices + 1) * 2);
    // Initialize the vertices of the triangle strip.
    vector<Vertex>::iterator vertex = m_cone.begin();
    const float dtheta = TwoPi / coneSlices;
    for (float theta = 0; vertex != m_cone.end(); theta += dtheta) {
        // Grayscale gradient
        float brightness = abs(sin(theta));
        vec4 color(brightness, brightness, brightness, 1);
        // Apex vertex
        vertex->Position = vec3(0, 1, 0);
        vertex->Color = color;
        vertex++;
        // Rim vertex
        vertex->Position.x = coneRadius * cos(theta);
        vertex->Position.y = 1 - coneHeight;
        vertex->Position.z = coneRadius * sin(theta);
        vertex->Color = color;
        vertex++;
    }
}
Thanks for all the help.
If all you want is an OpenGL ES 1.1 cube, I created such a sample application (it has a texture and lets you rotate the cube using your finger); you can grab the code for it here. I generated this sample for the OpenGL ES session of my course on iTunes U (I've since fixed the broken texture rendering you see in that class video).
The author is demonstrating how to build a generic 3-D engine in C++ in the book, so his code is a little more involved than mine. In this part of the code, he's looping through an angle from 0 to 2 * pi in a number of steps corresponding to coneSlices. You could replace his loop with a series of manual vertex additions corresponding to the vertices I have in my sample application in order to draw a cube instead of his cone. You'd also need to remove the code he has elsewhere for drawing the circular base of the cone.
In OpenGL ES 1 you would probably draw a cube by using glVertexPointer to submit geometry and glDrawArrays to draw it. See this tutorial series:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
OpenGL ES is a C-based library.
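To make that concrete, here is a hedged sketch of a single cube face in ES 1.1 (the vertex values are mine, not from either sample; the other five faces follow the same pattern with different coordinates):
    // One unit-cube face as a triangle strip: bottom-left, bottom-right,
    // top-left, top-right.
    static const GLfloat cubeFrontFace[] = {
        -0.5f, -0.5f,  0.5f,
         0.5f, -0.5f,  0.5f,
        -0.5f,  0.5f,  0.5f,
         0.5f,  0.5f,  0.5f,
    };

    void drawCubeFrontFace(void) {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, cubeFrontFace);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisableClientState(GL_VERTEX_ARRAY);
    }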

How to draw anti-aliased circle with iPhone OpenGL ES

There are three main ways I know of to draw a simple circle in OpenGL ES, as provided by the iPhone. They are all based on a simple algorithm (the VBO version is below).
void circleBufferData(GLenum target, float radius, GLsizei count, GLenum usage) {
    const int segments = count - 2;
    const float coefficient = 2.0f * (float) M_PI / segments;
    float *vertices = new float[2 * (segments + 2)];
    vertices[0] = 0;
    vertices[1] = 0;
    for (int i = 0; i <= segments; ++i) {
        float radians = i * coefficient;
        float j = radius * cosf(radians);
        float k = radius * sinf(radians);
        vertices[(i + 1) * 2] = j;
        vertices[(i + 1) * 2 + 1] = k;
    }
    glBufferData(target, sizeof(float) * 2 * (segments + 2), vertices, usage);
    glVertexPointer(2, GL_FLOAT, 0, 0);
    delete[] vertices;
}
The three ways that I know of to draw a simple circle are: using glDrawArrays with an array of vertices held by the application; using glDrawArrays with a vertex buffer; and drawing to a texture on initialization, then drawing the texture when rendering is requested. The first two methods I know fairly well (though I have not been able to get anti-aliasing to work). What code is involved for the last option? (I am very new to OpenGL as a whole, so a detailed explanation would be very helpful.) Which is most efficient?
Antialiasing in the iOS OpenGL ES implementation is severely limited. You won't be able to draw antialiased circles using traditional methods.
However, if the circles you're drawing aren't that large, and are filled, you could take a look at using GL_POINT_SMOOTH. It's what I used for my game, Pizarro, which involves a lot of circles. Here's a detailed writeup of my experience with drawing antialiased circles on iOS:
http://sveinbjorn.org/drawing_antialiased_circles_opengl_iphone
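For reference, a minimal sketch of the GL_POINT_SMOOTH approach (ES 1.1; the function is mine, and note that point size is clamped to an implementation-dependent maximum, so this only suits small to medium circles):
    void drawSmoothCircle(GLfloat x, GLfloat y, GLfloat diameter) {
        GLfloat center[2] = { x, y };
        glEnable(GL_POINT_SMOOTH);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glPointSize(diameter);         // point size in pixels
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, center);
        glDrawArrays(GL_POINTS, 0, 1); // one smoothed point = one filled circle
        glDisableClientState(GL_VERTEX_ARRAY);
    }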