Is CGContextAddArc really that slow (compared to a circle drawn with a few lines)? - iPhone

Folks,
While coding up a few dials and sliders (e.g. a big volume knob one can rotate around), I found the standard CGContextAddArc(), used like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextSetLineWidth(ctx, radius * (KE - KR) + 8);
    CGContextSetStrokeColorWithColor(ctx, self.foregroundColor.CGColor);
    // ... some more colour/width/etc. settings ...
    CGContextAddArc(ctx, dx, dy, radius, 0, 2 * M_PI, 0);
to be unbelievably slow.
On an iPad, with just a handful of filled/stroked circles, I get fewer than about 10 clean [self setNeedsDisplay] updates per second during a drag. A very quick hack with a hand-drawn circle (shown below) was several orders of magnitude faster. The same applies to the simulator.
Why is this? It seems to be the case for both a normal fill and various gradient fills. What am I doing wrong?
Dw.
// Stupid replacement for CGContextAddArc(), which seems to be very slow.
//
void CGContextAddCircle(CGContextRef ctx, float ox, float oy, float radius)
{
    double len = 2 * M_PI * radius;
    double step = 1.8 / len; // over the top :)

    // translating/scaling would be more efficient, etc.
    //
    float x = ox + radius;
    float y = oy;

    // stupid hack - should really just do one quadrant and mirror it.
    //
    CGContextMoveToPoint(ctx, x, y);
    for (double a = step; a < 2.0 * M_PI - step; a += step) {
        x = ox + radius * cos(a);
        y = oy + radius * sin(a);
        CGContextAddLineToPoint(ctx, x, y);
    }
    CGContextClosePath(ctx);
}

The vector drawing operations of Quartz 2D can be slow, which is why it is a good idea to redraw only when needed.
In your case, I would suggest drawing your volume button once, then transforming the UIView or CALayer into which you've drawn the button using a rotational transform. By simply moving, rotating, or scaling a view, you do not trigger an expensive redraw. The content is already cached as a texture, and the GPU can quickly manipulate and composite this rasterized content on top of your other views.
You'll find that avoiding redrawing in this manner will yield much improved performance.
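For example, a minimal sketch of that approach (dialView and angle are placeholder names for your own dial view and the rotation you derive from the drag gesture; they are not from the question):

// Sketch: rotate the cached rendering instead of redrawing it.
// The dial's content was drawn once; only its transform changes here,
// so drawRect: is never re-invoked during the drag.
- (void)updateDialRotation:(CGFloat)angle
{
    self.dialView.transform = CGAffineTransformMakeRotation(angle);
    // Equivalent CALayer form:
    // self.dialView.layer.transform = CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
}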

Issue partly (mostly) resolved.
Extensive benchmarking does show that CGContextAddArc is indeed slow compared to drawing a complete circle with a straight-line vector path, for circles in the 100-200 pixel radius range. For partial circles the effect is much less pronounced; I am wondering if this is tied to the number of Bézier segments used.
BUT:
The code as posted did not compile as one would read it: M_PI was not the 3.14... value actually expected, but had been set to (3.14... * ((EVP_ARM7_ADJUST[(PLTF)])) by an included fixed-point DSP library (scaled by x100).
Hence the end angle of the arc was specified as a double roughly a factor of 256 too large.
And it was the latter which made the issue so noticeable (evidently the underlying implementation just keeps going round and round and round...).
So the issue is now understood (and I will keep an optimized/benchmarked version).
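As a guard against that kind of macro collision, one option (a sketch, not something from the original post) is to use a locally defined constant for the full-circle angle instead of relying on M_PI:

// Hypothetical guard: avoid depending on M_PI when a third-party header
// may have redefined it. kTwoPi is a local constant, not a system symbol.
static const CGFloat kTwoPi = 6.28318530717958647692;

CGContextAddArc(ctx, dx, dy, radius, 0, kTwoPi, 0);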
Thanks for the help!

Related

Convert screen coordinates to Metal's Normalized Device Coordinates

I am trying to render a 2D triangle using user touches. So, I will let a user touch three points on the screen and those points will be used as vertices of a triangle.
You're already aware that you need to return clip-space coordinates (technically not normalized device coordinates) from your vertex shader. The question is how and where to go from UIKit coordinates to Metal's clip-space coordinates.
Let's start by defining these different spaces. Note that below, I actually am using NDC coordinates for the sake of simplicity, since in this particular case, we aren't introducing perspective by returning vertex positions with w != 1. (Here I'm referring to the w coordinate of the clip-space position; in the following discussion, w always refers to the view width).
We pass the vertices into our vertex shader in whatever space is convenient (this is often called model space). Since we're working in 2D, we don't need the usual series of transformations to world space, then eye space. Essentially, the coordinates of the UIKit view are our model space, world space, and eye space all in one.
We need some kind of orthographic projection matrix to move from this space into clip space. If we strip out the unnecessary parts related to the z axis and assume that our view bounds' origin is (0, 0), we come up with the following transformation (where w and h are the view's width and height):

| 2/w    0    0   -1 |
|  0   -2/h   0    1 |
|  0     0    1    0 |
|  0     0    0    1 |

i.e. clipX = (2 * x / w) - 1 and clipY = 1 - (2 * y / h).
We could pass this matrix into our vertex shader, or we could do the transformation prior to sending the vertices to the GPU. Considering how little data is involved, it really doesn't matter at this point. In fact, using a matrix at all is a little wasteful, since we can just transform each coordinate with a couple of multiplies and an add. Here's how that might look in a Metal vertex function:
float2 inverseViewSize = float2(1.0f / width, 1.0f / height); // passed in a buffer
float clipX = (2.0f * in.position.x * inverseViewSize.x) - 1.0f;
float clipY = (2.0f * -in.position.y * inverseViewSize.y) + 1.0f;
float4 clipPosition = float4(clipX, clipY, 0.0f, 1.0f);
Just to verify that we get the correct results from this transformation, let's plug in the upper-left and lower-right points of our view to ensure they wind up at the extremities of clip space (by linearity, if these points transform correctly, so will all others): the upper-left corner (0, 0) maps to (-1, 1), and the lower-right corner (w, h) maps to (1, -1).
These points appear correct, so we're done. If you're concerned about the apparent distortion introduced by this transformation, note that it is exactly canceled by the viewport transformation that happens prior to rasterization.
Here is a function that will convert UIKit view-based coordinates to Metal's clip-space coordinates (based on warrenm's answer). It can be added directly to a shader file and called from the vertex shader function.
float2 convert_to_metal_coordinates(float2 point, float2 viewSize) {
    float2 inverseViewSize = 1.0f / viewSize;
    float clipX = (2.0f * point.x * inverseViewSize.x) - 1.0f;
    float clipY = (2.0f * -point.y * inverseViewSize.y) + 1.0f;
    return float2(clipX, clipY);
}
You'll want to pass the viewSize (UIKit's bounds) to Metal somehow, say via a buffer parameter on the vertex function.
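On the CPU side, one way to do that (a sketch; renderEncoder, the buffer index, and the variable names are placeholders, not from the original answers) is to copy the view's bounds size into the encoder with setVertexBytes:length:atIndex::

// Sketch: bind the view size so the vertex function can read it.
// Index 1 is arbitrary and must match the [[buffer(1)]] attribute
// declared on the corresponding parameter of the vertex function.
vector_float2 viewSize = { (float)self.view.bounds.size.width,
                           (float)self.view.bounds.size.height };
[renderEncoder setVertexBytes:&viewSize
                       length:sizeof(viewSize)
                      atIndex:1];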
Here is Thompsonmachine's code translated to Swift, using SIMD values, which is what I need to pass to my shaders.
func convertToMetalCoordinates(point: CGPoint, viewSize: CGSize) -> simd_float2 {
    let inverseViewSize = CGSize(width: 1.0 / viewSize.width, height: 1.0 / viewSize.height)
    let clipX = Float((2.0 * point.x * inverseViewSize.width) - 1.0)
    let clipY = Float((2.0 * -point.y * inverseViewSize.height) + 1.0)
    return simd_float2(clipX, clipY)
}

Game programming: difficult mathematical issue

The question I am about to ask could be somewhat challenging. I will try to make this as clear and cohesive as possible.
I am currently making a game, in which I have a 'laser ring,' as shown here:
This laser ring, when prompted, will fire a 'grappling hook', which is simply the image shown below. This image's frame.width property is adjusted to make it fire (lengthen) and retract (shorten). It starts at a width of 0, and as the frames progress, it lengthens until reaching the desired point.
This grappling hook, when fired, should line up with the ring so that they appear to be one item. Refer to the image below for clarity:
*Note that the grappling hook's width changes almost every frame, so a constant width cannot be assumed.
Something else to note is that, for reasons that are difficult to explain, I can only access the frame.center property of the grappling hook and not the frame.origin property.
So, my question to you all is this: How can I, accessing only the frame.center.x and frame.center.y properties of the grappling hook, place it around the laser ring in such a way that it appears to be seamlessly extending from the ring as shown in the above image - presumably calculated based on the angle and width of the grappling hook at any given frame?
Any help is immensely appreciated.
OK, I've done this exact same thing in my own app.
The trick I used to make it easier was to have a function that calculates the "unitVector" of the line,
i.e. the vector change along the line for a line length of 1.
It just uses simple Pythagoras...
- (CGSize)unitVectorFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    // distance between start and end
    float dX = end.x - start.x;
    float dY = end.y - start.y;
    float distance = sqrtf(dX * dX + dY * dY); // simple Pythagoras

    // the unit vector is just the difference divided by the distance
    CGSize unitVector = CGSizeMake(dX / distance, dY / distance);
    return unitVector;
}
Note: for the distance it doesn't matter which way round the start and end are, since squaring the differences only gives positive values (the unit vector itself does flip direction if you swap them, though).
Now you can use this vector to get to any point along the line between the two points (centre of the circle and target).
So, the start of the line is ...
CGPoint center = // center of circle
CGPoint target = // target
float radius = // radius of circle

float dX = center.x - target.x;
float dY = center.y - target.y;
float distance = sqrtf(dX * dX + dY * dY);

CGSize unitVector = [self unitVectorFromPoint:center toPoint:target];

CGPoint startOfLaser = CGPointMake(center.x + unitVector.width * radius, center.y + unitVector.height * radius);
CGPoint midPointOfLaser = CGPointMake(center.x + unitVector.width * distance * 0.5, center.y + unitVector.height * distance * 0.5);
This just multiplies the unit vector by how far you want to go (radius) to get to the point on the line at that distance.
Hope this helps :D
If you want the mid point between the two points then you just need to change "radius" to be the distance that you want to calculate and it will give you the mid point. (and so on).
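To tie this back to the question's constraint (only the hook's center is accessible), here is a sketch built on the same unit-vector idea. It assumes the hook image extends along its own x axis at zero rotation; hook, hookWidth, ringCenter, target, and radius are placeholder names, not from the original answer:

// Sketch: position the grappling hook using only its center.
// The hook's center sits half its current width beyond the point
// where the laser leaves the ring; its rotation follows the unit vector.
CGSize u = [self unitVectorFromPoint:ringCenter toPoint:target];
CGPoint edge = CGPointMake(ringCenter.x + u.width * radius,
                           ringCenter.y + u.height * radius);
hook.center = CGPointMake(edge.x + u.width * hookWidth * 0.5f,
                          edge.y + u.height * hookWidth * 0.5f);
hook.transform = CGAffineTransformMakeRotation(atan2f(u.height, u.width));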

OpenGL ES 1.1 2D Ring with Texture iPhone

I would appreciate some help with the following. I'm trying to render a ring shape on top of another object in OpenGL ES 1.1 for an iPhone game. The ring is essentially the difference between two circles.
I have a graphic prepared for the ring itself, which is transparent in the centre.
I had hoped to just create a circle, and apply the texture to that. The texture is a picture of the ring that occupies the full size of the texture (i.e. the outside of the ring touches the four sides of the texture). The centre of the ring is transparent in the graphic being used.
It needs to be transparent in the centre to let the object underneath show through. The ring is rendering correctly, but is a solid black mass in the centre, not transparent. I'd appreciate any help to solve this.
The code that I'm using to render the circle is as follows (not optimised at all: I will move the coords into proper buffers etc. later, but I have written it this way just to try and get it working...)
if (!m_circleEffects.empty())
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);

    int segments = 360;

    for (int i = 0; i < m_circleEffects.size(); i++)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(m_circleEffects[i].position.x, m_circleEffects[i].position.y, 0);
        glBindTexture(GL_TEXTURE_2D, m_Texture);

        float radius = 1.764706;
        GLfloat circlePoints[segments * 3];
        GLfloat textureCoords[segments * 2];
        int circCount = 0;
        int texCount = 0;

        for (GLfloat angle = 0; angle < 360.0f; angle += (360.0f / segments))
        {
            GLfloat pos1 = cosf(angle * M_PI / 180);
            GLfloat pos2 = sinf(angle * M_PI / 180);

            circlePoints[circCount] = pos1 * radius;
            circlePoints[circCount + 1] = pos2 * radius;
            circlePoints[circCount + 2] = (float)z + 5.0f;
            circCount += 3;

            textureCoords[texCount] = pos1 * 0.5 + 0.5;
            textureCoords[texCount + 1] = pos2 * 0.5 + 0.5;
            texCount += 2;
        }

        glVertexPointer(3, GL_FLOAT, 0, circlePoints);
        glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
        glDrawArrays(GL_TRIANGLE_FAN, 0, segments);
    }

    m_circleEffects.clear();
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_DEPTH_TEST);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I've been experimenting with trying to create a ring rather than a circle, but I haven't been able to get this right yet.
I guess the best approach is actually not to create a circle but a ring, and then work out the equivalent texture coordinates as well. I'm still experimenting with the width of the ring, but the ring's thickness is likely about 1/4 of the total circle's width.
Still a noob at OpenGL and trying to wrap my head around it. Thanks in advance for any pointers / snippets that might help.
Thanks.
What you need to do is use alpha blending, which blends colors into each other based on their alpha values (which you say are zero in the texture center, meaning transparent). So you have to enable blending by:
glEnable(GL_BLEND);
and set the standard blending functions for using a color's alpha component as opacity:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
But always keep in mind that, in order to see a transparent object correctly blended over the object behind it, you need to render your objects in back-to-front order.
But if you only use alpha as an object/no-object indicator (only values of either 0 or 1) and don't need partially transparent colors (like glass, for example), you don't need to sort your objects. In this case you should use the alpha test to discard fragments based on their alpha values, so that they don't pollute the depth buffer and prevent the object lying behind from being rendered. An alpha test set with
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
will only render fragments (~pixels) that have an alpha of more than 0.5 and will completely discard all other fragments. If you only have alpha values of 0 (no object) or 1 (object), this is exactly what you need, and in this case you don't actually need to enable blending or even sort your objects back to front.
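In the render loop from the question, that state could be set just before drawing the ring and reset afterwards. A sketch, assuming the ring texture really was loaded with an alpha channel (e.g. as GL_RGBA):

// Sketch: alpha-test path for a 0/1-alpha ring texture (OpenGL ES 1.1).
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);

// ... glDrawArrays(GL_TRIANGLE_FAN, 0, segments); as in the question ...

glDisable(GL_ALPHA_TEST);

// Alternatively, for smooth (anti-aliased) edges, use blending instead,
// and draw the ring after the object underneath it:
// glEnable(GL_BLEND);
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);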

OpenGL-ES change angle of vision in frustum

Let's see if I can explain myself.
When you set up the glFrustum view it will give the perspective effect. Near things near & big... far things far & small. Everything looks like it shrinks along its Z axis to create this effect.
Is there a way to make it NOT shrink that much?
That is, to bring the perspective view closer to an orthographic view, but not so close that the perspective is lost completely?
Thanks
The angle of view is determined by two parameters: the height of the near clipping plane (set by the top and bottom parameters) and the distance to the near clipping plane (set by zNear).
To make a perspective matrix that doesn't shrink the image so much, you can set a smaller height or a more distant near clipping plane.
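For example (a sketch with made-up numbers, using the OpenGL ES 1.x glFrustumf call): keeping the same near-plane window but pushing the near plane further away narrows the field of view, so the perspective looks flatter.

/* Two alternative projections for the same near-plane window height
 * (pick one or the other); only the near distance differs.
 * Vertical half-angle = atanf(top / zNear). */
glFrustumf(-1.0f, 1.0f, -0.75f, 0.75f, 2.0f, 100.0f);  /* ~41 deg vertical FOV */
glFrustumf(-1.0f, 1.0f, -0.75f, 0.75f, 8.0f, 100.0f);  /* ~11 deg: flatter, closer to orthographic */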
The thing is to understand that orthographic view is a view with a FOV of zero and a camera position at infinity. So there is a way to approach orthographic view by reducing FOV and moving the camera far away.
I can suggest the following code, which computes a near-orthographic projection from a given theta FOV value. I use it in a personal project, though that project uses custom matrix classes rather than glOrtho and glFrustum, so this translation might contain small errors. I hope it gives a good general idea, though.
void SetFov(int width, int height, float theta)
{
    float near = -(width + height);
    float far = width + height;

    /* Set the projection matrix */
    if (theta < 1e-4f)
    {
        /* The easy way: purely orthogonal projection. */
        glOrtho(0, width, 0, height, near, far);
        return;
    }

    /* Compute a view that approximates the glOrtho view when theta
     * approaches zero. This view ensures that the z = 0 plane fills
     * the screen. */
    float t1 = tanf(theta / 2);
    float t2 = t1 * width / height;
    float dist = width / (2.0f * t1);

    near += dist;
    far += dist;

    if (near <= 0.0f)
    {
        far -= (near - 1.0f);
        near = 1.0f;
    }

    glTranslatef(-0.5f * width, -0.5f * height, -dist);
    glFrustum(-near * t1, near * t1, -near * t2, near * t2, near, far);
}

Car turning circle and moving the sprite

I would like to use Cocos2d on the iPhone to draw a 2D car and make it steer from left to right in a natural way.
Here is what I tried:
Calculate the angle of the wheels and just move the car to the destination point the wheels point at. But this creates a very unnatural feel; the car drifts half the time.
After that I started some research on how to get a turning circle from a car, which meant that I needed a couple of constants like wheelbase and the width of the car.
After a lot of research, I created the following code:
float steerAngle = 30; // in degrees
float speed = 20;
float carWidth = 1.8f; // as in 1.8 meters
float wheelBase = 3.5f; // as in 3.5 meters

// tanf() expects radians, so convert the steering angle first
float steerRadians = CC_DEGREES_TO_RADIANS(steerAngle);
float x = wheelBase / fabsf(tanf(steerRadians)) + carWidth / 2;
float wheelBaseHalf = wheelBase / 2;
float r = sqrtf(x * x + wheelBaseHalf * wheelBaseHalf);

float theta = speed * 1 / r; // angular speed, with a time step of 1
if (steerAngle < 0.0f)
    theta = theta * -1;

drawCircle(CGPointMake(carPosition.x - r, carPosition.y),
           r, CC_DEGREES_TO_RADIANS(180), 50, NO);
The first couple of lines are my constants. carPosition is of the type CGPoint. After that I try to draw a circle which shows the turning circle of my car, but the circle it draws is far too small. I can just make my constants bigger, to make the circle bigger, but then I would still need to know how to move my sprite on this circle.
I tried following a .NET tutorial I found on the subject, but I can't completely convert it because it uses matrices, which aren't supported by Cocoa.
Can someone give me a couple of pointers on how to start this? I have been looking for example code, but I can't find any.
EDIT: after the comments given below
I corrected my constants: my wheelBase is now 50 (the sprite is 50 px high) and my carWidth is 30 (the sprite is 30 px wide).
But now I have the problem that when my car does its first 'tick', the rotation is correct (and so is the placement), but after that the calculations seem wrong.
The middle of the turning circle is moved instead of kept at its original position. What I need (I think) is to recalculate the centre of the turning circle for each new angle of the car. I would think this is easy, because I have the radius and the turning angle, but I can't seem to figure out how to keep the car moving in a nice circle.
Any more pointers?
You have the right idea. The constants are the problem in this case. You need to specify wheelBase and carWidth in units that match your view size. For example, if the image of your car on the screen has a wheel base of 30 pixels, you would use 30 for the wheelBase variable.
This explains why your on-screen circles are too small. Cocoa is trying to draw circles for a tiny little car which is only 1.8 pixels wide!
Now, for the matter of moving your car along the circle:
The theta variable you calculate in the code above is a rotational speed, which is what you would use to move the car around the center point of that circle:
Let's assume that your speed variable is in pixels per second, to make the calculations easier. With that assumption in place, you would simply execute the following code once every second:
// calculate the new position of the car
newCarPosition.x = (carPosition.x - r) + r*cos(theta);
newCarPosition.y = carPosition.y + r*sin(theta);
// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
Note: I'm not sure what the correct method is to rotate your car's image, so I just used rotateByAngle: to get the point across. I hope it helps!
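In Cocos2d specifically, a sprite's rotation property is set in degrees (positive is clockwise), so the rotateByAngle: pseudo-call might translate to something like the sketch below; car is assumed to be your CCSprite, and theta is in radians as computed above:

// Sketch: accumulate the per-tick rotation on the sprite.
// The sign may need flipping depending on your coordinate conventions.
car.rotation = car.rotation - CC_RADIANS_TO_DEGREES(theta);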
update (after comments):
I hadn't thought about the center of the turning circle moving with the car. The original code doesn't take into account the angle that the car is already rotated to. I would change it as follows:
...
if (steerAngle < 0.0f)
    theta = theta * -1;

// calculate the center of the turning circle,
// taking into account the rotation of the car
circleCenter.x = carPosition.x - r * cos(carAngle);
circleCenter.y = carPosition.y + r * sin(carAngle);

// draw the turning circle
drawCircle(circleCenter, r, CC_DEGREES_TO_RADIANS(180), 50, NO);

// calculate the new position of the car
newCarPosition.x = circleCenter.x + r * cos(theta);
newCarPosition.y = circleCenter.y + r * sin(theta);

// rotate the car appropriately (pseudo-code)
[car rotateByAngle:theta];
carAngle = carAngle + theta;
This should keep the center of the turning circle at the appropriate point, even if the car has been rotated.