How do I draw this exact gradient on the iPhone?

The gradient in question is Figure 8-5 from the Quartz 2D Programming Guide, "A radial gradient that varies between a point and a circle".
I'm trying to build a CGGradient object (not a CGShading object, which might be the problem) like so:
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGFloat colors[] =
{
    0, 0, 0, 0.9,
    0, 0, 0, 0
};
// Each stop is 4 components (RGBA), so the stop count is
// components / 4, not components / sizeof(CGFloat).
CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, colors, NULL, sizeof(colors) / (sizeof(colors[0]) * 4));
CGContextClipToRect(context, rect);
CGContextDrawRadialGradient(context, gradient, startCenter, startRadius, endCenter, endRadius, gradientDrawingOptions);
CGGradientRelease(gradient);
CGColorSpaceRelease(rgb);
Of course, that isn't exactly right; the centre points and radii are correct, but the actual gradient doesn't look the same. I just wish Apple had provided the source code for each example! >:(

UPDATE: These color values add the shading on top of other content (drawing from a point out to a circle):
CGFloat colors[] =
{
    0.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
These color values come pretty close (drawing from a point out to a circle):
CGFloat colors[] =
{
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
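For anyone wiring this up end to end, here is how the second set of values slots into the original drawing code. The center and the 60-point end radius are placeholder values; the real ones depend on your view.
// A sketch combining the updated color stops with the original drawing
// code. The center and end radius are made-up example values; substitute
// your own. Assumes the same `context` as in the snippet above.
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGFloat colors[] =
{
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 0.0f, 0.75f
};
CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, colors, NULL, 2);
CGPoint center = CGPointMake(60.0f, 60.0f);
CGContextDrawRadialGradient(context, gradient,
                            center, 0.0f,   // start: a point
                            center, 60.0f,  // end: a circle
                            kCGGradientDrawsAfterEndLocation);
CGGradientRelease(gradient);
CGColorSpaceRelease(rgb);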

Related

Applying Translations to Entire OpenGL ES Scene - iPhone

I have an OpenGL ES scene which is made up of about 20 objects. In the render method for each object I have code which scales, rotates and positions (using glMultMatrix) that object in the correct place in the scene (see code below).
My question is: how can I then apply a transformation to the entire scene as a whole? E.g. scale / enlarge the entire scene by 2x?
glPushMatrix();
glLoadIdentity();

//Move some objects.
if (hasAnimations) {
    glTranslatef(kBuildingOffset);
    //scale
    glScalef(kModelScale);
    //glMultMatrixf(testAnimation);
    zRotation = kBuildingzRotation;
    xRotation = kBuildingxRotation;
    yRotation = kBuildingyRotation;
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    //NSLog(@"ANIMATION FRAME IS %d", animationFrame);
    //NSLog(@"MATRICE IS %f", animationArray[0][0]);
    glMultMatrixf(animationArray[animationFrame]);
    //glMultMatrixf(matricesArray);
    glMultMatrixf(matricePivotArray);
    //glMultMatrixf(testAnimation);
}

//First rotate our objects as required.
if ([objectName isEqualToString:@"movingobject1"]) {
    glTranslatef(kFan1Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject2"]) {
    glTranslatef(kFan2Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(-kFan3YOffset, 0.0f, 1.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(kFan3YOffset, 0.0f, 1.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}
if ([objectName isEqualToString:@"movingobject3"]) {
    glTranslatef(kFan3Position);
    glScalef(kModelScale);
    glMultMatrixf(matricesArray);
    glTranslatef(0, 0, 0);
    zRotation += kFanRotateSpeed;
    yRotation = kyFanFlip;
    xRotation = kxFanRotation;
    glRotatef(-kFan2YOffSet, 0.0f, 1.0f, 0.0f);
    glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
    glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
    glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
    glRotatef(kFan2YOffSet, 0.0f, 1.0f, 0.0f);
    glTranslatef(0.0, 0.0, -300);
}

//Then position the rest of the scene objects.
if (![objectName isEqualToString:@"movingobject1"])
    if (![objectName isEqualToString:@"movingobject2"])
        if (![objectName isEqualToString:@"movingobject3"])
            if (!hasAnimations) {
                glLoadIdentity();
                glTranslatef(kBuildingOffset);
                //scale
                glScalef(kModelScale);
                zRotation = kBuildingzRotation;
                xRotation = kBuildingxRotation;
                yRotation = kBuildingyRotation;
                glRotatef(yRotation, 0.0f, 1.0f, 0.0f);
                glRotatef(xRotation, 1.0f, 0.0f, 0.0f);
                glRotatef(zRotation, 0.0f, 0.0f, 1.0f);
                if ([Matrices count] != 0) {
                    glMultMatrixf(matricesArray);
                }
                if (hasPivotNode) {
                    glMultMatrixf(matricePivotArray);
                }
            }

[mesh render];
glPopMatrix();
//restore the matrix
You should be able to achieve this easily enough by pushing the transform matrix you desire onto the matrix stack before you do any of your object-specific transforms, but then don't load the identity matrix each time you push another matrix onto the stack, since that would wipe out the scene-wide transform. Practically speaking, this will transform all subsequent matrix operations. This is the basic pattern...
// Push an identity matrix on the bottom of the stack...
glPushMatrix();
glLoadIdentity();

// Now scale it, so all subsequent transforms will be
// scaled up 2x.
glScalef(2.f, 2.f, 2.f);

foreach(mesh) {
    glPushMatrix();
    //glLoadIdentity(); This would erase the scale set above.
    glDoABunchOfTransforms();
    [mesh render];
    glPopMatrix();
}
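Spelled out in Objective-C, the same pattern might look like the sketch below, with the balancing glPopMatrix the pseudocode leaves out; the meshes collection name is my own, not from the question.
// A concrete sketch of the pattern above; `meshes` is an assumed
// collection of objects that respond to -render, as in the question.
glPushMatrix();
glLoadIdentity();
glScalef(2.0f, 2.0f, 2.0f); // scene-wide 2x scale

for (id mesh in meshes) {
    glPushMatrix();
    // per-object translate/rotate/scale goes here, but no glLoadIdentity()
    [mesh render];
    glPopMatrix();
}

glPopMatrix(); // restore the untransformed state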

iOS: Confusion about texture coordinates for rendering a texture on the screen

I am trying to render a texture that was generated by the camera onto the iPhone screen.
I downloaded the color tracking example from Brad Larson on http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios (direct link for sample code: http://www.sunsetlakesoftware.com/sites/default/files/ColorTracking.zip).
In the ColorTrackingViewController drawFrame method he uses the following code to generate vertex and corresponding texture coordinates for rendering a textured square:
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
static const GLfloat textureVertices[] = {
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    0.0f, 0.0f,
};
I don't understand why these texture coordinates work correctly.
In my opinion, and in other example code I have seen that also works correctly, they should be:
static const GLfloat textureVertices[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};
I went through the whole code, but I cannot figure out why the above texture coordinates work correctly. What am I missing?
I believe it is because the image data from the iPhone camera is always presented rotated 90° CCW. To counteract that rotation, he's setting the texture coordinates to be rotated 90° CCW too. Sometimes two wrongs do make a right?
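One way to see the rotation concretely: take the coordinates the questioner expected and apply (s, t) → (t, 1 − s) to each pair, which is a quarter-turn of the texture around its center. The array name and the comments below are my own reading of the two arrays, not something stated in the original project.
// Applying (s, t) -> (t, 1 - s) to the "expected" coordinates yields
// exactly the coordinates Brad Larson's code uses, i.e. the texture is
// sampled rotated by a quarter-turn, undoing the camera's rotation.
static const GLfloat rotatedTextureVertices[] = {
    1.0f, 1.0f, // was (0, 1)
    1.0f, 0.0f, // was (1, 1)
    0.0f, 1.0f, // was (0, 0)
    0.0f, 0.0f, // was (1, 0)
};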

Colourise CGContextRef But Preserve Alpha

I have a CGContextRef that I can draw on with a specified alpha, and on its own that works perfectly. However, I am trying to colourise the result (currently either red or green), and whatever blend mode I choose, the alpha ends up as 1 (because I am drawing with alpha as 1). Drawing in the correct colour to begin with is not really a viable option, as I would like to be able to colour UIImages loaded from the filesystem as well, so how should I achieve this?
Edit: Example code (width and height are predefined floats, points is an array of CGPoints, all of which lie inside the context, and color is a UIColor with an opacity of 100%):
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextClearRect(contextRef, CGRectMake(0.0f, 0.0f, width, height));
CGContextSetRGBStrokeColor(contextRef, 0.0f, 0.0f, 0.0f, 1.0f);
CGContextSetRGBFillColor(contextRef, 1.0f, 1.0f, 1.0f, 1.0f);
CGContextSetLineWidth(contextRef, lineWidth);
CGContextBeginPath(contextRef);
CGContextMoveToPoint(contextRef, points[0].x, points[0].y);
for (int i = 1; i < 4; i++) {
    CGContextAddLineToPoint(contextRef, points[i].x, points[i].y);
}
CGContextAddLineToPoint(contextRef, points[0].x, points[0].y); // Encloses shape
CGContextDrawPath(contextRef, kCGPathFillStroke);
[color setFill];
CGContextBeginPath(contextRef);
CGContextSetBlendMode(contextRef, kCGBlendModeMultiply);
CGContextAddRect(contextRef, CGRectMake(0.0f, 0.0f, width, height));
CGContextDrawPath(contextRef, kCGPathFill);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Thank you in advance for your help,
jrtc27
The problem was that I needed a CGContextClipToMask. This meant my code required the following:
CGContextTranslateCTM(contextRef, 0, height);
CGContextScaleCTM(contextRef, 1.0, -1.0);
to flip the context's coordinate system, and then
CGRect rect = CGRectMake(0.0f, 0.0f, width, height);
CGContextClipToMask(contextRef, rect, UIGraphicsGetImageFromCurrentImageContext().CGImage);
to fix it.
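Stitched into the original snippet, the colourising pass then looks something like this. This is my reconstruction of the fix as described, not code posted by jrtc27.
// A sketch of the colourising pass with the clip mask applied. The
// multiply blend and CGContextFillRect are my choices for brevity.
CGContextTranslateCTM(contextRef, 0, height);
CGContextScaleCTM(contextRef, 1.0, -1.0); // flip into CGImage coordinates
CGRect rect = CGRectMake(0.0f, 0.0f, width, height);
// Clip to the alpha of what has been drawn so far, so the colour wash
// only lands on the shape and the transparent areas stay transparent.
CGContextClipToMask(contextRef, rect, UIGraphicsGetImageFromCurrentImageContext().CGImage);
[color setFill];
CGContextSetBlendMode(contextRef, kCGBlendModeMultiply);
CGContextFillRect(contextRef, rect);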

iPhone UIView circle?

Newbie Q.
I wish to subclass a UIView so that it renders a circle.
How is that done on the iPhone?
In the drawRect: method do:
- (void)drawRect:(CGRect)rect
{
    // drawRect: is called with the current context already set up, so no
    // UIGraphicsPushContext/PopContext is needed here.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f); // white color
    CGContextFillEllipseInRect(ctx, CGRectMake(10.0f, 10.0f, 100.0f, 100.0f)); // a white filled circle with a diameter of 100 pixels, centered at (60, 60)
}
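For completeness, a minimal sketch of the whole subclass and how it might be added to a view hierarchy; the class name CircleView is made up for illustration.
// A minimal sketch of the full subclass; the name CircleView is invented.
@interface CircleView : UIView
@end

@implementation CircleView

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f); // white
    CGContextFillEllipseInRect(ctx, CGRectMake(10.0f, 10.0f, 100.0f, 100.0f));
}

@end

// Typical usage (manual retain/release era):
// CircleView *circle = [[[CircleView alloc] initWithFrame:CGRectMake(0, 0, 120, 120)] autorelease];
// circle.backgroundColor = [UIColor clearColor]; // so only the circle shows
// [parentView addSubview:circle];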

How to do a radar animation in the iPhone SDK?

Does anyone know how to animate a radar effect like the image below?
(image: http://img197.imageshack.us/img197/7124/circle0.png)
With it growing outwards? Do I have to draw the circle using Quartz, or is there some other way?
To draw a static circle in a UIView subclass:
- (void)drawRect:(CGRect)rect
{
    // Use a separate name for the circle's rect rather than redeclaring
    // the rect parameter, which would not compile.
    CGRect circleRect = { 0.0f, 0.0f, 100.0f, 100.0f };
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    CGContextSetLineWidth(context, 2.0f);
    CGContextAddEllipseInRect(context, circleRect);
    CGContextStrokePath(context);
}
From there you just need to animate it with a timer, vary the size of the rect, and add more circles.
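A rough sketch of that timer-driven approach, in the same UIView subclass: the radius ivar, kMaxRadius, and the 30 fps interval are all assumptions, not from the answer.
// A sketch of the timer approach. `radius` is an assumed CGFloat ivar on
// the view, and kMaxRadius an assumed constant.
- (void)startRadarAnimation
{
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                     target:self
                                   selector:@selector(growCircle:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)growCircle:(NSTimer *)timer
{
    radius += 2.0f; // expand outwards each tick
    if (radius > kMaxRadius) radius = 0.0f; // restart, or spawn another circle
    [self setNeedsDisplay]; // triggers drawRect: with the new radius
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    CGContextSetLineWidth(context, 2.0f);
    // Center the circle in the view and size it from the current radius.
    CGRect circleRect = CGRectMake(CGRectGetMidX(self.bounds) - radius,
                                   CGRectGetMidY(self.bounds) - radius,
                                   radius * 2.0f, radius * 2.0f);
    CGContextAddEllipseInRect(context, circleRect);
    CGContextStrokePath(context);
}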