Box2D fixture and body out of sync on Retina display - iPhone

I'm trying to make a cocos2d/box2d game work on iPad, iPhone and iPhone retina.
My problem is that the fixture and body don't line up on the Retina simulator; please see the screenshot linked below for illustration (as a new Stack Overflow member, I'm not allowed to post the screenshot here).
screenshot
(please disregard the different shapes, I want the 4 corners to line up)
I've done quite a bit of research on this over the last couple of days, and the closest I found was this:
link
But the solution offered there with PTM_RATIO and CC_CONTENT_SCALE_FACTOR() doesn't seem to work in my case. I think it has to do with the fact that I don't load an image from a file into my sprite. Most solutions to this problem are based on loading -hd image files for the Retina display, but I don't want to use image files in my game at all. I basically want to draw the polygons myself at runtime.
My code looks as follows:
-(CCSprite*) addSprite
{
    CGSize contextsize = CGSizeMake(200, 200);
    UIGraphicsBeginImageContext(contextsize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextFlush(context);
    CGContextSetAllowsAntialiasing(context, true);
    CGContextTranslateCTM(context, 0, contextsize.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGFloat components[] = {0.0, 0.0, 1.0, 1.0};
    CGColorRef color = CGColorCreate(colorspace, components);
    CGContextSetStrokeColorWithColor(context, color);

    UIBezierPath *aPath;
    aPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(100, 100)
                                           radius:100
                                       startAngle:0
                                         endAngle:1.57
                                        clockwise:YES];
    [aPath addArcWithCenter:CGPointMake(100, 100)
                     radius:50
                 startAngle:1.57
                   endAngle:0
                  clockwise:NO];
    [aPath stroke];
    CGContextStrokePath(context);

    CGColorSpaceRelease(colorspace);
    CGColorRelease(color);

    UIImage *graphImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:graphImage] autorelease];
    CCSprite *sprite = [CCSprite spriteWithTexture:tex];
    return sprite;
}
-(void) addFixture:(CCSprite *)fixsprite
{
    b2Vec2 arcdots[] = {
        b2Vec2( 50.0f / PTM_RATIO,   0.0f / PTM_RATIO),
        b2Vec2(100.0f / PTM_RATIO,   0.0f / PTM_RATIO),
        b2Vec2(  0.0f / PTM_RATIO, 100.0f / PTM_RATIO),
        b2Vec2(  0.0f / PTM_RATIO,  50.0f / PTM_RATIO)
    };

    b2PolygonShape p_shape;
    b2FixtureDef fixtureDef;
    b2BodyDef bodyDef;

    bodyDef.type = b2_kinematicBody;
    bodyDef.position.Set(100/PTM_RATIO, 100/PTM_RATIO);
    bodyDef.userData = fixsprite;
    b2Body *body = world->CreateBody(&bodyDef);

    p_shape.Set(arcdots, 4);
    fixtureDef.shape = &p_shape;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    body->CreateFixture(&fixtureDef);
}
And I call these functions from the main routine as follows:
CCSprite *sprite2 = [self addSprite];
sprite2.position = ccp(0, 0);
[self addChild:sprite2 z:0];
[self addFixture:sprite2];
I have these lines uncommented in the delegate file:
if( ! [director enableRetinaDisplay:YES] )
    CCLOG(@"Retina Display Not supported");
Please let me know if further information is required. And please be gentle, I'm only starting to learn this. Thanks for your time.

Unless otherwise mentioned, all coordinates in cocos2d (and most of UIKit) are given in points, not pixels. On a Retina display device you still have a point resolution of 480x320 points (960x640 pixels).
From that it follows: if you calculate in actual pixels, multiply or divide by CC_CONTENT_SCALE_FACTOR(); if you deal with point coordinates, do nothing. Since you're rendering your own polys, I assume you know whether you're using actual pixel coordinates or not. If you use OpenGL directly, you'll be working with pixel coordinates.
I'm not sure if enabling Retina display mode does anything for you if you don't use cocos2d to render your content.
Lastly, a common misunderstanding is that the Box2D world uses point coordinates and must be transformed to pixels, or vice versa. Neither is the case. The Box2D world is completely oblivious to any specific coordinate system. PTM_RATIO is used only to keep Box2D coordinates within reasonable ranges for the engine, which works best with objects around 1 meter in size; most objects should range from 0.1 to 10 meters in diameter.
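For illustration, here is a minimal sketch (the helper names are my own) of that relationship: cocos2d sprite positions are in points, so only PTM_RATIO is involved when moving between sprites and Box2D bodies; CC_CONTENT_SCALE_FACTOR() only enters the picture when you work with raw pixel values such as texture sizes.

// Hypothetical helpers; PTM_RATIO is the usual points-to-meters constant.
static inline b2Vec2 toMeters(CGPoint pointPosition)
{
    // sprite position (points) -> Box2D position (meters)
    return b2Vec2(pointPosition.x / PTM_RATIO, pointPosition.y / PTM_RATIO);
}

static inline CGPoint toPoints(b2Vec2 meterPosition)
{
    // Box2D position (meters) -> sprite position (points)
    return ccp(meterPosition.x * PTM_RATIO, meterPosition.y * PTM_RATIO);
}

// Typical sync in the update loop, both sides in points:
// CCSprite *sprite = (CCSprite *)body->GetUserData();
// sprite.position = toPoints(body->GetPosition());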

Related

ios unread message icon

I was wondering if there is a standard method in iOS to produce the numbered bubble icon for unread messages, like the ones used in Mail on iPhone and Mac.
I'm not talking about the red badge on the app icon, which is done with badgeValue, but about the blue bubble beside the mailboxes.
Of course one can do it manually using Core Graphics, but it's hard to match the dimensions and color of the standard ones used in Mail etc.
Here are three ways to do this, in order of difficulty:

1. Screenshot your Mail app on your iPhone, bring the image into Photoshop, extract the blue dot and use it as an image in your app. To use it in a table view cell, you just set imageView.image = [UIImage imageNamed:@"blueDot.png"];
2. Same as #1, except save the image as grayscale. This way you can use Quartz and overlay your own colors on top of it, so you can make that dot any color you want. Very cool stuff.
3. Use Quartz to draw the whole thing. It's really not that hard. Let me know if you would like some code for that.
OK, twist my arm... here is the code to draw your own gradient sphere in Quartz.
Make a class that inherits from UIView and add the following code:
static float RADIANS_PER_DEGREE = 0.0174532925;

-(void) drawInContext:(CGContextRef)context
{
    // Drawing code
    CGFloat radius = self.frame.size.width/2;
    CGFloat start = 0 * RADIANS_PER_DEGREE;
    CGFloat end = 360 * RADIANS_PER_DEGREE;
    CGPoint startPoint = CGPointMake(0, 0);
    CGPoint endPoint = CGPointMake(0, self.bounds.size.height);

    // define our grayscale gradient.. we will add color later
    CGFloat cc[] =
    {
        .70, .7, .7, 1, // r,g,b,a of color1, as a percentage of full on.
        .4,  .4, .4, 1, // r,g,b,a of color2, as a percentage of full on.
    };

    // set up our gradient
    CGGradientRef gradient;
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    gradient = CGGradientCreateWithColorComponents(rgb, cc, NULL, sizeof(cc)/(sizeof(cc[0])*4));
    CGColorSpaceRelease(rgb);

    // draw the gray gradient on the sphere
    CGContextSaveGState(context);
    CGContextBeginPath(context);
    CGContextAddArc(context, self.bounds.size.width/2, self.bounds.size.height/2, radius, start, end, 0);
    CGContextClosePath(context);
    CGContextClip(context);
    CGContextAddRect(context, self.bounds);
    CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, kCGGradientDrawsBeforeStartLocation);
    CGGradientRelease(gradient);

    // now add our primary color. you could refactor this to draw this from a color property
    UIColor *color = [UIColor blueColor];
    [color setFill];
    CGContextSetBlendMode(context, kCGBlendModeColor); // play with the blend mode for different looks
    CGContextAddRect(context, self.bounds); // just add a rect as we are clipped to a sphere
    CGContextFillPath(context);

    CGContextRestoreGState(context);
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawInContext:context];
}
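If it helps, here is a minimal usage sketch; the subclass name BubbleDotView is hypothetical and stands for whatever you call the UIView subclass containing the code above:

// BubbleDotView is a made-up name for the UIView subclass above.
BubbleDotView *dot = [[BubbleDotView alloc] initWithFrame:CGRectMake(0, 0, 20, 20)];
dot.backgroundColor = [UIColor clearColor]; // keep the corners outside the sphere transparent
[cell.contentView addSubview:dot];          // e.g. next to a mailbox label in a table cell
[dot release];                              // omit under ARC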
If you want to use a graphic resource from iOS, you can find it using the UIKit-Artwork-Extractor tool. Extract everything to the desktop and find the one you want. For example, the red badge for notifications is called SBBadgeBG.png. I don't know which one you mean, so search for it yourself :P
This is what I did to use a badge, the procedure is exactly the same to show a bubble in a subview of your table:
// Badge is an image with 14+1+14 pixels width and 15+1+15 pixels height.
// Setting the caps to 14 and 15 preserves the original size of the sides, so only the pixel in the middle is stretched.
UIImage *image = [UIImage imageNamed:@"badge"];
self.badgeImage = [image stretchableImageWithLeftCapWidth:(image.size.width-1)/2 topCapHeight:(image.size.height-1)/2];

// what size do we need to show 3 digits using the given font?
self.badgeFont = [UIFont fontWithName:@"Helvetica-Bold" size:13.0];
CGSize maxStringSize = [[NSString stringWithString:@"999"] sizeWithFont:self.badgeFont];

// set the annotation frame to the max needed size
self.frame = CGRectMake(0, 0,
                        self.badgeImage.size.width + maxStringSize.width,
                        self.badgeImage.size.height + maxStringSize.height);
and then override the method drawRect: of your view to paint the badge and the numbers inside:
- (void)drawRect:(CGRect)rect {
    // get the string to show and calculate its size
    NSString *string = [NSString stringWithFormat:@"%d", self.badgeNumber];
    CGSize stringSize = [string sizeWithFont:self.badgeFont];

    // paint the image after stretching it enough to accommodate the string
    CGSize stretchedSize = CGSizeMake(self.badgeImage.size.width + stringSize.width,
                                      self.badgeImage.size.height);
    // -20% lets the text go into the arc of the bubble. There is a weird visual effect without abs.
    stretchedSize.width -= abs(stretchedSize.width * .20);
    [self.badgeImage drawInRect:CGRectMake(0, 0,
                                           stretchedSize.width,
                                           stretchedSize.height)];

    // color of unread messages
    [[UIColor yellowColor] set];

    // x is the center of the image minus half the width of the string.
    // Same thing for y, but 3 pixels less because the image is a bubble plus a 6px shadow underneath.
    float height = stretchedSize.height/2 - stringSize.height/2 - 3;
    height -= abs(height * .1);
    CGRect stringRect = CGRectMake(stretchedSize.width/2 - stringSize.width/2,
                                   height,
                                   stringSize.width,
                                   stringSize.height);
    [string drawInRect:stringRect withFont:self.badgeFont];
}

Proper use of MKOverlayView

I am writing an iPhone app in which I place a large PNG image (1936 × 2967) on an MKMapView using MKOverlayView. I am a little confused about how to appropriately implement the drawMapRect: function in MKOverlayView - should I manually segment my image before drawing it? Or should I let the mechanisms of MKOverlayView handle all that?
My impression from other posts is that before MKOverlayView was available, you were expected to segment images yourself for this kind of task, and use a CATiledLayer. I thought maybe MKOverlayView took care of all the dirty work.
The real reason I ask though is because when I run my app through Instruments using the allocations tool, I find that the number of live bytes my app is using steadily increases with the introduction of the custom image on the map. Right now I am NOT segmenting my image, but I also am seeing no record of memory leaks in the leaks tool in Instruments. Here is my drawMapRect: function:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    // Load image from application bundle
    NSString *imageFileName = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"map.png"];
    CGDataProviderRef provider = CGDataProviderCreateWithFilename([imageFileName UTF8String]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    MKMapRect overlayMapRect = [self.overlay boundingMapRect];
    CGRect overlayRect = [self rectForMapRect:overlayMapRect];

    // draw image
    CGContextSaveGState(context);
    CGContextDrawImage(context, overlayRect, image);
    CGContextRestoreGState(context);
    CGImageRelease(image);
}
If my drawMapRect: function is not the cause of these memory issues, does anybody know what it might be? I know through debugging that my viewForOverlay: function for the mapView only gets called once for each overlay, so it's not that memory is leaking there or something.
Any advice is welcome!
Thanks, -Matt
EDIT: so it turns out that the memory issue is actually being caused by MKMapView - every time I move the map at all the memory usage goes up very steadily and never comes down - this doesn't seem good :(
A bit of a late answer, but leaving it here in case somebody else hits the same problem in the future. The flaw here is trying to render the whole image, while the documentation clearly says:
In addition, you should avoid drawing the entire contents of the overlay each time this method is called. Instead, always take the mapRect parameter into consideration and avoid drawing content outside that rectangle.
So you have to draw only the part of the image that falls inside the area defined by mapRect.
Updated: keep in mind that the draw rect here can be larger than mapRect; you need to adjust the paint and cut regions accordingly.
// assuming `image` here is the overlay's CGImage
let overlayMapRect = overlay.boundingMapRect
let overlayDrawRect = self.rect(for: overlayMapRect)
// watch out for draw rect adjustment here --
let drawRect = self.rect(for: mapRect).intersection(overlayDrawRect)
let scaleX = CGFloat(image.width) / overlayDrawRect.width
let scaleY = CGFloat(image.height) / overlayDrawRect.height
let transform = CGAffineTransform.init(scaleX: scaleX, y: scaleY)
let imageCut = drawRect.applying(transform)
// omitting optionals checks, you should not
let cutImage = image.cropping(to: imageCut)
// the usual vertical flip issue with image.draw
context.translateBy(x: 0, y: drawRect.maxY + drawRect.origin.y)
context.scaleBy(x: 1, y: -1)
context.draw(cutImage!, in: drawRect, byTiling: false)
Here is the objc version based on epolyakov's answer. It works great, but only without any rotation.
- (void) drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    CGImageRef overlayImage = <your_uiimage>.CGImage;
    CGRect overlayRect = [self rectForMapRect:[self.overlay boundingMapRect]];
    CGRect drawRect = [self rectForMapRect:mapRect];
    CGRect rectPortion = CGRectIntersection(overlayRect, drawRect);

    // rotatedImage is the UIImage described below; if you don't need rotation, use <your_uiimage> here instead.
    CGFloat scaleX = rotatedImage.size.width / overlayRect.size.width;
    CGFloat scaleY = rotatedImage.size.height / overlayRect.size.height;
    CGAffineTransform transform = CGAffineTransformMakeScale(scaleX, scaleY);
    CGRect imagePortion = CGRectApplyAffineTransform(rectPortion, transform);
    CGImageRef cutImage = CGImageCreateWithImageInRect(overlayImage, imagePortion);

    CGRect finalRect = rectPortion;
    CGContextTranslateCTM(context, 0, finalRect.origin.y + CGRectGetMaxY(finalRect));
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetAlpha(context, self.alpha);
    CGContextDrawImage(context, finalRect, cutImage);

    // release the cropped image so it isn't leaked on every draw pass
    CGImageRelease(cutImage);
}
If you also need to manage rotation of your image, I found a trick using a rotated version of the original image (this is because the map rendering always draws vertical rects, and rotating the image inside this method would cut it).
So using a rotated version of the original image lets you render with vertical rects, as the map expects:
UIImage* rotatedImage = [self rotatedImage:<your_uiimage> withAngle:<angle_of_image>];
CGImageRef overlayImage = rotatedImage.CGImage;
And this is the method that produces a rotated image in a bounding rect:
- (UIImage*) rotatedImage:(UIImage*)image withAngle:(CGFloat)angle
{
    float radians = degreesToRadians(angle);
    CGAffineTransform xfrm = CGAffineTransformMakeRotation(radians);
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    CGRect rotatedImageBoundingRect = CGRectApplyAffineTransform(imageRect, xfrm);

    UIGraphicsBeginImageContext(rotatedImageBoundingRect.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, rotatedImageBoundingRect.size.width/2., rotatedImageBoundingRect.size.height/2.);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    CGContextRotateCTM(ctx, radians);
    CGContextDrawImage(ctx, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), image.CGImage);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

iPhone Circular Progress Indicator. CGContextRef. Draw on Demand

I want to draw an image that would effectively be a circular progress indicator on a UIButton. Because the image is supposed to represent progress of a task, I do not think I should put the drawing code in the view's drawRect: method.
I have a thread that is performing some tasks. After each task, it calls a method on the main thread. The called method is supposed to update the image on the button.
In the button update method, I create a CGContextRef by using CGBitmapContextCreate.
Then I use the button's frame to create a CGRect.
Then I attempt to draw into using the context I created.
Lastly I call setNeedsDisplay and clean up.
But none of this is inside the view's drawrect method.
I would like to know if anyone has used a CGContext to draw on-demand in a view while the view is being displayed.
I would like to get some ideas regarding an approach to doing this.
Here is an encapsulated version of what I am doing now:
CGContextRef xContext = nil;
CGColorSpaceRef xColorSpace;
CGRect xRect;
void* xBitmapData;
int iBMPByteCount;
int iBMPBytesPerRow;
float fBMPWidth = 20.0f;
float fBMPHeight = 20.0f;
float fPI = 3.14159;
float fRadius = 25.0f;
iBMPBytesPerRow = (fBMPWidth * 4);
iBMPByteCount = (iBMPBytesPerRow * fBMPHeight);
xColorSpace = CGColorSpaceCreateDeviceRGB();
xBitmapData = malloc(iBMPByteCount);
xContext = CGBitmapContextCreate(xBitmapData, fBMPWidth, fBMPHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(xColorSpace);
UIGraphicsPushContext(xContext);
xRect = CGRectMake(30.0f, 400.0f, 50.0f, 50.0f);
float fWidth = xRect.size.width;
float fHeight = xRect.size.height;
CGContextClearRect(xContext, xRect);
CGContextSetRGBStrokeColor(xContext, 0.5f, 0.6f, 0.7f, 1.0f);
CGContextSetLineWidth(xContext, 1.0f);
float fArcBegin = 45.0f * fPI / 180.0f;
float fArcEnd = 90.0f * fPI / 180.0f;
CGContextSetFillColor(xContext, CGColorGetComponents( [[UIColor greenColor] CGColor]));
CGContextMoveToPoint(xContext, fWidth, fHeight);
CGContextAddArc(xContext, fWidth, fHeight, fRadius, fArcBegin, fArcEnd, 0);
CGContextClosePath(xContext);
CGContextFillPath(xContext);
UIGraphicsPopContext();
CGContextRelease(xContext);
[self.view setNeedsDisplay];
// [self.view setNeedsDisplayInRect: xRect];
The above is a little bit wonky because I've tried different tweaks. However, I think it communicates what I am trying to do.
Alternative approach:
You could create a series of images that represent the progress updates and then replace the UIButton currentImage property with the setImage:forState: method at each step of the process. This doesn't require drawing in the existing view and this approach has worked well for me to show simple "animation" of images (buttons or other).
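For what it's worth, a minimal sketch of that idea; the image names, button property, and method name are all made up:

// Hypothetical: progress_0.png ... progress_4.png are pre-rendered frames.
// Called on the main thread after each task step completes.
- (void)updateProgressButtonForStep:(NSUInteger)step
{
    NSString *imageName = [NSString stringWithFormat:@"progress_%lu", (unsigned long)MIN(step, (NSUInteger)4)];
    [self.progressButton setImage:[UIImage imageNamed:imageName]
                         forState:UIControlStateNormal];
}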
Would this approach work for you? If not, why not?
Bart
This was really bugging me so after dealing with a series of silly, but necessary issues regarding the project I want this functionality for, I played around with it.
The end result is that I can now arbitrarily draw an arc representing the progress of a particular background task to a button.
The goal was to draw something like the little indicator in the lower right-hand corner of the Xcode window while a project is being cleaned or compiled.
I created a function that will draw and fill an arc and return it as a UIImage.
The worker thread calls a method on the main thread (via performSelectorOnMainThread:) with the current values and a button identifier. In the called method, I call the arc image function with the percentage filled and such.
example call:
oImg = [self ArcImageCreate:100.0f fWidth:100.0f
fPercentFilled: 0.45f fAngleStart: 0.0f xFillColor:[UIColor blueColor]];
Then set the background image of the button:
[oBtn setBackgroundImage: oImg forState: UIControlStateNormal];
Here is the function:
It is not finished, but it works well enough to illustrate how I am doing this.
/**
ArcImageCreate
@ingroup UngroupedFunctions
@brief Create a filled or unfilled solid arc and return it as a UIImage.
Allows for dynamic / arbitrary update of an object that allows a UIImage to be drawn on it. \
This can be used for some sort of pie chart or progress indicator by Image Flipping.
@param fHeight The height of the created UIImage.
@param fWidth The width of the created UIImage.
@param fPercentFilled A percentage of the circle to be filled by the arc. 0.0 to 1.0.
@param AngleStart The angle where the arc should start. 0 to 360. Clock Reference.
@param xFillColor The color of the filled area.
@return Pointer to a UIImage.
@todo [] Validate object creation at each step.
@todo [] Convert PercentFilled (0.0 to 1.0) to appropriate radian(?) (-3.15 to +3.15)
@todo [] Background Image Support. Allow for the arc to be drawn on top of an image \
and the whole thing returned.
@todo [] Background Image Reduction. Background images will have to be resized to fit the specified size. \
Do not want to return a 65KB object because the background is 60K or whatever.
@todo [] UIColor RGBA Components. Determine a decent method of extracting RGBA values \
from a UIColor*. Check out arstechnica.com/apple/guides/2009/02/iphone-development-accessing-uicolor-components.ars \
for an idea.
*/
- (UIImage*) ArcImageCreate: (float)fHeight fWidth:(float)fWidth fPercentFilled:(float)fPercentFilled fAngleStart:(float)fAngleStart xFillColor:(UIColor*)xFillColor
{
UIImage* fnRez = nil;
float fArcBegin = 0.0f;
float fArcEnd = 0.0f;
float fArcPercent = 0.0f;
UIColor* xArcColor = nil;
float fArcImageWidth = 0.0f;
float fArcImageHeight = 0.0f;
CGRect xArcImageRect;
CGContextRef xContext = nil;
CGColorSpaceRef xColorSpace;
void* xBitmapData;
int iBMPByteCount;
int iBMPBytesPerRow;
float fPI = 3.14159;
float fRadius = 25.0f;
// @todo Force default of 100x100 px if out of bounds. \
// Check max image dimensions for iPhone. \
// If negative, flip values *if* values are 'reasonable'. \
// Determine minimum useable pixel dimensions. 10x10 px is too small. Or is it?
fArcImageWidth = fWidth;
fArcImageHeight = fHeight;
// Get the passed target percentage and clip it between 0.0 and 1.0
fArcPercent = (fPercentFilled < 0.0f || fPercentFilled > 1.0f) ? 1.0f : fPercentFilled;
fArcPercent = (fArcPercent > 1.0f) ? 1.0f : fArcPercent;
// Get the passed start angle and clip it between 0.0 to 360.0
fArcBegin = (fAngleStart < 0.0f || fAngleStart > 359.0f) ? 0.0f : fAngleStart;
fArcBegin = (fArcBegin > 359.0f) ? 0.0f : fArcBegin;
fArcBegin = (fArcBegin * fPI) / 180.0f;
fArcEnd = ((360.0f * fArcPercent) * fPI) / 180.0f;
//
if (xFillColor == nil) {
// random color
} else {
xArcColor = xFillColor;
}
// Calculate memory required for image.
iBMPBytesPerRow = (fArcImageWidth * 4);
iBMPByteCount = (iBMPBytesPerRow * fArcImageHeight);
xBitmapData = malloc(iBMPByteCount);
// Create a color space. Behavior changes at OSXv10.4. Do not rely on it for consistency across devices.
xColorSpace = CGColorSpaceCreateDeviceRGB();
// Set the system to draw. Behavior changes at OSXv10.3.
// Both of these work. Not sure which is better.
// xContext = CGBitmapContextCreate(xBitmapData, fArcImageWidth, fArcImageHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
xContext = CGBitmapContextCreate(NULL, fArcImageWidth, fArcImageHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
// Let the system know the colorspace reference is no longer required.
CGColorSpaceRelease(xColorSpace);
// Set the created context as the current context.
// UIGraphicsPushContext(xContext);
// Define the image's box.
xArcImageRect = CGRectMake(0.0f, 0.0f, fArcImageWidth, fArcImageHeight);
// Clear the image's box.
// CGContextClearRect(xContext, xRect);
// Draw the ArcImage's background image.
// CGContextDrawImage(xContext, xArcImageRect, [oBackgroundImage CGImage]);
// Set Us Up The Transparent Drawing Area.
CGContextBeginTransparencyLayer(xContext, nil);
// Set the fill and stroke colors
// @todo [] Determine why SetFillColor does not work. Use alternative method.
// CGContextSetFillColor(xContext, CGColorGetComponents([xArcColor CGColor]));
// CGContextSetFillColorWithColor(xContext, CGColorGetComponents([xArcColor CGColor]));
// Test Colors
CGContextSetRGBFillColor(xContext, 0.3f, 0.4f, 0.5f, 1.0f);
CGContextSetRGBStrokeColor(xContext, 0.5f, 0.6f, 0.7f, 1.0f);
CGContextSetLineWidth(xContext, 1.0f);
// Something like this to reverse drawing?
// CGContextTranslateCTM(xContext, TranslateXValue, TranslateYValue);
// CGContextScaleCTM(xContext, -1.0f, 1.0f); or CGContextScaleCTM(xContext, 1.0f, -1.0f);
// Test Vals
// fArcBegin = 45.0f * fPI / 180.0f; // 0.785397
// fArcEnd = 90.0f * fPI / 180.0f; // 1.570795
// Move to the start point and draw the arc.
CGContextMoveToPoint(xContext, fArcImageWidth/2.0f, fArcImageHeight/2.0f);
CGContextAddArc(xContext, fArcImageWidth/2.0f, fArcImageHeight/2.0f, fRadius, fArcBegin, fArcEnd, 0);
// Ask the OS to close the arc (current point to starting point).
CGContextClosePath(xContext);
// Fill 'er up. Implicit path closure.
CGContextFillPath(xContext);
// CGContextEOFillPath(context);
// Close Transparency drawing area.
CGContextEndTransparencyLayer(xContext);
// Create an ImageReference and create a UIImage from it.
CGImageRef xCGImageTemp = CGBitmapContextCreateImage(xContext);
CGContextRelease(xContext);
fnRez = [UIImage imageWithCGImage: xCGImageTemp];
CGImageRelease(xCGImageTemp);
// UIGraphicsPopContext;
return fnRez;
}

iPhone: Draw rotated text?

I want to draw some text in a view, rotated 90°. I'm pretty new to iPhone development, and poking around the web reveals a number of different solutions. I've tried a few and usually end up with my text getting clipped.
What's going on here? I am drawing in a fairly small space (a table view cell), but there has to be a "right" way to do this… right?
Edit: Here are a couple of examples. I'm trying to display the text "12345" along the black bar at the left.
First attempt, from RJShearman on the Apple Discussions
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSelectFont (context, "Helvetica-Bold", 16.0, kCGEncodingMacRoman);
CGContextSetTextDrawingMode (context, kCGTextFill);
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSetTextMatrix (context, CGAffineTransformRotate(CGAffineTransformScale(CGAffineTransformIdentity, 1.f, -1.f ), M_PI/2));
CGContextShowTextAtPoint (context, 21.0, 55.0, [_cell.number cStringUsingEncoding:NSUTF8StringEncoding], [_cell.number length]);
CGContextRestoreGState(context);
(source: deeptechinc.com)
Second attempt, from zgombosi on iPhone Dev SDK. Identical results (the font was slightly smaller here, so there's less clipping).
CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint point = CGPointMake(6.0, 50.0);
CGContextSaveGState(context);
CGContextTranslateCTM(context, point.x, point.y);
CGAffineTransform textTransform = CGAffineTransformMakeRotation(-1.57);
CGContextConcatCTM(context, textTransform);
CGContextTranslateCTM(context, -point.x, -point.y);
[[UIColor redColor] set];
[_cell.number drawAtPoint:point withFont:[UIFont fontWithName:@"Helvetica-Bold" size:14.0]];
CGContextRestoreGState(context);
Attempt two. There is almost identical clipping http://dev.deeptechinc.com/sidney/share/iphonerotation/attempt2.png
It turns out that my table cell was always initialized 44px high regardless of the row height, so all of my drawing was getting clipped 44px from the top of the cell.
To draw larger cells it was necessary to set the content view's autoresizingMask with
cellContentView.autoresizingMask = UIViewAutoresizingFlexibleHeight;
or
cellContentView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
…and drawRect is called with the correct size. In a way, this makes sense, because UITableViewCell's initWithStyle:reuseIdentifier: makes no mention of the size of the cell, and only the table view actually knows how big each row is going to be, based on its own size and its delegate's response to tableView:heightForRowAtIndexPath:.
I read the Quartz 2D Programming Guide until the drawing model and functions started to make sense, and the code to draw my rotated text became simple and obvious:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextRotateCTM(context, -(M_PI/2));
[_cell.number drawAtPoint:CGPointMake(-57.0, 5.5) withFont:[UIFont fontWithName:@"Helvetica-Bold" size:16.0]];
CGContextRestoreGState(context);
Thanks for the tips, it looks like I'm all set.
Use:
label.transform = CGAffineTransformMakeRotation(- 90.0f * M_PI / 180.0f);
where label is the object of UILabel.
Here's a tip. I presume you're doing this drawing in drawRect. Why don't you draw a frame around drawRect to see how big the rect is and if that is why you get clipping.
An alternative is to put your text in a UILabel, and then rotate that 90 degrees when you make your cells in cellForRowAtIndexPath.
You know about the UITableViewDelegate method tableView:heightForRowAtIndexPath:, right?
Here's a simple tutorial on various graphics level methods. Presuming you know how big your text is you should be able to size your table view row size appropriately.
Also, I'd check to make sure that the bounds after any transform actually meet your expectations. (Either use a debugger or log statement to verify this).
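To make the UILabel suggestion concrete, here is a minimal sketch in the non-ARC style of the other answers; the tag, frame, reuse identifier, and positions are arbitrary:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *cellId = @"RotatedTextCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:cellId] autorelease];
        UILabel *label = [[[UILabel alloc] initWithFrame:CGRectMake(0, 0, 80, 21)] autorelease];
        label.tag = 1;
        label.backgroundColor = [UIColor clearColor];
        label.transform = CGAffineTransformMakeRotation(-M_PI_2); // rotate 90 degrees counter-clockwise
        label.center = CGPointMake(20, 40);                       // park it along the left edge of the cell
        [cell.contentView addSubview:label];
    }
    UILabel *label = (UILabel *)[cell.contentView viewWithTag:1];
    label.text = @"12345"; // the text from the question
    return cell;
}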
To add to what @Sidnicious said, and what I collected throughout Stack Overflow, I want to give a usage example: my code for drawing a ruler along the left side of the screen, with rotated numbers:
RulerView : UIView
// simple testing for iPhones (check for device descriptions to get all iPhones + iPads)
- (float)getPPI
{
    switch ((int)[UIScreen mainScreen].bounds.size.height) {
        case 568: // iPhone 5*
        case 667: // iPhone 6
            return 163.0;
            break;
        case 736: // iPhone 6+
            return 154.0;
            break;
        default:
            return -1.0;
            break;
    }
}
- (void)drawRect:(CGRect)rect
{
    [[UIColor blackColor] setFill];

    float ppi = [self getPPI];
    if (ppi == -1.0) // unable to draw, maybe an iPad.
        return;

    float linesDist = ppi/25.4; // ppi/mm per inch (regular size iPad would be 132.0, iPhone6+ 154.0)
    float linesWidthShort = 15.0;
    float linesWidthMid = 20.0;
    float linesWidthLong = 25.0;

    for (float i = 0, c = 0; i <= self.bounds.size.height; i = i + linesDist, c = c + 1.0)
    {
        bool isMid = (int)c % 5 == 0;
        bool isLong = (int)c % 10 == 0;
        float linesWidth = isLong ? linesWidthLong : isMid ? linesWidthMid : linesWidthShort;

        UIRectFillUsingBlendMode((CGRect){0, i, linesWidth, .5}, kCGBlendModeNormal);

        /* FONT: Numbers without rotation (yes, is short)
        if (isLong && i > 0 && (int)c % 10 == 0)
            [[NSString stringWithFormat:@"%d", (int)(c/10)] drawAtPoint:(CGPoint){linesWidthLong +2, i -5} withAttributes:@{
                NSFontAttributeName: [UIFont systemFontOfSize:9],
                NSBaselineOffsetAttributeName: [NSNumber numberWithFloat:1.0]
            }];
        */

        // FONT: Numbers with rotation (yes, requires more effort)
        if (isLong && i > 0 && (int)c % 10 == 0)
        {
            NSString *str = [NSString stringWithFormat:@"%d", (int)(c/10)];
            NSDictionary *attrs = @{
                NSFontAttributeName: [UIFont systemFontOfSize:9],
                NSBaselineOffsetAttributeName: [NSNumber numberWithFloat:0.0]
            };
            CGSize textSize = [str sizeWithAttributes:attrs];

            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSaveGState(context);
            CGContextRotateCTM(context, +(M_PI/2));
            [str drawAtPoint:(CGPoint){i - (textSize.width/2), -(linesWidthLong + textSize.height +2)} withAttributes:attrs];
            CGContextRestoreGState(context);
        }
    }
}
After I discovered that I needed to add the following to the top of my file I liked Matt's approach. Very simple.
#define degreesToRadian(x) (M_PI * (x) / 180.0)
mahboudz's suggestion will probably be your path of least resistance. You can rotate the UILabel 90deg with this: [label setTransform:CGAffineTransformMakeRotation(DegreesToRadians(-90.0f))]; You'll just have to calculate your cell height based upon the label width. -Matt – Matt Long Nov 10 at 0:09

How to sharpen/blur a UIImage on iPhone?

I have a view with a UIImageView and a UIImage set on it. How do I sharpen or blur the image using Core Graphics?
Apple has a great sample program called GLImageProcessing that includes a very fast blur/sharpen effect using OpenGL ES 1.1 (meaning it works on all iPhones, not just the 3GS).
If you're not fairly experienced with OpenGL, the code may make your head hurt.
Going down the OpenGL route felt like insane overkill for my needs (blurring a touched point on an image). Instead I implemented a simple blurring process that takes a touch point, creates a rect containing that touch point, samples the image at that point, and then redraws the sampled image upside down on top of the source rect several times, slightly offset and with slightly different opacity. This produces a pretty nice poor man's blur effect without an insane amount of code and complexity. Code follows:
- (UIImage*)imageWithBlurAroundPoint:(CGPoint)point {
    CGRect bnds = CGRectZero;
    UIImage* copy = nil;
    CGContextRef ctxt = nil;
    CGImageRef imag = self.CGImage;
    CGRect rect = CGRectZero;
    CGAffineTransform tran = CGAffineTransformIdentity;
    int indx = 0;

    rect.size.width = CGImageGetWidth(imag);
    rect.size.height = CGImageGetHeight(imag);
    bnds = rect;

    UIGraphicsBeginImageContext(bnds.size);
    ctxt = UIGraphicsGetCurrentContext();

    // Cut a sample out of the image
    CGRect fillRect = CGRectMake(point.x - 10, point.y - 10, 20, 20);
    CGImageRef sampleImageRef = CGImageCreateWithImageInRect(self.CGImage, fillRect);

    // Flip the image right side up & draw
    CGContextSaveGState(ctxt);
    CGContextScaleCTM(ctxt, 1.0, -1.0);
    CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
    CGContextConcatCTM(ctxt, tran);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), rect, imag);

    // Restore the context so that the coordinate system is restored
    CGContextRestoreGState(ctxt);

    // Redraw the sample image over the source rect several times,
    // shifting the opacity and the positioning slightly
    // to produce a blurred effect
    for (indx = 0; indx < 5; indx++) {
        CGRect myRect = CGRectOffset(fillRect, 0.5 * indx, 0.5 * indx);
        CGContextSetAlpha(ctxt, 0.2 * indx);
        CGContextScaleCTM(ctxt, 1.0, -1.0);
        CGContextDrawImage(ctxt, myRect, sampleImageRef);
    }

    // Release the cropped sample (CGImageCreateWithImageInRect follows the Create rule)
    CGImageRelease(sampleImageRef);

    copy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return copy;
}
What you really need are the image filters in the Core Image API. Unfortunately Core Image is not supported on the iPhone (unless that changed recently and I missed it). Be careful here: as I recall, the filters are available in the simulator, but not on the device.
AFAIK there is no other way to do it properly with the native libraries, although I've sort of faked a blur before by creating an extra layer over the top which is a copy of what's below, offset by a pixel or two and with a low alpha value. For a proper blur effect, though, the only way I've been able to do it is offline in Photoshop or similar.
Would be keen to hear if there is a better way too, but to my knowledge that is the situation currently.
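As a rough sketch of that layering trick; the helper name is made up and the offsets/alpha values are just starting points to tweak:

// Poor man's blur: stack a couple of slightly offset, low-alpha copies of an image
// on top of the original. Hypothetical helper, not part of any library.
static void addFakeBlur(UIView *container, UIImage *sourceImage, CGRect frame)
{
    UIImageView *base = [[UIImageView alloc] initWithFrame:frame];
    base.image = sourceImage;
    [container addSubview:base];
    [base release];                         // omit under ARC

    for (int i = 1; i <= 2; i++) {
        UIImageView *ghost = [[UIImageView alloc] initWithFrame:CGRectOffset(frame, i, i)];
        ghost.image = sourceImage;
        ghost.alpha = 0.25;                 // low alpha so the copies only haze the edges
        [container addSubview:ghost];
        [ghost release];                    // omit under ARC
    }
}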
Have a look at the following libraries:
https://github.com/coryleach/UIImageAdjust
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions