In my app I'm trying to render text along a path; this works fine for most characters but not for Japanese (or anything outside Mac Roman). I've been advised to use [NSString drawAtPoint:withFont:], which displays the correct characters in my CATiledLayer; however, they disappear after approximately 5 seconds. During that time I can zoom the layer and they scale properly, but they don't seem to get committed to the CATiledLayer the way the rest of the text is.
Before I render, I check the string and decide whether any of its characters will not be renderable. If I'm going to have issues, I use drawAtPoint:withFont: instead:
if (isFullyDisplayable)
{
    CGContextShowGlyphsAtPoint(context, pt.x, pt.y, realGlyph, 1);
}
else
{
    // fall back on less flexible font rendering for difficult characters
    NSString *b = [gv text];
    NSString *c = [b substringWithRange:NSMakeRange(j, 1)];
    [c drawAtPoint:pt withFont:[UIFont fontWithName:@"Helvetica-Bold" size:16.0]];
}
Does anyone have any pointers as to why the text disappears?
As soon as drawAtPoint: is used, my debug output gets flooded with:
<Error>: CGContextGetShouldSmoothFonts: invalid context
<Error>: CGContextSetFont: invalid context
<Error>: CGContextSetTextMatrix: invalid context
<Error>: CGContextSetFontSize: invalid context
<Error>: CGContextSetTextPosition: invalid context
<Error>: CGContextShowGlyphsWithAdvances: invalid context
So I'm guessing it's something to do with my context management, but I assumed that since this is called from the same place as CGContextShowGlyphsAtPoint, it would already have the correct context?
Answering my own question:
-[NSString drawAtPoint:withFont:] makes use of the UIKit context stack, and from where I was calling it the stack was empty. Wrapping the call in UIGraphicsPushContext(context) and UIGraphicsPopContext() did the trick.
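In other words (same context, pt, and font as in the question):
UIGraphicsPushContext(context);  // make the CGContextRef the current UIKit context
[c drawAtPoint:pt withFont:[UIFont fontWithName:@"Helvetica-Bold" size:16.0]];
UIGraphicsPopContext();          // restore whatever was current before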
For completeness, here is the code needed for Cocoa. It also works with .drawInRect(...).
CGContextSaveGState(context)
let old_nsctx = NSGraphicsContext.currentContext() // in case there was one
// force "my" context...
NSGraphicsContext.setCurrentContext(NSGraphicsContext(CGContext: context, flipped: true))
// Do any rotations, translations, etc. here
str.drawAtPoint(cgPt, withAttributes: attributes)
NSGraphicsContext.setCurrentContext(old_nsctx) // clean up for others
CGContextRestoreGState(context)
This is mostly not needed, as @davbryn says, since normally there is already a "working" context on the stack that is the same (you hope!) as your context; however, as he points out, sometimes there isn't. I ran into this problem particularly with MapKit's MKOverlayRenderer drawMapRect:, which simply wouldn't show text without setting the context explicitly.
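For instance, a rough sketch of what that looks like in an MKOverlayRenderer subclass (the string and font are placeholders):
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    // MapKit calls this off the main thread with no UIKit context on the stack,
    // so push the supplied CGContext before using the NSString drawing methods.
    UIGraphicsPushContext(context);
    // convert from map points to the renderer's drawing space
    CGPoint labelPoint = [self pointForMapPoint:MKMapPointMake(MKMapRectGetMidX(mapRect), MKMapRectGetMidY(mapRect))];
    // divide the point size by zoomScale so the text keeps a constant on-screen size
    [@"Label" drawAtPoint:labelPoint
           withAttributes:@{ NSFontAttributeName : [UIFont boldSystemFontOfSize:16.0f / zoomScale] }];
    UIGraphicsPopContext();
}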
FIXED: I'm not sure yet why. The code has been updated below, with notes at the bottom.
I'm having a very strange error that does not show up in the Simulator (even when running Instruments), unless I turn on all the Zombie and Debug options. However, it will crash the phone after about a minute of updates (1 update per second). I have a 2D array that I take a subset of, apply a colormap to, and turn into an image (this array changes constantly). Then I pass that image to the model, and the view controller grabs it from the model once it receives notification of an update. I'll lay out the three classes: Spectrogram, Model, and ViewController.
Here are the important bits of each (there is more, but not relevant):
Spectrogram.h
@interface Spectrogram : NSObject
{
    NSMutableData *arrayData;
}

// renamed so Xcode allows the object with a +1 reference count to be returned
- (CGImageRef)newSpectrogramImage;
Spectrogram.m
@implementation Spectrogram

- (CGImageRef)newSpectrogramImage
{
    // slightly reordered
    NSMutableData *imageData = [[NSMutableData alloc] init];
    ...code to go through arrayData, colormap it (RGB transform), and store the result in imageData...

    CGImageRef arrayImage = nil;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // specifically tell CGImage there is no alpha channel
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
    // use the toll-free bridge between NSData and CFData
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)imageData);
    // image is rotated now, so width and height are switched
    arrayImage = CGImageCreate((slices - startSlice), bins, 8, 24, 3 * (slices - startSlice),
                               colorSpace, bitmapInfo, provider, NULL, false,
                               kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    [imageData release];

    // Pass the CGImageRef back with a reference count of 1; the caller must CGImageRelease it.
    // (Releasing arrayImage here as well would hand back a dangling reference.)
    return arrayImage;
}
Model.h
@interface Model : NSManagedObject
{
    Spectrogram *spectrogram;
}

// method the view controller can call to get the update
- (CGImageRef)newSpectrogramImage;
Model.m
@implementation Model

... there is a method that adds new data to the array and notifies all listeners ...

- (CGImageRef)newSpectrogramImage
{
    return [spectrogram newSpectrogramImage];
}
ViewController.h
@interface ViewController : UIViewController
{
}

// the root controller actually allocs the record, and sets this property when creating this view
@property (nonatomic, retain) Record *currentRecord;
ViewController.m
@implementation ViewController

@synthesize spectrogramView;

- (void)viewDidLoad
{
    [super viewDidLoad];
    [[NSNotificationCenter ... listen for model updates...
}
//now, the real meat
- (void)recordUpdated:(NSNotification *)notification
{
    CALayer *myLayer = self.view.layer;
    CGImageRef spectrogramImage = nil;
    spectrogramImage = [currentRecord newSpectrogramImage];
    // the layer retains the image when it is set as contents, so release our +1 reference
    myLayer.contents = (id)spectrogramImage;
    CGImageRelease(spectrogramImage);
}
I've changed this so many times in the last day trying to hunt down where and why it fails. I've tried passing the CGImageRef instead (but since that isn't an object, I'm worried about making copies of what can be a huge image), and it still fails. It works perfectly in the Simulator (it will run for dozens of minutes), but it fails within a minute on the iPhone, or as soon as the view controller is loaded if I turn on the debug options in the Simulator.
On a side note that might be of some use: this view controller is loaded as a modal view when the phone is turned sideways (which works great). However, I have a lot of NSLogs in there and I see this view controller's dealloc is called before the main view controller even gets to viewWillDisappear - but it still runs. And then this controller's dealloc is called again when the phone is turned back and the view disappears.
Note
The version that worked well in the Simulator passed CGImageRefs all the way to the view controller, instead of UIImages. I've tried at least 50 different combinations of where to create the UIImage from the CGImage, and what's posted above is just one of them (all of them fail eventually, or immediately). Of note, with the code above, if I add this to the view controller's modelUpdated:
CGSize size = currentModel.spectrogramImage.size;
NSLog(@"width: %f", size.width);
and comment out assigning it to the spectrogramView, the width is reported correctly, so the UIImage is getting passed along; it's just not getting retained (this is how I understand the EXC_BAD_ACCESS error).
Also, recently I received an EXC_BAD_ACCESS on the
self.spectrogramImage = [spectrogram getSpectrogramImage];
line. So I think the error may be inside the Spectrogram class, even though the CGImage and UIImage code was taken from Apple examples.
Fixed notes
I read that setting the contents of the CALayer was a much quicker way to pass a CGImageRef to a view, with no UIImage intermediary. Unfortunately, I only commit working changes, so I can't see every iteration I went through. However, I know that I had something very similar to this several times that kept crashing. The problem was always that, as currently written, the program would crash with EXC_BAD_ACCESS, and if I upped the reference count (or didn't release it), then it would work perfectly but the object would leak. I still don't understand how that is possible - how the difference between a leak and a BAD ACCESS can be a reference count of 1 (and not 2 or more).
I can't answer your specific question, but this might help you understand what is going on a little better. I have a function, logMemUsage, that outputs your memory usage and shows how much it has changed since the last call. If you call it once a second or so, you get a much better picture of how memory is being used in your app: if it keeps growing, there's obviously a leak; if it goes up and down as you expect, that's good; if it doesn't go down when you think it should, you'll see it. It's on GitHub here, in Utilities.h/.m.
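The gist of it is something like this (a sketch using the Mach task_info call; the real Utilities.m also logs the change since the previous call):
#import <mach/mach.h>

void logMemUsage(void)
{
    struct task_basic_info info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    if (kr == KERN_SUCCESS) {
        // resident_size is in bytes; report it in MB
        NSLog(@"Resident memory: %.2f MB", info.resident_size / 1048576.0);
    }
}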
It's hard to figure this out from what you've posted, but why not use properties with retain on all your allocated data? For example, you're returning an autoreleased UIImage from getSpectrogramImage, and it gets stored into Model with the call self.spectrogramImage = [spectrogram getSpectrogramImage];, but there isn't a property for spectrogramImage anywhere I can see, even though you're using the self. mechanism. Is it just that you didn't bother to post it? As written, it could be getting autoreleased, and then when you try to use it...
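For instance (a sketch, assuming manual reference counting as in the rest of the code):
// Model.h -- a retained property would hold on to the autoreleased image
@property (nonatomic, retain) UIImage *spectrogramImage;

// Model.m
@synthesize spectrogramImage;

// the retain in the synthesized setter keeps the autoreleased UIImage alive
self.spectrogramImage = [spectrogram getSpectrogramImage];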
[update: this problem has been resolved; the issue was not in drawInRect: but in UIGraphicsBeginImageContext()]
In my app, I'm grabbing a bunch of large images, cropping them down to thumbnail size and storing the thumbnails for previewing.
Note that I'm doing this in a separate image context -- this is not about redrawing a UIView that is on the screen.
This code is rather intensive so I'm running it in a separate thread. The actual scaling looks like this, and is a category implementation on top of UIImage:
- (UIImage *)scaledImageWithWidth:(CGFloat)width andHeight:(CGFloat)height
{
    CGRect rect = CGRectMake(0.0, 0.0, width, height);
    UIGraphicsBeginImageContext(rect.size);
    [self drawInRect:rect]; // <-- crashing on this line
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
This is called from a separate method, which loops through the images in turn and does the processing. The actual call to the above method looks like this:
UIImage *small = [bigger scaledImageWithWidth:24.f andHeight:32.f];
This all works most of the time, but occasionally I get an EXC_BAD_ACCESS.
Backtrace:
#0 0x330d678c in ripc_RenderImage ()
#1 0x330dd5aa in ripc_DrawImage ()
#2 0x300e3276 in CGContextDelegateDrawImage ()
#3 0x300e321a in CGContextDrawImage ()
#4 0x315164c8 in -[UIImage drawInRect:blendMode:alpha:] ()
#5 0x31516098 in -[UIImage drawInRect:] ()
#6 0x0000d6e4 in -[UIImage(Scaling) scaledImageWithWidth:andHeight:] (self=0x169320, _cmd=0x30e6e, width=48, height=64) at /Users/me/Documents/svn/app/trunk/Classes/UIImage+Scaling.m:20
#7 0x00027df0 in -[mgMinimap loadThumbnails] (self=0x13df00, _cmd=0x30d05) at /Users/me/Documents/svn/app/trunk/Classes/mgMinimap.m:167
#8 0x32b15bd0 in -[NSThread main] ()
#9 0x32b81cfe in __NSThread__main__ ()
#10 0x30c8f78c in _pthread_start ()
#11 0x30c85078 in thread_start ()
[update 4] When I run this in the Simulator, and this problem happens, the console additionally shows the following:
// the below is before loading the first thumbnail
<Error>: CGContextSaveGState: invalid context
<Error>: CGContextSetBlendMode: invalid context
<Error>: CGContextSetAlpha: invalid context
<Error>: CGContextTranslateCTM: invalid context
<Error>: CGContextScaleCTM: invalid context
<Error>: CGContextDrawImage: invalid context
<Error>: CGContextRestoreGState: invalid context
<Error>: CGBitmapContextCreateImage: invalid context
// here, the first thumbnail has finished loading and the second one
// is about to be generated
<Error>: CGContextSetStrokeColorWithColor: invalid context
<Error>: CGContextSetFillColorWithColor: invalid context
My gut feeling is that I occasionally end up calling drawInRect: while the OS is also drawing something, which sometimes results in a crash. I always presumed that as long as you don't draw on the actual screen this is acceptable -- is that not the case? And if it is, any idea what might be causing this?
Update (r2): I forgot to mention that this app runs under rather severe memory constraints (I've got a lot of images loaded at any given time, swapped in and out), so this may be a case of running out of memory (read on -- it's not). I wasn't sure how to verify that at first, but I did verify it by severely cutting down on the number of images being loaded and adding a check to make sure they're properly deallocated (they are, and the crash still occurs).
Update 3: I thought I found the problem. Below is the answer I wrote, before the crash happened again:
The code would (after I posted this question) occasionally start exiting with exit code 0, and sometimes with exit code 10 (SIGBUS). 0 means "no error", so that was extremely odd. 10 seems to mean a bit of everything so that was unhelpful too. The drawInRect: call was a big hint, though, when the crash happened there.
The looping through to get the thumbnails was generating a lot of autoreleased images. I had an autorelease pool but it was wrapping the entire for loop. I added a second autorelease pool within the for loop:
- (void)loadThumbnails
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    for (...) {
        NSAutoreleasePool *cyclePool =
            [[NSAutoreleasePool alloc] init]; // <-- here
        UIImage *bigger = ...;
        UIImage *small = [bigger scaledImageWithWidth:24.f andHeight:32.f];
        UIImage *bloated = [i scaledImageWithWidth:48.f andHeight:64.f];
        [cyclePool release]; // <-- ending here
    }
    [pool release];
}
I thought the above fixed the issue, until I ran the app and it crashed on me with "exit code 0" again just earlier. Back to the drawing board...
Have you looked at Matt Gemmell's latest release, MGImageUtilities? I extracted this from his source on github:
// Create appropriately modified image.
UIImage *image;
UIGraphicsBeginImageContextWithOptions(destRect.size, NO, 0.0); // 0.0 for scale means "correct scale for device's main screen".
CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect); // cropping happens here.
image = [UIImage imageWithCGImage:sourceImg scale:0.0 orientation:self.imageOrientation]; // create cropped UIImage.
[image drawInRect:destRect]; // the actual scaling happens here, and orientation is taken care of automatically.
CGImageRelease(sourceImg);
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Not sure if thread safety is the issue, but it may be worth trying Matt's code before going too far down that path.
It turns out, there are two answers:
"Must drawInRect: for a separate context be executed on the main thread?" The answer is no, it doesn't.
However, UIGraphicsBeginImageContext must. This in fact is the reason for the crashing that occured. The crash didn't reveal itself until the (invalid) graphics context was being altered, which is why the crash occured on the drawInRect: line.
The solution was to stop using UIGraphicsBeginImageContext() and instead use a CGBitmapContext, which is thread-safe.
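Something along these lines works (a sketch; the pixel format and color space are assumptions, adjust them to your source images):
- (UIImage *)scaledImageWithWidth:(CGFloat)width andHeight:(CGFloat)height
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // let Quartz allocate the pixel buffer and compute row bytes (pass NULL and 0)
    CGContextRef context = CGBitmapContextCreate(NULL, (size_t)width, (size_t)height,
                                                 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // draw the receiver scaled to fill the bitmap, then wrap the result in a UIImage
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), self.CGImage);
    CGImageRef scaledRef = CGBitmapContextCreateImage(context);
    UIImage *scaledImage = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef);
    CGContextRelease(context);
    return scaledImage;
}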
I have a feeling you shouldn't be calling drawInRect: directly, as it may not be thread-safe. You should be calling setNeedsDisplay on the view, which will send a drawRect: message when it is safe to do so.
Does anybody know of an open source PDF creation project for the iPhone that can generate documents from a template (using Quartz)? Or are there any functions people have written for alignment, etc.? I've seen libHaru, but I understand it is lacking some crucial functionality.
Thanks
You can create a Quartz context with a PDF file as a rendering destination and simply draw into that. For example, the following code will create a PDF file within an NSData object, which you could then attach to an email or save to disk:
NSMutableData *pdfData = [[NSMutableData alloc] init];
CGDataConsumerRef dataConsumer = CGDataConsumerCreateWithCFData((CFMutableDataRef)pdfData);
const CGRect mediaBox = CGRectMake(0.0f, 0.0f, drawingWidth, drawingHeight);
CGContextRef pdfContext = CGPDFContextCreate(dataConsumer, &mediaBox, NULL);
UIGraphicsPushContext(pdfContext);
CGContextBeginPage(pdfContext, &mediaBox);
// Draw your content here
CGContextEndPage(pdfContext);
CGPDFContextClose(pdfContext);
UIGraphicsPopContext();
CGContextRelease(pdfContext);
CGDataConsumerRelease(dataConsumer);
There are a few things going on here. First, we create an NSMutableData instance and set it up as a data consumer (a destination for the PDF context to write to). Because Core Graphics uses Core Foundation types, not Cocoa classes, CGDataConsumerCreateWithCFData() requires a CFMutableDataRef argument. We can simply cast the NSMutableData object we created to this type, because NSData is a toll-free bridged class, meaning it can be used in either Cocoa methods or Core Foundation functions without conversion between types.
After that, we set the page size of the PDF context (in points) and create a PDF context, using the data consumer we set up before. We then make this the active context for drawing by using UIGraphicsPushContext().
In this case, we are only creating a single page in the PDF we're drawing, so we begin the page, draw, then end the page. If you wanted to do multiple pages, you could repeat this for each page.
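For example, a multi-page loop would look roughly like this (pageCount and the per-page drawing are placeholders):
for (size_t pageIndex = 0; pageIndex < pageCount; pageIndex++) {
    CGContextBeginPage(pdfContext, &mediaBox);
    // draw the content for this page into pdfContext
    CGContextEndPage(pdfContext);
}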
Note that all of this drawing will be done in the Quartz coordinate space, so if you've set up your drawing routines to display correctly in an iPhone view, it will be flipped here. To counteract this flipping, you can place the following within your drawing code (after UIGraphicsPushContext()):
CGContextTranslateCTM(context, 0.0f, self.frame.size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
There are two ways to generate a PDF in the iPhone SDK. The first is to use the Core Graphics functions:
CGContextRef CGPDFContextCreateWithURL(CFURLRef url, const CGRect *mediaBox, CFDictionaryRef auxiliaryInfo)
void CGContextBeginPage(CGContextRef c, const CGRect *mediaBox)
void CGContextEndPage(CGContextRef c)
The second way is to use the UIKit functions for creating a PDF context. You can find them here.
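Roughly, the UIKit route looks like this (the page size here is US Letter in points, just as an example):
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 612.0f, 792.0f), nil);
UIGraphicsBeginPDFPage();
// ...draw into UIGraphicsGetCurrentContext() with UIKit or Core Graphics calls...
UIGraphicsEndPDFContext();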
There is a good tutorial on generating a PDF programmatically with the iPhone SDK, and you can find it here.
I'm having crashing issues using the Quartz PDF API for iOS. At the moment I am compiling with the SDK 4.0 GM Seed and running on my 3.2 iPad (I have tried using the 3.2 SDK with identical results).
All the code I am using is based on the standard Apple Quartz documentation and on various sources around the internets, so I can't imagine I'm doing something drastically different or wrong.
The code runs perfectly in the Simulator (all versions, it's a Universal app) and even while using the "Simulate Memory Warning" function. I've used the Leaks tool and there are no leaks that it finds. Build and Analyze also finds nothing. No crash or low memory log is left in my Library.
All this leads me to believe the device is running out of memory. The crash happens after running through, say, 50 PDF pages, about 35% of which have an image of some sort (some full-page, some icons). It does not crash on any particular page. The PDF I am loading is about 75 pages and 3.5 MB.
I've perused similar issues on this site and around the internets, and have applied some of the advice in the code below. I now release the PDF document reference on every page turn, and I no longer retain/release a page reference. I've also simplified the image swapping from using CGImages to just using the UIGraphicsGetImageFromCurrentImageContext function. I've tried various implementations for switching the images, including replacing pdfImgView completely with a newly allocated temporary instance (using [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()]), and using the setter for pdfImgView and releasing the temp. All of the variations pass the Leaks and Analyzer tests, but still exhibit the same crashing behavior.
So, before I move away from PDFs altogether, is there something I should try or something I am missing?
View controller code that is called in interface handlers to swap pages and on first load:
[self drawPage];
// ...animation code...simple CATransition animation...crashes with or without
// scrollView is a UIScrollView that is a subview of self.view
[scrollView.layer addAnimation:transition forKey:nil];
// pdfImgView is a UIImageView that is a subview of scrollView
pdfImgView.image = UIGraphicsGetImageFromCurrentImageContext();
drawPage method used to configure and draw PDF page to the context:
CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("BME_interior.pdf"), NULL, NULL);
pdfRef = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL); // instance variable, not a property
CFRelease(pdfURL);
CGPDFPageRef page = CGPDFDocumentGetPage(pdfRef, currentPage);
CGRect box = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
// ...setting scale and imageHeight, both floats...
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.view.frame.size.width, imageHeight), NO, 0.0);
} else {
    UIGraphicsBeginImageContext(CGSizeMake(self.view.frame.size.width, imageHeight));
}
CGContextRef context = UIGraphicsGetCurrentContext();
NSLog(@"page is %d, context is %p, pdf doc is %p, pdf page is %p", currentPage, context, pdfRef, page); // all prints properly
// ...setting up scrollView for new page, using same instance...
CGContextTranslateCTM(context, (self.view.frame.size.width - (box.size.width * scale)) / 2.0f, imageHeight);
CGContextScaleCTM(context, scale, -1.0 * scale);
CGContextSaveGState(context);
CGContextDrawPDFPage(context, page);
CGContextRestoreGState(context);
CGPDFDocumentRelease(pdfRef);
pdfRef = NULL;
Aha! I've fixed the crashes by adding a UIGraphicsEndImageContext() before beginning a new image context. I don't even get memory warnings now...
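In other words, something like this in the page-swap code (a sketch):
pdfImgView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balances the UIGraphicsBeginImageContext* call in drawPage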
Calling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetRenderingIntent(context, kCGRenderingIntentDefault);
before CGContextDrawPDFPage solved a similar problem of mine.
Credit goes to this answer by Johann:
CGContextDrawPDFPage taking up large amounts of memory
Why is it so hard to figure out how to draw Unicode characters on the iPhone, deriving simple font metrics along the way, such as how wide each imaged glyph is going to be in the font of choice?
It looks like it'd be easy with NSLayoutManager, but that API apparently isn't available on the phone. It appears the way people are doing this is to use a private API, CGFontGetGlyphsForUnichars, which won't get you past the Apple gatekeepers into the App Store.
Can anybody point me to documentation that shows how to do this? I'm losing hair rapidly.
Howard
I assumed that the exclusion of CGFontGetGlyphsForUnichars was an oversight rather than a deliberate move, but I'm not betting the farm on it. So instead I use [NSString drawAtPoint:withFont:] (in UIStringDrawing.h) and [NSString sizeWithFont:]. This also has the advantage of performing decent substitution for characters missing from your font, something that CGContextShowGlyphs does not do.
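For example (a sketch; the string and font are placeholders, and a current graphics context is assumed, e.g. inside drawRect:):
UIFont *font = [UIFont boldSystemFontOfSize:16.0f];
NSString *glyph = @"漢";                      // any Unicode string
CGSize metrics = [glyph sizeWithFont:font];   // width gives you the advance for layout
[glyph drawAtPoint:CGPointMake(10.0f, 10.0f) withFont:font];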
CoreText is the answer if you want to draw Unicode rather than use CGContextShowGlyphsAtPositions. It's also better than [NSString drawAtPoint:withFont:] if you need custom drawing.
Here is a complete example:
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)attributedString);
CFArrayRef runArray = CTLineGetGlyphRuns(line);

// in more complicated cases, loop over runArray;
// here I assume the array contains only one CTRunRef
const CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runArray, 0);

// do not use CTFontCreateWithName here, otherwise you won't see e.g. Chinese characters;
// take the (possibly substituted) font from the run's attributes instead
const CTFontRef font = (CTFontRef)CFDictionaryGetValue(CTRunGetAttributes(run), kCTFontAttributeName);

CFIndex glyphCount = CTRunGetGlyphCount(run);
CGGlyph glyphs[glyphCount];
CGPoint glyphPositions[glyphCount];
CTRunGetGlyphs(run, CFRangeMake(0, 0), glyphs);
// you can modify the positions further
CTRunGetPositions(run, CFRangeMake(0, 0), glyphPositions);

CTFontDrawGlyphs(font, glyphs, glyphPositions, glyphCount, context);

CFRelease(line);
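For reference, the attributedString above can be built along these lines (the base font name and string are just examples; CoreText substitutes fonts per run, which is why the code takes the font from the run's attributes rather than reusing this one):
#import <CoreText/CoreText.h>

CTFontRef baseFont = CTFontCreateWithName(CFSTR("Helvetica"), 16.0, NULL);
NSDictionary *attributes = [NSDictionary dictionaryWithObject:(id)baseFont
                                                        forKey:(id)kCTFontAttributeName];
NSAttributedString *attributedString =
    [[[NSAttributedString alloc] initWithString:@"日本語" attributes:attributes] autorelease];
CFRelease(baseFont);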
I've made a pretty suitable replacement for the private function. Read about it here:
http://thoughts.codemelody.com/2009/07/a-replacement-for-cgfontgetglyphsforunichars/