I have a PDF file that I want to draw in outline form. I want to draw the first several pages on the document each in their own UIImage to use on a button so that when clicked, the main display will navigate to the clicked page.
However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments, allocates about 13 MB of memory just for the one page.
Here's the code for drawing:
// Note: This is always called on a background thread, but the autorelease pool is set up elsewhere
+ (void)drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g {
    CGPDFBox box = kCGPDFMediaBox;
    CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
    CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);

    // Start the drawing
    CGContextSaveGState(g);

    // Clip to our bounding box
    CGContextClipToRect(g, pageRect);

    // Now we have to flip the origin to top-left instead of bottom-left
    // First: flip the y-axis
    CGContextScaleCTM(g, 1, -1);
    // Second: move the origin
    CGContextTranslateCTM(g, 0, -rect.size.height);

    // Now apply the transform to draw the page within the rect
    CGContextConcatCTM(g, t);

    // Finally, draw the page
    // The important bit. Commenting out the following line "fixes" the crashing issue.
    CGContextDrawPDFPage(g, m_page);

    CGContextRestoreGState(g);
}
Is there a better way to draw this image that doesn't take up huge amounts of memory?
Try adding:
CGContextSetInterpolationQuality(g, kCGInterpolationHigh);
CGContextSetRenderingIntent(g, kCGRenderingIntentDefault);
before:
CGContextDrawPDFPage(g, m_page);
I had a similar issue, and adding the two function calls above resulted in the rendering using 5x less memory. It might be a bug in the CGContextXXX drawing functions.
Take a look at my code for a PDF image slicer on github:
http://github.com/luciuskwok/Maps-Slicer
There should be enough memory on the device that a 13 MB allocation isn't going to kill the app. Are you draining the autorelease pool each time you render a PDF? You might also want to cache the rendering into a UIImage so that it doesn't have to render it every time it's displayed.
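For what it's worth, here is a minimal sketch of the render-and-cache idea, assuming manual reference counting and a drawPage:inRect:inContext: helper like the one in the question (the method name thumbnailForPage:size: is made up). Each page gets its own autorelease pool so intermediate allocations are released before the next page is rendered:

+ (UIImage *)thumbnailForPage:(CGPDFPageRef)page size:(CGSize)size {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    CGContextRef g = UIGraphicsGetCurrentContext();

    // White background so transparent page regions don't render black.
    CGContextSetFillColorWithColor(g, [UIColor whiteColor].CGColor);
    CGContextFillRect(g, CGRectMake(0.0, 0.0, size.width, size.height));

    [self drawPage:page inRect:CGRectMake(0.0, 0.0, size.width, size.height) inContext:g];

    UIImage *thumbnail = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();

    [pool drain];

    // Cache the returned image (e.g. keyed by page number) so each page is
    // only rendered once, not every time the button is displayed.
    return [thumbnail autorelease];
}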
I added two UIViews to ViewController.view and applied two square images to each view.layer.mask, so that it looks like a square sliced into two pieces, then added the image view over it with addSubview.
I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.
Is there any way to capture something that looks like picture no. 1 after the mask is applied?
Below is the relevant note from Apple's documentation on renderInContext:
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've created an image capture function before that essentially takes a screenshot of a UIView. I don't use it because it doesn't work well for my needs, but maybe you can use it:
UIImage *img;
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size,
                                       UIViewYouWantToCapture.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When we apply a mask to an image, the resulting image has an alpha of 1 in the masked-in region and 0 everywhere else.
When we then capture the view, the complete image is still there (we only see half of it because the other half has alpha = 0, but the whole image exists), so we end up with a screenshot of the complete view.
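One possible workaround, since renderInContext: ignores layer masks, is to render the content and the mask separately and then combine them yourself. The sketch below is untested against the poster's setup; captureView:withMaskLayer: and its arguments are made-up names:

- (UIImage *)captureView:(UIView *)view withMaskLayer:(CALayer *)maskLayer {
    CGSize size = view.bounds.size;

    // 1. The content, captured without its mask (renderInContext: drops the mask anyway).
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *content = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. The mask layer, rendered on its own.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [maskLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 3. Combine: kCGBlendModeDestinationIn keeps the content only where the
    //    mask has alpha, which mimics what layer.mask does on screen.
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [content drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
    [maskImage drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)
                blendMode:kCGBlendModeDestinationIn
                    alpha:1.0];
    UIImage *masked = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return masked;
}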
In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid which makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is the Photos app able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns a +1 reference
    return tileImage;
}
Here is the code for tiled image generation.
In the PhotoScroller source code, replace tileForScale:row:col: with the following.
inImage - the image you want to create tiles from
- (UIImage *)tileForScale:(float)scale row:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // release the +1 CGImageRef once the UIImage wraps it
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
Sorry Jonah, but I don't think you can do what you want to.
I have been implementing a comic app using the same example as a reference and had the same question. Eventually I realized that, even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and be more responsive; loading and tiling a large image at runtime takes time.
The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you, and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the input variable in his function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only meant to demonstrate the idea behind CATiledLayer and make a working, self-contained project. It's straightforward to replace the image tile loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
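To make that concrete, here is a rough sketch of the drawing side, loosely following the TilingView in the sample (tileSize and tileForScale:row:col: are assumed to exist on the view, and this is simplified relative to the real sample code). CATiledLayer calls drawRect: once per tile, so swapping the tile source only means changing what tileForScale:row:col: returns:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = CGContextGetCTM(context).a;   // scale CATiledLayer is asking for

    CGSize tileSize = self.tileSize;
    int firstCol = floorf(CGRectGetMinX(rect) / tileSize.width);
    int lastCol  = floorf((CGRectGetMaxX(rect) - 1) / tileSize.width);
    int firstRow = floorf(CGRectGetMinY(rect) / tileSize.height);
    int lastRow  = floorf((CGRectGetMaxY(rect) - 1) / tileSize.height);

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            // Swap this method's implementation to pull tiles from pre-cut
            // files, on-the-fly Quartz cropping, ImageMagick output, etc.
            UIImage *tile = [self tileForScale:scale row:row col:col];

            CGRect tileRect = CGRectMake(tileSize.width * col,
                                         tileSize.height * row,
                                         tileSize.width, tileSize.height);
            tileRect = CGRectIntersection(self.bounds, tileRect);
            [tile drawInRect:tileRect];
        }
    }
}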
I am working with images of 2 to 4 MB, at a resolution of 1200x1600, performing scaling, translation, and rotation operations on them. I want to add another image on top of that and save the result to the photo album. My app crashes after I successfully edit one image and save it to Photos. I think it's happening because of the image sizes; I want to maintain about 90% of the resolution of the images.
I release some images when I get a memory warning, but it still crashes, since I am working with two images of about 3 MB each plus a 1200x1600 context, and getting an image from the context, all at the same time.
Is there any way to compress the images and work with them?
I doubt it. Even compressing and decompressing an image without doing anything to it loses information. I suspect that any algorithms to manipulate compressed images would be hopelessly lossy.
Having said that, it may be technically possible. For instance, rotating a Fourier transform also rotates the original image. But practical image compression isn't usually as simple as just computing a Fourier transform.
Alternatively, you could write piecemeal algorithms that chop the image up into bite-sized pieces, transform the pieces and reassemble them afterwards. You might also provide a real-time view of the process by applying the same transform to a smaller version of the full image.
The key will be never to fully decode the entire image into memory at full size.
If you need to display the image, there's no reason to do that at full size -- the display on the iPhone is too small to take advantage of it. For image objects that are only for display, decode the image in scaled-down form.
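For the display case, one way to do that (a sketch, not the poster's code; the function name is made up and it assumes the image lives at a file URL, e.g. in the bundle or documents directory) is to let ImageIO decode a downsampled version directly, rather than loading the full bitmap and scaling afterwards:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

// Decode an image at a reduced pixel size instead of at full resolution.
UIImage *ScaledImageForDisplay(NSURL *fileURL, CGFloat maxPixelSize) {
    CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)fileURL, NULL);
    if (source == NULL) return nil;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
        (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
        [NSNumber numberWithFloat:maxPixelSize], (id)kCGImageSourceThumbnailMaxPixelSize,
        nil];

    // The "thumbnail" here is just a downsampled decode of the original image.
    CGImageRef scaledRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
    CFRelease(source);
    if (scaledRef == NULL) return nil;

    UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef);
    return scaled;
}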
For processing, you will need to write custom code that works on a stream of pixels rather than an in-memory array. I don't know if this is available on the iPhone already, but you can write it yourself by writing to the libpng library API directly.
For example, your code right now probably looks something like this (pseudocode):
img = ReadImageFromFile("image.png")
img2 = RotateImage(img, 90)
SaveImage(img2, "image2.png")
The key thing to understand is that in this case, img is not the data in the PNG file (2 MB), but the fully uncompressed image (~6 MB). RotateImage (or whatever it's called) returns another image of about the same size. If you are scaling up, it's even worse.
You want code that looks more like this (but there might not be any APIs for you to do it -- you might have to write it yourself):
imgPixelGetter = PixelDecoderFromFile("image.png")
imgPixelSaver = OpenImageForAppending("image2.png")
w = imgPixelGetter.Width
h = imgPixelGetter.Height
// set up a 90 degree rotate
imgPixelSaver.Width = h
imgPixelSaver.Height = w
// read each vertical scanline of pixels
for (x = 0; x < w; ++x) {
    pixelRect = imgPixelGetter.ReadRect(x, 0, 1, h)  // x, y, w, h
    pixelRect.Rotate(90)  // it's now got a width of h and a height of 1
    imgPixelSaver.AppendScanLine(pixelRect)
}
In this algorithm, you never had the entire image in memory at once -- you read it out piece by piece and saved it. You can write similar algorithms for scaling and cropping.
The tradeoff is that it will be slower than just decoding it into memory -- it depends on the image format and the code that's doing the ReadRect(). Unfortunately, PNG is not designed for this kind of access to the pixels.
My data visualization app incurs a large memory consumption spike during redraw (setNeedsDisplay, which triggers drawRect:). I am currently redrawing the entire view that houses the data plot. This view is much larger than the device display.
Is there any way to tell Core Graphics to allocate just enough memory to draw each element (each element is a small rectangular block, much smaller than the device display) and release the memory when done, rather than my current naive approach?
Thanks in advance.
-Doug
UPDATE 8 Dec 8:28am EST
Here is the relevant code with some explanation. I am running Instruments with the ObjectAlloc, Memory Monitor, and Leaks instruments. The only memory leak I have has to do with the NSOperationQueue not releasing memory; it is minor and not relevant here.
Architecturally the app consists of a tableView with a list of interesting locations along the human genome to inspect. When a table row is selected I enqueue a data gathering operation that returns data called alignmentData. This data is then plotted as horizontal rectangular slabs.
Initially, when the tableView launches my memory footprint is 5 MB.
- (void)viewWillAppear:(BOOL)animated {
    // Initial dimensions for the alignment view are set here. These
    // dimensions were roughed out in IB.
    frame = self.alignmentView.frame;
    frame.origin.x = 0.0;
    frame.origin.y = 0.0;
    frame.size.width = self.scrollView.contentSize.width;
    frame.size.height = 2.0 * (self.containerView.frame.size.height);
}
Note: After viewWillAppear: is called, the memory footprint has not budged, even though the alignmentView is sized well beyond the dimensions of the display.
This is the method called from the data gathering operation.
- (void)didFinishRetrievingAlignmentData:(NSDictionary *)results {
    // Data retrieved from the data server via the data gathering operation
    NSMutableData *alignmentData = [[results objectForKey:@"alignmentData"] retain];
    NSMutableArray *alignments = [[NSMutableArray alloc] init];

    while (offset < [alignmentData length]) {
        // ...
        // Ingest alignmentData into the alignments array
        // ...
    } // while (offset < [alignmentData length])

    [alignmentData release];

    // Take the array of alignment objects and position them in screen space
    // so that they pack densely, creating horizontal rows of alignment objects
    // in the process.
    self.alignmentView.packedAlignmentRows =
        [Alignment packAlignments:alignments basepairStart:self.startBasepairValue basepairEnd:self.endBasepairValue];
    [alignments release];

    [self.alignmentView setNeedsDisplay];
}
After this line of code:
self.alignmentView.packedAlignmentRows = ...
The memory footprint is 13.8 MB
After this line of code:
[self.alignmentView setNeedsDisplay];
The memory footprint spikes to 21.5 MB, stays there for a few seconds then returns to the pre-existing level of 13.8 MB
The solution I am looking for would allow me to essentially create a horizontal render buffer window that is the height of a single row of alignment objects. I would allocate its memory, render into it, then discard it, and do this over and over again for each row of alignment data.
In theory, I could render an infinite amount of data with this approach which of course would be most excellent ;-).
-Doug
Here is the not-so-obvious answer to my memory problem. I'll give myself this one because I learned it on the Apple dev forum from Rincewind - a very helpful Apple engineer, BTW.
It turns out that by slicing a large view into N smaller pieces and rendering into each in turn, I incur a memory spike that is roughly 1/N the size of the one for the large view.
So, for each smaller view: alloc/init, feed a portion of my data, setNeedsDisplay. Rinse/repeat for all N small views.
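In code form, the loop looks roughly like this (a sketch reusing names from my question; AlignmentView stands for the small per-row view class, and rowHeight is a made-up value):

CGFloat rowHeight = 40.0;   // height of one row of alignment objects (made-up value)
NSArray *packedRows = [Alignment packAlignments:alignments
                                  basepairStart:self.startBasepairValue
                                    basepairEnd:self.endBasepairValue];

for (NSUInteger i = 0; i < [packedRows count]; i++) {
    CGRect sliceFrame = CGRectMake(0.0, i * rowHeight,
                                   self.scrollView.contentSize.width, rowHeight);

    AlignmentView *slice = [[AlignmentView alloc] initWithFrame:sliceFrame];
    // Each small view only knows about its own row of data.
    slice.packedAlignmentRows = [NSArray arrayWithObject:[packedRows objectAtIndex:i]];

    [self.scrollView addSubview:slice];
    [slice setNeedsDisplay];   // the backing store allocated here is ~1/N the size of the big view's
    [slice release];
}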
Simple, eh?
Prior to learning this, I had mistakenly thought that setNeedsDisplayInRect: did this for the large view. Apparently not.
Thanks for all the suggestions gang.
Cheers,
Doug
@dugla
"This view is much larger than the device display."
So you're scrolling through the data representation by moving the view around? You might want to consider making your view the same size as the display and using CGContextTranslateCTM to adjust the drawing offset within your drawRect: function. It sounds like you're drawing tons of stuff, and Core Graphics can't tell what's visible and what is not.
You'll get much better drawing performance if you make the view smaller and insert checks to avoid drawing things that are outside the view's bounds.
This is very possible. You will need to specify which sections of the screen need to be drawn: call setNeedsDisplayInRect: as described here, passing in a CGRect covering the area you wish to have redrawn.
This is much, much faster than redrawing the entire screen; I ran into this in an iPhone drawing application I created a year and a half ago.
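As a sketch of how the two halves fit together (elementFrames and plotView are placeholder names, not from the question):

// In the view's drawRect:, skip anything outside the rect being redrawn.
- (void)drawRect:(CGRect)dirtyRect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (NSValue *boxed in self.elementFrames) {   // NSValue-wrapped CGRects
        CGRect elementFrame = [boxed CGRectValue];
        // Only draw blocks that intersect the region being redrawn.
        if (CGRectIntersectsRect(elementFrame, dirtyRect)) {
            CGContextFillRect(ctx, elementFrame);
        }
    }
}

// Elsewhere, when a single element changes, invalidate only its rectangle
// instead of the whole view:
// [self.plotView setNeedsDisplayInRect:changedElementFrame];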
In addition to Ben's suggestion:
If you're scrolling around your data, consider adding a few smaller views to the scrollview. This way you don't need to redraw most of the time, but only when some area of your scrollview isn't covered any more. Basically, if one of your subviews scrolls completely out of sight you'd move it to the opposite side of the visible area and redraw it accordingly.
In my app I'm only scrolling horizontally, and am using two subviews. Let's say view1 is on the left and view2 on the right. When view2 scrolls out of sight, I move it to the left of view1 and redraw it accordingly. If the user scrolls further in the same direction view1 will scroll out of sight as well and I'll move it to the left of view2 and so on.
If you need to scroll horizontally and vertically you'd need 4 views.
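A sketch of the horizontal two-view case (view1, view2, and configureView:forContentOffset: are placeholder names; the symmetric check for the other direction is omitted):

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGRect visible = CGRectMake(scrollView.contentOffset.x, 0.0,
                                scrollView.bounds.size.width,
                                scrollView.bounds.size.height);

    // view2 has scrolled completely out of sight to the right while the user
    // scrolls left: park it to the left of view1 and redraw it for that range.
    if (CGRectGetMinX(self.view2.frame) > CGRectGetMaxX(visible)) {
        CGRect f = self.view2.frame;
        f.origin.x = CGRectGetMinX(self.view1.frame) - f.size.width;
        self.view2.frame = f;
        [self configureView:self.view2 forContentOffset:f.origin.x];
        [self.view2 setNeedsDisplay];
    }
    // The mirror-image check handles view1 scrolling off the other edge.
}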
I know you are probably aware of this, but have you looked at the Core Plot framework for your data visualization? We recently added touch-scrolling of graphs and we've tried to be conservative when it comes to memory within the framework. Without knowing more about your specific case, this might be something you could try.
I have a UIImageView object that is just a plain black rectangle.
This is what I use to indicate a selected button in the view.
Problem is, I have 49 of these buttons in my view, and all of them can be selected at the same time.
What I use for adding a subview to a button is:
UIImageView *selectedSquareView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
[selectedSquareView setImage:[UIImage imageNamed:@"SelectedSquare.png"]];
[button addSubview:selectedSquareView];
I would like the selectedSquareView to be reused multiple times as subviews for the other buttons, but only keep one allocation of it. I would prefer not having 49 UIImageViews created at the same time just for this purpose. Is this possible?
If not, should I store them in an NSMutableArray for easy removal later?
Regards
-Raymond
You will need 49 UIImageViews, but only one UIImage. The UIImageViews hold the position, size, highlighted state, and so on for each button.
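For concreteness, here is a sketch of the "49 image views, one image" setup; buttons, selectionViews, and the hidden/visible-overlay approach are my own placeholder names and choices, not from the question:

UIImage *selectedImage = [UIImage imageNamed:@"SelectedSquare.png"]; // loaded (and cached) once
NSMutableArray *selectionViews = [NSMutableArray arrayWithCapacity:[buttons count]];

for (UIButton *button in buttons) {
    UIImageView *overlay = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
    overlay.image = selectedImage;   // all 49 image views share the same bitmap
    overlay.hidden = YES;            // toggle hidden instead of adding/removing subviews
    [button addSubview:overlay];
    [selectionViews addObject:overlay];
    [overlay release];
}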
That being said, even if you had a lot of UIImages, UIImage is supposed to be pretty intelligent about these things, as Apple describes in its documentation:
In low-memory situations, image data may be purged from a UIImage object to free up memory on the system. This purging behavior affects only the image data stored internally by the UIImage object and not the object itself. When you attempt to draw an image whose data has been purged, the image object automatically reloads the data from its original file. This extra load step, however, may incur a small performance penalty.
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
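If you ever do need the resize-by-redrawing trick that last paragraph mentions, it looks roughly like this (a sketch; the function name and maxDimension are made up):

UIImage *DownscaledImage(UIImage *source, CGFloat maxDimension) {
    CGSize size = source.size;
    CGFloat ratio = MIN(maxDimension / size.width, maxDimension / size.height);
    if (ratio >= 1.0) return source;   // already small enough

    CGSize target = CGSizeMake(floorf(size.width * ratio), floorf(size.height * ratio));

    // Drawing into a bitmap-backed context sidesteps the 1024 x 1024 limit.
    UIGraphicsBeginImageContextWithOptions(target, NO, 1.0);
    [source drawInRect:CGRectMake(0.0, 0.0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}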
Alternatively, if you really feel you need to delete the UIImageViews when they are not in use, you can do as you suggest: store them in an array, release them in viewDidDisappear, and recreate them all in viewWillAppear.
Each UIView can only appear in the view hierarchy once (it has a single superview), so you will definitely need to create 49 of them.
Your current code is probably fine, since UIImage will probably cache the image, but you might like to create the image only once and then set it each time, something like:
UIImageView *selectedSquareView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
static UIImage *kSelectedSquareImage = nil;
if (kSelectedSquareImage == nil) {
    kSelectedSquareImage = [[UIImage imageNamed:@"SelectedSquare.png"] retain];
}
[selectedSquareView setImage:kSelectedSquareImage];
"If not, should I store them in an NSMutableArray for easy removal later?"
It depends on whether there are any other views in the container view. If not, then there is no need to store them in an NSMutableArray, since you can just use container.subviews to get an array of the views. Otherwise, sure, you could store them in an NSMutableArray and remove them that way (just make sure you remove them from the array or release the array as well, otherwise they will remain in memory simply because they are stored in the array).
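For the cleanup path, a small sketch of the container.subviews approach (assuming a buttons collection like above, and that the overlays are the only UIImageViews inside each button):

for (UIButton *button in buttons) {
    // Copy the subviews array because we mutate the hierarchy while iterating.
    NSArray *subviews = [[button.subviews copy] autorelease];
    for (UIView *subview in subviews) {
        if ([subview isKindOfClass:[UIImageView class]]) {
            [subview removeFromSuperview];
        }
    }
}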