Is it possible to tile images in a UIScrollView without having to manually create all the tiles?

In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by loading pre-scaled, pre-cut images for the different resolutions and placing them in the grid that makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is the Photos app able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    // CGImageCreateWithImageInRect follows the create rule, so release the CGImageRef to avoid leaking each tile.
    CGImageRelease(tiledImage);
    return tileImage;
}

Here is the code for tiled image generation.
In the PhotoScroller source code, replace tileForScale:row:col: with the following:
inImage is the image you want to create tiles from.
- (UIImage *)tileForScale:(float)scale row:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    // Release the CGImageRef created above to avoid leaking each tile.
    CGImageRelease(tiledImage);
    return tileImage;
}
Regards,
Deepa

I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.

Sorry Jonah, but I think you cannot do what you want to.
I have been implementing a comic app using the same example as a reference and had the same question. In the end I realized that even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
Tiling exists to save time and keep scrolling responsive; loading and slicing a large image at runtime costs exactly the time you are trying to save.
The previous reason matters most the first time the user runs the app, which is the worst moment to add a delay.
If these two reasons make no sense to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.

Deepa's answer above loads the entire image into memory as a UIImage (the inImage parameter of the function), defeating the purpose of tiling.

Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
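As far as I know, ImageIO on iOS doesn't expose true region-of-interest decoding, but a related memory win is available for the lower zoom levels. Here is a minimal sketch (my illustration, assuming the image sits on disk, e.g. as a JPEG) that asks ImageIO for a downsampled decode without ever materializing the full-resolution bitmap:
#import <ImageIO/ImageIO.h>

UIImage *downsampledImageAtURL(NSURL *url, CGFloat maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return nil;
    NSDictionary *options = @{
        (__bridge NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (__bridge NSString *)kCGImageSourceThumbnailMaxPixelSize : @(maxPixelSize)
    };
    // Decodes at a reduced size instead of decoding full-size and scaling down.
    CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!scaled) return nil;
    UIImage *image = [UIImage imageWithCGImage:scaled];
    CGImageRelease(scaled);
    return image;
}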
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only meant to demonstrate the idea behind CATiledLayer and make a working, self-contained project. It's straightforward to replace the tile-loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever.
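For example, here is a sketch of such a replacement (my illustration, not the sample's code) that crops tiles on demand; imageForScale: is a hypothetical helper returning a cached copy of the source image already resized for the given zoom level:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    CGSize tileSize = CGSizeMake(256, 256); // must match the CATiledLayer's tileSize
    UIImage *levelImage = [self imageForScale:scale]; // hypothetical per-zoom-level cache
    CGRect tileRect = CGRectMake(col * tileSize.width, row * tileSize.height,
                                 tileSize.width, tileSize.height);
    // Clamp edge tiles so the crop never reaches past the bitmap.
    tileRect = CGRectIntersection(tileRect,
                                  CGRectMake(0, 0, levelImage.size.width, levelImage.size.height));
    CGImageRef tileRef = CGImageCreateWithImageInRect(levelImage.CGImage, tileRect);
    UIImage *tile = [UIImage imageWithCGImage:tileRef];
    CGImageRelease(tileRef);
    return tile;
}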

Related

How to capture screen after applying mask to the layer in iOS?

I added two UIViews to ViewController.view and applied two square images, one to each view.layer.mask, to make it look like a square sliced into two pieces, then added the image view over them with addSubview.

I am having a problem rendering the masked layers and saving the result to the photo album.
I want the saved photo to look like picture no. 1, but it always looks like picture no. 2 after I save it to the photo album.

Is there any way to capture the view so that it looks like picture no. 1 after the mask is applied?
Below is the note from Apple's documentation regarding renderInContext:
Important The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
I've previously created an image capture function that literally takes a screenshot of a UIView. I don't use it because it doesn't work well for my needs, but maybe you can use it:
UIImage *img;
UIGraphicsBeginImageContextWithOptions(UIViewYouWantToCapture.bounds.size, UIViewYouWantToCapture.opaque, 0.0);
[[UIViewYouWantToCapture layer] renderInContext:UIGraphicsGetCurrentContext()];
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
When you apply a mask, the hidden part of the image is still there; its pixels simply have alpha 0 while the visible part has alpha 1. And because renderInContext: ignores the mask, capturing the view renders the complete image (you were only seeing half of it because of the alpha-0 region), so you get a screenshot of the whole view.
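If the goal is a capture that respects the mask, one workaround is to render the layer (which ignores its mask) and then re-apply the mask yourself with a destination-in blend. A sketch, assuming view is one of your masked views and maskImage is the same image you installed in view.layer.mask:
// renderInContext: ignores layer masks, so render the unmasked content
// first, then keep only the pixels where the mask has alpha by drawing
// the mask with kCGBlendModeDestinationIn.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
[maskImage drawInRect:view.bounds blendMode:kCGBlendModeDestinationIn alpha:1.0];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();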

Can I process a UIImage in Objective-C?

I want to process a UIImage bit by bit. In the attached example, the first is the normal image and the second is the processed image, produced with the CGBitmapContextCreate function.
Any ideas or references?
I have a rough idea of how this could be achieved, but I'm not sure how to do the bit-by-bit processing.
CGContextRef context;
context = CGBitmapContextCreate(pixelData,
                                CGImageGetWidth(sourceImage),
                                CGImageGetHeight(sourceImage),
                                8,
                                CGImageGetBytesPerRow(sourceImage),
                                CGImageGetColorSpace(sourceImage),
                                kCGImageAlphaPremultipliedLast);
If you want to process images, you have different technologies:
- Check Core Image to see which filters are built in and how to apply them to images. Note that Core Image is also available on Mac OS, and you can build custom filters.
- For tasks closer to "image processing" or "computer vision" (the technical fields, not photo retouching), you need access to the bitmap data of the image. For that, you can create a CGImage from a UIImage (see the Quartz documentation), or, if you're grabbing frames from the camera, use an intermediate buffer (see the AVFoundation framework). A sketch of the bitmap route follows below.

CGContextDrawPDFPage taking up large amounts of memory

I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document, each in its own UIImage, to use on buttons so that when one is tapped, the main display navigates to that page.
However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments, allocates about 13 MB of memory just for the one page.
Here's the code for drawing:
//Note: This is always called in a background thread, but the autorelease pool is set up elsewhere
+ (void)drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g {
    CGPDFBox box = kCGPDFMediaBox;
    CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
    CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);
    //Start the drawing
    CGContextSaveGState(g);
    //Clip to our bounding box
    CGContextClipToRect(g, pageRect);
    //Now we have to flip the origin to top-left instead of bottom-left
    //First: flip the y-axis
    CGContextScaleCTM(g, 1, -1);
    //Second: move the origin
    CGContextTranslateCTM(g, 0, -rect.size.height);
    //Now apply the transform to draw the page within the rect
    CGContextConcatCTM(g, t);
    //Finally, draw the page
    //The important bit. Commenting out the following line "fixes" the crashing issue.
    CGContextDrawPDFPage(g, m_page);
    CGContextRestoreGState(g);
}
Is there a better way to draw this image that doesn't take up huge amounts of memory?
Try adding:
CGContextSetInterpolationQuality(g, kCGInterpolationHigh);
CGContextSetRenderingIntent(g, kCGRenderingIntentDefault);
before:
CGContextDrawPDFPage(g, m_page);
I had a similar issue, and adding the two function calls above resulted in the rendering using 5x less memory. It might be a bug in the CGContextXXX drawing functions.
Take a look at my code for a PDF image slicer on github:
http://github.com/luciuskwok/Maps-Slicer
There should be enough memory on the device that a 13 MB allocation isn't going to kill the app. Are you draining the autorelease pool each time you render a PDF? You might also want to cache the rendering into a UIImage so that it doesn't have to render it every time it's displayed.
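For instance, the caching idea might look like the sketch below (my illustration): render each page once at thumbnail size, and when looping over pages, wrap each call in @autoreleasepool so the temporaries from each render are freed promptly.
UIImage *PDFPageThumbnail(CGPDFPageRef page, CGFloat targetHeight)
{
    CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);
    CGFloat scale = targetHeight / CGRectGetHeight(pageRect);
    CGSize size = CGSizeMake(CGRectGetWidth(pageRect) * scale, targetHeight);

    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    CGContextRef g = UIGraphicsGetCurrentContext();
    // PDF pages can be transparent; paint a white background first.
    CGContextSetFillColorWithColor(g, [UIColor whiteColor].CGColor);
    CGContextFillRect(g, CGRectMake(0, 0, size.width, size.height));
    // Flip into PDF coordinates and scale the page down to the target size.
    CGContextTranslateCTM(g, 0, size.height);
    CGContextScaleCTM(g, scale, -scale);
    CGContextTranslateCTM(g, -pageRect.origin.x, -pageRect.origin.y);
    CGContextDrawPDFPage(g, page);
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}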

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
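Untested, but that path might look like the sketch below. One aside: vertical-line artifacts like the ones you describe are often a sign of assuming bytesPerRow equals width * 4; real bitmaps can pad each row, so always read the stride with CGImageGetBytesPerRow.
UIImage *imageByModifyingPixels(UIImage *source)
{
    CGImageRef src = source.CGImage;
    CFDataRef srcData = CGDataProviderCopyData(CGImageGetDataProvider(src));
    CFMutableDataRef data = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, srcData);
    CFRelease(srcData);
    UInt8 *bytes = CFDataGetMutableBytePtr(data);
    CFIndex length = CFDataGetLength(data);
    // Assumes 32 bits per pixel; which byte is which channel depends on
    // the image's bitmap info, so inspect CGImageGetBitmapInfo in practice.
    for (CFIndex i = 0; i < length; i += 4) {
        bytes[i] = 0; // e.g. zero out the first channel
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
    CGImageRef newRef = CGImageCreate(CGImageGetWidth(src), CGImageGetHeight(src),
                                      CGImageGetBitsPerComponent(src),
                                      CGImageGetBitsPerPixel(src),
                                      CGImageGetBytesPerRow(src),
                                      CGImageGetColorSpace(src),
                                      CGImageGetBitmapInfo(src),
                                      provider, NULL, false,
                                      kCGRenderingIntentDefault);
    UIImage *result = [UIImage imageWithCGImage:newRef];
    CGImageRelease(newRef);
    CGDataProviderRelease(provider);
    CFRelease(data);
    return result;
}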

Reduce UIImage size to a manageable size (reduce bytes)

I want to reduce the number of bytes of an image captured by the device, since I believe _imageScaledToSize does not reduce the number of bytes of the picture (or does it?). I want to store a thumbnail of the image in a local dictionary object and can't afford to put full-size images in the dictionary. Any ideas?
If you wish to simply compress your UIImage, you can use
NSData *dataForPNGFile = UIImagePNGRepresentation(yourImage);
to generate an NSData version of your image encoded as a PNG (easily inserted into an NSDictionary or written to disk), or you can use
NSData *dataForPNGFile = UIImageJPEGRepresentation(yourImage, 0.9f);
to do the same, only in a JPEG format. The second parameter is the image quality of the JPEG. Both of these should produce images that are smaller, memory-wise, than your UIImage.
Resizing a UIImage to create a smaller thumbnail (pixels-wise) using published methods is a little trickier. _imageScaledToSize is from the private API, and I'd highly recommend you not use it. For a means that works within the documented methods, see this post.
I ran into this problem the other day and did quite a bit of research. I found an awesome solution complete with code here:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You need to draw the image into a graphics context at a smaller size. Then, release the original image.
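In outline, that approach boils down to something like this minimal sketch (the linked code additionally handles orientation and other edge cases):
UIImage *thumbnailOfImage(UIImage *image, CGSize targetSize)
{
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}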
When you say 'physical size', are you talking about a print? Because you can just change the printer page size.
Are you talking about the number of pixels used to capture the image? As in, if you have a pixel array of 3000x2000, and you only want 150x150, then you can crop the images. At the time of capture, if you have a scientific imager, then you can just set the area that will be captured. The camera driver would include instructions for that. If you want to capture 3000x2000 in 1500x1000, you can try to bin the image, if that's what you need.
Or, you can use resampling post-capture in order to make the image smaller. One such algorithm is bicubic resampling, also linear resampling-- there are many variations.
I'm thinking this last is what you're most interested in... in which case, check out this Wikipedia page on the algorithm. Or, you can go to FreeImage and get a library that will read in the image and can also resize images.
UIImageJPEGRepresentation does the trick but I find that using the ImageIO framework often gets significantly better compression results for the same quality setting. It may be slower, but depending on your use case this may not be an issue.
(Code adapted for NSData from this blog post by Zachary West).
#import <MobileCoreServices/MobileCoreServices.h>
#import <ImageIO/ImageIO.h>
...
+ (NSData *)JPEGDataFromImage:(UIImage *)image quality:(double)quality
{
    CFMutableDataRef outputImageDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef imageDestinationRef = CGImageDestinationCreateWithData(outputImageDataRef, kUTTypeJPEG, 1, NULL);
    NSDictionary *properties = @{
        (__bridge NSString *)kCGImageDestinationLossyCompressionQuality : @(quality)
    };
    CGImageDestinationSetProperties(imageDestinationRef, (__bridge CFDictionaryRef)properties);
    CGImageDestinationAddImage(imageDestinationRef, image.CGImage, NULL);
    CGImageDestinationFinalize(imageDestinationRef);
    CFRelease(imageDestinationRef);
    NSData *imageData = CFBridgingRelease(outputImageDataRef);
    return imageData;
}
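Hypothetical usage (ImageUtils, photo, and thumbnailCache are placeholder names):
NSData *jpegData = [ImageUtils JPEGDataFromImage:photo quality:0.6];
thumbnailCache[@"photoKey"] = jpegData; // e.g. the dictionary mentioned in the question above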