I've been stuck on this for quite some time. I've looked at many other answers and also at Apple's examples.
What I want to know is how I can go about loading large images in UIScrollView, and be able to page through them and zoom in to them, like the Photos app.
I'm working on an app where I need to scroll through large images (by large, I mean images greater than 1024 x 1024). I've already implemented something similar to what the Photos app does. However, now that I'm using large images, I get memory warnings. I know why I'm getting them, and I know that I need to tile the images.
Apple's examples demonstrate tiling with small images which are already present. My app will be downloading multiple images, saving them on a disk and the user will be able to look at them at a later time. Thus, I cannot have the images cut up using other programs.
I need something wherein I can tile the full-size images. Any help will be greatly appreciated. Additionally, as I mentioned earlier, I know that I have to do something with tiling. However, I'm new to this, and I would greatly appreciate it if the answers contained some sample code, which I can use as a starting point.
I've looked at other questions, and none seemed to have been answered to my satisfaction, owing to which, I'm asking this question again.
Thanks again!
To work with tiling you'll need to use a CATiledLayer in your view. Googling for it will give you good information. Basically you declare that the UIView is backed by it, declare some levels of detail for zooming, and in drawRect: you'll receive each tile as the rect parameter.
To tile the image, load it into a UIImage (this uses memory, but far less than displaying it on screen) and for each tile you want, use something like:
UIGraphicsBeginImageContextWithOptions(tileSize, YES, 1);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shift the context so the tile at (columnNumber, rowNumber) lands at the origin.
CGContextTranslateCTM(ctx, -tileSize.width * columnNumber, -tileSize.height * rowNumber);
// Draw the full image at its natural size; only the current tile's region
// falls within the context's bounds.
[img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
UIImage* imgCut = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
and save imgCut. You'll probably want to generate tiles at several zoom levels if you use multiple levels of detail in your CATiledLayer.
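To make the drawing side concrete, here is a minimal sketch of a UIView subclass backed by a CATiledLayer that loads the saved tiles back in. The tile file-naming scheme, the Documents-directory location, and the levelsOfDetail value are assumptions for illustration, not part of any Apple API:

@interface TiledImageView : UIView
@end

@implementation TiledImageView

+ (Class)layerClass {
    // Back this view with a CATiledLayer instead of a plain CALayer.
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256.0, 256.0);
        tiledLayer.levelsOfDetail = 4; // match however many zoom levels you pre-tiled
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // drawRect: is called once per tile; rect tells you which tile to supply.
    int col = rect.origin.x / rect.size.width;
    int row = rect.origin.y / rect.size.height;
    NSString *name = [NSString stringWithFormat:@"tile_%d_%d.png", col, row];
    NSString *path = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]
                      stringByAppendingPathComponent:name];
    UIImage *tile = [UIImage imageWithContentsOfFile:path];
    [tile drawInRect:rect];
}

@end

If you generate tiles at several zoom levels as suggested above, you'd also fold the current scale (readable from the context's CTM) into the file name.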
I'm trying to understand Apple's example code for the ZoomingPDFViewer. Here are some questions about how it works; I'm not sure I understand it correctly. The link to their code is: http://developer.apple.com/library/ios/#samplecode/ZoomingPDFViewer/Introduction/Intro.html
(1) CATiledLayer is used to represent the PDF at different zoom levels; that's what I assume the class is for, from reading the Class Reference. Would you ever use something else besides this class for a zooming function?
(2) In initWithFrame: for TiledPDFView, they do: tiledLayer.tileSize = CGSizeMake(512.0, 512.0); Is tileSize the size of the tiles that make up the whole image? If so, why such a large size?
(3) How do oldPDFView and pdfView work? Which one is in front at the different stages of zoom, and when do they get swapped out? I'm having a hard time following the flow of the logic. Thanks.
(1) If you don't require the level of detail to vary for different zoom levels, or if the PDF loads fast enough to not warrant drawing a couple of tiles at a time, a regular UIView with a regular CALayer will work fine. For instance, if you were displaying an image instead of a PDF, and the image loads fast enough to not cause a performance snag, you would not need the asynchronous loading that CATiledLayer provides. The PhotoScroller sample uses both the tiled and non-tiled approaches if you want to compare them.
(2) The tileSize attribute changes the size of the blocks the layer should be split into. You can set this to whatever you want. 512x512 really isn't all that large, especially if your PDF dimensions are big. The default is 256x256.
(3) Anytime you start to zoom, oldPDFView is removed and released. Then pdfView is assigned to oldPDFView. When the zooming ends, a new pdfView is created with the change in scale and added on top of the old one. If the new scale is an increase, the new pdfView will be drawn with a higher level of detail. This makes it so you can zoom deeper and deeper into the PDF. The maximumZoomScale and minimumZoomScale only restrict how much you can zoom with an individual gesture.
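For reference, the swap described above looks roughly like this in UIScrollViewDelegate terms. This is a paraphrase of the sample's logic, not its exact code; pdfScale, pageRect, and TiledPDFView follow the sample's naming:

- (void)scrollViewWillBeginZooming:(UIScrollView *)scrollView withView:(UIView *)view {
    // Discard the stale copy; the current page stays visible underneath while zooming.
    [oldPDFView removeFromSuperview];
    [oldPDFView release];
    oldPDFView = pdfView;
}

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(CGFloat)scale {
    // Fold the gesture's scale into the cumulative scale, then build a fresh,
    // sharper tiled view on top of the old one.
    pdfScale *= scale;
    pdfView = [[TiledPDFView alloc] initWithFrame:pageRect andScale:pdfScale];
    [scrollView addSubview:pdfView];
}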
I would appreciate some advice on an iPhone game design. I want to display a background image with other images on top of it (buildings, characters, etc.). The background is going to be large (up to 10 times the size of the screen), so only a piece of the background file will be displayed at once. The idea is to replace this piece when the character gets close to the screen borders. I need to make this background transition a smooth animation. I also need a zoom in/out feature, preferably animated. Some images on the screen will be static (buildings) and some will require animation (a character walking).
What is the best design:

1. Use Core Graphics combined with "sprite" classes: display each sprite's UIImage with CGContextDrawImage
2. Use UIKit: create a UIImageView to hold every image and add them as subviews in a single-view application
3. Use an OpenGL ES project
Option 1) turned out to be very slow. It seems like Core Graphics is not meant to display images in a game loop. But maybe there is a way to make it efficient? Maybe combine it with Core Animation somehow?
Option 2) is my current choice. I am hoping the view caches the image it holds and is thus more efficient than CG. But will the animation provided by UIImageView be satisfactory? I think the views shouldn't be added all at once, but rather created and added (removed?) dynamically as the background moves. Is that a good idea?
Option 3) would probably give the best control over the images, but it seems like quite an overhead. I only need to display images, not vector graphics. Plus, I'm new to Mac programming and I don't want to get stuck in some complex technology.
I appreciate any advice, thanks :)
I highly recommend Cocos2D; I've used it for my own development, which I've written up on my blog. It was really easy to pick up. I followed Ray Wenderlich's tutorials, and he provides great tools for doing everything you describe.
You asked "The backround is going to be large (up to 10 times the size of the screen) so only a piece of the background file will be displayed at once."
The tiled image system is very powerful and fast performance. If you use google maps you will see and example of a tiled image. Scroll off to a new are and blocks appear. In a local app you could take your image that is 10 times the size of the screen and cut in to tiles that are say 100px by 100px and each screen will only load the tiles that are displayed. When the user moves only the needed tiles are loaded. This saves memory and dramatically improves speed. It is the base reason why tables can fly, only the cells one screen are loaded, as is scrolls off the screen it's memory is reused for the next cell.
If option 2 is sufficiently performant for your needs I would stick with that - it's as easy a system as you'll get on the iPhone and fine for very simple graphics. A related option that might buy you a little bit of speed is using CALayers to implement the graphics. CALayers are almost as easy to use, but are a bit more lightweight than UIViews (in some ways you can think of UIViews as just wrappers for CALayers with additional overhead for managing things like touch events, etc.)
If you're interested I would read the Core Animation Programming Guide (I would provide a link but I think my reputation is too low, but Google should track it down for you). Core Animation is a big subject and can be pretty daunting but if you just use layers (i.e. not the animation parts of it) it's not so bad. Here's a quick example to give you a sense of what using layers looks like:
// NOTE: I haven't compiled this code so it may have typos/errors I haven't noticed
UIView* canvasView; // the view that will be the "canvas" for your game
... // initialize the canvas, etc.
CALayer* imageLayer = [CALayer layer];
UIImage* image = [UIImage imageNamed:@"MyImage.png"];
imageLayer.contents = (id)image.CGImage;
imageLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.position = CGPointMake(100, 100); // NOTE: unlike a UIView's frame origin, position places the layer's anchorPoint, which is its center by default
[canvasView.layer addSublayer:imageLayer];
So basically it looks a lot like working with views but with some added performance (and occasional headache).
P.S. - One thing to keep in mind is that if you change a layer property that is animatable (e.g. position, opacity, etc.), Core Animation will implicitly animate it (e.g. if you write imageLayer.position = somePoint; the layer animates to that position rather than having its position set immediately). There are easy ways to work around that, but that's a topic for another question/answer.
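For reference, the usual workaround is to wrap the change in a CATransaction with implicit actions disabled, roughly like this:

[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress the implicit animation
imageLayer.position = somePoint;       // takes effect immediately
[CATransaction commit];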
I'm using the UIGraphicsGetImageFromCurrentImageContext() function to capture the screen contents into a UIImage object (previously rendered into an image context). This works great on both the simulator and a real device; however, on the latter the resulting image has a few pixels with distorted colors, as seen here:
http://img300.imageshack.us/img300/2788/screencap.png
Notice the few fuchsia pixels at the top navigation bar, at both sides of the search field and to the right of the button. There are also such pixels to the right of the bottom-left button.
The code I'm using to capture the screen view into a UIImage object is pretty straightforward:
// Render the whole window's layer into an image context and grab the result.
UIGraphicsBeginImageContext(self.view.window.frame.size);
[self.view.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *anImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
One thing to note is that all the graphics that get distorted belong to custom PNG files, used to draw the search field background as well as the buttons background.
Does anyone know what could possibly be causing this strange color distortion?
Best regards,
Just checked my own code that is doing the same thing you are. Yours is nearly identical to mine, except that I am asking the view's layer to render instead of the window's, i.e.:
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
I don't know why that would make a difference, but it's worth a try.
Solved it by using the just-approved private function UIGetScreenImage().
For more info, please check http://www.tuaw.com/2009/12/15/apple-relents-and-is-now-allowing-uigetscreenimage-for-app-st/ and https://devforums.apple.com/message/149553
Regards,
This article explains the issue with image corruption (caused by partially transparent pixels) and provides a workaround which corroborates Chris's comment:
http://www.kaoma.net/iphone/?p=9
UIGetScreenImage() is quite annoying when you just want to capture a view.
I found a nice trick, just re-save all your PNG images into TIFF format using Preview.app :)
Does anyone know of a way to simply and effectively brighten a UIImage by a specific amount? I am currently fiddling with Apple's GLImageProcessing sample code, with poor results... My app currently does not use OpenGL ES or EAGLViews, and it is awkward trying to bridge the technologies.
You could render the UIImage into a CGBitmapContext; then you'd have a pointer to the raw bytes of the image. At that point you could do anything you'd like with the bytes, including brightening them. After that you can create a new CGImageRef from the bytes.
This would all be on the CPU, which might not perform as well as an OpenGL solution depending on the image size.
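A minimal sketch of that CPU approach, assuming a premultiplied 8-bit RGBA bitmap. BrightenedImage is a hypothetical helper name, and premultiplied alpha is ignored here for simplicity:

UIImage *BrightenedImage(UIImage *source, int amount) {
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;
    unsigned char *bytes = calloc(height, bytesPerRow);

    // Render the image into a bitmap context backed by our own buffer.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(bytes, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // Brighten: add a fixed offset to each R, G, B byte, clamping at 255
    // (the alpha byte at offset 3 is skipped).
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        for (size_t c = 0; c < 3; c++) {
            int v = bytes[i + c] + amount;
            bytes[i + c] = (unsigned char)(v > 255 ? 255 : v);
        }
    }

    // Rebuild a UIImage from the modified bytes.
    CGImageRef newImage = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:newImage];

    CGImageRelease(newImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(bytes);
    return result;
}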
That depends what you mean by "brighten". You can overlay colors easily, and you can probably figure out some blending mode that will do what you want. Look through the CG functions and documentation (I'd post in more detail, but I can't right now).
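As a rough illustration of the overlay idea, something like the following might work; the screen blend mode and 0.3 alpha are guesses to experiment with, not values from any documentation:

UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawAtPoint:CGPointZero];
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(ctx, kCGBlendModeScreen); // "screen" lightens the result
CGContextSetFillColorWithColor(ctx, [UIColor colorWithWhite:1.0 alpha:0.3].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
UIImage *brightened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();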
I'm trying to work out how to draw from a TexturePage using CoreGraphics.
Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?
CGContextDrawImage seems to only take a destination rect. I noticed CGImageCreateWithImageInRect, however this creates a new image. I don't want a new image I simply want to draw from the original image.
I'm sure this is possible, however I'm new to iPhone development.
Any help much appreciated.
Thanks
What's wrong with CGImageCreateWithImageInRect?
CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.
Here are the ideas I wrote up initially; I will leave them in case they're useful.
See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route since it seems the most simple and natural way to do it.
You could use OpenGLES instead, in which case you can specify the texture coordinates of polygon vertices to select just that section of your big texture.