Image strategy in iPhone app

I'm writing a card game for the iPhone, and I'm not sure about the best strategy for displaying the cards. I have a basic prototype that creates a draggable UIImageView for each card, currently with a dummy image. I wanted to use one large UIImage containing the faces of all of the cards, and have each draggable UIImageView display a part of that image. I must be misunderstanding what setBounds is for; I thought it controlled which part of the underlying image is displayed. So, two questions:
1. Is this the right approach?
2. How do I display just a part of the image?

Depending on your resolution, this might not be the best approach.
From Apple:
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
Now, you are talking about breaking it up into several smaller pieces, but given UIImage's caching, I am not sure what happens to memory every time you access the image and copy a sub-rect out of it. The approach I would take is to have an array of images instead of one big one.
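If you do go with a single sheet as the source, one way to get that array is to slice the sheet up front so each card view owns its own small UIImage. A minimal sketch, assuming a hypothetical "cards.png" sheet laid out in a uniform grid (the names and the 64x96 card size are illustrative):

```objc
#import <UIKit/UIKit.h>

// Slice a sprite sheet into individual card images once, up front.
// CGImageCreateWithImageInRect works in the image's pixel coordinates.
static NSArray *CardImagesFromSheet(UIImage *sheet, CGSize cardSize) {
    NSMutableArray *cards = [NSMutableArray array];
    for (CGFloat y = 0; y + cardSize.height <= sheet.size.height; y += cardSize.height) {
        for (CGFloat x = 0; x + cardSize.width <= sheet.size.width; x += cardSize.width) {
            CGRect rect = CGRectMake(x, y, cardSize.width, cardSize.height);
            CGImageRef cardRef = CGImageCreateWithImageInRect(sheet.CGImage, rect);
            [cards addObject:[UIImage imageWithCGImage:cardRef]];
            CGImageRelease(cardRef);
        }
    }
    return cards;
}

// Usage:
// NSArray *cards = CardImagesFromSheet([UIImage imageNamed:@"cards.png"],
//                                      CGSizeMake(64, 96));
// cardView.image = [cards objectAtIndex:cardIndex];
```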

Related

Web design: PNG image width limitations

I want to make a kind of slideshow based on scrolling the webpage.
My problem is that I have an image 78720x1015px in size, in PNG format.
The width comes from a single 1920px-wide frame arranged next to itself 41 times. It should work like a cartoon: the image shifts by 100% at a time (margin: -100%) to create the feeling of a movie.
However, this results in an image width of 1920px x 41 frames = 78720px.
That is an enormous width, yet the file size is only 975kB, which to my mind is not that big. However, the image somehow takes a very long time to load in the web browser, and it is not of the same quality as in my image viewer on the desktop.
Question 1: What do I have to consider when dealing with such a big image width? What are the limits?
Question 2: Is there a better way to make this kind of slideshow? Keep in mind that the sliding itself shouldn't be visible; it should feel like a movie built from about 40 pictures.
Thanks in advance!
PNG compresses very well, especially for a cartoon like you describe that may use only a limited number of colours.
However, to display the image the browser must load all of that pixel data into RAM. That's almost 80 million pixels in your case, around 320MB of uncompressed data. This is probably why the browser is struggling, especially if you're using margin to move the image around, as that requires a full redraw of the image.
You may have better results with a CSS transform, which should be hardware-accelerated and also avoids the reflow part of a redraw, since transforms don't affect page layout.
But the better solution would be to break it down into individual images. Have your code load the next image, scroll it across, then load the one after that while scrolling, and unload the image that's now off-screen, to provide a relatively seamless view.
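A rough sketch of that idea in plain JavaScript; the frame file names, the element id, and the frame rate are all assumptions, not anything from your page:

```javascript
// Swap ~40 frames like a movie: only the current frame is shown,
// and the following frame is preloaded so the swap looks seamless.
const frameCount = 41;
const frameUrl = (i) => 'frames/frame-' + i + '.png'; // hypothetical naming
const img = document.getElementById('slide');         // an <img> sized to the viewport
let current = 0;

function showNextFrame() {
  current = (current + 1) % frameCount;
  img.src = frameUrl(current);
  new Image().src = frameUrl((current + 1) % frameCount); // preload the next one
}

setInterval(showNextFrame, 1000 / 12); // ~12 fps reads as motion; tune to taste
```

Each frame is then a normal 1920px image, so the browser never has to hold the full 78720px strip in memory at once.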

Preloading a large image

I have an image with dimensions of 5534 × 3803 and a size of 2.4MB. The UIView reference notes that:
"In iOS 3.0 and later, views are no longer restricted to this maximum
size but are still limited by the amount of memory they consume."
When the image loads, it lags for half a second, then slides in. The image sits in the UIImageView at 1024x704, but can be scaled up to 4x that size for the purpose of my app.
Are you able to preload the image in the AppDelegate? Or is there another way of working around having such a large image?
Thanks
EDIT: The scaling is done via UIPinchGestureRecognizer, and scales up and down (between x1 and x4) around the image's center point. There is no panning of the image when zoomed in.
Personally, I would try to write a tile-based system (think Google Maps) that slices your big image into a grid of small images to avoid loading in that gigantic image all at once into RAM. I don't really know what your user interactions are for this image, or whether the images are changing or baked into your project, but I'd assume you can let users scroll around since that image is bigger than any iOS screen. With a tile-based system, you only load the images that are on-screen. CATiledLayer is an Apple class for doing just such a thing. That's probably what you want to look into.
See this StackOverflow question for some different approaches. The accepted answer uses code from Apple's sample PhotoScroller project, which may work for your needs and uses CATiledLayer.
Apple's ScrollViewSuite sample code might also help get you on your way (check out the Tiling code).
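For reference, a bare-bones sketch of the CATiledLayer approach, assuming the big image has been pre-sliced into 256x256 tiles with a made-up tile_{column}_{row}.png naming scheme:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// A UIView backed by CATiledLayer: drawRect: is called once per visible
// tile, so only on-screen tiles are ever decoded into memory.
@interface TiledImageView : UIView
@end

@implementation TiledImageView

+ (Class)layerClass {
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        ((CATiledLayer *)self.layer).tileSize = CGSizeMake(256, 256);
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // rect covers exactly one tile; work out which tile and draw it.
    int col = (int)(rect.origin.x / 256);
    int row = (int)(rect.origin.y / 256);
    NSString *name = [NSString stringWithFormat:@"tile_%d_%d.png", col, row];
    [[UIImage imageNamed:name] drawInRect:rect];
}

@end
```

Embed the view in a UIScrollView with contentSize equal to the full image's dimensions and you get panning for free.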

Does putting a large UIImage into a small UIImageView auto resize it?

Let's say I have a UIImage 1000x1000 and I have a UIImageView with a frame size of 50x50. If I set my UIImageView's image as the large image, is the UIImage automatically resized to 50x50? The UIImageView contentMode is ScaleToFit.
I know that the image does shrink down to 50x50 to fit in there, but in terms of memory, would it be better if I resized the image to 50x50 myself before setting it as the UIImageView's image?
I know it won't make a difference for one image, but I have hundreds of large images and image views in a UIScrollView, so I want to make sure I get the fastest and smoothest performance.
I can think of a couple of considerations. +[UIImage imageNamed:] caches the image for the entire run of the app, even once it is no longer used; if arbitrary images will be used or added, as opposed to a static set, make sure to load each image with NSData from a connection, file, or database instead. Also, I haven't checked in Instruments for at least several iOS versions, but I believe the original image is kept cached for scaling.
If there is no specific issue that you actually run into, such as running out of memory or slowness, don't bother with pre-emptive optimization. Otherwise, there are some techniques that may help. First, you may want to retain only what is currently on screen; this is easier to do with a UITableView and cell reuse than with a bare UIScrollView. Second, for one app that held a lot of thumbnails, I scaled them once and stored the scaled versions in the database, as in the sketch below.
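If profiling does show memory pressure or scrolling stutter, a minimal sketch of the scale-once idea; the helper name is made up:

```objc
#import <UIKit/UIKit.h>

// Downscale a large image to its display size once, so the image view
// retains a 50x50 bitmap instead of a 1000x1000 one.
static UIImage *ThumbnailOfImage(UIImage *image, CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}

// Usage: imageView.image = ThumbnailOfImage(bigImage, imageView.bounds.size);
```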

iPhone image resource - 1024 maximum, 2048 pixels @2x?

The restriction of 1024x1024 as the largest image for an iPhone is a problem on the iPhone 4. However, if a @2x image with maximum dimensions of 2048x2048 is used, everything looks equally good on the 4 as it does on a 3, tried and tested in the simulator and on device. The question is: does the image dimension restriction apply to the UIImage or to the resource it contains? I can't imagine resources of more than 1024 pixels are discouraged when the screen itself is 960 pixels tall.
The right answer is really to use tiles so that things look even better, but the deadline for this deliverable is too close - it's a future thing.
The UIImage class reference has the same guidance quoted in full above: avoid creating UIImage objects greater than 1024 x 1024, both because of the memory such an image consumes and because it may fail when used as an OpenGL ES texture or drawn to a view or layer.
That is, views are rendered and composited by the iPhone's GPU. If your view, for example, overrides drawRect: and tries to render a very big UIImage, you might run into problems. Newer-generation devices, such as the iPad and iPhone 4, support textures bigger than 1024x1024 (2048x2048, I think).
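Rather than hard-coding 1024 or 2048 per device, you can ask OpenGL ES for the limit at runtime. A minimal sketch, assuming an EAGLContext has already been made current:

```objc
#import <Foundation/Foundation.h>
#import <OpenGLES/ES1/gl.h>

// Query the device's real maximum texture dimension instead of guessing.
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
NSLog(@"Max texture size: %d", maxTextureSize); // 1024 on older devices, 2048 on iPhone 4 / iPad
```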
I didn't realise there was a restriction; I'm using a 15198 × 252 image as the scrolling landscape in Scramble... it works really well, though I must say I did have reservations before I tried it out!

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be the fastest of all. Keep an age-based cache of images (or, if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).
While not a "blit" at all, given the requirements of the problem (many small images with various state changes) I was able to keep the different states to redraw in their own separate UIImageView instances, and just showed/hid the appropriate instance given the state change.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit into a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single context, then create a UIImage from it and set that as the image of a UIImageView. That involves one GPU upload per refresh, so the cost does not depend on how many images you render into the buffer.
But you will likely not need to go that low-level. First try making each 64x64 image into a CALayer, and then update each layer with the contents of an image that is the exact size of the layer (64x64). The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it; that way, all the PNG or JPEG decompression is done before you start setting CALayer contents.
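A minimal sketch of that pre-decompression step; the helper name is made up, and this shows only the decompress-then-set-contents part, not AVAnimator's asm blit:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Redraw a PNG/JPEG-backed image into a bitmap context so the compressed
// data is decoded now, not lazily at display time.
static UIImage *DecompressedImage(UIImage *source) {
    CGRect rect = CGRectMake(0, 0, source.size.width, source.size.height);
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, source.scale);
    [source drawInRect:rect]; // decoding happens here
    UIImage *decompressed = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decompressed;
}

// Then updating a 64x64 layer is just a cheap copy of decoded pixels:
// layer.contents = (id)DecompressedImage(newStateImage).CGImage;
```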