Playing a looping image sequence in a UIView - iPhone

I have a looping animation consisting of 120 frames, 512x512 resolution, saved as 32-bit PNG files. I want to play this sequence back in a UIView inside my application. Can anyone give me some pointers on how I might do this? Hopefully I can do it using the standard APIs (which I would prefer). I could use Cocos2D if needed, or even OpenGL (but I am totally new to OpenGL at this point).

You can try this:
// Init a UIImageView
UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*Some frame*/];
// Init an array with UIImage objects
NSArray *array = [NSArray arrayWithObjects:[UIImage imageNamed:@"image1.png"], [UIImage imageNamed:@"image2.png"], .. , nil];
// Set the UIImageView's animationImages property
imageView.animationImages = array;
// Set the time interval
imageView.animationDuration = /* Number of images x 1/30 gets you 30FPS */;
// Set repeat count
imageView.animationRepeatCount = 0; /* 0 means infinite */
// Start animating
[imageView startAnimating];
// Add as subview
[self.view addSubview:imageView];
This is the easiest approach, but I can't say anything about the performance, since I haven't tried it. I think it should be fine though with the images that you have.

Uncompressed, that's roughly 120 MB of images (120 frames at 512 x 512 x 4 bytes per pixel), and that might be as much as you're looking at if they're unpacked into UIImage format. Due to the length of the animation and the size of the images, I highly recommend storing them in a compressed movie format. Take a look at the reference for the MediaPlayer framework; you can remove the playback controls, embed an MPMoviePlayerController within your own view hierarchy, and set playback to loop. Note that 640x480 is the upper supported limit for H.264, so you might need to scale down the video anyway.
Do take note of the issues with looping video mentioned in the question Smooth video looping in iOS.
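As a rough sketch of the embedded, looping, controls-free setup described above (the file name "animation.mp4" and the 512x512 frame are illustrative assumptions, not from the question):
#import <MediaPlayer/MediaPlayer.h>

// Assumes a movie named "animation.mp4" has been added to the app bundle.
NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"animation" withExtension:@"mp4"];
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
player.controlStyle = MPMovieControlStyleNone;   // hide the playback controls
player.repeatMode = MPMovieRepeatModeOne;        // loop the movie indefinitely
player.view.frame = CGRectMake(0.0f, 0.0f, 512.0f, 512.0f);
[self.view addSubview:player.view];              // embed inside your own view hierarchy
[player play];
// Keep a strong reference (e.g. a property) to the player, or playback will stop
// when it is deallocated.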

Related

SpriteKit reducing SKLabelNode draw calls

So I have a scene in my game that displays the levels, like any other game with levels. I subclass SKSpriteNode to make a custom level button, and within this subclass I add an SKLabelNode to display the level title (Level 1, Level 2, ...). The problem now is that I have a lot of draw calls, because each SKLabelNode renders as its own texture instead of being combined into an atlas. I would like to know if someone can help me reduce these draw calls. I don't want to use Glyph Designer, because this game is going to be in a lot of different languages, like Japanese, Chinese, and more.
Any advice?
- (void)setText:(NSString *)text
{
    _label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
    _label.text = text;
    _label.fontColor = [UIColor blackColor];
    _label.fontSize = 11;
    _label.zPosition = 2;
    _label.verticalAlignmentMode = SKLabelVerticalAlignmentModeCenter;
    _label.position = CGPointMake(0, 0);
    [self addChild:_label];
}
Depending on what you're doing and when, you could render the contents of the labels into textures at runtime (pre-loading / caching), and then manipulate them in any way you'd like.
SKLabelNode *theThingToBecomeATexture;
// OR
SKSpriteNode *theThingToBecomeATexture;

// theView is your SKView; textureFromNode: renders the node into a texture.
SKTexture *theTexture = [theView textureFromNode:theThingToBecomeATexture];
But my follow-up question or comment would be: I have a difficult time believing that you are running into performance problems by showing a few dozen label nodes on the screen. I can understand you hitting a load spike if you are trying to alloc and init a number of them all at the same time, in which case I would preload them, or alloc them off the main thread.
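A rough sketch of that caching idea, assuming you have access to the SKView (the cache dictionary and the helper method name are mine, not from the question):
- (SKSpriteNode *)spriteForLevelTitle:(NSString *)title inView:(SKView *)view
{
    // Cache one texture per title so each string is only rendered once.
    static NSMutableDictionary *textureCache;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ textureCache = [[NSMutableDictionary alloc] init]; });

    SKTexture *texture = textureCache[title];
    if (!texture) {
        SKLabelNode *label = [SKLabelNode labelNodeWithFontNamed:@"CooperBlack"];
        label.text = title;
        label.fontSize = 11;
        label.fontColor = [UIColor blackColor];
        texture = [view textureFromNode:label];   // render the label into a texture once
        textureCache[title] = texture;
    }

    // Sprites sharing the same texture can be batched into fewer draw calls.
    return [SKSpriteNode spriteNodeWithTexture:texture];
}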

iPhone: run an animation while moving an object on the screen smoothly

Currently I have an image on the screen that is swapped out every 5 seconds for another image, using an animation to do so.
At the same time, I have objects on the screen that the user can pick up and drag around (using a pan gesture). During the 0.5-second duration of the animation, if I am moving an object around, the UI stutters. For example, I have a brush that I pick up and move around the screen. The 5-second timer ends and the background image updates; while that animation occurs, the brush stutters. I moved the image loading off the UI thread and force it to load by using NSData.
Is there a way I can prevent this stutter while the animation that changes the image is running? Here is how I swap the image.
// Dispatch to the queue, and do not wait for it to complete.
// Grab the image in a background thread in order to not block the UI as much as possible.
dispatch_async(imageGrabbingQueue, ^{
    curPos++;
    if (curPos > (self.values.count - 1)) curPos = 0;

    NSDictionary *curValue = self.values[curPos];
    NSString *imageName = curValue[KEY_IMAGE_NAME];

    // This may cause lazy loading later and stutter the UI; convert to a data object
    // and force it into memory for faster processing.
    UIImage *imageHolder = [UIImage imageNamed:imageName];

    // Load the image into NSData and recreate the image with the data.
    NSData *imageData = UIImagePNGRepresentation(imageHolder);
    UIImage *newImage = [[UIImage alloc] initWithData:imageData];

    dispatch_async(dispatch_get_main_queue(), ^{
        [UIView transitionWithView:self.view
                          duration:0.5
                           options:UIViewAnimationOptionTransitionCrossDissolve | UIViewAnimationOptionAllowUserInteraction | UIViewAnimationOptionAllowAnimatedContent
                        animations:^{
                            [self.image setImage:newImage];
                            // Temp clause to show ad logo
                            if (curPos != 0) [self.imagePromotion setAlpha:1.0];
                            else [self.imagePromotion setAlpha:0.0];
                        }
                        completion:nil];
    });
});
Thanks,
DMan
The image-processing libraries on the iPhone are not magic; they take CPU time to actually decode the image. This is likely what you are running into. Calling [UIImage imageNamed:] will likely cache the image, but caches can always be flushed, so that does not force the system to keep the decoded image in memory. Your code that calls initWithData: is pointless, because the PNG still has to be decompressed into memory, and that is the part causing the slowdown.
What you could do is render the image out as decoded pixels and save that into a file, then memory-map the file and wrap the mapped memory in a Core Graphics image. That would avoid the decode-and-render step that is likely causing the slowdown; anything else may not actually do what you are expecting. Also, you should not hold all the decoded bytes in memory, because decoded image data is typically so large that it would take up too much room in device memory.
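If rewriting the pipeline around a memory-mapped file is more than you need, a lighter-weight variant of the same idea is to force the decode to happen on the background queue before handing the image to the main thread. A minimal sketch (not the memory-mapped approach described above), meant to replace the NSData round-trip inside the background dispatch block:
// Draw the image once into an offscreen context so the PNG is decompressed here,
// rather than lazily during the cross-dissolve on the main thread.
UIImage *imageHolder = [UIImage imageNamed:imageName];
UIGraphicsBeginImageContextWithOptions(imageHolder.size, YES, imageHolder.scale); // pass NO if the image has transparency
[imageHolder drawInRect:CGRectMake(0.0f, 0.0f, imageHolder.size.width, imageHolder.size.height)];
UIImage *decodedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Hand decodedImage to the main queue for the transition instead of newImage.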

Large custom UIView - CABackingStoreUpdate Performance

Why, on iOS 4.3.5, do 'large' (960 x 1380) custom UIViews perform CABackingStoreUpdate so inefficiently, and how can I improve the performance of drawing operations?
Not entirely sure what I mean? Read on...
Note:
As my understanding of this problem has evolved, so has this question. As a result the question itself is similar but the code example and underlying details/reasoning in the following body of text have changed significantly since the question was first asked.
Context
I have an incredibly basic application (code at the bottom) that draws a single ellipse in the drawRect: method of a custom UIView. The application demonstrates the difference in performance when the size of the ellipse being drawn remains the same but the size of the custom UIView gets larger:
I ran the application on both an iPod 4th Gen running iOS 4.3.5 and an iPad 1st Gen running iOS 5.1.1 a series of times using custom UIViews of different sizes.
The following table displays the results taken from the time profiler instrument:
The following instrument traces display the details of the two extremes for each device:
iOS 5.1.1 - (Custom UIView size 320 x 460)
iOS 5.1.1 - (Custom UIView size 960 x 1380)
iOS 4.3.5 - (Custom UIView size 320 x 460)
iOS 4.3.5 - (Custom UIView size 960 x 1380)
As you can (hopefully) see, in 3 out of the 4 cases we get what we'd expect: the majority of time was spent performing the custom UIView's drawRect: method, and each held 10 fps.
But the fourth case shows a plummet in performance, with the application struggling to hold 7 fps while drawing only a single shape. The majority of time was spent copying memory during the UIView's CALayer's display method, specifically:
[CALayer display] >
[CALayer _display] >
CABackingStoreUpdate >
CA::Render::ShmemBitmap::copy_pixels(CA::Render::ShmemBitmap const*, CGSRegionObject*) >
memcpy$VARIANT$CortexA8
Now it doesn't take a genius to see from the figures that something is seriously wrong here. With a custom UIView of size 960 x 1380, iOS 4.3.5 spends over 4 times as long copying memory around as it does drawing the entire view's contents.
Question
Now, given the context, I ask my question again:
Why, on iOS 4.3.5, do 'large' (960 x 1380) custom UIViews perform CABackingStoreUpdate so inefficiently, and how can I improve the performance of drawing operations?
Any help is very much appreciated.
I have also posted this question on the Apple Developer forums.
The Real Deal
Now, obviously, I've reduced my real problem to the simplest reproducible case for the sake of this question. I'm actually attempting to animate a portion of a 960 x 1380 custom UIView that sits inside a UIScrollView.
Whilst I appreciate the temptation to steer anyone towards OpenGL ES when they're not achieving the level of performance they want through Quartz 2D, I ask that anyone who takes that route at least offer an explanation as to why Quartz 2D struggles to perform even the most basic drawing operations on iOS 4.3.5 where iOS 5.1.1 has no problem. As you can imagine, I'm not thrilled about the idea of re-writing everything for this cornerstone case.
This also applies to people suggesting Core Animation. Although I've used an ellipse changing colour (a task perfectly suited to Core Animation) in the demo for the sake of simplicity, the drawing operations I'd actually like to perform are a large quantity of lines expanding over time, a drawing task Quartz 2D is ideal for (when it is performant!). Plus, again, this would require a re-write and doesn't help explain this odd performance problem.
Code
TViewController.m (Implementation of a standard view controller)
#import "TViewController.h"
#import "TCustomView.h"
// VERSION 1 features the custom UIView the same size as the screen.
// VERSION 2 features the custom UIView nine times the size of the screen.
#define VERSION 2
#interface TViewController ()
#property (strong, nonatomic) TCustomView *customView;
#property (strong, nonatomic) NSTimer *animationTimer;
#end
#implementation TViewController
- (void)viewDidLoad
{
// Custom subview.
TCustomView *customView = [[TCustomView alloc] init];
customView.backgroundColor = [UIColor whiteColor];
#if VERSION == 1
customView.frame = CGRectMake(0.0f, 0.0f, 320.0f, 460.0f);
#else
customView.frame = CGRectMake(0.0f, 0.0f, 960.0f, 1380.0f);
#endif
[self.view addSubview:customView];
UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:#selector(handleTap:)];
[customView addGestureRecognizer:singleTap];
self.customView = customView;
}
#pragma mark - Timer Loop
- (void)handleTap:(UITapGestureRecognizer *)tapGesture
{
self.customView.value = 0.0f;
if (!self.animationTimer || !self.animationTimer.isValid) {
self.animationTimer = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:#selector(animationLoop) userInfo:nil repeats:YES];
}
}
#pragma mark - Timer Loop
- (void)animationLoop
{
// Update model here. For simplicity, increment a single value.
self.customView.value += 0.01f;
if (self.customView.value >= 1.0f)
{
self.customView.value = 1.0f;
[self.animationTimer invalidate];
}
[self.customView setNeedsDisplayInRect:CGRectMake(0.0f, 0.0f, 320.0f, 460.0f)];
}
#end
-
TCustomView.h (Custom view header)
#import <UIKit/UIKit.h>

@interface TCustomView : UIView

@property (assign) CGFloat value;

@end
-
TCustomView.m (Custom view implementation)
#import "TCustomView.h"
#implementation TCustomView
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw ellipses.
CGContextSetRGBFillColor(context, self.value, self.value, self.value, 1.0f);
CGContextFillEllipseInRect(context, rect);
// Draw value itself.
[[UIColor redColor] set];
NSString *value = [NSString stringWithFormat:#"%f", self.value];
[value drawAtPoint:rect.origin withFont:[UIFont fontWithName:#"Arial" size:15.0f]];
}
#end
Since both the iPod touch 4th gen and iPad 1st gen have similar hardware (same amount of memory / same GPU), it suggests the problem you are seeing is due to an unoptimized code path in iOS 4.
If you look at the sizes of the views that cause the (negative) performance spike on iOS 4, they both have one side longer than 1024. Originally 1024x1024 was the maximum size a UIView could be, and whilst this restriction has since been lifted, it is entirely likely that views larger than this only became efficient in iOS 5 and later.
I'd conjecture that the excess memory copying you are seeing in iOS 4 is due to UIKit using a full-size memory buffer for the large UIView, but then having to copy appropriately sized tiles of it before they can be composited; and that in iOS 5 and later they've either removed any restriction on the size of the tiles that can be composited, or changed the way UIKit renders such large UIViews.
In terms of working around this bottleneck on iOS4 you can try tiling the area you want to cover with smaller UIViews. If you structure it as:
Parent View - contains drawing and event-related code
Tile View 1 - contains drawRect
...
Tile View n - contains drawRect
In each tile view, you can ask the parent view to render its contents after adjusting the graphics context's transform appropriately. This means you don't have to change the drawing code; it will just be invoked multiple times (there is a small overhead for this, but remember each invocation will be drawing only a portion of the whole view).
Note that it's important that the parent view does not have a drawRect: method - otherwise UIKit will think you want to draw into it directly and will create a backing store, putting you back in the same situation.
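A rough sketch of what a tile's drawRect: might look like (the tile class, the parentView property, and the drawContentInContext: method are hypothetical names, not from your code):
// Inside the hypothetical tile view class
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Shift the context so the parent's drawing code produces the portion of the
    // content that falls inside this tile.
    CGContextTranslateCTM(context, -self.frame.origin.x, -self.frame.origin.y);

    // The parent exposes its drawing routine instead of implementing drawRect: itself.
    [self.parentView drawContentInContext:context];
}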
There is also CATiledLayer, which you could look into - it does the tiling for you, but asynchronously, meaning that your drawing code has to handle being executed from one or more background threads.
As you have observed, the time is mainly spent transferring data. I think that in iOS 4.3.5, Core Graphics does not use the GPU and graphics memory to implement primitive drawing functions like CGContextFillEllipseInRect, etc.
So each time you need to draw something, it is drawn in main memory, with the CPU calculating everything needed, and then copied to graphics memory. That of course takes a long time, because the bus is quite slow.
I guess that since iOS 5.0 or 5.1, the primitive drawing functions call GPU shaders (programs inside the GPU), and all the heavy work is done there. Then only a small amount of data (parameters and program code) is transferred from main RAM to graphics memory.

CISourceOverCompositing produces different results on device and in simulator

I'm trying to use the CISourceOverCompositing filter, but I'm hitting a wall.
This is the relevant code; mask is a UIImage and images is an array of UIImages:
ci_mask = [[CIImage alloc] initWithCGImage: mask.CGImage];
ctx = [CIContext contextWithOptions: nil];
compo = [CIFilter filterWithName: @"CISourceOverCompositing"];

for (int i = 0; i < images.count; i++) {
    UIImage *image = [images objectAtIndex: i];
    ci_base = [[CIImage alloc] initWithCGImage: image.CGImage];

    [compo setDefaults];
    [compo setValue: ci_mask forKey: @"inputImage"];
    [compo setValue: ci_base forKey: @"inputBackgroundImage"];
    result = compo.outputImage;

    // Note: the CGImageRef returned by createCGImage:fromRect: is owned by the caller
    // and should be released with CGImageRelease once it has been added.
    CGImageDestinationAddImage(
        dst_ref,
        [ctx createCGImage: result fromRect: result.extent],
        frame_props
    );
}
mask contains an alpha channel, which is correctly applied in the simulator but not on the device. The output only shows the mask as-is, seemingly without using the alpha channel to blend the images.
Almost the same code using the Core Graphics API works fine (but then I can't apply other CIFilters).
I'll probably try to use CIBlendWithMask but then I'll have to extract the mask and add complexity...
Look for different capitalization in your filenames and the files being specified. They don't have to be the same case to work in the simulator, but they are case-sensitive on the device. This has thrown me off many times, and if you don't look for it, it is quite difficult to track down.
OK, I found the issue, and it's a bit tricky. First, to answer Jeshua, both the mask and the base are generated, so the path isn't relevant here (but I'll keep that in mind, definitely good to know).
Now for the "solution". When generating the mask I used a combination of CG* calls on a background context (CGImageCreateWithMask, ...). It seems that the result of those calls gives me a CGImage apparently without an alpha channel (CGImageGetAlphaInfo returns 0), but... the still-present alpha channel is applied by the Core Graphics APIs both on the device and in the simulator, yet by the Core Image APIs only in the simulator.
Creating a CGContext with kCGImageAlphaPremultipliedLast and using CGContextDrawImage with kCGBlendModeSourceOut (or whatever you need to "hollow out" your image) keeps the alpha channel intact, and this works on both the simulator and the device.
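A rough sketch of generating the mask that way (the dimensions and the baseCGImage/cutoutCGImage variables are illustrative, not from the original code):
size_t width = 512, height = 512;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef maskCtx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the source, then "hollow out" the cut-out shape; kCGBlendModeSourceOut
// leaves a real alpha channel in the resulting bitmap.
CGContextDrawImage(maskCtx, CGRectMake(0, 0, width, height), baseCGImage);
CGContextSetBlendMode(maskCtx, kCGBlendModeSourceOut);
CGContextDrawImage(maskCtx, CGRectMake(0, 0, width, height), cutoutCGImage);

CGImageRef maskWithAlpha = CGBitmapContextCreateImage(maskCtx);
CGContextRelease(maskCtx);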
I'll file a radar, as either the simulator or the device is wrong.

How to detect collision of image views in a puzzle game

I am making a simple jigsaw puzzle game.
For that, I crop a single image into 9 pieces and display them in 9 image views.
Now I need to detect a collision when one image view comes over half of another image view's frame,
and swap the images (or image views) with each other.
How can I do this? Can anyone please help me?
You can use the CGRectIntersectsRect() function; it takes two CGRects and returns YES if the rects intersect, otherwise NO.
Here is a short example:
if (CGRectIntersectsRect(image1.frame, image2.frame))
{
    UIImage *temp = [[image1 image] retain];
    [image1 setImage:image2.image];
    [image2 setImage:[temp autorelease]];
}
(It is of course easier if you have an array of image views to iterate through, as sketched below.)
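For example, something along these lines (pieceViews is a hypothetical NSArray holding your nine image views, shown with the same retain/autorelease style as above):
for (UIImageView *pieceA in pieceViews) {
    for (UIImageView *pieceB in pieceViews) {
        // Skip comparing a piece with itself, then swap images on intersection.
        if (pieceA != pieceB && CGRectIntersectsRect(pieceA.frame, pieceB.frame)) {
            UIImage *temp = [[pieceA image] retain];
            [pieceA setImage:pieceB.image];
            [pieceB setImage:[temp autorelease]];
        }
    }
}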