Using UIImageView for a flipbook anim on iPhone

I'm using UIImageView to run a flipbook anim like this:
mIntroAnimFrame = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 480, 320)];
mIntroAnimFrame.image = [UIImage imageNamed:@"frame0000.tif"];
Basically, when I determine it is time to advance, I flip the image by just calling:
mIntroAnimFrame.image = [UIImage imageNamed:@"frame0000.tif"];
again with the right frame. Should I be doing this differently? Are repeated calls to set the image this way possibly bogging down main memory, or does each call essentially free the previous UIImage because nothing else references it? I suspect the latter is true.
Also, is there an easy way to preload the images? The anim seems to slow down at times. Should I simply load all the images into a dummy UIImage array so they are preloaded, then refer to it when I want to show it with mIntroAnimFrame.image = mPreloadedArray[i]; ?

I was typing up an example but remembered there was a perfect one at http://www.iphoneexamples.com/
NSArray *myImages = [NSArray arrayWithObjects:
[UIImage imageNamed:#"myImage1.png"],
[UIImage imageNamed:#"myImage2.png"],
[UIImage imageNamed:#"myImage3.png"],
[UIImage imageNamed:#"myImage4.gif"],
nil];
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:[self bounds]];
myAnimatedView.animationImages = myImages;
myAnimatedView.animationDuration = 0.25; // seconds
myAnimatedView.animationRepeatCount = 0; // 0 = loops forever
[myAnimatedView startAnimating];
[self addSubview:myAnimatedView];
[myAnimatedView release];
Was this what you were thinking of?

[UIImage imageNamed:] will cache images, so they will not be immediately freed. There is nothing wrong with that, but if you know you will not use the image again for a while, use a different method to get the images.
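For example, loading by file path bypasses the imageNamed: cache entirely. A minimal sketch, assuming the frames ship as resources in the main bundle:
// Sketch: load a frame without going through the imageNamed: cache,
// so it can be freed as soon as nothing retains it.
NSString *path = [[NSBundle mainBundle] pathForResource:@"frame0000" ofType:@"tif"];
mIntroAnimFrame.image = [UIImage imageWithContentsOfFile:path];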
UIImageView also has built-in animation abilities if you have a regular frame rate.
mIntroAnimFrame.animationImages = mPreloadedArray;
[mIntroAnimFrame startAnimating];

Related

UIGraphicsContext memory leak

Hi, in my app I have a function that takes an image of the current view, turns it into a blurred image, and then adds it to the current view. Although I remove the view using removeFromSuperview, the memory still stays high. I am using Core Graphics and set all of the UIImages to nil.
I do get a memory leak warning
- (void)blurImage
{
//Get a screen capture from the current view.
UIGraphicsBeginImageContext(CGSizeMake(320, 450));
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the image
CIImage *blurImg = [CIImage imageWithCGImage:viewImg.CGImage];
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:blurImg forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat:22.0f] forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImg = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[blurImg extent]];
UIImage *outputImg = [UIImage imageWithCGImage:cgImg];
//Add UIImageView to current view.
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 450)];
[imgView setTag:1109];
imgView.image = outputImg;
[imgView setTag:1108];
gaussianBlurFilter = nil;
outputImg = nil;
blurImg = nil;
viewImg = nil;
[self.view addSubview:imgView];
UIGraphicsEndImageContext();
}
The static analyzer ("Analyze" on the Xcode "Product" menu) is informing you that you are missing a needed CGImageRelease(cgImg) at the end of your method. If you have a Core foundation object returned from a method/function with "Create" or "Copy" in the name, you are responsible for releasing it.
By the way, if you tap on the icon (once in the margin, and again on the version that appears in the error message), it will show you more information:
That can be helpful for tracking back to where the problem originated, in this case the call to createCGImage. If you look at the documentation for createCGImage, it confirms this diagnosis, reporting:
Return Value
A Quartz 2D image. You are responsible for releasing the returned image when you no longer need it.
For general counsel about releasing Core Foundation objects, see the Create Rule in the Memory Management Programming Guide for Core Foundation.
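In this code, that means releasing cgImg once it has been wrapped in a UIImage, for example:
UIImage *outputImg = [UIImage imageWithCGImage:cgImg];
CGImageRelease(cgImg); // balances the createCGImage:fromRect: above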

How do I add a video layer with alpha channel (transparency) to an iphone app?

I would like to add a video of a moving object to a background that can be changed, what format should I use? How should I implement this?
EDIT: I would like to make an effect like this:
I did this by making the animation manually: I converted the frames to PNG (which supports an alpha channel) and played the frames at a fixed frame rate.
I did not find any video that supports an alpha channel.
Why not just make it an animation? I'm sure with enough frames it could look smooth enough.
NSArray *myImages = [NSArray arrayWithObjects:
[UIImage imageNamed:#"myImage1.png"],
[UIImage imageNamed:#"myImage2.png"],
[UIImage imageNamed:#"myImage3.png"],
[UIImage imageNamed:#"myImage4.gif"],
nil];
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:[self bounds]];
myAnimatedView.animationImages = myImages;
myAnimatedView.animationDuration = 0.25; // seconds
myAnimatedView.animationRepeatCount = 0; // 0 = loops forever
[myAnimatedView startAnimating];
[self addSubview:myAnimatedView];
[myAnimatedView release];
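The animated frames, with their transparency, can then sit above a separate background view that you swap out whenever you like. A rough sketch; the background image name here is only a placeholder:
// Sketch: slide an interchangeable background underneath the animated,
// alpha-channel frames. "background1.png" is a hypothetical resource name.
UIImageView *background = [[UIImageView alloc] initWithFrame:[self bounds]];
background.image = [UIImage imageNamed:@"background1.png"];
[self insertSubview:background belowSubview:myAnimatedView];
[background release];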

Adding an imageview to a layer

I'm sure this is simple but I am trying to add an array of images to a layer. Here is what I have so far:
// Create the fish layer
fishLayer = [CALayer layer];
//fish = [UIImageView imageNamed:@"Fish.png"];
fish.animationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"swim01.png"],
[UIImage imageNamed:@"swim02.png"],
[UIImage imageNamed:@"swim03.png"],
[UIImage imageNamed:@"swim04.png"],
[UIImage imageNamed:@"swim05.png"],
[UIImage imageNamed:@"swim06.png"],
[UIImage imageNamed:@"swim05.png"],
[UIImage imageNamed:@"swim04.png"],
[UIImage imageNamed:@"swim03.png"],
[UIImage imageNamed:@"swim02.png"], nil];
fish.animationDuration = 1.50;
fish.animationRepeatCount = -1;
[fish startAnimating];
//[self.view addSubview:fish];
//This should add the animated array to layer.
fishLayer.contents = fish;
fishLayer.bounds = CGRectMake(0, 0, 56, 56);
fishLayer.position = CGPointMake(self.view.bounds.size.height / 2,
self.view.bounds.size.width / 2);
[self.view.layer addSublayer:fishLayer];
There is no error, but the array of images doesn't appear on the screen. I think maybe this line is the problem:
fishLayer.contents = fish;
I have added the imageview to my header files and added it in the XIB
Please help if you can,
Cheers,
Adam
Reading your code, it looks like you're not initializing fish. Since you're not getting any errors, I assume it is an instance variable, which means it gets set to nil initially. So when you set fish.animationImages, you essentially do nothing (fish is nil). Same with every other use of fish in this code snippet.
It looks like you were using views initially, but then commented all that out. Why are you trying to use a layer? You should be able to just do this:
fish = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Fish.png"]];
fish.animationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:@"swim01.png"],
[UIImage imageNamed:@"swim02.png"],
[UIImage imageNamed:@"swim03.png"],
[UIImage imageNamed:@"swim04.png"],
[UIImage imageNamed:@"swim05.png"],
[UIImage imageNamed:@"swim06.png"],
[UIImage imageNamed:@"swim05.png"],
[UIImage imageNamed:@"swim04.png"],
[UIImage imageNamed:@"swim03.png"],
[UIImage imageNamed:@"swim02.png"], nil];
fish.animationDuration = 1.50;
fish.animationRepeatCount = -1;
[fish startAnimating];
[self.view addSubview:fish];
fish.bounds = CGRectMake(0, 0, 56, 56);
fish.center = CGPointMake(self.view.bounds.size.width / 2,
self.view.bounds.size.height / 2);
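If you really do need the frames on a CALayer rather than in a UIImageView (note that assigning a UIImageView to layer.contents will not work; contents expects a CGImage), one option is a keyframe animation of the layer's contents. A minimal sketch, reusing the swim frames and the fishLayer from the question:
// Sketch: animate a CALayer's contents directly with the frame images.
NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i <= 6; i++) {
    NSString *name = [NSString stringWithFormat:@"swim%02d.png", i];
    [frames addObject:(id)[UIImage imageNamed:name].CGImage];
}
CAKeyframeAnimation *swim = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
swim.values = frames;
swim.duration = 1.5;
swim.repeatCount = HUGE_VALF;
swim.calculationMode = kCAAnimationDiscrete;
[fishLayer addAnimation:swim forKey:@"swim"];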

pagingScrollView Slideshow

I have started trying to understand the way Apple implements the photo viewer app for the iPhone, after watching both WWDC videos (2009 and 2010) about scroll views and paging through photographs. I am taking each step slowly so I understand the process well before I implement it all in my app.
I have run into a couple of problems at the moment:
Firstly, I have set up a paging scroll view to swipe left and right between photos, with a bit of space in between like in the videos. But when I add an image inside a UIImageView, the image is too big for the screen; I have tried setting UIViewContentModeScaleAspectFit, to no effect.
Secondly, when I go through the for loop over each element of the array containing the images, the UIImageViews overlap and show one photograph on top of the other. I would like to know how to separate them so each one lands in the next page of the paging scroll view.
- (void)loadView{
UIImage *img0 = [UIImage imageNamed:@"29.png"];
UIImage *img1 = [UIImage imageNamed:@"33.png"];
NSMutableArray *imgArray = [[NSMutableArray alloc] initWithObjects:img0, img1, nil];
CGRect pagingScrollViewFrame = [[UIScreen mainScreen] bounds];
pagingScrollViewFrame.origin.x -= 10;
pagingScrollViewFrame.size.width += 20;
pagingScrollView = [[UIScrollView alloc] initWithFrame:pagingScrollViewFrame];
pagingScrollView.pagingEnabled = YES;
pagingScrollView.backgroundColor = [UIColor blackColor];
pagingScrollView.contentSize = CGSizeMake(pagingScrollViewFrame.size.width * [imgArray count], pagingScrollViewFrame.size.height);
self.view = pagingScrollView;
for (int i=0; i < [imgArray count]; i++) {
UIImageView *page = [[UIImageView alloc] initWithImage: [imgArray objectAtIndex:i]];
page.contentMode = UIViewContentModeScaleAspectFit;
[pagingScrollView addSubview:page];
}}
As I have mentioned, I am fairly new to programming for the iPhone and am taking things slowly to fully understand. Eventually this program will mimic the native app, with pinching and tapping to zoom.
If you're using UIImageViews, you should make sure that each view has its clipsToBounds property set to YES. Try adding:
UIImageView *page = [[UIImageView alloc]
initWithImage:[imgArray objectAtIndex:i]];
[page setContentMode:UIViewContentModeScaleAspectFit];
[page setClipsToBounds:YES];
[pagingScrollView addSubview:page];
Then, make sure you are setting your image's frame to the correct offset within the scroll view. The frame's origin.x needs to be the width of the frame times the image index. So you need something like this:
[page setFrame:CGRectMake(i*pagingScrollViewFrame.size.width, y, width, height)];
where i is your index from your for loop.
Though in the sample code from the WWDC session you're referring to, this is done in a method called -configurePage. Have you downloaded the sample code?
-[UIImageView initWithImage:] sets the frame to be the image's size. You'll need to use page.frame = [[UIScreen mainScreen] bounds] or similar to scale images to the correct size.
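Putting the two suggestions together, the loop from the question might look something like this (a sketch based on the code above):
for (int i = 0; i < [imgArray count]; i++) {
    UIImageView *page = [[UIImageView alloc] initWithImage:[imgArray objectAtIndex:i]];
    page.contentMode = UIViewContentModeScaleAspectFit;
    page.clipsToBounds = YES;
    // One full-screen "page" per image, offset horizontally by its index.
    page.frame = CGRectMake(i * pagingScrollViewFrame.size.width, 0,
                            pagingScrollViewFrame.size.width,
                            pagingScrollViewFrame.size.height);
    [pagingScrollView addSubview:page];
    [page release];
}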

Is it correct to load openGL textures like this?

Some months ago I started an OpenGL project which became larger and larger... I began with the CrashLanding sample and I use Texture2D.
I also use a singleton class to load my textures, and here is what the texture loading looks like:
//Load the background texture and configure it
_textures[kTexture_Background] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"fond.png"]];
glBindTexture(GL_TEXTURE_2D, [_textures[kTexture_Background] name]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Load the textures
_textures[kTexture_Batiment] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"batiment_Ext.png"]];
_textures[kTexture_Balcon] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"balcon.png"]];
_textures[kTexture_Devanture] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"devanture.png"]];
_textures[kTexture_Cactus_Troncs] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus-troncs.png"]];
_textures[kTexture_Cactus_Gauche] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus1.png"]];
_textures[kTexture_Cactus_Droit] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus2.png"]];
_textures[kTexture_Pierre] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"pierre.png"]];
_textures[kTexture_Enseigne] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"enseigne.png"]];
_textures[kTexture_Menu] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"saloonIntro.jpg"]];
_textures[kTexture_GameOver] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"gameOver.jpg"]];
for (int i = 0 ; i < kNumTexturesScene ; i ++)
{
[arrayOfText addObject:[[[NSData alloc] init] autorelease]];
}
// sort my array
for (int i = 0 ; i < kNumTexturesScene ; i ++)
{
[arrayOfText replaceObjectAtIndex:i withObject:_textures[i]];
}
[dictionaryOfTexture setObject:[arrayOfText copy] forKey:kTextureDecor];
[arrayOfText removeAllObjects];
and so on for almost 50 pictures
It works well on the 3GS, but there are sometimes issues on the 3G.
Am I doing something wrong with all of this?
Thanks
Things that you need to consider:
- The iPhone works only with textures whose dimensions are a power of 2. If your textures don't have such dimensions and still work, it means they are being resized in software; that takes time and, more importantly, valuable texture memory.
- The iPhone has video memory for only about 3 textures of 1024x1024 size, so you may be running out of texture memory.
- Switching textures during rendering is slow, extremely slow. The less you switch textures the better; ideally, create at most 3 textures and switch between them at most 3 times.
- To achieve that you need to learn a technique called texture atlasing.
You might be running out of texture memory, which isn't surprising if you consider that the class you're using probably pads your images out to power-of-2 dimensions. You could use the texture memory better by combining your sprites into a so-called sprite atlas.
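To illustrate the atlas idea from both answers: pack the sprites into one large power-of-2 image, bind that single texture once, and select each sprite with texture coordinates. A sketch, assuming an OpenGL ES 1.1 setup like the one in the question; the helper function and the rectangle values are hypothetical:
// Sketch: texture coordinates for one sprite inside a square atlas.
// spriteRect is the sprite's pixel rectangle within the atlas image.
static void AtlasTexCoords(CGRect spriteRect, GLfloat atlasSize, GLfloat *coords)
{
    GLfloat u0 = spriteRect.origin.x / atlasSize;
    GLfloat v0 = spriteRect.origin.y / atlasSize;
    GLfloat u1 = (spriteRect.origin.x + spriteRect.size.width) / atlasSize;
    GLfloat v1 = (spriteRect.origin.y + spriteRect.size.height) / atlasSize;
    // Triangle-strip order: bottom-left, bottom-right, top-left, top-right.
    coords[0] = u0; coords[1] = v1;
    coords[2] = u1; coords[3] = v1;
    coords[4] = u0; coords[5] = v0;
    coords[6] = u1; coords[7] = v0;
}
// Bind the atlas texture once with glBindTexture, then draw every sprite by
// passing these coordinates via glTexCoordPointer instead of binding a
// separate texture per sprite.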