I started an OpenGL project some months ago, and it has grown larger and larger... I began with the CrashLanding sample and I use Texture2D.
I also use a singleton class to load my textures; here is what the texture loading looks like:
//Load the background texture and configure it
_textures[kTexture_Background] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"fond.png"]];
glBindTexture(GL_TEXTURE_2D, [_textures[kTexture_Background] name]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Load the textures
_textures[kTexture_Batiment] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"batiment_Ext.png"]];
_textures[kTexture_Balcon] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"balcon.png"]];
_textures[kTexture_Devanture] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"devanture.png"]];
_textures[kTexture_Cactus_Troncs] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus-troncs.png"]];
_textures[kTexture_Cactus_Gauche] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus1.png"]];
_textures[kTexture_Cactus_Droit] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"cactus2.png"]];
_textures[kTexture_Pierre] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"pierre.png"]];
_textures[kTexture_Enseigne] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"enseigne.png"]];
_textures[kTexture_Menu] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"saloonIntro.jpg"]];
_textures[kTexture_GameOver] = [[Texture2D alloc] initWithImage: [UIImage imageNamed:@"gameOver.jpg"]];
for (int i = 0 ; i < kNumTexturesScene ; i ++)
{
[arrayOfText addObject:[[[NSData alloc] init] autorelease]];
}
// fill my array with the textures, in order
for (int i = 0 ; i < kNumTexturesScene ; i ++)
{
[arrayOfText replaceObjectAtIndex:i withObject:_textures[i]];
}
[dictionaryOfTexture setObject:[arrayOfText copy] forKey:kTextureDecor];
[arrayOfText removeAllObjects];
and so on, for almost 50 images.
It works well on the 3GS, but there are sometimes issues on the 3G.
Am I doing something wrong with all of this?
Thanks
Things that you need to consider:
The iPhone works only with textures whose dimensions are a power of 2. If your textures don't have such dimensions and they still work, it means they are being resized in software; that takes time and, more importantly, valuable texture memory.
The iPhone has video memory for only about three textures of 1024x1024 size, so you may be running out of texture memory.
Switching textures during rendering is slow... extremely slow. The less you switch textures the better; ideally, create at most 3 textures and switch them at most 3 times.
To achieve that you need to learn a technique called texture atlasing, sketched below.
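For illustration, here is a minimal sketch of drawing one sprite out of an atlas with OpenGL ES 1.1, in the style of the CrashLanding sample. The file name atlas.png, the 512x512 atlas size, and the sub-rectangle coordinates are made-up assumptions for the example:
// Load the atlas once; it replaces many individual Texture2D objects.
Texture2D *atlas = [[Texture2D alloc] initWithImage:[UIImage imageNamed:@"atlas.png"]];
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, [atlas name]); // bind ONCE for many sprites
// This sprite lives in a 128x128 region of the 512x512 atlas,
// so its texture coordinates span 0.0-0.25 on both axes.
static const GLfloat texCoords[] = {
    0.0f,  0.25f,   // bottom left
    0.25f, 0.25f,   // bottom right
    0.0f,  0.0f,    // top left
    0.25f, 0.0f,    // top right
};
static const GLfloat vertices[] = {
    0.0f,   0.0f,
    128.0f, 0.0f,
    0.0f,   128.0f,
    128.0f, 128.0f,
};
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Draw the next sprite by changing texCoords, not by binding a new texture.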
You might be running out of texture memory, which isn't surprising if you consider that the class you're using probably pads your images out to power-of-2 dimensions. You could use the texture memory better by combining your sprites into a so-called sprite atlas.
Related
Hi. In my app I have a function that takes an image of the current view and turns it into a blurred image, then adds it to the current view. Although I remove the view using removeFromSuperview, the memory still stays high. I am using Core Graphics and set all of the UIImage references to nil.
I do get a memory leak warning.
- (void)blurImage
{
//Get a screen capture from the current view.
UIGraphicsBeginImageContext(CGSizeMake(320, 450));
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Blur the image
CIImage *blurImg = [CIImage imageWithCGImage:viewImg.CGImage];
CGAffineTransform transform = CGAffineTransformIdentity;
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setValue:blurImg forKey:@"inputImage"];
[clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];
CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
[gaussianBlurFilter setValue:[NSNumber numberWithFloat:22.0f] forKey:@"inputRadius"];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImg = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[blurImg extent]];
UIImage *outputImg = [UIImage imageWithCGImage:cgImg];
//Add UIImageView to current view.
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 450)];
[imgView setTag:1109];
imgView.image = outputImg;
[imgView setTag:1108];
gaussianBlurFilter = nil;
outputImg = nil;
blurImg = nil;
viewImg = nil;
[self.view addSubview:imgView];
UIGraphicsEndImageContext();
}
The static analyzer ("Analyze" on the Xcode "Product" menu) is informing you that you are missing a needed CGImageRelease(cgImg) at the end of your method. If you have a Core foundation object returned from a method/function with "Create" or "Copy" in the name, you are responsible for releasing it.
By the way, if you click the icon (once in the margin, and again on the version that appears in the error message), Xcode will show you more information.
That can be helpful for tracking back to where the problem originated, in this case the call to createCGImage. If you look at the documentation for createCGImage, it confirms this diagnosis, reporting:
Return Value
A Quartz 2D image. You are responsible for releasing the returned image when you no longer need it.
For general counsel about releasing Core Foundation objects, see the Create Rule in the Memory Management Programming Guide for Core Foundation.
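A minimal sketch of the fix, following the Create Rule (only the relevant lines of the method are shown):
CGImageRef cgImg = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[blurImg extent]];
UIImage *outputImg = [UIImage imageWithCGImage:cgImg];
CGImageRelease(cgImg); // balances the "create" above, per the Create Rule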
I am developing an application in which I want to set a UIView's background to an image. The image is 1.6 MB in size. I used the code below.
UIGraphicsBeginImageContext(self.view.frame.size);
[[UIImage imageNamed:#"image.png"] drawInRect:self.view.bounds];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.view.backgroundColor = [UIColor colorWithPatternImage:image];
But it takes much more time than a small image does. So please help me: how can I load an image of around 3 MB as quickly as a small one?
You could do the drawing operations on a background queue, so it won't freeze the UI:
CGRect imgRect = self.view.bounds;
dispatch_async(dispatch_get_global_queue(0, 0), ^{
UIGraphicsBeginImageContext(imgRect.size);
[[UIImage imageNamed:#"image.png"] drawInRect:imgRect];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIColor* bgcolor = [UIColor colorWithPatternImage:image];
dispatch_async(dispatch_get_main_queue(), ^{
self.view.backgroundColor = bgcolor;
});
});
Also optimize the image by using the tool ImageOptim.
It's best if you resize the image before adding it to the application bundle, so it loads faster.
Here I think there is no need to draw the image into a context first just to use it as a UIView's pattern image.
You can solve memory issue like this:
NSString *imgfile = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
UIImage *currentImage = [UIImage imageWithContentsOfFile:imgfile];
self.view.backgroundColor = [UIColor colorWithPatternImage:currentImage];
Refer to the ios-imagenamed-vs-imagewithcontentsoffile link.
EDIT: Use @phix23's answer for a smooth UI.
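The two answers combine naturally; here is a sketch, assuming the image ships in the bundle as image.png:
NSString *imgFile = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // imageWithContentsOfFile: bypasses the imageNamed: cache
    UIImage *currentImage = [UIImage imageWithContentsOfFile:imgFile];
    UIColor *bgcolor = [UIColor colorWithPatternImage:currentImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.view.backgroundColor = bgcolor;
    });
});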
I was using the GPUImage framework (some old version) to blend two images (adding a border overlay to a certain image).
After updating to the latest framework version, applying such a blend gives me an empty black image.
I'm using the following method:
- (void)addBorder {
if (currentBorder != kBorderInitialValue) {
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
GPUImagePicture *imageToProcess = [[GPUImagePicture alloc] initWithImage:self.imageToWorkWithView.image];
GPUImagePicture *border = [[GPUImagePicture alloc] initWithImage:self.imageBorder];
blendFilter.mix = 1.0f;
[imageToProcess addTarget:blendFilter];
[border addTarget:blendFilter];
[imageToProcess processImage];
self.imageToWorkWithView.image = [blendFilter imageFromCurrentlyProcessedOutput];
[blendFilter release];
[imageToProcess release];
[border release];
}
}
What is the problem?
You're forgetting to process the border image. After [imageToProcess processImage], add the line:
[border processImage];
When two images are being input into a blend, you have to use -processImage on both after they have been added to the blend filter. I changed the way that the blend filter works in order to fix some bugs, and this is how you need to do things now.
This is the code I'm currently using for merging two images with GPUImageAlphaBlendFilter.
GPUImagePicture *mainPicture = [[GPUImagePicture alloc] initWithImage:image];
GPUImagePicture *topPicture = [[GPUImagePicture alloc] initWithImage:blurredImage];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[blendFilter setMix:0.5];
[mainPicture addTarget:blendFilter];
[topPicture addTarget:blendFilter];
[blendFilter useNextFrameForImageCapture];
[mainPicture processImage];
[topPicture processImage];
UIImage * mergedImage = [blendFilter imageFromCurrentFramebuffer];
I would like to add a video of a moving object on top of a background that can be changed. What format should I use, and how should I implement this?
EDIT: I would like to make an effect like this:
This is done by making the animation manually. I converted the frames to PNG (which supports an alpha channel) and played the frames at a fixed frame rate.
I did not find any video format that supports an alpha channel.
Why not just make it an animation? I'm sure with enough frames it could look smooth enough.
NSArray *myImages = [NSArray arrayWithObjects:
[UIImage imageNamed:#"myImage1.png"],
[UIImage imageNamed:#"myImage2.png"],
[UIImage imageNamed:#"myImage3.png"],
[UIImage imageNamed:#"myImage4.gif"],
nil];
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:[self bounds]];
myAnimatedView.animationImages = myImages;
myAnimatedView.animationDuration = 0.25; // seconds
myAnimatedView.animationRepeatCount = 0; // 0 = loops forever
[myAnimatedView startAnimating];
[self addSubview:myAnimatedView];
[myAnimatedView release];
I'm using UIImageView to run a flipbook anim like this:
mIntroAnimFrame = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 480, 320)];
mIntroAnimFrame.image = [UIImage imageNamed:@"frame0000.tif"];
Basically, when I determine it is time, I flip the image by just calling:
mIntroAnimFrame.image = [UIImage imageNamed:@"frame0000.tif"];
again with the right frame. Should I be doing this differently? Are repeated calls to set the image this way possibly bogging down main memory, or does each call essentially free the previous UIImage because nothing else references it? I suspect the latter is true.
Also, is there an easy way to preload the images? The animation seems to slow down at times. Should I simply load all the images into a dummy UIImage array so they are preloaded, then refer to it when I want to show a frame with mIntroAnimFrame.image = mPreloadedArray[i];?
I was typing up an example but remembered there was a perfect one at http://www.iphoneexamples.com/
NSArray *myImages = [NSArray arrayWithObjects:
[UIImage imageNamed:#"myImage1.png"],
[UIImage imageNamed:#"myImage2.png"],
[UIImage imageNamed:#"myImage3.png"],
[UIImage imageNamed:#"myImage4.gif"],
nil];
UIImageView *myAnimatedView = [[UIImageView alloc] initWithFrame:[self bounds]];
myAnimatedView.animationImages = myImages;
myAnimatedView.animationDuration = 0.25; // seconds
myAnimatedView.animationRepeatCount = 0; // 0 = loops forever
[myAnimatedView startAnimating];
[self addSubview:myAnimatedView];
[myAnimatedView release];
Was this what you were thinking of?
[UIImage imageNamed:] will cache images, so they will not be immediately freed. There is nothing wrong with that, but if you know you will not use the image again for a while, use a different method to get the images.
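For example, a non-caching load might look like this (a sketch; imageWithContentsOfFile: bypasses the imageNamed: cache, so the image can be freed as soon as nothing references it):
NSString *path = [[NSBundle mainBundle] pathForResource:@"frame0000" ofType:@"tif"];
mIntroAnimFrame.image = [UIImage imageWithContentsOfFile:path]; // not cached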
UIImageView also has built in animation abilities if you have a regular frame rate.
mIntroAnimFrame.animationImages = mPreloadedArray;
[mIntroAnimFrame startAnimating];
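If you want to pin the playback rate rather than rely on the default duration, set it explicitly; a sketch assuming mPreloadedArray is an NSArray of frames played as a looping 15 fps flipbook:
mIntroAnimFrame.animationDuration = [mPreloadedArray count] / 15.0; // 15 frames per second
mIntroAnimFrame.animationRepeatCount = 0; // 0 = loop forever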