This problem has been vexing me for a little over a day. The gist is that I have a grouped UITableView set up so that there is only one entry per section. Each cell within the table view contains a UIImageView that takes up the entire width and height of the cell, and each cell has a fixed, specified height and width.
Here is my init method:
- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        [[self layer] setMasksToBounds:NO];
        [[self layer] setShouldRasterize:YES];
        [[self layer] setRasterizationScale:[[UIScreen mainScreen] scale]];
        [[self layer] setCornerRadius:10];

        _bgCoverView = [[UIImageView alloc] initWithFrame:CGRectZero];
        [_bgCoverView setClipsToBounds:YES];
        [[_bgCoverView layer] setCornerRadius:10];
        [_bgCoverView setContentMode:UIViewContentModeScaleAspectFill];
        [self.contentView addSubview:_bgCoverView];
        [_bgCoverView release];

        //Init other parts of the cell, but they're not pertinent to the question
    }
    return self;
}
This is my layout subviews:
- (void)layoutSubviews{
    [super layoutSubviews];
    if (!self.editing){
        [_bgCoverView setFrame:CGRectMake(0, 0, tableViewCellWidth, self.contentView.frame.size.height)];
    }
}
This is my setter for the cell:
- (void)setGroup:(Group*)group{
    //Set background cover for the contentView
    NSSet *usableCovers = [[group memories] filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"pictureMedData != nil || pictureFullData != nil || (pictureMedUrl != nil && NOT pictureMedUrl contains[cd] '/missing.png')"]];
    Memory *mem = [usableCovers anyObject];

    //Placeholder until a real cover is cached (imageNamed: looks in the app bundle;
    //imageWithContentsOfFile: needs a full path, so a bare filename would silently return nil)
    [_bgCoverView setImage:[UIImage imageNamed:@"no_pics_yet.png"]];
    if (mem != nil){
        [Folio cacheImageAtStringURL:[mem pictureMedUrl] forManagedObject:mem withDataKey:@"pictureMedData" inDisplay:_bgCoverView];
    }
}
This is where I think I'm getting a performance ding. First, there's the filtered set call. If there are a lot of objects in the set, this could slow stuff down. However, there tend to be relatively few objects. Furthermore, when I removed this line of code, I saw no performance change, so it's pretty unlikely.
So, that pretty much leaves my cache method (which is located in a helper class). Here's the method:
+(void)cacheImageAtStringURL:(NSString *)urlString forManagedObject:(NSManagedObject*)managedObject withDataKey:(NSString*)dataKey inDisplay:(NSObject *)obj{
    int objType = 0; //Assume UIButton
    if (obj == nil)
        objType = -1;
    else if ([obj isKindOfClass:[UIImageView class]])
        objType = 1; //Assign to UIImageView

    NSData *data = (NSData*)[managedObject valueForKey:dataKey];
    if ([data bytes]){
        if (objType == 0) [(UIButton*)obj setBackgroundImage:[UIImage imageWithData:data] forState:UIControlStateNormal];
        else if (objType == 1) [(UIImageView*)obj setImage:[UIImage imageWithData:data]];
        return;
    }
    //Otherwise, do an ASIHTTPRequest and set the image when it's returned, save the data into core data so that we get a cache hit the next time.
}
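For reference, the elided branch looks roughly like this (a sketch, not the actual code; setCompletionBlock:, responseData, and startAsynchronous are standard ASIHTTPRequest API, the rest is illustrative):

//Sketch only; the real method also saves into Core Data and handles failure
__block ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:urlString]];
[request setCompletionBlock:^{
    NSData *imageData = [request responseData];
    if (objType == 0) [(UIButton*)obj setBackgroundImage:[UIImage imageWithData:imageData] forState:UIControlStateNormal];
    else if (objType == 1) [(UIImageView*)obj setImage:[UIImage imageWithData:imageData]];
    [managedObject setValue:imageData forKey:dataKey]; //cache hit next time
}];
[request startAsynchronous];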
Regardless of if the images are cached or not, there are major performance issues with scrolling down on this view. If I comment out the cache method, it works pretty well. Additionally, other views that call this method are sluggish, so I'm pretty sure it's something here.
I will also say, however, that I'm suspicious of the cornerRadius code too. I've heard that setting shouldRasterize to YES results in a performance speed-up, but I'm not sure whether that's 100% true or whether my implementation is off.
Any help would be greatly appreciated.
UPDATE
Still not 100% fixed yet, but we are getting there.
These variables must be set on the request:
[request setCachePolicy:ASIOnlyLoadIfNotCachedCachePolicy];
[request setCacheStoragePolicy:ASICachePermanentlyCacheStoragePolicy];
The first tells the cache to only ping the server for images if they are not cached. The second tells the cache to store the images permanently for the lifecycle of the app.
This got me part-way there. Some images loaded instantly whereas others seemed sluggish. I did more research and added these lines of code in my app delegate:
[[ASIDownloadCache sharedCache] setShouldRespectCacheControlHeaders:NO];
[ASIHTTPRequest setDefaultCache:[ASIDownloadCache sharedCache]];
The first line is crucial. ASIHTTPRequest's default action is to follow what the web server tells you. So, if it tells you in its headers not to cache a response, it won't. This line overrides that. Now, this worked really well on my iOS simulator. Then, I tried it on my 4S and it was slow again.
The reason was that I had set up a block of code in my request that, upon successful retrieval of an image, would save it to Core Data. This block was being triggered every time, even for cached requests.
This is because ASIHTTPRequest will still make a request regardless of whether or not it uses a cached copy of your data. If it has a cached copy, it won't send out a request to a server, but will populate its request object's response data with cached data instead. So after my block takes care of display work, I call:
if ([request didUseCachedResponse]) return;
This works very well on the 4S and the 4. However, the 3GS is still disastrously slow :-(.
This is because ASIHTTPRequest is using a cached copy of my NSData and not my UIImage. So, it needs to re-create the image each time it uses the cache. This makes sense because ASIHTTPRequest is only returning data, and doesn't know what you are going to do with the data afterwards, so it'll always call your code that's set up when the request is returned. And because my code block converts the image from data, that's what I get.
A WAY FORWARD
I think to get the performance speedup I want, it's necessary to implement a custom UIImage cache. If we find the file in the cache, we return the UIImage object. If not, then we create a request and save it in the cache on return.
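Something along these lines, for example (a minimal sketch using NSCache; the class name and methods are made up for illustration, and NSCache evicts entries on its own under memory pressure):

@interface FolioImageCache : NSObject
+ (UIImage *)imageForKey:(NSString *)key;
+ (void)setImage:(UIImage *)image forKey:(NSString *)key;
@end

@implementation FolioImageCache
+ (NSCache *)sharedCache {
    static NSCache *cache = nil;
    if (cache == nil) cache = [[NSCache alloc] init];
    return cache;
}
+ (UIImage *)imageForKey:(NSString *)key {
    return [[self sharedCache] objectForKey:key]; //nil on a miss
}
+ (void)setImage:(UIImage *)image forKey:(NSString *)key {
    if (image) [[self sharedCache] setObject:image forKey:key];
}
@end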
The tip about shouldRasterize is a valid one, though in my experience it decreases the quality of the graphic significantly. Also, since around iOS 4.3 (I can't remember exactly) the cornerRadius call has been enormously improved, so it has been a very long time since I last needed to activate rasterizing for this kind of problem.
You could always deactivate cornerRadius and see how it performs.
What I really want to tell you, however, is that ASIHTTPRequest has built-in cache support; see
http://allseeing-i.com/ASIHTTPRequest/How-to-use#using_a_download_cache
Try it, it worked flawlessly for me. Think about what cache policies fit your needs best. Of course, if you need the images in CoreData you'll have to insert them, but I wouldn't try to cache them manually from CoreData.
Oh, and something else: ASIHTTPRequest is no longer under development, so you might want to consider switching to another framework (like AFNetworking). ASIHTTPRequest doesn't support ARC, for example.
Check out this post. I use blocks and GCD for image loading (especially useful in a UITableViewCell). This loads the images on the fly if you put the method call in the cellForRowAtIndexPath method, and I get no sluggish UI.
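The pattern is roughly this (a sketch; tableView and indexPath come from cellForRowAtIndexPath:, and imageURL stands in for however you look up the URL for the row):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:imageURL]; //off the main thread
    UIImage *image = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        //re-ask the table for the cell; it may have been reused while we downloaded
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        cell.imageView.image = image;
        [cell setNeedsLayout];
    });
});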
Related
I have five URLs which point to images.
I'm downloading those images and placing them into the Documents folder.
Here is my code:
-(void)viewDidLoad
{
    [super viewDidLoad]; //don't forget the superclass call

    NSMutableArray *myUrlsArray = [[NSMutableArray alloc] init];
    [myUrlsArray addObject:@"http://blogs.sfweekly.com/thesnitch/steve_jobs3.jpg"];
    [myUrlsArray addObject:@"http://www.droid-life.com/wp-content/uploads/2012/12/Steve-Jobs-Apple.jpg"];
    [myUrlsArray addObject:@"http://2.bp.blogspot.com/-T6nbl0rQoME/To0X5FccuCI/AAAAAAAAEZQ/ipUU7JfEzTs/s1600/steve-jobs-in-time-magazine-front-cover.png"];
    [myUrlsArray addObject:@"http://images.businessweek.com/ss/08/09/0929_most_influential/image/steve_jobs.jpg"];
    [myUrlsArray addObject:@"http://cdn.ndtv.com/tech/gadget/image/steve-jobs-face.jpg"];

    for (int i = 0; i < myUrlsArray.count; i++)
    {
        [self downloadImageFromURL:[myUrlsArray objectAtIndex:i] withName:[NSString stringWithFormat:@"MyImage%i.jpeg", i]];
    }
    [myUrlsArray release]; //balance the alloc/init (MRC)
}
#pragma mark - downloading File
-(void)downloadImageFromURL:(NSString *)myURLString withName:(NSString *)fileName
{
    //Note: dataWithContentsOfURL: is synchronous and blocks the calling thread
    UIImage *image = [[UIImage alloc] initWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:myURLString]]];
    NSLog(@"%f,%f", image.size.width, image.size.height);

    // Let's save the file into the Documents folder
    NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *jpegPath = [NSString stringWithFormat:@"%@/%@", documentsPath, fileName]; //keep this path if you want to save a reference in sqlite
    NSData *data2 = [NSData dataWithData:UIImageJPEGRepresentation(image, 1.0f)]; //1.0f = 100% quality
    [data2 writeToFile:jpegPath atomically:YES];
    [image release]; //balance the alloc/init (MRC)
}
NOW... I need to display a UIProgressView that accurately reflects the above download progress.
How can I achieve this functionality? Can anyone provide some guidelines?
Thanks in advance...
I'd suggest you use some asynchronous downloading technique (either AFNetworking, SDWebImage, or roll your own with delegate-based NSURLSession) rather than dataWithContentsOfURL so that (a) you don't block the main queue; and (b) you can get progress updates as the downloads proceed.
I'd also suggest creating a NSProgress for each download. When your delegate method gets updates about how many bytes have been downloaded, update the NSProgress object.
You then can associate each NSProgress with a observedProgress for a UIProgressView, and when you update your NSProgress, the UI can be updated automatically.
Or, if you want a single UIProgressView to show the aggregate progress of all of the downloads, you can create a parent NSProgress and establish each download's NSProgress as a child of it. Then, as each download updates its respective NSProgress, the parent's progress is recalculated automatically. Tie that parent NSProgress to a master UIProgressView and the UI will show total progress automatically, just by having each download update its individual NSProgress.
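Wired up, that might look roughly like this (a sketch; observedProgress and addChild:withPendingUnitCount: require iOS 9, and on earlier systems you can use becomeCurrentWithPendingUnitCount: instead; startDownload:progress: is a hypothetical helper):

NSProgress *overallProgress = [NSProgress progressWithTotalUnitCount:urls.count];
self.progressView.observedProgress = overallProgress; //UIProgressView tracks it automatically

for (NSURL *url in urls) {
    //each download gets its own NSProgress, parented under the overall one
    NSProgress *downloadProgress = [[NSProgress alloc] initWithParent:nil userInfo:nil];
    downloadProgress.totalUnitCount = kDefaultImageSize; //estimate; refined in the delegate callbacks
    [overallProgress addChild:downloadProgress withPendingUnitCount:1];
    [self startDownload:url progress:downloadProgress];
}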
There is a trick, though, insofar as some web services will not inform you of the number of bytes to be expected. They'll report an "expected number of bytes" of NSURLResponseUnknownLength, i.e. -1! (There are logical reasons why it does that which are probably beyond the scope of this question.) That obviously makes it hard to calculate what percentage has been downloaded.
In that case, there are a few approaches:
You can throw up your hands and just use an indeterminate progress indicator;
You can try changing the request such that web service will report meaningful "expected number of bytes" values (e.g. https://stackoverflow.com/a/22352294/1271826); or
You can use an "estimated download size" to estimate the percentage completion. For example, if you know your images are, on average, 100kb each, you can do something like the following to update the NSProgress associated with a particular download:
if (totalBytesExpectedToWrite >= totalBytesWritten) {
    self.progress.totalUnitCount = totalBytesExpectedToWrite;
} else {
    if (totalBytesWritten <= 0) {
        self.progress.totalUnitCount = kDefaultImageSize;
    } else {
        double written = (double)totalBytesWritten;
        double percent = tanh(written / (double)kDefaultImageSize);
        self.progress.totalUnitCount = written / percent;
    }
}
self.progress.completedUnitCount = totalBytesWritten;
This is a bit of sleight of hand that uses the tanh function to return a "percent complete" value that smoothly and asymptotically approaches 100%, using the kDefaultImageSize as the basis for the estimation.
It's not perfect, but it yields a pretty decent proxy for percent completion.
Your call to dataWithContentsOfURL is synchronous, meaning you don't get updates as the download is in process.
You can use a library like AFNetworking (https://github.com/AFNetworking/AFNetworking) which has callbacks to the progress of the download.
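For instance, with the older AFHTTPRequestOperation API it might look like this (a sketch; newer AFNetworking versions expose the same idea through NSURLSession-based managers):

AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
[operation setDownloadProgressBlock:^(NSUInteger bytesRead, long long totalBytesRead, long long totalBytesExpectedToRead) {
    if (totalBytesExpectedToRead > 0) { //server reported a Content-Length
        self.progressView.progress = (float)totalBytesRead / (float)totalBytesExpectedToRead;
    }
}];
[operation start];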
Actually, a better solution is to use the SDWebImage manager, which will load the images in the background for you and cache them; the next time you use that image it will check the cache. Google it.
That way the user also doesn't have to sit around and wait while you're downloading stuff.
Then look at this other question that has some ideas on how to do a status:
How to show an activity indicator in SDWebImage
Do not use dataWithContentsOfURL, you are blocking the main thread until the data arrives.
Instead create your own connection with NSURLConnection and start listening to your delegate.
connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response: get the total data size with [response expectedContentLength].
connection:(NSURLConnection *)connection didReceiveData:(NSData *)data: This is where you do your calculations and update your UIProgressView. Something like, loadedBytes/total data size.
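In outline (a sketch; assumes ivars for the received data buffer, the running byte count, and the expected length):

- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
    expectedLength = [response expectedContentLength]; //may be NSURLResponseUnknownLength (-1)
    loadedBytes = 0;
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [receivedData appendData:data];
    loadedBytes += [data length];
    if (expectedLength > 0) {
        progressView.progress = (float)loadedBytes / (float)expectedLength;
    }
}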
Good luck.
FIXED: I'm not sure yet why. Code has been updated below, and notes at the bottom
I'm having a very strange error that does not show up in the Simulator (even when running Instruments), unless I turn on all the zombie and debug options. However, it will crash the phone after a minute of updates (one update per second). I have a 2D array that I take a subset of, apply a colormap to, and turn into an image (this array changes constantly). Then I pass that image to the model, and the view controller grabs it from the model once it receives notification of an update. I'll lay out the three classes: Spectrogram, Model, and ViewController.
Here are the important bits of each (there is more, but not relevant):
Spectrogram.h (Sorry, I can't get this to indent correctly on here)
@interface Spectrogram : NSObject
{
    NSMutableData *arrayData;
}

//renamed so Xcode allows the object with +1 reference count to be returned
- (CGImageRef)newSpectrogramImage;
Spectrogram.m
@implementation Spectrogram

- (CGImageRef)newSpectrogramImage
{
    //slightly reordered
    NSMutableData *imageData = [[NSMutableData alloc] init];
    ...code to go through arrayData and colormap it (get RGB transform) and store in imageData...

    CGImageRef arrayImage = nil;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //specifically tell CGImage there is no alpha channel
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
    //Use the toll-free bridge between NSData and CFData
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)imageData);
    //image is rotated now, so width and height are switched
    arrayImage = CGImageCreate((slices-startSlice), bins, 8, 24, 3*(slices-startSlice), colorSpace, bitmapInfo, provider, NULL, false, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    //Do NOT CGImageRelease(arrayImage) here: that would drop the only reference
    //and return a dangling CGImageRef. The caller owns the +1 and releases it.
    [imageData release];

    //Pass the CGImageRef with a reference count of 1
    return arrayImage;
}
Model.h
@interface Model : NSManagedObject
{
    Spectrogram *spectrogram;
}

//Function the viewController can call to get the update
- (CGImageRef)newSpectrogramImage;
Model.m
@implementation Model

... there is a function that adds new data to the array and notifies all listeners ...

- (CGImageRef)newSpectrogramImage {
    return [spectrogram newSpectrogramImage];
}
ViewController.h
@interface ViewController : UIViewController
{
}

//the root controller actually alloc's the record, and sets this property when creating this view
@property (nonatomic, retain) Record *currentRecord;
ViewController.m
@implementation ViewController

@synthesize spectrogramView;

- (void)viewDidLoad
{
    [super viewDidLoad];
    [[NSNotificationCenter ... listen for model updates...
}

//now, the real meat
- (void)recordUpdated:(NSNotification*)notification
{
    CALayer *myLayer = self.view.layer;
    CGImageRef spectrogramImage = nil;
    spectrogramImage = [currentRecord newSpectrogramImage];
    myLayer.contents = (id)spectrogramImage; //the layer retains the image
    CGImageRelease(spectrogramImage);        //balance the +1 from newSpectrogramImage
}
I've changed this so many times in the last day trying to hunt down where and why it fails. I've tried passing the CGImageRef instead (but since that isn't an object, I'm worried about making copies of what can be a huge image) and it still fails. It works perfectly in the simulator (it will run for dozens of minutes), but it fails within a minute on the iPhone, or, if I turn on the debug options for the simulator, as soon as the view controller is loaded.
On a side note that might be of some use: this view controller is loaded as a modal view when the phone is turned sideways (works great). However, I have a lot of NSLogs in there and I see this view controller's dealloc called before the main view controller even gets to viewWillDisappear, yet it still runs. And then this controller's dealloc is called again when the phone is turned back and the view disappears.
Note
The version that worked well in the simulator passed CGImageRefs all the way to the viewController, instead of UIImages. I've tried at least 50 different combinations of where to create the UIImage from the CGImage, and what's posted above is just one of them (all of them fail eventually, or immediately). Of note, with the code above, if I do add this to the viewController modelUpdated:
CGSize size = currentModel.spectrogramImage.size;
NSLog(@"width: %f", size.width);
and comment out assigning it to the spectrogramView, the width is reported correctly, so the UIImage is getting passed along; it's just not getting retained (this is how I understand the EXC_BAD_ACCESS error).
Also, recently I received an EXC_BAD_ACCESS on the
self.spectrogramImage = [spectrogram getSpectrogramImage];
line. So I think the error may be inside the Spectrogram class, even though the CGImage and UIImage code was taken from Apple examples.
Fixed notes
I read that setting the contents of the CALayer was a much quicker way to pass a CGImageRef to a view, with no UIImage intermediary. Unfortunately, I only commit working changes, so I can't see every iteration I went through. However, I know that I had something very similar to this several times that kept crashing. The problem was always that, as written, the program would crash with EXC_BAD_ACCESS, and if I upped the reference count (or didn't release it), then it would work perfectly but the object would leak. I still don't understand how that is possible: how the difference between a leak and a bad access can be a reference count of 1 (and not 2 or more).
I can't answer your specific question, but this might help you know what is going on a little bit better. I have a function "logMemUsage" that outputs your memory usage and shows how much it changed since last time. If you call it once a second or so, you can better understand how memory is being used in your app. If it keeps growing, obviously there's a leak, if it goes up and down as you expect it, that's good, if it doesn't go down when you think it should, you'll see it. It's in github here in Utilities.h/.m
It's hard to figure this out from what you've posted, but why not use properties with retain on all your allocated data? For example, you're returning an autoreleased UIImage from getSpectrogramImage, and it gets stored into Model with the call self.spectrogramImage = [spectrogram getSpectrogramImage];, but there isn't a property for spectrogramImage anywhere I can see, even though you're using the self. mechanism. Is it just that you didn't bother to post it? The way it's written it could be getting autoreleased, and then when you try to use it...
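One more thing worth spelling out, since it explains the leak-versus-crash coin flip described above: under the Core Foundation ownership rule, a function whose name starts with new (or contains Create/Copy) returns +1, and exactly one release on the consumer side balances it. Roughly, on the caller side (a sketch):

CGImageRef image = [spectrogram newSpectrogramImage]; //+1: ours to release
myLayer.contents = (id)image;                         //the layer retains it as well
CGImageRelease(image);                                //balance our +1; the layer keeps it alive
//releasing inside newSpectrogramImage AND here is a double release: EXC_BAD_ACCESS
//releasing in neither place leaks the image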
I am working on a web-services data processing app and I am trying to make the app run as quickly as possible. When a certain 3 finger pan gesture is performed, I call a method that sends updated information off to the server to get a new batch of images to update the existing ones with.
So lets say there are 15 images in an array, I filter through them with a 2 finger gesture, and then if I want to change something about them, I can do the 3 finger gesture, and I get that same set back, just tweaked a bit (contrast/brightness, etc.).
What I want, though, is to be able to update the imageView that displays the images as soon as the first image has been retrieved, to give the user a feel for what the rest of the series will look like. But no matter what I try, and no matter how many different threads I implement, I can't get the imageView to update before the entire download is complete. Once the batch download is done (which is handled on a separate thread), the imageView updates with the new images and everything is great.
The first step in the process is this:
if(UIGestureRecognizerStateEnded == [recognize state]){
    [self preDownload:windowCounter Level:levelCounter ForPane:tagNumber]; // Where this method is what gets the first image, and tries to set it to the imageView
    [self downloadAllImagesWithWL:windowCounter Level:levelCounter ForPane:tagNumber]; //And this method goes and gets all the rest of the images
}
This is my preDownload method:
-(void)preDownload:(int)window Level:(int)level ForPane:(int)pane{
    int guidIndex = [[globalGuids objectAtIndex:pane] intValue];
    UIImage *img = [DATA_CONNECTION getImageWithSeriesGUID:[guids objectAtIndex:guidIndex] ImageID:counter Window:window Level:level];

    if(pane==0){
        NSLog(@"0");
        [imageView3 setImage:img];
    }else if(pane==1){
        NSLog(@"1");
        [imageView31 setImage:img];
    }else if(pane==2){
        NSLog(@"2");
        [imageView32 setImage:img];
    }else if(pane==3){
        NSLog(@"3");
        [imageView33 setImage:img];
    }
}
So by separating this out into two different methods (there are no threads being implemented at this point, these methods are being called before all that) I was thinking that after the preDownload method completed, that the imageView would update, and then control would continue on down into the downloadAllImagesWithWL method, but that doesn't appear to be the case.
Am I missing something simple here? What can I do to update my GUI elements before that second method is through running?
You are right. However, the view won't refresh until your code reaches the run loop. You can do two things:
Make your downloadAllImagesWithWL method asynchronous, so that it returns immediately after you call it; your main thread then reaches the run loop, the GUI updates, and the download method reports back to your logic through a callback when it's done.
OR
A simpler, hackier (and bad) solution would be to run the run loop for some time before you call your download method. Something like this: [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]]; It will run the run loop for 0.1 seconds.
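A minimal sketch of the first option with GCD (assuming downloadAllImagesWithWL:... dispatches its own UI updates back to the main thread):

if (UIGestureRecognizerStateEnded == [recognize state]) {
    [self preDownload:windowCounter Level:levelCounter ForPane:tagNumber]; //sets the first image
    //let the main thread reach the run loop so the image view redraws,
    //and fetch the rest of the batch in the background
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self downloadAllImagesWithWL:windowCounter Level:levelCounter ForPane:tagNumber];
    });
}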
When the image is set, the image view will mark itself as needing display. The actual display won't occur until the beginning of the next run loop. In OS X, you can use -display to draw the view immediately, but I don't think Apple created a public method to do this on iOS. However, if the next method simply creates the background thread, then it will return quickly and the display update will probably occur before the thread finishes.
I have been working with Apple's iPhone CoreDataRecipes sample code to learn more about table views and Core Data. I have coded my own test app based on that sample, and it works well except for one thing: when I choose a photo for the 'recipe', whether from the camera or the library, it takes about 15 seconds after I hit "Done" to leave editing mode before control returns to the user. This happens when testing on the device; in the simulator there is still a delay, but only 2-4 seconds.
I tested the "edit/done" button while editing other data but without choosing a photo, and it saves instantaneously, so I'm pretty sure the image is to blame. Below is the code where it leaves editing mode, plus the image processing code. What can I add/change/remove to speed this up? I know these sample code pieces are just proofs of concept, but I can't believe they published an example with such a crappy user experience!
Thanks, as always, for any guidance...let me know if there is any other code you need to see, or you can see the whole sample project here
- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    [super setEditing:editing animated:animated];
    [self updatePhotoButton];
    nameTextField.enabled = editing;
    overviewTextField.enabled = editing;
    [self.navigationItem setHidesBackButton:editing animated:YES];

    if (!editing) {
        NSManagedObjectContext *context = recipe.managedObjectContext;
        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Error in RecipeDetailViewController:setEditing -- %@, %@", error, [error userInfo]);
            abort();
        }
    }
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)selectedImage editingInfo:(NSDictionary *)editingInfo {
    NSManagedObject *oldImage = recipe.image;
    if (oldImage != nil) {
        [recipe.managedObjectContext deleteObject:oldImage];
    }

    NSManagedObject *image = [NSEntityDescription insertNewObjectForEntityForName:@"Image" inManagedObjectContext:recipe.managedObjectContext];
    recipe.image = image;
    [image setValue:selectedImage forKey:@"image"];

    CGSize size = selectedImage.size;
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = 70.0 / size.width;
    } else {
        ratio = 70.0 / size.height;
    }
    CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);

    UIGraphicsBeginImageContext(rect.size);
    [selectedImage drawInRect:rect];
    recipe.thumbnailImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self dismissModalViewControllerAnimated:YES];
}
First, as Gilbert pointed out, example code is not meant for production use and will be slow.
Second, you should not store images in Core Data. The example may show how to do it but it is generally a very bad idea. You should be storing images on disk and then keeping a pointer (file path) to the image in Core Data. There are some exceptions to this rule (very small images) but you should rethink your design.
Lastly, A lot of the slowness you are seeing may not be Core Data related. The image picker code is very slow on its own. I would recommend changing the code to just write to disk and see how slow that is compared to the writing to Core Data. I would be surprised if it was much faster.
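As a comparison point, writing to disk in the image picker callback might look roughly like this (a sketch; "imagePath" is a hypothetical string attribute on the Recipe entity, and globallyUniqueString just gives a collision-proof filename):

NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *fileName = [NSString stringWithFormat:@"%@.jpg", [[NSProcessInfo processInfo] globallyUniqueString]];
NSString *imagePath = [documentsPath stringByAppendingPathComponent:fileName];

NSData *jpegData = UIImageJPEGRepresentation(selectedImage, 0.8f); //0.8 quality keeps the file small
[jpegData writeToFile:imagePath atomically:YES];
[recipe setValue:imagePath forKey:@"imagePath"]; //Core Data stores only the path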
Update
You can store small images inside of Core Data, and my advice from other posts stands, though more so for the desktop than iOS. The reason for this is the cache. On iOS, the cache that Core Data uses is very small. If you store large images in the database, you can easily blow out that cache and cause future calls that should hit the cache to hit the disk instead. This is not a hard-and-fast rule of "don't store binary data" but more a rule of watching your performance; if you are hitting the disk more than you should, binary data could easily be the cause.
Contacts
With regard to contacts, you can be slower because they are doing things differently than you are and they could easily be using private APIs to access the camera. Because it is Apple, they don't necessarily play by the same rules as us.
I'm not an iPhone developer, but generally, example code does not take into account the user experience. Example code shows examples.
In general, you need to perform expensive (long running) operations in additional threads.
Maybe this blog post will help: Respect the Main Thread
I downloaded the project, built it and ran it in the simulator. I couldn't reproduce your problem. I found that the time it took to save an image was visually instantaneous i.e. no longer than the view transition.
If you see the same issue on the original, unmodified project, then you have something else going on. Make sure you have the latest version of the project which should be the one at your link. I know there is at least one really old one floating around because I hit it recently.
I release an image with [myimageview.image release]; but after the app has been in the background and comes to the foreground again, it releases that image again, so it crashes! All my attempts to check whether the app came from the background have not worked.
So how can I check whether an object has already been released, so I don't do it twice?
thanks
chris
EDIT
After several complaints about my bad coding :) here is an example:
First I initialize an array with the paths to a lot of fullscreen images.
There are also around 10 different arrays for 10 different scenes.
When I put the images directly into an array, it needed too much memory from the beginning; and when I released a scene totally, it had to load the whole image array again, which was too slow. So I just load the paths to the images into an array and assign each picture to my image view in a loop at runtime.
Init once:
imageArray_stand = [[NSArray alloc] initWithObjects:
    @"FrankieArmeRaus_0001.jpg",
    ... up to 60 Images
    @"FrankieArmeRaus_0061.jpg", nil];
In a loop that's called every 1/10 second:
if ([myimageview.image retainCount] > 1)
//if (myimageview.image != nil) // does not work = crash
{
    [myimageview.image release];
    myimageview.image = nil;
}
myimageview.image = [UIImage imageNamed:[imageArray_stand2 objectAtIndex:piccounter-1]];
The problem came because when the app went into the background and then into the foreground again, it seemed to release the image twice, so I needed a solution to check for that.
I am happy about any solution that's better (just NOW it works).
Even loading all the images completely into an array needs too much memory, and reassigning at runtime takes too long (1 second for 50 images).
Also, I needed to release and set to nil, because otherwise my memory usage would grow out of limit.
You never, ever, release a property of another object. You release the entire myimageview object, or you just assign to its properties. How those properties are memory managed is the private business of the myimageview object.
Easiest way is to do something like this:
if (someObject != nil)
{
    [someObject release];
    someObject = nil;
}
However, I don't think you should be releasing myimageview.image, assuming that myimageview is a UIImageView. The UIImageView is responsible for managing its image; you shouldn't be messing with its retain count.
Maybe what you really want to do is this:
myimageview.image = nil;
This will cause myimageview to release it and stop pointing to the now-invalid memory.
solved it with
if ([myimageview.image retainCount] > 1) [myimageview.image release];