I have an app that tells me the color of the pixel I touched by reading the screen (like a screenshot) after each touch. To retrieve the pixels, I use a method similar to the one appearing here. But it seems that after each touch the image data is still being held on to (not to mention that hundreds of unwanted screenshots are being saved to my photo album), and I start getting memory warnings shortly before the app finally crashes. My app starts out at 3.5 MB, but each touch increases that figure until it reaches about 100 MB, at which point the app crashes.
QUESTION:
How do I free this data after each touch?
(Here is the link to the source again.)
The provided code frees all its buffers. The memory leak must be elsewhere.
If you want to use a more streamlined way of reading one pixel's color, you could consider the approach suggested in this answer. The idea is to use a very small buffer and draw the view with a transform that shifts the pixel into the range covered by the context.
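A minimal sketch of that approach (the method name is ours; it renders the view's layer into a 1x1 bitmap whose origin has been shifted to the touched point):

    // Reads the color of a single pixel by rendering the view into a 1x1 context.
    // Note: the components come back premultiplied by alpha.
    - (UIColor *)colorOfPixelAtPoint:(CGPoint)point inView:(UIView *)view {
        unsigned char pixel[4] = {0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        // Shift the context so the requested pixel lands at (0, 0).
        CGContextTranslateCTM(context, -point.x, -point.y);
        [view.layer renderInContext:context];
        CGContextRelease(context);
        return [UIColor colorWithRed:pixel[0] / 255.0 green:pixel[1] / 255.0
                                blue:pixel[2] / 255.0 alpha:pixel[3] / 255.0];
    }

Because the buffer lives on the stack and the context is released before returning, nothing accumulates from touch to touch.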
Related
I'm currently working on a 2D Unity project (for mobiles/tablets) in which I download a few heavy pictures using UnityWebRequestTexture.GetTexture(url) and then store them with File.WriteAllBytes(filePath, webRequest.downloadHandler.data).
Later in the project, I need to show a downloaded picture on a RawImage element. To avoid blocking the app, I read the bytes via a FileStream on a separate thread, which works well.
But when I call ((Texture2D) myRawImage.texture).LoadImage(loadedBytes) from the main thread, my Profiler shows a huge spike. Most devices handle it correctly, but on the iPad Mini I often get a MemoryWarning at this very point, which, when repeated, leads to the app being force-closed by iOS.
For reference, the downloaded image's size varies depending on the requesting device. We initially intended to display 2048x1536 pictures this way on the iPad Mini, but we were getting a MemoryWarning on every display. We're now using 750x1334, but we still get MemoryWarnings from time to time.
My question, then: is there a better way to do the LoadImage step? I believe the conversion from byte[] to Texture2D is where my memory problem lies, but I haven't found a workaround.
Alternatively: is there a better way to download or store the image? If the image is in the app's resources from the beginning, I have no trouble displaying it on the same device. Maybe there is a way to download the texture while applying to it the same compression that assets receive when the app is built?
Calling Texture2D.Compress() after loading the texture compresses it by at least 90%.
Reference
I have an iPhone app that, among other things, allows users to store photos. When a new photo is added to the app's data store, I cache a thumbnail version of the image so that the photo thumbnail grids load in a reasonable amount of time.
The problem is that these thumbnails look great on a pre-Retina screen but a little blurry on Retina displays. It's not so bad that the images are unusable, but I would really like images saved with older versions of my app to get the full benefit of the Retina display.
The problem is that re-creating all these thumbnails takes far too long. In my tests, it took about a minute and a half to re-encode a sample database (admittedly a large one) to high-res thumbnails on my iPhone 4. It will be even worse on older hardware.
How can I get around this? A one-time migration seems out of the question, given the performance results above. Another option is to regenerate the thumbnails lazily (i.e. as they're displayed on-screen) and save them to the database at that point. Screens full of old images will be sluggish the first time they're viewed, and then snappier after that.
Are there other approaches to consider? Anyone else faced this problem?
I don't like the idea of trying to convert the images. Users will quickly get impatient and say your app is buggy and takes ages to load.
I think you can solve the situation without re-processing the full-sized images at all.
On older hardware you won't have a Retina display (so there's no need to upsize the images), and if users have a Retina display then they have a fast iPhone or iPod.
I would suggest solving the problem graphically, through how you display the thumbnail images: instead of filling the slot, put a border around the image and show it at its true resolution (don't upscale it), or show 4 images where you normally show 1 (since the Retina screen has 4x the pixels).
Instead of resampling the original massive image, you could also do a bicubic upsample of the thumbnail, making it 4x the size. This will leave it slightly blurry, but it should look better than the iPhone's own scaling, which looks really bad. The upsample would be ultra fast, as it works with a small image.
I can't help you out with a bicubic upsample itself, but there will be some code somewhere, and a rough sketch of a Core Graphics upscale follows.
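For illustration, a minimal sketch of a fast 2x upscale using Core Graphics (the method name is ours; kCGInterpolationHigh is high-quality interpolation, though not guaranteed to be bicubic):

    // Doubles the linear size of a thumbnail (4x the pixels). Fast because
    // the input is small. Scale 1.0 keeps one point equal to one pixel.
    - (UIImage *)upscaledThumbnail:(UIImage *)thumb {
        CGSize newSize = CGSizeMake(thumb.size.width * 2, thumb.size.height * 2);
        UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);
        CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
        [thumb drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }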
Cheers, John.
Screens full of old images will be sluggish the first time they're viewed, and then snappier after that.
It doesn't have to be sluggish.
It's a bit of a pain, but you can do most of your processing on a background thread. Set the thread priority to something low (like 0.1) to avoid making the UI too slow. The easiest way to do this is to set up an NSOperation for each image you need to convert and add them to an NSOperationQueue with maxConcurrentOperationCount = 1.
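A sketch of that setup (pre-ARC, to match the era of the thread-priority API; the Photo class, the photosNeedingNewThumbnails collection, and regenerateThumbnail: are assumptions):

    // One low-priority operation per image; a serial queue keeps the load light.
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue setMaxConcurrentOperationCount:1];
    for (Photo *photo in photosNeedingNewThumbnails) {
        NSInvocationOperation *op = [[NSInvocationOperation alloc]
            initWithTarget:self selector:@selector(regenerateThumbnail:) object:photo];
        [op setThreadPriority:0.1];   // stay out of the UI's way
        [queue addOperation:op];
        [op release];
    }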
If writes are not atomic, then in -applicationDidEnterBackground: or -applicationWillTerminate: (or in something listening for the corresponding notifications), do something like:

    [queue cancelAllOperations];
    for (NSOperation *operation in [queue operations]) {
        [operation setThreadPriority:1];
    }
    [queue waitUntilAllOperationsAreFinished];

You get about 10 seconds or so, which should be enough for the image conversion to finish writing to disk (and thus avoid half-written files). For added protection, check [operation isCancelled] immediately before the write if it might take longer than 10 seconds. Obviously, in -applicationWillEnterForeground:, you should restart the conversion (remembering that some of the images have already been converted).
Concurrency issues are fun to track down...
(Note that [data writeToFile:path atomically:YES] isn't sufficient — it's likely to leave temporary files lying around if the app is killed during the write. I'd recommend storing thumbnails in Core Data if you can, but that might be out of the question for existing apps.)
I'm seeing my app killed by iOS with an out-of-memory message. However, while tracing the app's progress in the Allocations instrument, I see lots of mallocs that seem to occur outside the code I've written.
I'm not seeing any leaks being caught, so I assume these allocations are supposed to be there. Thing is, because I'm not sure about why they have been allocated, I'm not sure what I can do to optimize the app and prevent the OS from jettisoning my app.
Does anyone know why the memory is being allocated, or is there any way for me to find out?
Here are a couple of shots from Instruments showing the mallocs. In the second shot, all of the allocations have the same stack trace.
EDIT
I'm displaying a single large image (1024x768) as the UIView background, then overlaying a smaller UIView (600px square) with some custom drawing, and over the top of those a third UIView (550px square) that contains two 550px-square images overlaid on each other.
I'm guessing that this is not appropriate, and there is probably a better way of achieving the composition of views I need for the app to work.
Should this be possible on the iPad?
I think there's not really much information to go on here - if you add a bit more information about what this view in your app is doing you might get some more informed suggestions.
From the screenshot, it appears that large blocks are being allocated to display an image.
Given that, I'd hazard a guess that you're trying to display some very large images, or your UIView is large, or you have more UIViews in memory than you need to display the current screen.
I guess the easiest way to track down exactly where they're coming from is to disable the part of the application you suspect, run again, and see if the allocations still occur.
EDIT
Are all the images the same size at which you're displaying them? (i.e., are you trying to display a 5-megapixel photo as the 1024x768 background?) If not, you probably need to scale them down to the size at which you display them, or at least closer to it.
If you don't need transparency, make sure all the views are opaque.
I figured out the source of the problem - I was using

    [UIImage imageNamed:@"Someimage"]

to load my images. This, as I'm sure many people are aware, caches the image data. I had enough images of sufficient size to cause my app to be jettisoned.
The problem arose not because of the size of any single image but because of both the size and the number of images I was using. The lesson here: be careful with [UIImage imageNamed:].
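If you don't need the caching behaviour, a non-caching load is one way to avoid the build-up (the resource name and type here are illustrative):

    // Loads the image without adding it to the imageNamed: cache.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"Someimage" ofType:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];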
Thanks for all of the help, chaps!
Mallocs can occur inside other APIs that your app calls (such as loading images or views, playing long sounds, etc.). As a test, you can try changing the size of your images, views, sounds, and other objects by various amounts, and see whether the size of the malloc'd memory tracks any of the changes you've made.
I've got an app I'm working on where we handle a LOT of images at once in a scroll view. (Here's how it looks, each blue block being an image on a scroll view expanding to the right: http://i.stack.imgur.com/o7lFx.png)

To handle the large strain this puts on memory, I've implemented a bunch of techniques, such as reusing image views, which have all worked quite successfully in keeping my memory usage down. Another thing I do, instead of keeping the actual images in memory (which I of course couldn't do for all of them, because that would run out of memory very quickly), is keep only each image's file path in memory and read the image in when the user scrolls to an area of the scroll view near it. However, although this is memory efficient, it causes a LOT of lag in the scroll view because it has to constantly read images from disk.

Basically, right now the app draws to the screen only the visible image views, and while the user scrolls it checks whether it can dequeue another image view so it doesn't have to allocate a new one, and at that point it reads the image into memory. But as I said, this makes the scrolling action very slow, and I can't think of a good way to fix it. Any ideas on a strategy to use? Does anyone know what the native Photos app does to handle this kind of thing? Thanks so much!
I can suggest a simple solution that balances memory and processing. Keep only small images, like thumbnails, in memory, and only about 20 of them. In one project I'm working on, I keep the 20 most recently accessed thumbnail images (100x100), which costs only about 200 KB at any given time; compared with the memory generally available, I think that's good enough.
It also depends on your use case. If users scroll really fast and you don't know where they'll stop, you can keep images even smaller than the thumbnails and resize them to fit when showing them in the UIImageView. When the user stops scrolling for a while, you can start loading the bigger images, and you end up with nicer images; the user may not even notice the process.
I don't think there is a solution that is both maximally fast and uses as little memory as possible. We do have memory to work with; maybe not much, but enough if we use it smartly.
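One way to sketch the "20 recent thumbnails" idea is with NSCache (self.thumbnailCache is assumed to be an NSCache with countLimit set to 20, and scaledImage:toSize: is a hypothetical helper):

    // Returns a cached 100x100 thumbnail, loading and scaling only on a miss.
    - (UIImage *)thumbnailForPath:(NSString *)path {
        UIImage *thumb = [self.thumbnailCache objectForKey:path];
        if (!thumb) {
            UIImage *full = [UIImage imageWithContentsOfFile:path];
            thumb = [self scaledImage:full toSize:CGSizeMake(100, 100)];
            if (thumb) [self.thumbnailCache setObject:thumb forKey:path];
        }
        return thumb;
    }

NSCache also evicts its contents automatically under memory pressure, which fits this use case well.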
Slow scrolling performance might mean that you're blocking the main thread while loading images. In that case, the scrolling animation won't continue until the images are loaded, which would indeed cause pretty choppy scrolling performance.
It would be better to lazily load requested images in the background, while the main thread continues to handle the scrolling animation. A library that provides this functionality (among other things) is the 'three20' library. See the Tidbits document, and scroll down to the bottom where the 'TTImageView' class is described.
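Without pulling in a library, a minimal sketch of lazy background loading with GCD might look like this (ImageCell and its imagePath property are assumptions standing in for your reusable view):

    // Load from disk on a low-priority queue; apply the image on the main thread.
    - (void)loadImageAtPath:(NSString *)path forCell:(ImageCell *)cell {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            dispatch_async(dispatch_get_main_queue(), ^{
                // The cell may have been reused while we were loading; check first.
                if ([cell.imagePath isEqualToString:path]) {
                    cell.imageView.image = image;
                }
            });
        });
    }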
I had a similar issue with a PDF viewer. The recommended way to do this is to use as low-res an image as you can get away with, and if you are allowing the user to blow the image up or zoom, to have two or three versions of that image, increasing the resolution as you go.
Put as much work as you can get away with in scrollViewDidEndDecelerating: (like loading in the higher-res images vodkhang talks about) rather than processing loads in scrollViewDidScroll:. Recycle views that go out of scope, as you have said, and beware of autoreleased, context-based image-creation functions.
Load images in on background threads intelligently (based on the scroll view's offset position and zoom level), and think about using CALayer/CATiledLayer drawing for larger images.
Three20 (an open-source iOS lib) has a great photo viewer that can be subclassed; it has thumbnail navigation, large-image paging, caching, and gestures right out of the box.
I take screenshots of the iPhone camera view using the UIGetScreenImage() method, and I want to turn this image sequence into a video. But memory is limited, so I think writing the image data to the Documents directory may be a good choice for me. So when I capture an image with UIGetScreenImage(), I start a new thread to write the image data to Documents, but this delays the thread used to capture the images.
I don't know how to deal with this issue. Could you give me some advice? Any reply will be appreciated.
Foremost, I'd stay away from UIGetScreenImage() if you can, at least if you plan on releasing your app to the App Store. UIGetScreenImage() is not a public API, and it's one of the "forbidden" APIs that Apple is most likely to reject your app for using.
If you want a screenshot (i.e. what the user sees on the device's screen), I think the approach you've laid out works well, but you're going to need to get a graphical representation of the UIView that contains all your other views.
All UIView code must run on the main thread, though. You can do your file I/O on a background thread, but the actual capture of the image will have to happen on the main thread.
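For reference, a public-API snapshot of a view hierarchy looks roughly like this (containerView stands in for the view holding all your other views; run it on the main thread):

    // Renders the container view's layer into an image context.
    UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, YES, 0);
    [containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();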
How about NSOperationQueue?
Whenever the screen is captured and an image is created, add an NSOperation that writes the image to the Documents directory to an NSOperationQueue.
I also think you should experiment to find the best maximum concurrent operation count (NSOperationQueue's setMaxConcurrentOperationCount:); the right value depends on your app.
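A sketch of that pattern (documentsDirectory and frameIndex are assumed variables, and NSBlockOperation requires iOS 4 or later; note the UIGetScreenImage() caveat above):

    // Capture on the main thread; queue the PNG encode and disk write.
    NSOperationQueue *writeQueue = [[NSOperationQueue alloc] init];
    [writeQueue setMaxConcurrentOperationCount:2];   // experiment with this value

    CGImageRef screen = UIGetScreenImage();          // private API; caller releases
    UIImage *frame = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    NSString *path = [documentsDirectory stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"frame-%05d.png", frameIndex++]];
    [writeQueue addOperation:[NSBlockOperation blockOperationWithBlock:^{
        // Encoding and writing both happen off the main thread.
        [UIImagePNGRepresentation(frame) writeToFile:path atomically:YES];
    }]];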