I am designing this app and there's a section where the users are allowed to draw with their fingers. As I provide an undo for the operation, when the user starts drawing I have to quickly grab the current drawing content and store it somewhere.
I first tried to store the undo state as a CGLayer and also as an image, but my app's memory usage went up from 7 to 19 MB. With 19 MB I am relatively safe, because 24 MB appears to be the theoretical limit beyond which things start to be dangerous. The problem is that I have another section of my app that requires lots of memory, and if I run it the memory peaks from 19 to 28 MB, which is too dangerous to risk.
Then I decided to save the image to disk. To prevent the little gap that happens when the image has to be saved as the user fires touchesBegan, I refined the save-to-disk method to the limits of sanity, and now I barely feel any gap. I say barely, because I still feel a hair of a gap, I would say <0.1 s, before the line starts drawing.
What I do is fire a queue operation to manage the file saving.
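Roughly, that operation looks like this, simplified (the method and path names here are just placeholders):

    // Hypothetical sketch of the queue operation: encode and write off the main
    // thread so touchesBegan isn't blocked by the disk write.
    - (void)saveUndoSnapshot:(UIImage *)snapshot {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSData *data = UIImagePNGRepresentation(snapshot);
            NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"undo_snapshot.png"];
            [data writeToFile:path atomically:YES];
        });
    }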
Are there any other ways you can envision to improve this?
thanks
Use mmap and CGBitmapContextCreate to create images that are backed by a file; the kernel will lazily page in and out parts of the file as they are needed.
Combine this with rjobidon's suggestion by snapshotting every so often and you should have a robust and speedy undo system.
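A minimal sketch of the mmap idea, assuming a fixed canvas size (error handling omitted; the backing file path and pixel format are assumptions):

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    size_t width = 1024, height = 1024;
    size_t bytesPerRow = width * 4;                 // 32-bit RGBA
    size_t length = bytesPerRow * height;

    // Back the bitmap with a file; on iOS use something under NSTemporaryDirectory().
    int fd = open("/tmp/undo_backing", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, length);                          // make the file large enough
    void *buffer = mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    // Draw the undo snapshot into ctx; the kernel pages the backing file in and out as needed.
    // When finished: CGContextRelease(ctx); CGColorSpaceRelease(colorSpace);
    //                munmap(buffer, length); close(fd);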
I suggest storing the image as vector data; this way you can replay and undo very quickly, and cut the memory used by a factor of a thousand. For example, you could store only:
Gesture coordinates (x, y)
Gesture type (touchBegan, touchEnded)
Pen changes (color, width, effect)
You also have a canvas to render the current image.
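For illustration, a hypothetical stroke model along those lines (none of these names come from the answer; the points would be collected in touchesMoved):

    @interface Stroke : NSObject
    @property (nonatomic, retain) NSMutableArray *points;   // boxed CGPoints for one gesture
    @property (nonatomic, retain) UIColor *color;            // pen changes
    @property (nonatomic, assign) CGFloat width;
    @end

    // Undo becomes trivial: drop the last stroke and let drawRect: replay the rest.
    - (void)undo {
        [self.strokes removeLastObject];     // self.strokes: hypothetical NSMutableArray of Stroke objects
        [self.canvasView setNeedsDisplay];
    }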
Good luck!
How about doing the storing at the end of the previous operation? Then the beginning of each stroke will be instant.
You could even do it on a timer so it only saves an undo snapshot if you pause drawing for a few seconds. That would mean the undo takes the user back to the previous stage of the drawing, rather than just the previous stroke, which may be desirable.
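One possible shape for the pause-based snapshot, as a sketch (snapshotTimer and saveUndoSnapshot are hypothetical names):

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        [self.snapshotTimer invalidate];
        self.snapshotTimer = [NSTimer scheduledTimerWithTimeInterval:2.0
                                                              target:self
                                                            selector:@selector(saveUndoSnapshot)
                                                            userInfo:nil
                                                             repeats:NO];
    }

    - (void)saveUndoSnapshot {
        // Only reached if the user has paused drawing for 2 seconds;
        // write the snapshot of the canvas here.
    }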
Good morning. Let me first explain the scene behind my question.
Currently I'm working on an image processing app in which I apply filter effects. There are a number of examples of this on the market; I checked most of them and found that none of them causes a delay while applying an effect. Most of them cut the delay by scaling the UIImage.
But in my case I don't want to lose pixels, yet I still want to cut the delay.
In my app it takes about 5 seconds to apply any filter effect.
For filtering I'm using,
[<Image_View> setImage:[<Image Name> performSelector:@selector(<effect name>)]];
where the selector is a method that resides in the UIImage+FiltrrCompositions file.
Any help is appreciated!
Operations on large images take time, depending on the image size; that's nothing new.
You want to
keep your app responsive, as well as
maintain large, high-quality images.
If you really want to manage this challenge, I'd recommend that you structure your program logic to support
downscaled thumbnail images, as well as
background threads for operations on the full-size image.
This will make your code more complex, but there's no shortcut for speeding up a complex operation on complex data. It's up to you.
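A rough sketch of the two-tier approach (imageWithFiltrrEffect, thumbnail and fullSizeImage are assumptions, not names from your category):

    - (void)applyFilter {
        // 1. Apply the effect to a small preview so the UI responds instantly.
        self.imageView.image = [self.thumbnail imageWithFiltrrEffect];

        // 2. Process the full-resolution image off the main thread.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            UIImage *filtered = [self.fullSizeImage imageWithFiltrrEffect];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.imageView.image = filtered;   // swap in the full-quality result when ready
            });
        });
    }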
For more information about multithreading and background tasks on iOS, I recommend this tutorial.
I currently have an app where I do a lot of image manipulation. I basically take an image that is 320x320 (or 640x640 on Retina) and scale it down to 128x128 (or 256x256 on Retina) before rounding off its corners and applying a glossy highlight. Everything is done using Core Graphics drawing.
At any one time there could be around 600 images that need this processing so I do around 40 on app launch using a background thread and cache them into a FIFO queue. When an image not in the cache needs processing, I do so and add it to the end of the cache, discarding the first cached image. If that first image is needed again it goes through the same process.
What I would like to know is if it would make more sense, and is ultimately more efficient, to save the discarded images to disk rather than recreate them from scratch the next time they are needed, as I could instead just read them from disk.
These images are also displayed using a CALayer, and therefore there may be some overhead when the layer's contents are set, because of the conversion from UIImage to CGImage. If I store them on disk, I believe they can be read directly as a CGImage?
Any ideas and input on improving the efficiency of this process will be sincerely welcomed.
My personal choice would be to use a disk cache.
However, you say 'which is more efficient' - what do you mean?
If you mean faster then the disk cache is probably going to win.
If you mean more space efficient then recreating them will win.
If you mean lower memory usage then it entirely depends on your implementation!
You will have to try it and see :)
However, the advantage of the disk solution is that the second time your app starts up, it will already have done the processing so will start faster. That's why I'd use the disk.
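A sketch of the disk round-trip (cachesDirectory, processedImage and myLayer are placeholders):

    #import <ImageIO/ImageIO.h>

    // Write the processed image out once:
    NSString *path = [cachesDirectory stringByAppendingPathComponent:@"thumb_123.png"];
    [UIImagePNGRepresentation(processedImage) writeToFile:path atomically:YES];

    // Later, read it back as a CGImage without going through UIImage:
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    myLayer.contents = (id)cgImage;    // assign straight to the CALayer
    CGImageRelease(cgImage);
    CFRelease(source);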
In my experience, saving and then reading from disk is faster. I was getting memory warnings when recreating the images over and over instead of saving and reading them. But the only way to know for sure is to try. I was using around 1000 images, so it made sense in my case to use the disk.
It's also worth trying the GitHub libraries that download and cache UIImage/NSData from the Internet.
It may by SDWebImage(https://github.com/rs/SDWebImage) or APSmartStorage (https://github.com/Alterplay/APSmartStorage).
APSmartStorage helps to get data from network and automatically caches data on disk or in memory in a smart configurable way. Should be good enough.
I've got an app I'm working on where we handle a LOT of images at once in a scroll view. (Here's how it looks, each blue block being an image on a scroll view expanding to the right: http://i.stack.imgur.com/o7lFx.png) To handle the large strain this puts on memory, I've implemented a bunch of techniques, such as reusing image views, which have all worked quite successfully in keeping my memory usage down.

Another thing I do is, instead of keeping the actual image in memory (which I of course couldn't do for all of them, because that would run out of memory very quickly), I only keep the image's file path in memory and then read the image when the user scrolls to an area of the scroll view near that image. However, although this is memory efficient, it's causing a LOT of lag in the scroll view because it has to constantly read images from disk.

I can't think of a good solution for this. Right now the app draws only the visible UIImageViews, and while the user scrolls the app checks whether it can dequeue another image view so it doesn't have to allocate a new one, and at that point it reads the image into memory. But as I said, this makes the scrolling very slow. Any ideas on a strategy to fix this? Does anyone know how the native Photos app handles this kind of thing? Thanks so much!
I can suggest a simple solution that balances memory and processing: keep only small images, like thumbnails, in memory, and keep only about 20 of them. In one project I'm doing, I keep the 20 most recently accessed thumbnail images (100 x 100), which doesn't cost a lot of memory. I believe it costs about 200 KB at any time, which, compared to the memory generally available, I think is good enough.
It also depends on your use case: if the user scrolls really fast and you don't know where they will stop, you can use images even smaller than the thumbnails and resize them to fit when you show them in the UIImageView. When the user stops scrolling for a while, you can start loading the bigger images, and then you have nicer images. The user may not even notice the process.
I don't think there is a solution that is both as fast as possible and uses as little memory as possible. We do have memory to work with; maybe not a lot, but enough if we use it smartly.
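A minimal sketch of keeping only ~20 recent thumbnails around (NSCache handles the eviction; thumbnailForImageAtPath: is a hypothetical helper):

    NSCache *thumbnailCache = [[NSCache alloc] init];
    thumbnailCache.countLimit = 20;                      // roughly twenty 100x100 images

    UIImage *thumb = [thumbnailCache objectForKey:imagePath];
    if (thumb == nil) {
        thumb = [self thumbnailForImageAtPath:imagePath];
        [thumbnailCache setObject:thumb forKey:imagePath];
    }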
Slow scrolling performance might mean that you're blocking the main thread while loading images. In that case, the scrolling animation won't continue until the images are loaded, which would indeed cause pretty choppy scrolling performance.
It would be better to lazily load requested images in the background, while the main thread continues to handle the scrolling animation. A library that provides this functionality (among other things) is the 'three20' library. See the Tidbits document, and scroll down to the bottom where the 'TTImageView' class is described.
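If you'd rather not pull in three20, a hand-rolled sketch of the same idea with GCD (tileView and its imagePath property are hypothetical):

    NSString *requestedPath = filePath;   // remember which file this recycled view was asked to show
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [UIImage imageWithContentsOfFile:requestedPath];   // disk read off the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            // Only assign if the view hasn't been reused for a different image in the meantime.
            if ([tileView.imagePath isEqualToString:requestedPath]) {
                tileView.imageView.image = image;
            }
        });
    });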
I had a similar issue with a PDF viewer. The recommended way to do this is to use as low-resolution an image as you can get away with, and if you are allowing the user to blow the image up/zoom, have two or three versions of that image, increasing the resolution as you go.
Put as much code as you can get away with in the didDecelerate method (like loading higher-resolution images, as vodkhang talks about), rather than doing heavy processing in didScroll. Recycle views that go out of scope, as you have said, and beware of autoreleased context-based image creation functions.
Load images on background threads intelligently (based on the scroll view's offset position and zoom level), and think about using CALayer/CATiledLayer drawing for larger images.
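A sketch of splitting the work between the two delegate callbacks (the helper methods are assumptions):

    - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
        // Keep this as cheap as possible: only recycle and reposition views here.
        [self recycleOffscreenViews];
    }

    - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView {
        // Scrolling has settled: now load the higher-resolution images for what is visible.
        [self loadFullResolutionImagesForVisibleViews];
    }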
Three20 (an open source iOS lib) has a great photo viewer that can be subclassed; it has thumbnail navigation, large image paging, caching and gestures right out of the box.
I am trying to improve the performance of scrolling in our app. I have followed all of the generally accepted advice (draw it yourself with CG, the cell is opaque, no subviews, etc.) but it still stutters sometimes when we have background CPU and network activity.
One solution stated here:
http://www.fieryrobot.com/blog/2008/10/08/more-glassy-scrolling-with-uitableview/
is to cache bitmap snapshots of the cells, which we tried. But the bitmaps were fuzzy and took up a ton of memory (~a few hundred kb each).
One suggestion in the comments of that link is to cache the cells using CGLayer or CALayer(?) because they go to the graphics card's memory. So, a few questions:
1) Has any tried this? Sample code?
2) How much memory does the iphone/ipod touch graphics card have? Does this make any sense?
3) Any other suggestions for speeding things up?
More information
I used the CPU sampler (on the phone) and systematically eliminated things from the cell to figure out the problem. A few things:
1) It isn't the setup of the cell. If I remove just the drawing calls (drawInRect etc.) but leave the setup, it is glassy.
2) It isn't the drawing of the smallest image (25x25 png), if I put that in it is fine.
3) If I add the second or third image (the first is a big 320x100 background, 4 kB; the other is a 61x35 button image, 4 kB) it stutters. I am grabbing both UIImages via a class method, so they are cached.
4) The text is also a problem. It looks like my drawRect: spends 75% of its time in the three NSString drawInRect methods I am using. Like:
[mytext drawInRect:drawrect withFont:myFont lineBreakMode:UILineBreakModeTailTruncation];
Those calls seem to go through WebCore; perhaps that causes some of the stutters? I need to be able to format two lines of text and one small paragraph of text, and I need the ability to truncate the text with ellipses. Can I perform these out of the critical path and cache them? Maybe I can do that part with a layer?
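For instance, something like the following could render the text once and cache the result, so drawRect: only has to blit a bitmap (a sketch; the method name is made up, mytext and myFont as above):

    - (UIImage *)renderedTextImageWithSize:(CGSize)size {
        UIGraphicsBeginImageContext(size);
        [mytext drawInRect:CGRectMake(0, 0, size.width, size.height)
                  withFont:myFont
             lineBreakMode:UILineBreakModeTailTruncation];
        UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return rendered;   // cache this per cell and draw it instead of the strings
    }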
CGLayers are not cached on the GPU, they are used during the process of drawing Core Graphics elements to a context. CALayers do have their visual contents cached on the GPU. This can "hide" some of your memory usage, but you're still going to run into memory problems if you hold on to a lot of CALayers.
Honestly, if you've followed the table view best practices, as described by Loren Brichter and others, of drawing all of your content via Core Graphics in one layer, making your cell opaque, and obeying the cell reuse mechanism of the table view, there isn't much more you can do. Overloading the CPU on the iPhone will cause stuttering of your scrolling, no matter how optimized you can make it. The inertial scrolling animation does require some CPU power to run.
One last thing to make sure of is that the background CPU and network processes you refer to really are running on a background thread. Anything running in the main thread will cause your interaction methods to pause while that task is processing, potentially adding to the choppiness of your scrolling.
Before you go too deep into the optimisation, are you using the cell reuse mechanism ([tableView dequeueReusableCellWithIdentifier:]) for creating UITableViewCells in your delegate's tableView:cellForRowAtIndexPath: ?
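For reference, the usual reuse pattern looks roughly like this:

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        static NSString *CellIdentifier = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                           reuseIdentifier:CellIdentifier] autorelease];
        }
        // Configure the cell for this row...
        return cell;
    }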
Also, have you run the code in the Simulator using Instruments' Activity Monitor? Set the interval to 1 ms (I found the default - 10 ms - too big for this) and see where most of your time is spent when doing the scrolling.
For the last few weeks I've been tearing my hair out trying to get a tiling mechanism working on an iPhone. I need to scale and crop images of about 150 MB so that they can be saved as tiles that will be requested by a scroll view, allowing the user to see the image in high resolution.
The problem is that these images are really pushing the bounds of what an iPhone can handle. It seems quite easy to scale these huge images down to 1000 or so across and do that tiling, but for large zoom levels I need to scale it mid-way, say 4000 across and that's too big. So I hit upon the idea of making medium sized blocks from the full sized image and tiling each of those and the medium zoom.
By creating an autorelease pool around the inner loops and draining it after each cycle, I can mostly keep the memory under control, but sometimes, seemingly at random, memory gets leaked, or at least not drained. I'm doing all this on a secondary thread, and when it gets back to the first function in that thread I release the thread's own autorelease pool, and only then do the last memory artifacts get cleared. It doesn't seem to bother the simulator, but the iPhone is much less forgiving and it crashes before it can complete the whole tiling process. The cropping code I am using is from Hive05:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Has anyone else had to deal with such massive images before? Is pre-generating tiles the best way to go? Any suggestions on why some loops would increase the memory and some not, or how to force every auto-released thing to clear on the inner pool instead of waiting for the outer pool?
Thanks for reading this far.
Forgot to add: these images are TIFFs, so perhaps reading the bitmap info directly would be better than scaling and cropping the entire thing.
First of all, I have serious doubts that a 150 MB image would fit in the device's memory, even if we're talking about a 3GS. That one has at most about 128 MB of memory available for 3rd party apps. Watch the device console messages and look for memory warnings; I'd guess you'll see the app emitting them before crashing, when it tries to load your image. Reading the bitmap info in chunks seems more sensible, as you'll be managing smaller sections at a time. I don't think Cocoa has a random-access file API, so you'll have to resort to a C function.
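A sketch of chunked reading with the C stdio API (the chunk size and tiffPath are arbitrary placeholders):

    #include <stdio.h>

    FILE *f = fopen([tiffPath fileSystemRepresentation], "rb");
    size_t chunkSize = 1024 * 1024;              // 1 MB at a time
    void *buffer = malloc(chunkSize);
    size_t bytesRead;
    while ((bytesRead = fread(buffer, 1, chunkSize, f)) > 0) {
        // Process only this chunk (e.g. hand these scanlines to the tiler), then move on.
    }
    free(buffer);
    fclose(f);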
I've managed to write the loops that cycle through tiles of 1024x1024, and my iPhone 3G is able to finish the processing. It takes over 30 minutes though, so it isn't great, but that's what you get for working with a 150 MB TIFF on a cellphone.
To keep memory usage low I had to drain the autorelease pools after each iteration. Apple Tech Support pointed out that since the iPhone is a reference-counted environment rather than a garbage-collected one, it is better to create a new NSAutoreleasePool at the start of each inner loop and drain it at the end of that iteration than to create one pool before the loops start, drain it many times, and release it after the loops are done. Until I made that change my app would crash the iPhone but run fine on the simulator.
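The pattern, roughly, in the pre-ARC style this code base uses (tileRows and tileCols are placeholders):

    for (int row = 0; row < tileRows; row++) {
        for (int col = 0; col < tileCols; col++) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];   // fresh pool per tile
            // ... crop, scale and write out this one tile ...
            [pool drain];   // autoreleased images from this iteration are freed here
        }
    }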