Saving a sequence of images on iPhone

I take screenshots of the iPhone camera view using the UIGetScreenImage() method, and I want to turn this sequence of images into a video. But memory is limited, so I think writing the image data to the Documents directory may be a good choice for me. So when I capture an image with UIGetScreenImage(), I start a new thread to write the image data to Documents, but this delays the thread that captures the images.
I don't know how to deal with this issue. Could you give me some advice? Any reply will be appreciated.

Foremost, I'd stay away from UIGetScreenImage() if you can, at least if you plan on releasing your app to the App Store. UIGetScreenImage() is not a public API, and it's one of the most likely "forbidden" APIs that Apple would reject your app for using.
If you want a screenshot (i.e. what the user sees on the device's screen), I think the approach you've laid out works well, but you're going to need to render a graphical representation of the UIView that contains all your other views.
All UIView code must run on the main thread, though. You can do your file I/O on a background thread, but the actual capture of the image will have to happen on the main thread.
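A minimal sketch of that split might look like the following (untested; containerView, captureQueue, and frameIndex are names I'm assuming exist on your class, with captureQueue being a serial dispatch queue):

    // Must run on the main thread: rendering live UIKit views is not
    // safe from background threads.
    - (void)captureFrame
    {
        UIGraphicsBeginImageContextWithOptions(self.containerView.bounds.size, YES, 0);
        [self.containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSData *data = UIImagePNGRepresentation(frame);
        NSUInteger index = self.frameIndex++;

        // Only the file I/O moves off the main thread.
        dispatch_async(self.captureQueue, ^{
            NSString *dir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                 NSUserDomainMask, YES) lastObject];
            NSString *name = [NSString stringWithFormat:@"frame-%05lu.png", (unsigned long)index];
            [data writeToFile:[dir stringByAppendingPathComponent:name] atomically:YES];
        });
    }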

How about NSOperationQueue?
Whenever the screen is captured and an image is created, an NSOperation that writes the image to the Documents directory is added to an NSOperationQueue.
I also think you should experiment to find the best value for the maximum concurrent operation count (NSOperationQueue's setMaxConcurrentOperationCount:).
The right count depends on your app.
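As a rough sketch (capturedImage and fileName are illustrative names, not from the question):

    // One queue for all the image writes; created once, e.g. in init.
    NSOperationQueue *writeQueue = [[NSOperationQueue alloc] init];
    writeQueue.maxConcurrentOperationCount = 1; // tune this value for your app

    // Then, whenever a screen image has been captured:
    [writeQueue addOperationWithBlock:^{
        NSData *data = UIImagePNGRepresentation(capturedImage);
        NSString *dir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                             NSUserDomainMask, YES) lastObject];
        [data writeToFile:[dir stringByAppendingPathComponent:fileName] atomically:YES];
    }];

With a count of 1 the writes are serialized, which keeps disk contention predictable; a higher count may or may not help on real hardware, which is why it's worth measuring.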

Related

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

I'm looking for help with a performance issue in an Objective-C based iOS app.
I have an iOS application that captures the screen's contents using CALayer's renderInContext method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for research purposes on usability. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc... The content of the Web view is not under my control - it is arbitrary content from the Web.
This setup is working but as you might imagine, it's not buttery smooth. Since the layer must be rendered on the main thread, there's more UI contention than I'd like. What I'd like to do is to have a setup where the responsiveness of the UI is prioritized over the screen capture. For instance, if the user is scrolling the Web view, I'd rather drop frames on the recording than have a terrible scrolling experience.
I've experimented with several techniques, from dispatch_source coalescing to submitting the frame capture requests as blocks to the main queue to CADisplayLink. So far they all seem to perform about the same. The frame capture is currently being triggered in the drawRect of the screen's main view.
What I'm asking here is: given the above, what techniques would you suggest I try to achieve my goals? I realize the answer may be that there is no great answer... but I'd like to try anything, however wacky it might sound.
NOTE: Whatever the technique, it needs to be App Store friendly. I can't use something like the CoreSurface hack that Display Recorder used/uses.
Thanks for your help!
"Since the layer must be rendered on the main thread" this is not true, as long as you don't touch UIKit.
Please see https://stackoverflow.com/a/12844171/136305
Maybe you can record at half resolution to speed up things, if that fits the requirements?
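A sketch of what the linked answer describes, combined with the half-resolution idea (all names illustrative; the layer tree must not be mutated while you render it):

    // Render the layer into an offscreen bitmap on a low-priority queue,
    // at half resolution, leaving the main thread free for scrolling.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        CGSize half = CGSizeMake(fullSize.width / 2, fullSize.height / 2);
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, half.width, half.height, 8, 0,
                                                 rgb, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(rgb);
        CGContextScaleCTM(ctx, 0.5, 0.5);
        [targetLayer renderInContext:ctx]; // no UIKit calls here, only Core Animation
        CGImageRef frame = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        // ...hand 'frame' to the AVFoundation writer, then:
        CGImageRelease(frame);
    });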

Updating saved images for Retina Display

I have an iPhone app that, among other things, allows users to store photos. When a new photo is added to the app's data store, I cache a thumbnail version of the image so that the photo thumbnail grids load in a reasonable amount of time.
The problem is that these thumbnails look great on a pre-Retina Display screen, but they look a little blurry on RD displays. It's not so bad that the images are unusable, but I would really like to be able to get the full benefit of Retina Display for images users saved with older versions of my app.
The problem is that re-creating all these thumbnails takes far too long. In my tests, it took about a minute and a half to re-encode a sample database (admittedly a large one) to high-res thumbnails on my iPhone 4. It will be even worse on older hardware.
How can I get around this? Doing a one-time migration seems out of the question, given the performance results above. The other option is to shrink the thumbnails lazily (i.e. as they're displayed on-screen) and save them to the database at that point. Screens full of old images will be sluggish the first time they're viewed, and then snappier after that.
Are there other approaches to consider? Anyone else faced this problem?
I don't like the idea of trying to convert the images.
Users will quickly get impatient and say your app is buggy and takes ages to load.
I think you can solve this without any re-processing of the full-sized images.
On older hardware there is no Retina display, so there's no need to upsize the images; and if the user has a Retina display, they also have a fast iPhone or iPod.
I would suggest solving the problem graphically, in how you display the thumbnail images: instead of filling the screen, put a border around each image and show it at its true resolution (don't upscale it), or show four images where you would normally show one (since the Retina screen has four times the pixels).
Alternatively, instead of resampling the original massive image, you could do a bicubic upsample of the thumbnail, making it 4x the size. That will be slightly blurry, but it should look better than the iPhone's own scaling, which looks really bad, and the upsample will be ultra fast since it works on a small image.
I can't give you the exact upsampling code off the top of my head, but something like the sketch below should get you started.
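(A sketch, assuming your thumbnails are UIImages; kCGInterpolationHigh asks Core Graphics for its best filter, which is what you want here.)

    // Upscale a small thumbnail 2x in each dimension (4x the pixels) with
    // high-quality interpolation; fast because the input image is tiny.
    - (UIImage *)retinaThumbFromThumb:(UIImage *)thumb
    {
        CGSize target = CGSizeMake(thumb.size.width * 2.0, thumb.size.height * 2.0);
        UIGraphicsBeginImageContextWithOptions(target, YES, 1.0);
        CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
        [thumb drawInRect:CGRectMake(0, 0, target.width, target.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }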
Cheers, John.
"Screens full of old images will be sluggish the first time they're viewed, and then snappier after that."
It doesn't have to be sluggish.
It's a bit of a pain, but you can do most of your processing in a background thread. Set the thread priority to something low (like 0.1) to avoid making the UI too slow. The easiest way to do this is to set up an NSOperation for each image you need to convert and add them to a NSOperationQueue with maxConcurrentOperationCount=1.
If writes are not atomic, then in -applicationDidEnterBackground: or -applicationWillTerminate: (or in something listening for the corresponding notifications), do something like [queue cancelAllOperations]; for (NSOperation *operation in [queue operations]) { [operation setThreadPriority:1]; } [queue waitUntilAllOperationsAreFinished];. You get about 10 seconds or so, which should be enough for the image conversion to finish writing to disk (and thus avoid half-written files). For added protection, check [operation isCancelled] immediately before the write if it might take longer than 10 seconds. Obviously, in -applicationWillEnterForeground:, you should restart the conversion (remembering that some of the images have already been converted).
Concurrency issues are fun to track down...
(Note that [data writeToFile:path atomically:YES] isn't sufficient — it's likely to leave temporary files lying around if the app is killed during the write. I'd recommend storing thumbnails in Core Data if you can, but that might be out of the question for existing apps.)
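Spelled out as an app-delegate method, that inline snippet becomes something like this (a sketch; conversionQueue is assumed to be the NSOperationQueue running your conversions):

    // Cancel pending conversions, then give any in-flight writes full
    // priority so they finish inside the ~10 s grace period before
    // the app is suspended.
    - (void)applicationDidEnterBackground:(UIApplication *)application
    {
        [self.conversionQueue cancelAllOperations];
        for (NSOperation *operation in [self.conversionQueue operations]) {
            [operation setThreadPriority:1.0];
        }
        [self.conversionQueue waitUntilAllOperationsAreFinished];
    }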

Handling many image files in scrollview on iPhone similar to photos app

So I've seen this question asked before, and in fact I asked it last night, but I thought I'd give it another go to see if I could get any other unique views on the problem.
The problem: I have an app with a large number of UIImageViews (with images downloaded to disk) in a scroll view, which of course faces two large problems: memory use and performance. In my app, memory use isn't so much a problem, because I'm employing techniques such as dequeuing and reusing image views. But performance is another thing entirely. Right now, as a memory-saving measure, I only store image file paths in memory, because it would be ridiculous to store the images themselves. The problem with this is that reading from disk takes more time than reading from memory, and it slows down scrolling on the scroll view immensely.
So, what kind of techniques do any of you suggest for something like this? I've seen three20 but don't want to use it, because I need high customizability in my view and it just won't do. The image files are only thumbnail-sized, so there is no scaling or excess size. There's got to be an intuitive way to handle this. The built-in Photos app handles thousands of photos perfectly, with low memory use and slick, smooth scrolling.
Fundamentally, the problem is that you're probably doing a bunch of disk I/O on your UI thread, which is basically guaranteed to cause performance problems.
You should consider loading your images on a background thread and updating the image views on the main thread when the images are loaded. Depending on your use case you can get more or less clever about how far you preload in advance, etc, so you can have images ready. (There might be some usable source code or even Apple sample code out there that does something like this, but I don't know of it off the top of my head.)
You may notice that some applications (not sure about the Photos app) have an intermediate stage where they load a very small thumbnail-size image for every photo and scale it up to the render size as a placeholder until the full-size version is loaded. If the user scrolls past that image before the full size is loaded, the visible effect is nearly the same as if the image had been there all along.
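A minimal sketch of the background-load pattern (the tag check stands in for whatever reuse bookkeeping your cells do; placeholderThumb and path are assumed names):

    // Show the tiny cached placeholder immediately, then load the real
    // file off the main thread and swap it in when it arrives.
    imageView.image = placeholderThumb;
    NSInteger tagAtRequest = imageView.tag;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        dispatch_async(dispatch_get_main_queue(), ^{
            // The view may have been reused for another index by now.
            if (imageView.tag == tagAtRequest) {
                imageView.image = image;
            }
        });
    });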

iPhone: Reading many images quickly

I've got an app I'm working on where we handle a LOT of images at once in a scroll view. (Here's how it looks, each blue block being an image in a scroll view extending to the right: http://i.stack.imgur.com/o7lFx.png) To handle the large strain this puts on memory, I've implemented a bunch of techniques, such as reusing image views, which have all worked quite successfully in keeping my memory usage down.

Another thing I do is, instead of keeping the actual images in memory (which I of course couldn't do for all of them, because that would run out of memory very quickly), I only keep each image's file path in memory and then read the image from disk when the user scrolls to an area of the scroll view near it. However, although this is memory-efficient, it causes a LOT of lag in the scroll view because it constantly has to read images from disk.

I can't think of a good solution for this. Basically, right now the app draws only the visible UIImageViews, and while the user scrolls the app checks whether it can dequeue another image view (so it doesn't have to allocate a new one) and reads the image into memory at that point. But as I said, this makes the scrolling very slow. Any ideas on a strategy to fix this? Does anyone know how the native Photos app handles this kind of thing? Thanks so much!
I can suggest a simple solution that balances memory and processing. Keep only small images, like thumbnails, in memory, and keep only about 20 of them. In one of my projects, I keep the 20 most recently accessed thumbnail images (100 x 100), which doesn't cost a lot of memory; I believe it's about 200 KB at any time, which is small compared to the memory generally available. I think that is good enough.
It also depends on your use case: users may scroll really fast, and you don't know where they will stop. You can keep images even smaller than the thumbnails, and when you show one in a UIImageView, resize it to fit. When the user stops scrolling for a while, you can start loading the bigger images, so you end up with nicer images; the user may not even notice the process.
I don't think there is a solution that is both fast and uses as little memory as possible. We have memory, maybe not a lot, but enough if we use it smartly.
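For the "keep about 20 thumbnails" part, NSCache does most of the bookkeeping for you (it also evicts automatically under memory pressure). A sketch, with the class and method names being mine:

    #import <UIKit/UIKit.h>

    // A small in-memory cache in front of the disk reads; the count limit
    // of 20 matches the suggestion above (NSCache treats it as a guideline,
    // not an exact cap).
    @interface ThumbCache : NSObject
    - (UIImage *)thumbnailForPath:(NSString *)path;
    @end

    @implementation ThumbCache {
        NSCache *_cache;
    }

    - (id)init
    {
        if ((self = [super init])) {
            _cache = [[NSCache alloc] init];
            _cache.countLimit = 20;
        }
        return self;
    }

    - (UIImage *)thumbnailForPath:(NSString *)path
    {
        UIImage *image = [_cache objectForKey:path];
        if (!image) {
            image = [UIImage imageWithContentsOfFile:path]; // disk hit only on a miss
            if (image) [_cache setObject:image forKey:path];
        }
        return image;
    }
    @end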
Slow scrolling performance might mean that you're blocking the main thread while loading images. In that case, the scrolling animation won't continue until the images are loaded, which would indeed cause pretty choppy scrolling performance.
It would be better to lazily load requested images in the background, while the main thread continues to handle the scrolling animation. A library that provides this functionality (among other things) is the 'three20' library. See the Tidbits document, and scroll down to the bottom where the 'TTImageView' class is described.
I had a similar issue with a PDF viewer. The recommended way to do this is to use as low-res an image as you can get away with, and if you are allowing the user to blow the image up or zoom, have two or three versions of that image, increasing the resolution as you go.
Put as much work as you can get away with in the didDecelerate delegate method (like loading in the higher-res images, as vodkhang describes) rather than doing heavy processing in didScroll, as sketched below. Recycle views that go out of scope, as you have said, and beware of autoreleased, context-based image-creation functions.
Load images on background threads intelligently (based on the scroll view's offset position and zoom level), and think about using CALayer/CATiledLayer drawing for larger images.
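Here's the didScroll/didDecelerate split in sketch form (the delegate methods are standard UIScrollViewDelegate callbacks; the two helpers are hypothetical):

    - (void)scrollViewDidScroll:(UIScrollView *)scrollView
    {
        // Keep this path cheap: only swap in already-cached thumbnails here.
        [self showCachedThumbsForVisibleRect:scrollView.bounds];
    }

    - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
    {
        // Scrolling has settled; now it's safe to do the expensive loads.
        [self loadHighResImagesForVisibleRect:scrollView.bounds];
    }

    - (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate
    {
        if (!decelerate) {
            // The finger lifted without a fling, so no didEndDecelerating
            // callback will follow; load here instead.
            [self loadHighResImagesForVisibleRect:scrollView.bounds];
        }
    }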
Three20 (an open source iOS lib) has a great photo viewer that can be subclassed; it has thumbnail navigation, large-image paging, caching, and gestures right out of the box.

How do I free these resources in my iPhone application?

Okay, I have an app that tells me the color of the pixel I touched by reading the screen (like a screenshot) after each touch. To retrieve the pixels, I use a method similar to the one appearing here. But it seems that after each touch, the image data is still being held onto (not to mention that hundreds of unwanted screenshots are being saved to my photo album along the way), and I start getting memory warnings shortly before the app finally crashes... My app starts out at 3.5 MB, but after each touch this figure grows until it reaches about 100 MB, at which point the app crashes.
QUESTION:
How do I free this data after each touch?
(Here is the link again for Source)
The provided code frees all its buffers. The memory leak must be elsewhere.
If you want to use a more streamlined way of reading one pixel's color, you could consider the approach suggested in this answer. The idea is to use a very small buffer and draw the view with a transform that shifts the pixel into the range covered by the context.
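Roughly, that approach looks like this (a sketch of the linked answer's idea, not a drop-in replacement for your code):

    // Read the color of one pixel by rendering the view into a 1x1 bitmap,
    // translated so the touched point lands at the buffer's origin.
    - (UIColor *)colorOfPixelAtPoint:(CGPoint)point inView:(UIView *)view
    {
        unsigned char pixel[4] = {0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
                                                     kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Shift the view so that 'point' is what gets drawn into the buffer.
        CGContextTranslateCTM(context, -point.x, -point.y);
        [view.layer renderInContext:context];
        CGContextRelease(context);

        return [UIColor colorWithRed:pixel[0] / 255.0
                               green:pixel[1] / 255.0
                                blue:pixel[2] / 255.0
                               alpha:pixel[3] / 255.0];
    }

Because the buffer lives on the stack and the context is released every time, nothing accumulates between touches.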