Saving CMSampleBufferRef for later processing

I am trying to use the AVFoundation framework to quickly capture a series of still images from AVCaptureStillImageOutput, like the burst mode on some cameras. I want to use the completion handler,
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
and pass the imageSampleBuffer to an NSOperation object for later processing. However, I can't find a way to retain the buffer in the NSOperation class.
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        // Add to queue
        SaveImageDataOperation *saveOperation = [[SaveImageDataOperation alloc] initWithImageBuffer:imageSampleBuffer];
        [_saveDataQueue addOperation:saveOperation];
        [saveOperation release];

        // Continue
        [self captureCompleted];
}];
Does anyone know what I may be doing wrong here? Is there a better approach to do this?

"IMPORTANT: Clients of CMSampleBuffer must explicitly manage the retain count by calling CFRetain and CFRelease, even in processes using garbage collection."
Source: CoreMedia.framework, CMSampleBuffer.h
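In practical terms, that means SaveImageDataOperation has to CFRetain the buffer it is handed and CFRelease it when it is done with it. A minimal sketch of what that might look like (manual retain/release, matching the question's code; the actual processing in main is left out):

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@interface SaveImageDataOperation : NSOperation {
    CMSampleBufferRef _imageBuffer;
}
- (id)initWithImageBuffer:(CMSampleBufferRef)buffer;
@end

@implementation SaveImageDataOperation

- (id)initWithImageBuffer:(CMSampleBufferRef)buffer {
    if ((self = [super init])) {
        if (buffer) {
            // Keep the buffer alive past the end of the completion handler.
            _imageBuffer = (CMSampleBufferRef)CFRetain(buffer);
        }
    }
    return self;
}

- (void)main {
    // ... process _imageBuffer (e.g. create image data and write it to disk) ...
}

- (void)dealloc {
    if (_imageBuffer) {
        CFRelease(_imageBuffer);   // give the buffer back to its pool
        _imageBuffer = NULL;
    }
    [super dealloc];
}

@end

That alone fixes the lifetime problem, but keep the pool caveat below in mind before retaining capture buffers for long.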

I've been doing a lot of work with CMSampleBuffer objects recently and I've learned that most of the media buffers sourced by the OS during real-time operations are allocated from pools. If AVFoundation (or CoreVideo/CoreMedia) runs out of buffers in a pool (i.e. you CFRetain a buffer for a 'long' time), the real-time aspect of the process is going to suffer or block until you CFRelease the buffer back into the pool.
So, in addition to manipulating the CFRetain/CFRelease count on the CMSampleBuffer, you should only keep the buffer retained long enough to unpack (deep copy the bits out of) the CMBlockBuffer/CMFormatDescription and create a new CMSampleBuffer to pass to your NSOperationQueue or dispatch_queue_t for later processing.
In my situation I wanted to pass compressed CMSampleBuffers from the VideoToolbox over a network. I essentially created a deep copy of the CMSampleBuffer, with my application having full control over the memory allocation/lifetime. From there, I put the copied CMSampleBuffer on a queue for the network I/O to consume.
If the sample data is compressed, deep copying should be relatively fast. In my application, I used NSKeyedArchiver to create an NSData object from the relevant parts of the source CMSampleBuffer. For H.264 video data, that meant the CMBlockBuffer contents, the SPS/PPS header bytes and also the CMSampleTimingInfo. By serializing those elements I could reconstruct a CMSampleBuffer on the other end of the network that behaved identically to the one that VideoToolbox had given me. In particular, AVSampleBufferDisplayLayer was able to display them as if they were natively sourced on the machine.
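For reference, those pieces can be pulled out of a compressed sample buffer with plain CoreMedia calls. A rough sketch (the helper name is mine, error handling is omitted, and the CMBlockBuffer contents themselves can be copied out as shown further below):

#import <CoreMedia/CoreMedia.h>

// Hypothetical helper: gather the metadata needed to rebuild an H.264
// sample buffer elsewhere (the compressed payload is handled separately).
static void CollectH264Metadata(CMSampleBufferRef sampleBuffer,
                                NSData **spsOut, NSData **ppsOut,
                                CMSampleTimingInfo *timingOut)
{
    // Presentation/decode timestamps and duration for this sample.
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, timingOut);

    // SPS (index 0) and PPS (index 1) from the format description.
    CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
    const uint8_t *sps = NULL, *pps = NULL;
    size_t spsSize = 0, ppsSize = 0, parameterSetCount = 0;
    int nalHeaderLength = 0;

    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize,
                                                       &parameterSetCount, &nalHeaderLength);
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, NULL, NULL);

    *spsOut = [NSData dataWithBytes:sps length:spsSize];
    *ppsOut = [NSData dataWithBytes:pps length:ppsSize];
}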
For your application I would recommend the following:
1. Take your source CMSampleBuffer and compress the pixel data. If you can, use the hardware encoder in VideoToolbox to create I-frame-only H.264 images, which will be very high quality. The VT encoder is apparently very good for battery life as well, probably much better than JPEG unless there is a hardware JPEG codec on the system as well.
2. Deep copy the compressed CMSampleBuffer output by VideoToolbox; VT will CFRelease the original CMSampleBuffer back to the pool used by the capture subsystem.
3. Retain the VT-compressed CMSampleBuffer only long enough to enqueue a deep copy for later processing.
Since the AVFoundation movie recorder can do steps #1 and #2 in real time without running out of buffers, you should be able to deep copy and enqueue your data on a dispatch_queue without exhausting the buffer pools used by the video capture component and VideoToolbox components.
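A rough sketch of the deep copy in step 2, if all you need is the compressed bytes (the helper name is mine; rebuilding a brand-new CMSampleBuffer from the copy is left out):

#import <CoreMedia/CoreMedia.h>

// Hypothetical helper: copy the compressed bytes out of a sample buffer into
// memory the app owns, so the original buffer can go straight back to its pool.
static NSData *CopySampleBufferBytes(CMSampleBufferRef sampleBuffer)
{
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (blockBuffer == NULL) {
        return nil;
    }

    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    NSMutableData *data = [NSMutableData dataWithLength:length];

    // Copies the (possibly non-contiguous) block buffer contents into our buffer.
    OSStatus status = CMBlockBufferCopyDataBytes(blockBuffer, 0, length, [data mutableBytes]);
    return (status == kCMBlockBufferNoErr) ? data : nil;
}

The copy lives in ordinary app memory, so the source sample buffer can be released immediately and the capture and VideoToolbox pools stay healthy.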

Related

How to memory manage a CMSampleBuffer

I'm getting frames from my camera in the following way:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
}
From the Apple documentation ...
If you need to reference the CMSampleBuffer object outside of the scope of this method, you must CFRetain it and then CFRelease it when you are finished with it.
To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped.
Is it okay to hold a reference to CVImageBuffer without explicitly setting sampleBuffer = nil? I only ask because the latest version of Swift automatically memory manages CF data structures so CFRetain and CFRelease are not available.
Also, what is the reasoning behind "This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible." ? Why would a memory block be copied in the first place?
Is it okay to hold a reference to CVImageBuffer without explicitly setting sampleBuffer = nil?
If you're going to keep a reference to the image buffer, then keeping a reference to its "containing" CMSampleBuffer definitely cannot hurt. Will the "right thing" be done if you keep a reference to the CVImageBuffer but not the CMSampleBuffer? Maybe.
Also, what is the reasoning behind "This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible." ? Why would a memory block be copied in the first place?
There are questions on SO about how to do a deep copy of an image CMSampleBuffer, and the answers are not straightforward, so the chances of unintentionally copying one's memory block are very low. I think the intention of this documentation is to inform you that AVCaptureVideoDataOutput is efficient, and that this efficiency (via fixed-size frame pools) can have the surprising side effect of dropped frames if you hang onto too many CMSampleBuffers for too long, so don't do that.
The warning is slightly redundant however, because even without the spectre of dropped frames, uncompressed video CMSampleBuffers are already a VERY hot potato due to their size and frequency (a 1920x1080 BGRA frame is roughly 8 MB, so 30 fps is about 250 MB per second). You only need to reference a few seconds' worth to use up gigabytes of RAM, so it is imperative to process them as quickly as possible and then release/nil any references to them.
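If you do need the pixels beyond the callback, one option is to copy them into a pixel buffer you allocate yourself and let the delegate's sample buffer go right away. A rough CoreVideo sketch of that idea (the helper name is mine; it only handles non-planar formats such as BGRA, and planar formats would need per-plane copies):

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical helper: deep copy a pixel buffer into memory we own.
// The caller is responsible for CVPixelBufferRelease on the result.
static CVPixelBufferRef CreatePixelBufferCopy(CVPixelBufferRef source)
{
    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        NULL,
                        &copy);
    if (copy == NULL) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(copy, 0);

    // Row strides can differ between the two buffers, so copy row by row.
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy);
    size_t rowLength = MIN(srcBytesPerRow, dstBytesPerRow);
    size_t height = CVPixelBufferGetHeight(source);

    uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(source);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(copy);
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, rowLength);
    }

    CVPixelBufferUnlockBaseAddress(copy, 0);
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy;
}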

Get lower quality UIImage from NSData?

I have an image compressed into NSData using JPEG compression. I access it with [UIImage imageWithContentsOfFile:]. With larger images though, this takes a few seconds. Is there a faster way to load images from the file system, perhaps at the same speed that images are loaded from the bundle? And if not, is there a way to load a lower quality version of the image temporarily while the full quality version loads, other than saving a lower quality version too?
Although you may be able to build something like this using JPEG 2000 (you'd need to build your own copy of the jpeg library as discussed here, and then hand-write the reading code), I don't think you're going to get good return on investment there. The cost of reading the data off disk is still likely to overwhelm everything else.
First, if you're reading from your bundle, use PNG if at all possible. iOS highly optimizes PNGs stored in the bundle (part of the copying process is to rewrite them in an iOS-specific optimized format).
No matter what you do, if you want a placeholder you are probably going to need to provide it yourself, either as a separate file or as a custom file format that you read and manage yourself. This wouldn't be an incredibly difficult format to devise, but you'd still need to do all the resizing beforehand somewhere.
The main key is that reading a large image file is expensive and you shouldn't do it on the main thread. You need to do this stuff on a background queue (GCD or operation) and update the UI when the data becomes available. There's no really easy way around this fact.
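A minimal sketch of that pattern with GCD (imagePath and the image view are placeholders for whatever your code uses):

// Do the slow file read off the main thread, then hand the image to the UI.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *fullImage = [UIImage imageWithContentsOfFile:imagePath];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = fullImage;   // UIKit work stays on the main thread
    });
});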
A lower quality image is simply a smaller file than the others... Here is code to check the size of a file in the app's Documents folder:
NSFileManager *manager = [NSFileManager defaultManager];
if ([manager fileExistsAtPath:path]) {
    NSDictionary *attributes = [manager attributesOfItemAtPath:path error:nil];
    unsigned long long size = [attributes fileSize];
    resultlbl.text = [NSString stringWithFormat:@"%llu", size];
}

Converting/uploading large amounts of data from iPad to Dropbox

I'm finishing up my app by running it through Instruments as well as stressing it with large amounts of data. The Instruments tests go fine, but the stress test is where I'm having issues. Without getting into too much detail, I'm giving my app increasing amounts of Core Data events from which it needs to extrapolate data, make graphs, and present locations on an MKMapView instance. I started small and increased to 56000 events, which it handled fine without any leaks or memory warnings (and I was quite proud of it for handling it all).
My app implements the Dropbox API to allow for uploading and downloading templates and data for sync purposes. Files uploaded from my app are converted from Core Data to an NSDictionary, then to NSData. I create a temporary folder for the data, then upload that file to Dropbox, which works fine.....normally. If I try to upload my data file with 56000 events, then it crashes. I've logged it and watched as the data is converted. It reaches the last event with no issues, but when it's supposed to start uploading to Dropbox, the app crashes and I cannot for the life of me figure out why. I see memory warnings pop up on my log. Typically, it will go Level=1, Level=2, Level=1, Level=2, then crash, which confuses me as it never reaches Level=3.
The majority of the information I've found is in my edit at the bottom. Below is some relevant code:
- (void)uploadSurveys:(NSDictionary *)dict {
    NSArray *templateArray = [dict objectForKey:@"templates"];
    NSArray *dataArray = [dict objectForKey:@"data"];
    NSString *filename;

    NSLog(@"upload called");
    if ([templateArray count] || [dataArray count]) {
        if ([templateArray count]) {
            // irrelevant code;
        }
        if ([dataArray count]) {
            SurveyData *survey;
            for (int i = 0; i < [dataArray count]; i++) {
                BOOL matchExists = NO;
                // ...... code to make sure no file exists in dropbox folder and creates new version if necessary;
                dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                    NSData *data = [self convertSurvey:survey];
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self uploadData:data withFilename:filename];
                        NSLog(@"converted and uploading");
                    });
                });
            }
        }
    }
}
[self convertSurvey:survey] simply converts my Core Data object to NSData.
- (void)uploadData:(NSData *)data withFilename:(NSString *)filename {
    NSFileManager *manager = [NSFileManager defaultManager];
    NSString *pathComponent = [NSString stringWithFormat:@"tempData.%@", filename];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:pathComponent];
    if ([manager createFileAtPath:path contents:data attributes:nil]) {
        [self.restClient uploadFile:filename toPath:[NSString stringWithFormat:@"/%@", currentSearch] fromPath:path];
        NSLog(@"uploading data");
    }
}
Any help would be much appreciated and I thoroughly thank you in advance. I'm just trying to figure out whether I'm taking the wrong approach for large files or whether it's simply not allowed. If I have to split the files, that is fine, but I'd prefer to know what is going on that prevents my app from performing this action before I try to make a workaround. Thank you again.
UPDATE: As this issue is now the only hindrance to the release of my application, I'm adding a bounty to this question to hopefully get a solution or workaround. It will be up for a week, after which I am most likely going to just split up the files as they upload to ensure that this apparent size limit is not reached. That approach is not ideal, which is why a better solution is very welcome, but it is my backup plan if this fails to bring in something more convenient.
EDIT: It appears that NSTemporaryDirectory plays no part in this at all. Here is the new situation. As you can see in the code above, NSData *data = [self convertSurvey:survey]; is called on a secondary thread (which isn't the issue). I have been logging the objects created and knew that they had reached the last one, but never thought to check whether the NSData object was actually returned. Turns out, it isn't. In short, I convert all my Core Data objects into arrays and place them into a dictionary (only for the relevant survey/data to be converted). This does indeed work and the dictionary is created. Then I create an NSData object using NSData *data = [NSKeyedArchiver archivedDataWithRootObject:d]; where d is my dictionary. Directly after that, I call return data; to set the value of NSData *data = [self convertSurvey:survey];. This being the case, it appears that NSData or NSKeyedArchiver is at fault here. According to the Apple documentation:
Using 32-bit Cocoa, the size of the data is subject to a theoretical 2GB limit (in practice, because memory will be used by other objects this limit will be smaller); using 64-bit Cocoa, the size of the data is subject to a theoretical limit of about 8EB (in practice, the limit should not be a factor).
I have checked the file sizes in small increments to see where the failure occurs. I have successfully gotten 48.2 MB of data through, but not 51.5 MB, which leads me to believe that the issue occurs around 50 MB, well below the theoretical limit for NSData (unless there is a discrepancy between iOS and OS X in that respect).
Hopefully this new information will help solve the problem.
The 2 GB limit for NSData is completely theoretical on iOS: even the iPhone 4 has only 512 MB of RAM, and iOS (unlike Mac OS X) cannot swap, so if your physical RAM is full, you crash (or your app is terminated before that).
The 50 MB NSData object alone is already very large and it's not the only object you have in memory – given that you convert the data from Core Data to a dictionary representation and then to NSData, you probably consume at least twice as much memory (likely more). The system and other apps also need RAM, so you're probably reaching a limit.
Try running your app in Instruments to see how much memory you actually consume.
To reduce your peak memory usage, you have a couple of options that largely depend on your data model:
- As Jason Foreman suggested in his answer, try to avoid having your whole file in memory at once. Using NSFileHandle, you can write chunks of data to a file without needing to have the whole data set in memory at once (see the sketch after this list). Of course, this requires that you prepare your data accordingly, so that it can be split into chunks. A higher-level approach might be to serialize your data into an XML format that you could write out as a stream. If your data format is very simple, something like CSV might also work.
- Don't use NSData for uploading to Dropbox. Write your data to a file instead (see above) and point the Dropbox SDK at that file. The Dropbox SDK makes it pretty easy to do so (DBRestClient has an uploadFile:toPath:fromPath: method).
- If your data model makes it difficult to take a streaming approach, try to segment the data into more manageable parts. You could then use your old method of serializing dictionaries, just with multiple files.
- Be careful with Core Data's memory usage. Try to re-fault objects using refreshObject:mergeChanges: where possible to break cyclic references within your data (see the Core Data Programming Guide for details).
- Avoid accumulating many autoreleased objects in a long-running loop, or create a separate NSAutoreleasePool that you drain in each iteration of the loop.
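Putting the first two points together, a rough sketch of the write-then-upload path (the method name and the chunking methods are hypothetical stand-ins for however you split up a survey; restClient and currentSearch are taken from the question's code):

- (void)writeAndUploadSurvey:(SurveyData *)survey withFilename:(NSString *)filename {
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
    [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];

    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    NSUInteger chunkCount = [self numberOfChunksForSurvey:survey];          // hypothetical
    for (NSUInteger chunkIndex = 0; chunkIndex < chunkCount; chunkIndex++) {
        // Hypothetical: serialize one slice of the survey so only that slice is in memory.
        NSData *chunk = [self convertSurvey:survey chunkAtIndex:chunkIndex];
        [handle writeData:chunk];
    }
    [handle closeFile];

    // Let the Dropbox SDK stream the file from disk instead of holding a huge NSData.
    [self.restClient uploadFile:filename
                         toPath:[NSString stringWithFormat:@"/%@", currentSearch]
                       fromPath:path];
}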
A way to work around this type of memory pressure is to build your APIs using streams, both for writing your converted data to a file on disk and also for uploading the data to a web service.
During conversion you can use an NSOutputStream to write chunks of data to the file, to avoid keeping a large chunk of data in memory at one time. Then, NSMutableURLRequest can accept an NSStream for the body instead of an NSData, so you can create an NSInputStream to read your file back from disk and upload it.
Using streams in this way will ensure you never have 50+ MB of data loaded and should avoid the memory warnings you are seeing.
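For the upload side, a rough sketch of handing a file-backed stream to NSMutableURLRequest (the URL and endpoint are placeholders; the Dropbox SDK's uploadFile:toPath:fromPath: already does the equivalent when you give it a path):

#import <Foundation/Foundation.h>

// Build a request whose body is streamed from disk, so the payload is never
// fully resident in memory.
static NSMutableURLRequest *StreamedUploadRequest(NSString *filePath, NSURL *url)
{
    NSDictionary *attributes = [[NSFileManager defaultManager] attributesOfItemAtPath:filePath error:nil];
    unsigned long long fileSize = [attributes fileSize];

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"POST"];
    [request setValue:[NSString stringWithFormat:@"%llu", fileSize] forHTTPHeaderField:@"Content-Length"];

    // The connection reads the body from this stream in chunks as it sends.
    [request setHTTPBodyStream:[NSInputStream inputStreamWithFileAtPath:filePath]];
    return request;
}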

How to limit memory consumption when using audio unit

For my app, I need to play music in the background while the user navigates inside it.
So, starting from MixerHost, I developed an audio mixer which is able to play 8 tracks simultaneously. Nevertheless, it consumes too much memory because the 8 track files are loaded entirely into 8 buffers.
To limit the memory consumption, I load only a small chunk of data at the beginning, and I feed in new data in the render callback, like this:
result = ExtAudioFileRead(audioFileObject, &numberOfPacketsToRead, bufferList);
It works quite well, but sometimes the playback pauses briefly. I know the origin of the problem: doing file system access in the render callback.
But is there another solution to limit memory consumption?
The way this is typically handled is with a shared ring buffer. The ring buffer acts like a shock absorber between the real-time render thread and the slow disk accesses. Create a new thread that does nothing but read audio from the file and stores it in the ring buffer. Then, in your render callback just read from the ring buffer.
Apple has provided an implementation of a ring buffer suitable for use with Audio Units, called CARingBuffer. It's available in /Developer/Extras/CoreAudio/PublicUtility/CARingBuffer.
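To illustrate the shape of that arrangement, here is a deliberately simplified single-producer/single-consumer ring buffer; CARingBuffer does this properly (including the memory-ordering details a real-time thread needs), so treat this only as a sketch:

#include <stdint.h>

// Very small single-producer / single-consumer ring buffer for float samples.
// The file-reader thread calls RingBufferWrite, the render callback calls
// RingBufferRead. Capacity must be a power of two.
typedef struct {
    float            *samples;
    uint32_t          capacity;
    volatile uint32_t writePos;   // only advanced by the producer
    volatile uint32_t readPos;    // only advanced by the consumer
} RingBuffer;

static uint32_t RingBufferWrite(RingBuffer *rb, const float *src, uint32_t count) {
    uint32_t freeSpace = rb->capacity - (rb->writePos - rb->readPos);
    if (count > freeSpace) count = freeSpace;          // never overwrite unread data
    for (uint32_t i = 0; i < count; i++) {
        rb->samples[(rb->writePos + i) & (rb->capacity - 1)] = src[i];
    }
    rb->writePos += count;
    return count;
}

static uint32_t RingBufferRead(RingBuffer *rb, float *dst, uint32_t count) {
    uint32_t available = rb->writePos - rb->readPos;
    if (count > available) count = available;          // render thread takes what is there
    for (uint32_t i = 0; i < count; i++) {
        dst[i] = rb->samples[(rb->readPos + i) & (rb->capacity - 1)];
    }
    rb->readPos += count;
    return count;
}

The reader thread keeps the buffer topped up ahead of playback; if the render callback ever finds too few samples, output silence rather than blocking.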

How can I reuse an NSData to read multiple large files?

I need to read several dozen files and do some trivial processing with their contents. Each file individually won't cause problems, but having all the data loaded at once will quickly exhaust my memory.
I started with:
for (NSString *filename in filenames)
    do_something([NSData dataWithContentsOfFile:filename]);
Then of course, I remembered that Objective-C on the iPhone is not really garbage collected, and those autoreleased objects would all stick around until the autorelease pool drains anyway. Okay:
for (NSString *filename in filenames) {
    NSData *d = [[NSData alloc] initWithContentsOfFile:filename];
    do_something(d);
    [d release];
}
This nominally only uses as much memory as the largest file, but that's only assuming the allocator is playing friendly at the moment - it could also thrash and fragment everything.
Is there some way I can make an NSMutableData, and keep reusing that Data's buffer, growing it as necessary? I need it as an NSData for other third-party APIs. The best idea I have at the moment is mallocing/reallocing a char* buffer as I go, reading using e.g. stdio, and constructing NSDatas with freeWhenDone:NO backed by that; that way I only thrash/retain a small amount per file.
What you are doing in the second example is fine. Even if you reused an NSMutableData object for its capacity, another NSData object would still need to be created with the file contents. If you are running into memory issues, consider modifying do_something() to work with an NSInputStream.
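For example, do_something() could be reworked to pull each file through a small fixed buffer instead of one big NSData (a sketch; the chunk-wise processing function is hypothetical):

for (NSString *filename in filenames) {
    NSInputStream *stream = [NSInputStream inputStreamWithFileAtPath:filename];
    [stream open];

    uint8_t buffer[64 * 1024];                        // only 64 KB resident per read
    NSInteger bytesRead;
    while ((bytesRead = [stream read:buffer maxLength:sizeof(buffer)]) > 0) {
        // Hypothetical chunk-wise variant of do_something().
        do_something_with_bytes(buffer, (NSUInteger)bytesRead);
    }
    [stream close];
}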
You could use -[NSData initWithContentsOfMappedFile:] with your second example to keep the memory usage as low as possible.
From the documentation:
A mapped file uses virtual memory techniques to avoid copying pages of the file into memory until they are actually needed.
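Applied to the loop above, that is just the second example with the mapped initializer:

for (NSString *filename in filenames) {
    // Pages are faulted in on demand and can be evicted by the VM,
    // so resident memory stays low even for large files.
    NSData *d = [[NSData alloc] initWithContentsOfMappedFile:filename];
    do_something(d);
    [d release];
}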