Skip over frames while processing video on iOS - iPhone

I'm trying to process a local video file and simply do some analysis on the pixel data; nothing is rendered or output. My current code iterates through every frame of the video, but I'd actually like to skip ~15 frames at a time to speed things up. Is there a way to skip over frames without decoding them?
In FFmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.
Thanks in advance! Here's my current code:
- (void) readMovie:(NSURL *)url
{
    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];
    startTime = [NSDate date];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:
     ^{
         dispatch_async(dispatch_get_main_queue(),
         ^{
             AVAssetTrack *videoTrack = nil;
             NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
             if ([tracks count] == 1)
             {
                 videoTrack = [tracks objectAtIndex:0];
                 videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);
                 NSError *error = nil;
                 // _movieReader is a member variable
                 _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                 if (error)
                     NSLog(@"%@", error.localizedDescription);
                 NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
                 NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8Planar];
                 NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
                 AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput
                                                     assetReaderTrackOutputWithTrack:videoTrack
                                                     outputSettings:videoSettings];
                 output.alwaysCopiesSampleData = NO;
                 [_movieReader addOutput:output];
                 if ([_movieReader startReading])
                 {
                     NSLog(@"reading started");
                     [self readNextMovieFrame];
                 }
                 else
                 {
                     NSLog(@"reading can't be started");
                 }
             }
         });
     }];
}
- (void) readNextMovieFrame
{
    //NSLog(@"readNextMovieFrame called");
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        //NSLog(@"status is reading");
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer)
        { // I'm guessing this is the expensive part that we can skip if we want to skip frames
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            // Get information about the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            // do my pixel analysis
            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            CFRelease(sampleBuffer);
            [self readNextMovieFrame];
        }
        else
        {
            NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);
            NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];
            float scanMultiplier = videoDuration / scanDuration;
            NSString *info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];
            [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
        }
    }
    else
    {
        NSLog(@"status is now %d", _movieReader.status);
    }
}
- (void) updateInfo:(id)message
{
    NSString *info = [NSString stringWithFormat:@"%@", message];
    [infoTextView setText:info];
}

If you can tolerate less accurate frame processing (not strictly frame by frame), you should use AVAssetImageGenerator.
This class returns a frame for any time you ask for.
Specifically, build an array of times spanning the clip's duration, with a 0.5 s step between entries (the iPhone films at roughly 29.97 fps, so every 15th frame works out to about one frame every 0.5 seconds), and let the image generator return your frames.
For each frame you can see both the time you requested and the actual time of the returned frame. The default tolerance is around 0.5 s from the time you asked for, but you can also change that via the properties:
requestedTimeToleranceBefore
and
requestedTimeToleranceAfter
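A minimal sketch of that approach (the asset variable is carried over from the question, and the 0.5 s step and 0.1 s tolerances are just illustrative choices):
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
// Allow some slack around each requested time; tighter tolerances cost more decoding.
generator.requestedTimeToleranceBefore = CMTimeMakeWithSeconds(0.1, 600);
generator.requestedTimeToleranceAfter = CMTimeMakeWithSeconds(0.1, 600);
// Build an array of NSValue-wrapped CMTimes, one every 0.5 seconds of the clip.
NSMutableArray *times = [NSMutableArray array];
Float64 duration = CMTimeGetSeconds(asset.duration);
for (Float64 t = 0; t < duration; t += 0.5) {
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];
}
[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                    CMTime actualTime, AVAssetImageGeneratorResult result,
                                                    NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        // Compare requestedTime and actualTime if needed, then run the pixel analysis on image.
    }
}];
The looser tolerances let the generator snap to nearby frames it can reach cheaply, which is what makes this faster than decoding every frame.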
I hope I answered your question,
Good luck.

Related

How to get Picture frames by frames in Xcode

Hi, I want to get picture frames, frame by frame, using the iPhone back camera. What I have done so far:
I open the camera in full-screen mode.
- (IBAction) showCameraUI {
    [self startCameraControllerFromViewController: self
                                    usingDelegate: self];
}

- (BOOL) startCameraControllerFromViewController: (UIViewController *) controller
                                   usingDelegate: (id) delegate {
    if (([UIImagePickerController isSourceTypeAvailable:
          UIImagePickerControllerSourceTypeCamera] == NO)
        || (delegate == nil)
        || (controller == nil))
        return NO;
    UIImagePickerController *cameraUI = [[UIImagePickerController alloc] init];
    cameraUI.sourceType = UIImagePickerControllerSourceTypeCamera;
    // Displays a control that allows the user to choose picture or
    // movie capture, if both are available:
    cameraUI.mediaTypes =
        [UIImagePickerController availableMediaTypesForSourceType:
         UIImagePickerControllerSourceTypeCamera];
    // Hides the controls for moving & scaling pictures, or for
    // trimming movies. To instead show the controls, use YES.
    cameraUI.allowsEditing = NO;
    cameraUI.delegate = delegate;
    cameraUI.showsCameraControls = NO;
    cameraUI.navigationBarHidden = YES;
    cameraUI.toolbarHidden = YES;
    cameraUI.wantsFullScreenLayout = YES;
    cameraUI.cameraViewTransform = CGAffineTransformScale(cameraUI.cameraViewTransform, CAMERA_SCALAR_SX, CAMERA_SCALAR_SY);
    UIButton *btnRecording = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    CGRect buttonRect = CGRectMake(190, 420, 100, 39); // position in the parent view and set the size of the button
    btnRecording.frame = buttonRect;
    [btnRecording setTitle:@"Recording" forState:UIControlStateNormal];
    // add targets and actions
    [btnRecording addTarget:self action:@selector(buttonClicked:) forControlEvents:UIControlEventTouchUpInside];
    cameraUI.cameraOverlayView = btnRecording;
    [controller presentModalViewController: cameraUI animated: YES];
    return YES;
}
Set up AVCapture to get picture frames, frame by frame.
- (void)setupCaptureSession {
    NSError *error = nil;
    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // session.AVCaptureTorchModeOn = YES;
    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;
    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    [device lockForConfiguration:nil];
    [device setTorchMode:AVCaptureTorchModeOn]; // use AVCaptureTorchModeOff to turn off
    [device unlockForConfiguration];
    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];
    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // output.alwaysDiscardsLateVideoFrames = YES;
    [session addOutput:output];
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 1);
    // Start the session running to start the flow of data
    NSLog(@"session is going to start at here");
    [session startRunning];
    // Assign session to an ivar.
    //[self setSession:session];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    NSLog(@"picture is getting");
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace)
    {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        return nil;
    }
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
        CGImageCreate(width,
                      height,
                      8,
                      32,
                      bytesPerRow,
                      colorSpace,
                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      provider,
                      NULL,
                      true,
                      kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Create a UIImage from the sample buffer data
    NSLog(@"picture is getting");
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // [self.delegate cameraCaptureGotFrame:image];
}
Now Delegate "captureOutput"is not getting call.
I don't know where I am doing wrong. This will be help me. Thanks in advance.

Fetching photo via ALAssetsLibrary: asset representation size is zero

When I am fetching photos using the ALAssetsLibrary, for some images the assetRepresentation.size comes back as zero, so the image never appears in my image view.
Here is the code:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library enumerateGroupsWithTypes:ALAssetsGroupAlbum usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    if ([[group valueForProperty:ALAssetsGroupPropertyName] isEqual:self.groupName]) {
        [group enumerateAssetsUsingBlock:^(ALAsset *result, NSUInteger index, BOOL *stop) {
            // Get the asset type
            NSString *assetType = [result valueForProperty:ALAssetPropertyType];
            if ([assetType isEqualToString:ALAssetTypePhoto]) {
                NSLog(@"Photo Asset");
            }
            // Get URLs for the assets
            NSDictionary *assetURLs = [result valueForProperty:ALAssetPropertyURLs];
            NSUInteger assetCounter = 0;
            for (NSString *assetURLKey in assetURLs) {
                assetCounter++;
            }
            // Get the asset's representation object
            ALAssetRepresentation *assetRepresentation = [result defaultRepresentation];
            // From this asset representation, we take out data for the image and show it using an image view.
            dispatch_async(dispatch_get_main_queue(), ^(void) {
                CGImageRef imgRef = [assetRepresentation fullResolutionImage];
                // Image construction
                UIImage *image = [[[UIImage alloc] initWithCGImage:imgRef] autorelease];
                NSLog(@"before %@:::%lld", [image description], [assetRepresentation size]); // Prints '0' for size
                if ((image != nil) && [assetRepresentation size] != 0) {
                    // display in image view
                }
                else {
                    // NSLog(@"Failed to load the image.");
                }
            });
        }];
    }
} failureBlock:^(NSError *error) {
    NSLog(@"Error retrieving photos: %@", error);
}];
[library release];
Please help. What am I doing wrong here, and how should I get the image?
Joe Smith is right! You release the library too soon. The moment the assets library is released, all the asset objects go with it. And because the enumeration runs in blocks, the release of the library can execute somewhere in the middle of the reading process.
I suggest you create your assets library in the app delegate to keep it alive, and only reset it when you receive the change notification ALAssetsLibraryChangedNotification from the ALAssetsLibrary, as sketched below.
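A rough sketch of that idea (the property name and notification handler are hypothetical, and pre-ARC memory management is assumed to match the question's code):
// In the app delegate:
@property (nonatomic, retain) ALAssetsLibrary *assetsLibrary;

// Lazily create the library and watch for changes.
- (ALAssetsLibrary *)assetsLibrary {
    if (!_assetsLibrary) {
        _assetsLibrary = [[ALAssetsLibrary alloc] init];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(assetsLibraryDidChange:)
                                                     name:ALAssetsLibraryChangedNotification
                                                   object:_assetsLibrary];
    }
    return _assetsLibrary;
}

- (void)assetsLibraryDidChange:(NSNotification *)notification {
    [[NSNotificationCenter defaultCenter] removeObserver:self
                                                    name:ALAssetsLibraryChangedNotification
                                                  object:_assetsLibrary];
    [_assetsLibrary release];
    _assetsLibrary = nil; // recreated lazily on next access
}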
I think you released the assets library too soon.

Get Exif data from UIImage - UIImagePickerController [duplicate]

This question already has answers here:
UIImagePickerController and extracting EXIF data from existing photos
(18 answers)
Closed 7 years ago.
How can we get EXIF information from a UIImage selected from UIImagePickerController?
I have done a lot of R&D on this and got many replies, but still failed to implement it.
I have gone through this, this, and this link.
Please help me solve this problem.
Thanks in advance.
Interesting question! I came up with the following solution working for images picked from your photo library (note my code is using ARC):
Import AssetsLibrary.framework and ImageIO.framework.
Then include the needed classes inside your .h-file:
#import <AssetsLibrary/ALAsset.h>
#import <AssetsLibrary/ALAssetRepresentation.h>
#import <ImageIO/CGImageSource.h>
#import <ImageIO/CGImageProperties.h>
And put this inside your imagePickerController:didFinishPickingMediaWithInfo: delegate method:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:[info objectForKey:UIImagePickerControllerReferenceURL]
         resultBlock:^(ALAsset *asset) {
             ALAssetRepresentation *image_representation = [asset defaultRepresentation];
             // create a buffer to hold image data
             uint8_t *buffer = (Byte *)malloc(image_representation.size);
             NSUInteger length = [image_representation getBytes:buffer fromOffset:0 length:image_representation.size error:nil];
             if (length != 0) {
                 // buffer -> NSData object; free buffer afterwards
                 NSData *adata = [[NSData alloc] initWithBytesNoCopy:buffer length:image_representation.size freeWhenDone:YES];
                 // identify image type (jpeg, png, RAW file, ...) using UTI hint
                 NSDictionary *sourceOptionsDict = [NSDictionary dictionaryWithObjectsAndKeys:(id)[image_representation UTI], kCGImageSourceTypeIdentifierHint, nil];
                 // create CGImageSource with NSData
                 CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)adata, (__bridge CFDictionaryRef)sourceOptionsDict);
                 // get imagePropertiesDictionary
                 CFDictionaryRef imagePropertiesDictionary;
                 imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
                 // get exif data
                 CFDictionaryRef exif = (CFDictionaryRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyExifDictionary);
                 NSDictionary *exif_dict = (__bridge NSDictionary *)exif;
                 NSLog(@"exif_dict: %@", exif_dict);
                 // save image WITH meta data
                 NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
                 NSURL *fileURL = nil;
                 CGImageRef imageRef = CGImageSourceCreateImageAtIndex(sourceRef, 0, imagePropertiesDictionary);
                 if (![[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"] isEqualToString:@"public.tiff"])
                 {
                     fileURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@.%@",
                                                       documentsDirectory,
                                                       @"myimage",
                                                       [[[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"] componentsSeparatedByString:@"."] objectAtIndex:1]
                                                       ]];
                     CGImageDestinationRef dr = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL,
                                                                               (__bridge CFStringRef)[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"],
                                                                               1,
                                                                               NULL);
                     CGImageDestinationAddImage(dr, imageRef, imagePropertiesDictionary);
                     CGImageDestinationFinalize(dr);
                     CFRelease(dr);
                 }
                 else
                 {
                     NSLog(@"no valid kCGImageSourceTypeIdentifierHint found …");
                 }
                 // clean up
                 CFRelease(imageRef);
                 CFRelease(imagePropertiesDictionary);
                 CFRelease(sourceRef);
             }
             else {
                 NSLog(@"image_representation buffer length == 0");
             }
         }
        failureBlock:^(NSError *error) {
            NSLog(@"couldn't get asset: %@", error);
        }
];
One thing I noticed is that iOS will ask the user to allow location services; if they deny, you won't be able to get the image data …
EDIT
Added code to save the image including its metadata. It's a quick approach, so maybe there is a better way, but it works!
These answers all seem extremely complex. If the image has been saved to the Camera Roll and you have the ALAsset (either from UIImagePicker or the ALAssetsLibrary), you can get the metadata like so:
asset.defaultRepresentation.metadata;
If you want to save that image from the camera roll to another location (say in Sandbox/Documents), simply do:
CGImageDestinationRef imageDestinationRef = CGImageDestinationCreateWithURL((__bridge CFURLRef)urlToSaveTo, kUTTypeJPEG, 1, NULL);
CFDictionaryRef imagePropertiesRef = (__bridge CFDictionaryRef)asset.defaultRepresentation.metadata;
CGImageDestinationAddImage(imageDestinationRef, asset.defaultRepresentation.fullResolutionImage, imagePropertiesRef);
if (!CGImageDestinationFinalize(imageDestinationRef)) NSLog(@"Failed to copy photo on save to %@", urlToSaveTo);
CFRelease(imageDestinationRef);
I found a solution and got the answer from here.
From there we can get GPS info as well.
Amazing; thanks to all for helping me solve this problem.
UPDATE
Here is another function that I created myself. It also returns EXIF data as well as GPS data, and it doesn't need any third-party library. However, you have to turn on location services and use the current latitude and longitude, so you have to use CoreLocation.framework.
//FOR CAMERA IMAGE
- (NSMutableData *)getImageWithMetaData:(UIImage *)pImage
{
    NSData *pngData = UIImagePNGRepresentation(pImage);
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)pngData, NULL);
    NSDictionary *metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy] autorelease];
    [metadata release];
    // For GPS dictionary
    NSMutableDictionary *GPSDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyGPSDictionary] mutableCopy] autorelease];
    if (!GPSDictionary)
        GPSDictionary = [NSMutableDictionary dictionary];
    [GPSDictionary setValue:[NSNumber numberWithDouble:currentLatitude] forKey:(NSString *)kCGImagePropertyGPSLatitude];
    [GPSDictionary setValue:[NSNumber numberWithDouble:currentLongitude] forKey:(NSString *)kCGImagePropertyGPSLongitude];
    NSString *ref;
    if (currentLatitude < 0.0)
        ref = @"S";
    else
        ref = @"N";
    [GPSDictionary setValue:ref forKey:(NSString *)kCGImagePropertyGPSLatitudeRef];
    if (currentLongitude < 0.0)
        ref = @"W";
    else
        ref = @"E";
    [GPSDictionary setValue:ref forKey:(NSString *)kCGImagePropertyGPSLongitudeRef];
    [GPSDictionary setValue:[NSNumber numberWithFloat:location.altitude] forKey:(NSString *)kCGImagePropertyGPSAltitude];
    // For EXIF dictionary
    NSMutableDictionary *EXIFDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyExifDictionary] mutableCopy] autorelease];
    if (!EXIFDictionary)
        EXIFDictionary = [NSMutableDictionary dictionary];
    [EXIFDictionary setObject:[NSDate date] forKey:(NSString *)kCGImagePropertyExifDateTimeOriginal];
    [EXIFDictionary setObject:[NSDate date] forKey:(NSString *)kCGImagePropertyExifDateTimeDigitized];
    // add our modified EXIF data back into the image's metadata
    [metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];
    [metadataAsMutable setObject:GPSDictionary forKey:(NSString *)kCGImagePropertyGPSDictionary];
    CFStringRef UTI = CGImageSourceGetType(source);
    NSMutableData *dest_data = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)dest_data, UTI, 1, NULL);
    if (!destination)
        dest_data = [[pngData mutableCopy] autorelease];
    else
    {
        CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)metadataAsMutable);
        BOOL success = CGImageDestinationFinalize(destination);
        if (!success)
            dest_data = [[pngData mutableCopy] autorelease];
    }
    if (destination)
        CFRelease(destination);
    CFRelease(source);
    return dest_data;
}
//FOR PHOTO LIBRARY IMAGE
- (NSMutableData *)getImagedataPhotoLibrary:(NSDictionary *)pImgDictionary andImage:(UIImage *)pImage
{
    NSData *data = UIImagePNGRepresentation(pImage);
    CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)data, NULL);
    NSMutableDictionary *metadataAsMutable = [[pImgDictionary mutableCopy] autorelease];
    CFStringRef UTI = CGImageSourceGetType(source);
    NSMutableData *dest_data = [NSMutableData data];
    // For mutable data
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)dest_data, UTI, 1, NULL);
    if (!destination)
        dest_data = [[data mutableCopy] autorelease];
    else
    {
        CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)metadataAsMutable);
        BOOL success = CGImageDestinationFinalize(destination);
        if (!success)
            dest_data = [[data mutableCopy] autorelease];
    }
    if (destination)
        CFRelease(destination);
    CFRelease(source);
    return dest_data;
}
And we retrieve that data like this:
//FOR CAMERA IMAGE
NSData *originalImgData = [self getImageWithMetaData:imgOriginal];
//FOR PHOTO LIBRARY IMAGE
[self getImagedataPhotoLibrary:[[myasset defaultRepresentation] metadata] andImage:imgOriginal];
For all of this you have to import AssetsLibrary.framework and ImageIO.framework.
I have used this method for getting the EXIF data dictionary from an image; I hope this will also work for you:
- (void)getExifDataFromImage:(UIImage *)currentImage
{
    NSData *jpegData = UIImageJPEGRepresentation(currentImage, 1.0);
    CGImageSourceRef mySourceRef = CGImageSourceCreateWithData((CFDataRef)jpegData, NULL);
    //CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)myURL, NULL);
    if (mySourceRef != NULL)
    {
        NSDictionary *myMetadata = (__bridge NSDictionary *)CGImageSourceCopyPropertiesAtIndex(mySourceRef, 0, NULL);
        NSDictionary *exifDic = [myMetadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
        NSDictionary *tiffDic = [myMetadata objectForKey:(NSString *)kCGImagePropertyTIFFDictionary];
        NSLog(@"exifDic properties: %@", myMetadata); // all data
        float rawShutterSpeed = [[exifDic objectForKey:(NSString *)kCGImagePropertyExifExposureTime] floatValue];
        int decShutterSpeed = (1 / rawShutterSpeed);
        NSLog(@"Camera %@", [tiffDic objectForKey:(NSString *)kCGImagePropertyTIFFModel]);
        NSLog(@"Focal Length %@mm", [exifDic objectForKey:(NSString *)kCGImagePropertyExifFocalLength]);
        NSLog(@"Shutter Speed %@", [NSString stringWithFormat:@"1/%d", decShutterSpeed]);
        NSLog(@"Aperture f/%@", [exifDic objectForKey:(NSString *)kCGImagePropertyExifFNumber]);
        NSNumber *ExifISOSpeed = [[exifDic objectForKey:(NSString *)kCGImagePropertyExifISOSpeedRatings] objectAtIndex:0];
        NSLog(@"ISO %ld", (long)[ExifISOSpeed integerValue]);
        NSLog(@"Taken %@", [exifDic objectForKey:(NSString *)kCGImagePropertyExifDateTimeDigitized]);
    }
}
You need the ALAssetsLibrary to actually retrieve the EXIF info from an image. EXIF is added to an image only when it is saved to the Photo Library. Even if you use the ALAssetsLibrary to get an image asset from the library, it will lose all EXIF info once you convert it to a UIImage.
I tried to insert GPS coordinates into image metadata picked by the iPad camera, as suggested by Mehul.
It works; thank you for your post.
P.S.
Whoever intends to use that code, just substitute the two geolocations at the top of the function -(NSMutableData *)getImageWithMetaData:(UIImage *)pImage {
double currentLatitude = [locationManager location].coordinate.latitude;
double currentLongitude = [locationManager location].coordinate.longitude;
...
This supposes that you have already initialized locationManager somewhere in your code, like this:
locationManager = [[CLLocationManager alloc] init];
[locationManager setDesiredAccuracy:kCLLocationAccuracyBest];
[locationManager setDelegate:self]; // Not necessary in this case
[locationManager startUpdatingLocation]; // Not necessary in this case
and that you import the CoreLocation/CoreLocation.h and ImageIO/ImageIO.h headers with their associated frameworks.

How to write exif metadata to an image (not the camera roll, just a UIImage or JPEG)

I am aware of how to save metadata using ALAssets. But I want to save an image, or upload it somewhere, with the EXIF intact. I have the EXIF data as an NSDictionary, but how can I inject it properly into a UIImage (or, more likely, an NSData JPEG representation)?
I am using UIImagePickerController to get the image from the camera, and my flow is a bit different from the one described by Chiquis. Here it is:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = info[@"UIImagePickerControllerOriginalImage"];
    NSString *fullPhotoFilename = ...; // generate the photo name and path here
    NSData *photoData = [UIImage taggedImageData:image.jpegData metadata:info[@"UIImagePickerControllerMediaMetadata"] orientation:image.imageOrientation];
    [photoData writeToFile:fullPhotoFilename atomically:YES];
}
And using a UIImage category to combine the image data with its metadata:
#import <ImageIO/ImageIO.h>
#import "UIImage+Tagging.h"
#import "LocationHelper.h"

@implementation UIImage (Tagging)

+ (NSData *)writeMetadataIntoImageData:(NSData *)imageData metadata:(NSMutableDictionary *)metadata {
    // create an image source ref
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    // this is the type of image (e.g., public.jpeg)
    CFStringRef UTI = CGImageSourceGetType(source);
    // create a new data object and write the new image into it
    NSMutableData *dest_data = [NSMutableData data];
    CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)dest_data, UTI, 1, NULL);
    if (!destination) {
        NSLog(@"Error: Could not create image destination");
    }
    // add the image contained in the image source to the destination, overriding the old metadata with our modified metadata
    CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)metadata);
    BOOL success = NO;
    success = CGImageDestinationFinalize(destination);
    if (!success) {
        NSLog(@"Error: Could not create data from image destination");
    }
    CFRelease(destination);
    CFRelease(source);
    return dest_data;
}
+ (NSData *)taggedImageData:(NSData *)imageData metadata:(NSDictionary *)metadata orientation:(UIImageOrientation)orientation {
    CLLocationManager *locationManager = [CLLocationManager new];
    CLLocation *location = [locationManager location];
    NSMutableDictionary *newMetadata = [NSMutableDictionary dictionaryWithDictionary:metadata];
    if (!newMetadata[(NSString *)kCGImagePropertyGPSDictionary] && location) {
        newMetadata[(NSString *)kCGImagePropertyGPSDictionary] = [LocationHelper gpsDictionaryForLocation:location];
    }
    // Reference: http://sylvana.net/jpegcrop/exif_orientation.html
    int newOrientation;
    switch (orientation) {
        case UIImageOrientationUp:
            newOrientation = 1;
            break;
        case UIImageOrientationDown:
            newOrientation = 3;
            break;
        case UIImageOrientationLeft:
            newOrientation = 8;
            break;
        case UIImageOrientationRight:
            newOrientation = 6;
            break;
        case UIImageOrientationUpMirrored:
            newOrientation = 2;
            break;
        case UIImageOrientationDownMirrored:
            newOrientation = 4;
            break;
        case UIImageOrientationLeftMirrored:
            newOrientation = 5;
            break;
        case UIImageOrientationRightMirrored:
            newOrientation = 7;
            break;
        default:
            newOrientation = -1;
    }
    if (newOrientation != -1) {
        newMetadata[(NSString *)kCGImagePropertyOrientation] = @(newOrientation);
    }
    NSData *newImageData = [self writeMetadataIntoImageData:imageData metadata:newMetadata];
    return newImageData;
}
And finally, here is the method I am using to generate the needed GPS dictionary:
+ (NSDictionary *)gpsDictionaryForLocation:(CLLocation *)location {
    NSTimeZone *timeZone = [NSTimeZone timeZoneWithName:@"UTC"];
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    [formatter setTimeZone:timeZone];
    [formatter setDateFormat:@"HH:mm:ss.SS"];
    NSDictionary *gpsDict = @{(NSString *)kCGImagePropertyGPSLatitude: @(fabs(location.coordinate.latitude)),
                              (NSString *)kCGImagePropertyGPSLatitudeRef: ((location.coordinate.latitude >= 0) ? @"N" : @"S"),
                              (NSString *)kCGImagePropertyGPSLongitude: @(fabs(location.coordinate.longitude)),
                              (NSString *)kCGImagePropertyGPSLongitudeRef: ((location.coordinate.longitude >= 0) ? @"E" : @"W"),
                              (NSString *)kCGImagePropertyGPSTimeStamp: [formatter stringFromDate:[location timestamp]],
                              (NSString *)kCGImagePropertyGPSAltitude: @(fabs(location.altitude)),
                              };
    return gpsDict;
}
Hope it helps someone. Thanks to Gustavo Ambrozio, Chiquis, and several other SO members, I was able to piece it together and use it in my project.
UIImage does not contain metadata information (it is stripped). So if you want to save it without using the image picker method (not in the camera roll), follow the answer here to write to a file with the metadata intact:
Problem setting exif data for an image
No idea why this would be downvoted, but here is the method.
In this case I'm getting the image through AVFoundation, and this is what goes in the
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
                                                     completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    // code here
}
block code:
CFDictionaryRef metaDict = CMCopyDictionaryOfAttachments(NULL, imageSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CFMutableDictionaryRef mutable = CFDictionaryCreateMutableCopy(NULL, 0, metaDict);
// Create formatted date
NSTimeZone *timeZone = [NSTimeZone timeZoneWithName:@"UTC"];
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setTimeZone:timeZone];
[formatter setDateFormat:@"HH:mm:ss.SS"];
// Create GPS dictionary
NSDictionary *gpsDict = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithFloat:fabs(loc.coordinate.latitude)], kCGImagePropertyGPSLatitude,
                         ((loc.coordinate.latitude >= 0) ? @"N" : @"S"), kCGImagePropertyGPSLatitudeRef,
                         [NSNumber numberWithFloat:fabs(loc.coordinate.longitude)], kCGImagePropertyGPSLongitude,
                         ((loc.coordinate.longitude >= 0) ? @"E" : @"W"), kCGImagePropertyGPSLongitudeRef,
                         [formatter stringFromDate:[loc timestamp]], kCGImagePropertyGPSTimeStamp,
                         [NSNumber numberWithFloat:fabs(loc.altitude)], kCGImagePropertyGPSAltitude,
                         nil];
// The gps info goes into the gps metadata part
CFDictionarySetValue(mutable, kCGImagePropertyGPSDictionary, (__bridge void *)gpsDict);
// Here, just as an example, I'm adding the attitude matrix in the exif comment metadata
CMRotationMatrix m = att.rotationMatrix;
GLKMatrix4 attMat = GLKMatrix4Make(m.m11, m.m12, m.m13, 0, m.m21, m.m22, m.m23, 0, m.m31, m.m32, m.m33, 0, 0, 0, 0, 1);
NSMutableDictionary *EXIFDictionary = (__bridge NSMutableDictionary *)CFDictionaryGetValue(mutable, kCGImagePropertyExifDictionary);
[EXIFDictionary setValue:NSStringFromGLKMatrix4(attMat) forKey:(NSString *)kCGImagePropertyExifUserComment];
CFDictionarySetValue(mutable, kCGImagePropertyExifDictionary, (__bridge void *)EXIFDictionary);
NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
After this code you will have your image in the jpeg NSData and the corresponding dictionary for that image in the mutable CFDictionary.
All you have to do now is:
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpeg, NULL);
CFStringRef UTI = CGImageSourceGetType(source); // this is the type of image (e.g., public.jpeg)
NSMutableData *dest_data = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)dest_data, UTI, 1, NULL);
if (!destination) {
    NSLog(@"***Could not create image destination ***");
}
// add the image contained in the image source to the destination, overriding the old metadata with our modified metadata
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)mutable);
// tell the destination to write the image data and metadata into our data object.
// It will return false if something goes wrong
BOOL success = CGImageDestinationFinalize(destination);
if (!success) {
    NSLog(@"***Could not create data from image destination ***");
}
// now we have the data ready to go, so do whatever you want with it
// here we just write it to disk at the same path we were passed
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0]; // Get documents folder
NSString *dataPath = [documentsDirectory stringByAppendingPathComponent:@"ImagesFolder"];
NSError *error;
if (![[NSFileManager defaultManager] fileExistsAtPath:dataPath])
    [[NSFileManager defaultManager] createDirectoryAtPath:dataPath withIntermediateDirectories:NO attributes:nil error:&error]; // Create folder
// NSString *imageName = @"ImageName";
NSString *fullPath = [dataPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.jpg", name]]; // add our image to the path
[dest_data writeToFile:fullPath atomically:YES];
// cleanup
CFRelease(destination);
CFRelease(source);
Note how I'm not saving using the ALAssets but directly into a folder of my choice.
By the way, most of this code can be found in the link I posted at first.
There is an easier way. If you need to save some EXIF, you can use the SimpleExif pod.
First create an ExifContainer:
ExifContainer *container = [[ExifContainer alloc] init];
and populate it with all required data:
[container addUserComment:@"A long time ago, in a galaxy far, far away"];
[container addCreationDate:[NSDate dateWithTimeIntervalSinceNow:-10000000]];
[container addLocation:locations[0]];
Then you can add this data to the image:
NSData *imageData = [[UIImage imageNamed:@"DemoImage"] addExif:container];
Then you just save this data as a JPEG.
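For instance (the destination path here is just an illustrative choice):
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"tagged.jpg"];
[imageData writeToFile:path atomically:YES];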
I faced the same problem. Now I can upload files with EXIF data, and you can also compress the photo if needed; this solved the issue for me:
// Get your image.
UIImage *loImgPhoto = [self getImageFromAsset:loPHAsset];
// Get your metadata (includes the EXIF data); loDataFotoOriginal is the original image's NSData.
CGImageSourceRef loImageOriginalSource = CGImageSourceCreateWithData((CFDataRef)loDataFotoOriginal, NULL);
NSDictionary *loDicMetadata = (__bridge NSDictionary *)CGImageSourceCopyPropertiesAtIndex(loImageOriginalSource, 0, NULL);
// Set your compression quality (0.0 to 1.0).
NSMutableDictionary *loDicMutableMetadata = [loDicMetadata mutableCopy];
[loDicMutableMetadata setObject:@(lfCompressionQualityValue) forKey:(__bridge NSString *)kCGImageDestinationLossyCompressionQuality];
// Create an image destination.
NSMutableData *loNewImageDataWithExif = [NSMutableData data];
CGImageDestinationRef loImgDestination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)loNewImageDataWithExif, CGImageSourceGetType(loImageOriginalSource), 1, NULL);
// Add your image to the destination.
CGImageDestinationAddImage(loImgDestination, loImgPhoto.CGImage, (__bridge CFDictionaryRef)loDicMutableMetadata);
// Finalize the destination.
if (CGImageDestinationFinalize(loImgDestination))
{
    NSLog(@"Successful image creation.");
    // process the image rendering, adjustment data creation and finalize the asset edit.
    // Upload photo with EXIF metadata
    [self myUploadMethod:loNewImageDataWithExif];
}
else
{
    NSLog(@"Error -> failed to finalize the image.");
}
CFRelease(loImageOriginalSource);
CFRelease(loImgDestination);
getImageFromAsset method:
- (UIImage *)getImageFromAsset:(PHAsset *)aPHAsset
{
    __block UIImage *limgImageResult;
    PHImageRequestOptions *lPHImageRequestOptions = [PHImageRequestOptions new];
    lPHImageRequestOptions.synchronous = YES;
    [self.imageManager requestImageForAsset:aPHAsset
                                 targetSize:PHImageManagerMaximumSize
                                contentMode:PHImageContentModeDefault // PHImageContentModeAspectFit
                                    options:lPHImageRequestOptions
                              resultHandler:^(UIImage *limgImage, NSDictionary *info) {
                                  limgImageResult = limgImage;
                              }];
    return limgImageResult;
}
Here are the basics of setting Make and Model metadata on a .jpg file in Swift 3: https://gist.github.com/lacyrhoades/09d8a367125b6225df5038aec68ed9e7 The higher-level versions, like using the ExifContainer pod, did not work for me.
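For reference, the same two TIFF fields can also be set with plain ImageIO in Objective-C. A minimal sketch, assuming ARC; jpegData and the Make/Model strings are placeholder values:
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)jpegData, NULL);
// Copy the existing properties so the rest of the metadata survives the rewrite.
NSMutableDictionary *props = [(__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL) mutableCopy];
NSMutableDictionary *tiff = [props[(NSString *)kCGImagePropertyTIFFDictionary] mutableCopy] ?: [NSMutableDictionary dictionary];
tiff[(NSString *)kCGImagePropertyTIFFMake] = @"SomeMake";   // placeholder value
tiff[(NSString *)kCGImagePropertyTIFFModel] = @"SomeModel"; // placeholder value
props[(NSString *)kCGImagePropertyTIFFDictionary] = tiff;
// Write the image back out with the modified properties.
NSMutableData *taggedData = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)taggedData,
                                                                     CGImageSourceGetType(source), 1, NULL);
CGImageDestinationAddImageFromSource(destination, source, 0, (__bridge CFDictionaryRef)props);
CGImageDestinationFinalize(destination);
CFRelease(destination);
CFRelease(source);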

AVFoundation - Retiming CMSampleBufferRef Video Output

First time asking a question here. I'm hoping the post is clear and the sample code is formatted correctly.
I'm experimenting with AVFoundation and time-lapse photography.
My intent is to grab every Nth frame from the video camera of an iOS device (my iPod touch, version 4) and write each of those frames out to a file to create a timelapse. I'm using AVCaptureVideoDataOutput, AVAssetWriter and AVAssetWriterInput.
The problem is, if I use the CMSampleBufferRef passed to captureOutput:didOutputSampleBuffer:fromConnection:, the playback of each frame is the length of time between original input frames, i.e. a frame rate of about 1 fps. I'm looking to get 30 fps.
I've tried using CMSampleBufferCreateCopyWithNewTiming(), but then after 13 frames are written to the file, captureOutput:didOutputSampleBuffer:fromConnection: stops being called. The interface is active and I can tap a button to stop the capture and save it to the photo library for playback. It appears to play back as I want it, 30 fps, but it only has those 13 frames.
How can I accomplish my goal of 30 fps playback?
How can I tell where the app is getting lost and why?
I've placed a flag called useNativeTime so I can test both cases. When set to YES, I get all the frames I'm interested in, as the callback doesn't 'get lost'. When I set that flag to NO, I only ever get 13 frames processed and am never returned to that method again. As mentioned above, in both cases I can play back the video.
Thanks for any help.
Here is where I'm trying to do the retiming.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    BOOL useNativeTime = NO;
    BOOL appendSuccessFlag = NO;
    //NSLog(@"in captureOutput sample buffer method");
    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"sample buffer is not ready. Skipping sample");
        //CMSampleBufferInvalidate(sampleBuffer);
        return;
    }
    if (![inputWriterBuffer isReadyForMoreMediaData])
    {
        NSLog(@"Not ready for data.");
    }
    else {
        // Write every first frame of n frames (30 native from camera).
        intervalFrames++;
        if (intervalFrames > 30) {
            intervalFrames = 1;
        }
        else if (intervalFrames != 1) {
            //CMSampleBufferInvalidate(sampleBuffer);
            return;
        }
        // Need to initialize start session time.
        if (writtenFrames < 1) {
            if (useNativeTime) imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            else imageSourceTime = CMTimeMake(0 * 20, 600); //CMTimeMake(1,30);
            [outputWriter startSessionAtSourceTime:imageSourceTime];
            NSLog(@"Starting CMtime");
            CMTimeShow(imageSourceTime);
        }
        if (useNativeTime) {
            imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            CMTimeShow(imageSourceTime);
            // CMTime myTiming = CMTimeMake(writtenFrames * 20, 600);
            // CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, myTiming); // Tried, but has no effect.
            appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:sampleBuffer];
        }
        else {
            CMSampleBufferRef newSampleBuffer;
            CMSampleTimingInfo sampleTimingInfo;
            sampleTimingInfo.duration = CMTimeMake(20, 600);
            sampleTimingInfo.presentationTimeStamp = CMTimeMake((writtenFrames + 0) * 20, 600);
            sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;
            OSStatus myStatus;
            //NSLog(@"numSamples of sampleBuffer: %i", CMSampleBufferGetNumSamples(sampleBuffer));
            myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                             sampleBuffer,
                                                             1,
                                                             &sampleTimingInfo, // maybe a little confused on this param.
                                                             &newSampleBuffer);
            // These confirm the good health of our newSampleBuffer.
            if (myStatus != 0) NSLog(@"CMSampleBufferCreateCopyWithNewTiming() myStatus: %i", myStatus);
            if (!CMSampleBufferIsValid(newSampleBuffer)) NSLog(@"CMSampleBufferIsValid NOT!");
            // No effect.
            //myStatus = CMSampleBufferMakeDataReady(newSampleBuffer); // How is this different from CMSampleBufferSetDataReady?
            //if (myStatus != 0) NSLog(@"CMSampleBufferMakeDataReady() myStatus: %i", myStatus);
            imageSourceTime = CMSampleBufferGetPresentationTimeStamp(newSampleBuffer);
            CMTimeShow(imageSourceTime);
            appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:newSampleBuffer];
            //CMSampleBufferInvalidate(sampleBuffer); // Docs don't describe what this does. Doesn't seem to affect my problem. Used with CMSampleBufferSetInvalidateCallback maybe?
            //CFRelease(sampleBuffer); // - Not surprisingly - "EXC_BAD_ACCESS"
        }
        if (!appendSuccessFlag)
        {
            NSLog(@"Failed to append pixel buffer");
        }
        else {
            writtenFrames++;
            NSLog(@"writtenFrames: %i", writtenFrames);
        }
    }
    //[self displayOuptutWritterStatus]; // Expect and see AVAssetWriterStatusWriting.
}
My setup routine.
- (IBAction) recordingStartStop:(id)sender
{
    NSError *error;
    if (self.isRecording) {
        NSLog(@"~~~~~~~~~ STOPPING RECORDING ~~~~~~~~~");
        self.isRecording = NO;
        [recordingStarStop setTitle:@"Record" forState:UIControlStateNormal];
        //[self.captureSession stopRunning];
        [inputWriterBuffer markAsFinished];
        [outputWriter endSessionAtSourceTime:imageSourceTime];
        [outputWriter finishWriting]; // Blocks until file is completely written, or an error occurs.
        NSLog(@"finished CMtime");
        CMTimeShow(imageSourceTime);
        // Really, I should loop through the outputs and close all of them or target specific ones.
        // Since I'm only recording video right now, I feel safe doing this.
        [self.captureSession removeOutput:[[self.captureSession outputs] objectAtIndex:0]];
        [videoOutput release];
        [inputWriterBuffer release];
        [outputWriter release];
        videoOutput = nil;
        inputWriterBuffer = nil;
        outputWriter = nil;
        NSLog(@"~~~~~~~~~ STOPPED RECORDING ~~~~~~~~~");
        NSLog(@"Calling UIVideoAtPathIsCompatibleWithSavedPhotosAlbum.");
        NSLog(@"filePath: %@", [projectPaths movieFilePath]);
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum([projectPaths movieFilePath])) {
            NSLog(@"Calling UISaveVideoAtPathToSavedPhotosAlbum.");
            UISaveVideoAtPathToSavedPhotosAlbum([projectPaths movieFilePath], self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
        }
        NSLog(@"~~~~~~~~~ WROTE RECORDING to PhotosAlbum ~~~~~~~~~");
    }
    else {
        NSLog(@"~~~~~~~~~ STARTING RECORDING ~~~~~~~~~");
        projectPaths = [[ProjectPaths alloc] initWithProjectFolder:@"TestProject"];
        intervalFrames = 30;
        videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        NSMutableDictionary *cameraVideoSettings = [[[NSMutableDictionary alloc] init] autorelease];
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; //kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        [cameraVideoSettings setValue:value forKey:key];
        [videoOutput setVideoSettings:cameraVideoSettings];
        [videoOutput setMinFrameDuration:CMTimeMake(20, 600)]; //CMTimeMake(1, 30) // 30fps
        [videoOutput setAlwaysDiscardsLateVideoFrames:YES];
        queue = dispatch_queue_create("cameraQueue", NULL);
        [videoOutput setSampleBufferDelegate:self queue:queue];
        dispatch_release(queue);
        NSMutableDictionary *outputSettings = [[[NSMutableDictionary alloc] init] autorelease];
        [outputSettings setValue:AVVideoCodecH264 forKey:AVVideoCodecKey];
        [outputSettings setValue:[NSNumber numberWithInt:1280] forKey:AVVideoWidthKey]; // currently assuming
        [outputSettings setValue:[NSNumber numberWithInt:720] forKey:AVVideoHeightKey];
        NSMutableDictionary *compressionSettings = [[[NSMutableDictionary alloc] init] autorelease];
        [compressionSettings setValue:AVVideoProfileLevelH264Main30 forKey:AVVideoProfileLevelKey];
        //[compressionSettings setValue:[NSNumber numberWithDouble:1024.0*1024.0] forKey:AVVideoAverageBitRateKey];
        [outputSettings setValue:compressionSettings forKey:AVVideoCompressionPropertiesKey];
        inputWriterBuffer = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
        [inputWriterBuffer retain];
        inputWriterBuffer.expectsMediaDataInRealTime = YES;
        outputWriter = [AVAssetWriter assetWriterWithURL:[projectPaths movieURLPath] fileType:AVFileTypeQuickTimeMovie error:&error];
        [outputWriter retain];
        if (error) NSLog(@"error for outputWriter = [AVAssetWriter assetWriterWithURL:fileType:error:]");
        if ([outputWriter canAddInput:inputWriterBuffer]) [outputWriter addInput:inputWriterBuffer];
        else NSLog(@"can not add input");
        if (![outputWriter canApplyOutputSettings:outputSettings forMediaType:AVMediaTypeVideo]) NSLog(@"outputSettings are NOT supported");
        if ([captureSession canAddOutput:videoOutput]) [self.captureSession addOutput:videoOutput];
        else NSLog(@"could not addOutput: videoOutput to captureSession");
        //[self.captureSession startRunning];
        self.isRecording = YES;
        [recordingStarStop setTitle:@"Stop" forState:UIControlStateNormal];
        writtenFrames = 0;
        imageSourceTime = kCMTimeZero;
        [outputWriter startWriting];
        //[outputWriter startSessionAtSourceTime:imageSourceTime];
        NSLog(@"~~~~~~~~~ STARTED RECORDING ~~~~~~~~~");
        NSLog(@"recording to fileURL: %@", [projectPaths movieURLPath]);
    }
    NSLog(@"isRecording: %@", self.isRecording ? @"YES" : @"NO");
    [self displayOuptutWritterStatus];
}
OK, I found the bug in my first post.
When using
myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                 sampleBuffer,
                                                 1,
                                                 &sampleTimingInfo,
                                                 &newSampleBuffer);
you need to balance that with a CFRelease(newSampleBuffer);
The same idea holds true when using a CVPixelBufferRef with the pixelBufferPool of an AVAssetWriterInputPixelBufferAdaptor instance: you would use CVPixelBufferRelease(yourCVPixelBufferRef); after calling the appendPixelBuffer:withPresentationTime: method.
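In context, the balanced pattern looks like this (a sketch using the variable names from the question):
CMSampleBufferRef newSampleBuffer = NULL;
OSStatus myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                          sampleBuffer,
                                                          1,
                                                          &sampleTimingInfo,
                                                          &newSampleBuffer);
if (myStatus == noErr && newSampleBuffer) {
    appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:newSampleBuffer];
    CFRelease(newSampleBuffer); // balances the Create above; without this the buffers pile up and the callbacks stop
}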
Hope this is helpful to someone else.
With a little more searching and reading I have a working solution. I don't know that it is the best method but, so far, so good.
In my setup area I've set up an AVAssetWriterInputPixelBufferAdaptor. The code addition looks like this:
inputWriterBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
                            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:inputWriterBuffer
                            sourcePixelBufferAttributes:nil];
[inputWriterBufferAdaptor retain];
For completeness, to understand the code below, I also have these three lines in the setup method:
fpsOutput = 30; // Some possible values: 30, 10, 15, 24, 25, 30/1.001 or 29.97
cmTimeSecondsDenominatorTimescale = 600 * 100000; // To more precisely handle 29.97.
cmTimeNumeratorValue = cmTimeSecondsDenominatorTimescale / fpsOutput;
Instead of applying a retiming to a copy of the sample buffer, I now have the following three lines of code that effectively do the same thing. Notice the withPresentationTime parameter of the adaptor: by passing my custom value to it, I gain the correct timing I'm seeking.
CVPixelBufferRef myImage = CMSampleBufferGetImageBuffer( sampleBuffer );
imageSourceTime = CMTimeMake( writtenFrames * cmTimeNumeratorValue, cmTimeSecondsDenominatorTimescale);
appendSuccessFlag = [inputWriterBufferAdaptor appendPixelBuffer: myImage withPresentationTime: imageSourceTime];
Use of the AVAssetWriterInputPixelBufferAdaptor.pixelBufferPool property may have some gains, but I haven't figured that out.
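For what it's worth, drawing buffers from that pool would look roughly like this (a sketch; it assumes the adaptor was created with non-nil sourcePixelBufferAttributes and that writing has started, since the pool is NULL before then):
CVPixelBufferRef poolBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     inputWriterBufferAdaptor.pixelBufferPool,
                                                     &poolBuffer);
if (result == kCVReturnSuccess) {
    // Copy or render the frame's pixels into poolBuffer here, then append it:
    [inputWriterBufferAdaptor appendPixelBuffer:poolBuffer withPresentationTime:imageSourceTime];
    CVPixelBufferRelease(poolBuffer); // balance the pool Create, as noted in the previous answer
}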