Extracting an audio channel from Linear PCM - iPhone

I would like to extract a single audio channel from a raw LPCM file, i.e. extract the left and right channels of a stereo LPCM file. The LPCM is 16 bit depth, interleaved, 2 channels, little endian. From what I gather, the byte order is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16 bit depth there will be 2 bytes per sample for each channel, right?
So my question is: if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ..., n*2, while the right channel would be at 1, 3, 5, ..., (n*2+1)?
Also, after extracting the audio channel, should I set the format of the extracted channel as 16 bit depth, 1 channel?
Thanks in advance
This is the code that I currently use to extract PCM audio with AVAssetReader. This code works fine for writing a music file without extracting a channel, so I think the problem might be caused by the format or something...
NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
// [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
error:&assetError]
retain];
if (assetError) {
NSLog (#"error: %#", assetError);
return;
}
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput
assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
audioSettings: outputSettings]
retain];
if (! [assetReader canAddOutput: assetReaderOutput]) {
NSLog (#"can't add reader output... die!");
return;
}
[assetReader addOutput: assetReaderOutput];
NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];
//CODE TO SPLIT STEREO
[self setupAudioWithFormatMono:kAudioFormatLinearPCM];
NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
[[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
}
AudioFileID mRecordFile;
NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];
OSStatus status = AudioFileCreateWithURL(splitExportURL, kAudioFileCAFType, &_streamFormat, kAudioFileFlags_EraseFile,
&mRecordFile);
NSLog(#"status os %d",status);
[assetReader startReading];
CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
UInt32 countsamp= CMSampleBufferGetNumSamples(sampBuffer);
NSLog(#"number of samples %d",countsamp);
SInt64 countByteBuf = 0;
SInt64 countPacketBuf = 0;
UInt32 numBytesIO = 0;
UInt32 numPacketsIO = 0;
NSMutableData * bufferMono = [NSMutableData new];
while (sampBuffer) {
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y=0; y<audioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
//frames = audioBuffer.mData;
NSLog(#"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels);
NSLog(#"The buffer size is %d",audioBuffer.mDataByteSize);
//Append mono left to buffer data
for (int i=0; i<audioBuffer.mDataByteSize; i= i+4) {
[bufferMono appendBytes:(audioBuffer.mData+i) length:2];
}
//the number of bytes in the mutable data containing mono audio file
numBytesIO = [bufferMono length];
numPacketsIO = numBytesIO/2;
NSLog(#"numpacketsIO %d",numPacketsIO);
status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf, &numPacketsIO, audioBuffer.mData);
NSLog(#"status for writebyte %d, packets written %d",status,numPacketsIO);
if(numPacketsIO != (numBytesIO/2)){
NSLog(#"Something wrong");
assert(0);
}
countPacketBuf = countPacketBuf + numPacketsIO;
[bufferMono setLength:0];
}
sampBuffer = [assetReaderOutput copyNextSampleBuffer];
countsamp= CMSampleBufferGetNumSamples(sampBuffer);
NSLog(#"number of samples %d",countsamp);
}
AudioFileClose(mRecordFile);
[assetReader cancelReading];
[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
withObject:0
waitUntilDone:NO];
The output format set up for Audio File Services is as follows:
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 1;
_streamFormat.mBytesPerPacket = 2;
_streamFormat.mBytesPerFrame = 2;// (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100.0;
_packetFormat.mStartOffset = 0;
_packetFormat.mVariableFramesInPacket = 0;
_packetFormat.mDataByteSize = 2;

Sounds almost right - you have a 16 bit depth, so that means each sample will take 2 bytes. That means the left channel data will be in bytes {0,1}, {4,5}, {8,9} and so on. Interleaved means the samples are interleaved, not the bytes.
Other than that I would try it out and see if you have any problems with your code.
"Also after extracting the audio channel, should i set the format of the extracted channel as 16 bit depth, 1 channel?"
Only one of the two channels remains after your extraction, so yes, this is correct.
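To make the byte layout concrete, here is a minimal sketch (not from the original post; interleavedBytes and byteCount are hypothetical stand-ins for audioBuffer.mData and audioBuffer.mDataByteSize) of pulling the left channel out of an interleaved 16-bit stereo buffer:
// Copy every left-hand 16-bit sample of the interleaved stereo buffer into a mono buffer.
SInt16 *stereoSamples = (SInt16 *)interleavedBytes;    // hypothetical raw buffer pointer
NSUInteger frameCount = byteCount / 4;                 // 4 bytes per stereo frame (left16 + right16)
NSMutableData *monoData = [NSMutableData dataWithLength:frameCount * sizeof(SInt16)];
SInt16 *monoSamples = (SInt16 *)[monoData mutableBytes];
for (NSUInteger i = 0; i < frameCount; i++) {
    monoSamples[i] = stereoSamples[2 * i];             // 2*i is the left sample, 2*i + 1 the right
}
// monoData now holds 16-bit, 1-channel PCM at the same sample rate and endianness as the source.
This does the same thing as the appendBytes loop in the question, just sample by sample instead of byte by byte.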

I had a similar error where the audio sounded 'slow'. The reason is that you specified an mChannelsPerFrame of 1, whereas you have dual-channel sound. Set it to 2 and it should speed up the playback. Also, do tell whether the output 'sounds' correct after you do this... :)
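For reference, a hedged sketch of what the stream format would look like if the data you write really does contain both channels (this is an assumption adapted from the question's mono _streamFormat, not code from the comment above):
// Stereo variant of the ASBD: 2 interleaved 16-bit channels, so 4 bytes per frame.
_streamFormat.mFormatID = kAudioFormatLinearPCM;
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;   // both channels kept
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mBytesPerFrame = 4;      // (16 / 8) * 2 channels
_streamFormat.mBytesPerPacket = 4;     // mBytesPerFrame * mFramesPerPacket
_streamFormat.mSampleRate = 44100.0;
If you truly extract one channel first (as in the loop above), keep the original mono format and leave mChannelsPerFrame at 1.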

I'm trying to split my stereo audio into two mono files (split stereo audio to mono streams on iOS). I've been using your code but can't seem to get it to work. What are the contents of your setupAudioWithFormatMono method?

Related

AVFoundation: create mini clips from recording

I was wondering how I would be able to create mini videos every certain amount of time from my recording, without stopping the recording. I tried to look for an equivalent of AVAssetImageGenerator for videos; an example would be nice.
The easiest way is to use two AVAssetWriters and set up the next writer while the current one is recording, then stop after x time and swap the writers. You should be able to swap the writers without dropping any frames.
Edit:
How to do AVAssetWriter "juggling"
Step 1: Create instance objects for the writers and pixel buffer adaptors (you'll also want known file names for these files)
AVAssetWriter* mWriter[2];
AVAssetWriterInputPixelBufferAdaptor* mPBAdaptor[2];
NSString* mOutFile[2];
int mCurrentWriter, mFrameCount, mTargetFrameCount;
Step 2: Create a method for setting up a writer (since you'll be doing this over and over again)
-(void) setupWriter: (int) writer
{
NSAutoreleasePool* p = [[NSAutoreleasePool alloc] init];
NSDictionary* writerSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoCodecH264, AVVideoCodecKey, [NSNumber numberWithInt: mVideoWidth], AVVideoWidthKey, [NSNumber numberWithInt: mVideoHeight], AVVideoHeightKey, nil];
NSDictionary* pbSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:mVideoWidth],kCVPixelBufferWidthKey,
[NSNumber numberWithInt:mVideoHeight], kCVPixelBufferHeightKey,
[NSNumber numberWithInt:0],kCVPixelBufferExtendedPixelsLeftKey,
[NSNumber numberWithInt:0],kCVPixelBufferExtendedPixelsRightKey,
[NSNumber numberWithInt:0],kCVPixelBufferExtendedPixelsTopKey,
[NSNumber numberWithInt:0],kCVPixelBufferExtendedPixelsBottomKey,
[NSNumber numberWithInt:mVideoWidth],kCVPixelBufferBytesPerRowAlignmentKey,
[NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey, nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings: writerSettings];
// Create an audio input here if you want...
mWriter[writer] = [[AVAssetWriter alloc] initWithURL: [NSURL fileURLWithPath:mOutFile[writer]] fileType: AVFileTypeMPEG4 error:nil];
mPBAdaptor[writer] = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput: writerInput sourcePixelBufferAttributes: pbSettings];
[mWriter[writer] addInput: writerInput];
// Add your audio input here if you want it
[p release];
}
Step 3: Gotta tear these things down!
- (void) tearDownWriter: (int) writer
{
if(mWriter[writer]) {
if(mWriter[writer].status == AVAssetWriterStatusWriting) [mWriter[writer] finishWriting]; // This will complete the movie.
[mWriter[writer] release]; mWriter[writer] = nil;
[mPBAdaptor[writer] release]; mPBAdaptor[writer] = nil;
}
}
Step 4: Swap! Tear down the current writer and recreate it asynchronously while the other writer is writing.
- (void) swapWriters
{
NSAutoreleasePool * p = [[NSAutoreleasePool alloc] init];
if(++mFrameCount > mTargetFrameCount)
{
mFrameCount = 0;
int c, n;
c = mCurrentWriter^1;
n = mCurrentWriter; // swap.
[self tearDownWriter:n];
__block VideoCaptureClass* bSelf = self;
dispatch_async(dispatch_get_global_queue(0,0), ^{
[bSelf setupWriter:n];
CMTime time;
time.value = 0;
time.timescale = 15; // or whatever the correct timescale for your movie is
time.epoch = 0;
time.flags = kCMTimeFlags_Valid;
[bSelf->mWriter[n] startWriting];
[bSelf->mWriter[n] startSessionAtSourceTime:time];
});
mCurrentWriter = c;
}
[p release];
}
Note: When starting up you will have to create and start both writers.
Step 5: Capturing output
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
// This method will only work with video; you'll have to check for audio if you're using that.
CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // Note: you may have to create your own PTS.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
[mPBAdaptor[mCurrentWriter] appendPixelBuffer:pixelBuffer withPresentationTime: time];
[self swapWriters];
}
You can probably skip the pixel buffer adaptor if you don't need it. This should give you an approximate idea of how to do what you want. mTargetFrameCount represents how many frames long you want the current video to be. Audio will probably take additional consideration; you may want to base your clip length on the audio stream instead of the video stream if you are using audio.
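As a rough sketch of the startup that the note in Step 4 refers to (the method name, file names, and the 150-frame clip length are hypothetical; the member names match the steps above):
// Hypothetical startup: create both writers up front, start them, and reset the counters.
- (void) startClipCapture
{
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
    mOutFile[0] = [[docs stringByAppendingPathComponent:@"clip_a.mp4"] retain];   // example file names
    mOutFile[1] = [[docs stringByAppendingPathComponent:@"clip_b.mp4"] retain];
    [self setupWriter:0];
    [self setupWriter:1];
    mCurrentWriter = 0;
    mFrameCount = 0;
    mTargetFrameCount = 150;   // e.g. ~10-second clips at 15 fps
    for (int w = 0; w < 2; w++) {
        [mWriter[w] startWriting];
        [mWriter[w] startSessionAtSourceTime:kCMTimeZero];   // or the first capture timestamp
    }
}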

Get Exif data from UIImage - UIImagePickerController [duplicate]

This question already has answers here:
UIImagePickerController and extracting EXIF data from existing photos
(18 answers)
Closed 7 years ago.
How can we get Exif information from a UIImage selected with UIImagePickerController?
I have done a lot of R&D on this and got many replies, but still failed to implement it.
I have gone through this, this and this link.
Please help me to solve this problem.
Thanks in advance..
Interesting question! I came up with the following solution, which works for images picked from your photo library (note my code uses ARC):
Import AssetsLibrary.framework and ImageIO.framework.
Then include the needed classes inside your .h-file:
#import <AssetsLibrary/ALAsset.h>
#import <AssetsLibrary/ALAssetRepresentation.h>
#import <ImageIO/CGImageSource.h>
#import <ImageIO/CGImageProperties.h>
And put this inside your imagePickerController:didFinishPickingMediaWithInfo: delegate method:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:[info objectForKey:UIImagePickerControllerReferenceURL]
resultBlock:^(ALAsset *asset) {
ALAssetRepresentation *image_representation = [asset defaultRepresentation];
// create a buffer to hold image data
uint8_t *buffer = (Byte*)malloc(image_representation.size);
NSUInteger length = [image_representation getBytes:buffer fromOffset:0 length:image_representation.size error:nil];
if (length != 0) {
// buffer -> NSData object; free buffer afterwards
NSData *adata = [[NSData alloc] initWithBytesNoCopy:buffer length:image_representation.size freeWhenDone:YES];
// identify image type (jpeg, png, RAW file, ...) using UTI hint
NSDictionary* sourceOptionsDict = [NSDictionary dictionaryWithObjectsAndKeys:(id)[image_representation UTI] ,kCGImageSourceTypeIdentifierHint,nil];
// create CGImageSource with NSData
CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef) adata, (__bridge CFDictionaryRef) sourceOptionsDict);
// get imagePropertiesDictionary
CFDictionaryRef imagePropertiesDictionary;
imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(sourceRef,0, NULL);
// get exif data
CFDictionaryRef exif = (CFDictionaryRef)CFDictionaryGetValue(imagePropertiesDictionary, kCGImagePropertyExifDictionary);
NSDictionary *exif_dict = (__bridge NSDictionary*)exif;
NSLog(#"exif_dict: %#",exif_dict);
// save image WITH meta data
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSURL *fileURL = nil;
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(sourceRef, 0, imagePropertiesDictionary);
if (![[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"] isEqualToString:@"public.tiff"])
{
fileURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@.%@",
documentsDirectory,
@"myimage",
[[[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"] componentsSeparatedByString:@"."] objectAtIndex:1]
]];
CGImageDestinationRef dr = CGImageDestinationCreateWithURL ((__bridge CFURLRef)fileURL,
(__bridge CFStringRef)[sourceOptionsDict objectForKey:@"kCGImageSourceTypeIdentifierHint"],
1,
NULL
);
CGImageDestinationAddImage(dr, imageRef, imagePropertiesDictionary);
CGImageDestinationFinalize(dr);
CFRelease(dr);
}
else
{
NSLog(#"no valid kCGImageSourceTypeIdentifierHint found …");
}
// clean up
CFRelease(imageRef);
CFRelease(imagePropertiesDictionary);
CFRelease(sourceRef);
}
else {
NSLog(#"image_representation buffer length == 0");
}
}
failureBlock:^(NSError *error) {
NSLog(#"couldn't get asset: %#", error);
}
];
One thing I noticed is that iOS will ask the user to allow location services; if they deny, you won't be able to get the image data …
EDIT
Added code to save the image including its meta data. It's a quick approach, so maybe there is a better way, but it works!
These answers all seem extremely complex. If the image has been saved to the Camera Roll, and you have the ALAsset (either from UIImagePicker or ALAssetLibrary) you can get the metadata like so:
asset.defaultRepresentation.metadata;
If you want to save that image from camera roll to another location (say in Sandbox/Documents) simply do:
CGImageDestinationRef imageDestinationRef = CGImageDestinationCreateWithURL((__bridge CFURLRef)urlToSaveTo, kUTTypeJPEG, 1, NULL);
CFDictionaryRef imagePropertiesRef = (__bridge CFDictionaryRef)asset.defaultRepresentation.metadata;
CGImageDestinationAddImage(imageDestinationRef, asset.defaultRepresentation.fullResolutionImage, imagePropertiesRef);
if (!CGImageDestinationFinalize(imageDestinationRef)) NSLog(@"Failed to copy photo on save to %@", urlToSaveTo);
CFRelease(imageDestinationRef);
I found the solution and got the answer from here.
From there we can get GPS info as well.
Amazing, and thanks all for helping me solve this problem.
UPDATE
This is another function that I created myself; it also returns Exif data as well as GPS data, and it doesn't need any third-party library. But you have to turn on location services and use the current latitude and longitude, so you have to use CoreLocation.framework.
//FOR CAMERA IMAGE
-(NSMutableData *)getImageWithMetaData:(UIImage *)pImage
{
NSData* pngData = UIImagePNGRepresentation(pImage);
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)pngData, NULL);
NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy]autorelease];
[metadata release];
//For GPS Dictionary
NSMutableDictionary *GPSDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyGPSDictionary]mutableCopy]autorelease];
if(!GPSDictionary)
GPSDictionary = [NSMutableDictionary dictionary];
[GPSDictionary setValue:[NSNumber numberWithDouble:currentLatitude] forKey:(NSString*)kCGImagePropertyGPSLatitude];
[GPSDictionary setValue:[NSNumber numberWithDouble:currentLongitude] forKey:(NSString*)kCGImagePropertyGPSLongitude];
NSString* ref;
if (currentLatitude < 0.0)
ref = @"S";
else
ref = @"N";
[GPSDictionary setValue:ref forKey:(NSString*)kCGImagePropertyGPSLatitudeRef];
if (currentLongitude < 0.0)
ref = @"W";
else
ref = @"E";
[GPSDictionary setValue:ref forKey:(NSString*)kCGImagePropertyGPSLongitudeRef];
[GPSDictionary setValue:[NSNumber numberWithFloat:location.altitude] forKey:(NSString*)kCGImagePropertyGPSAltitude];
//For EXIF Dictionary
NSMutableDictionary *EXIFDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyExifDictionary]mutableCopy]autorelease];
if(!EXIFDictionary)
EXIFDictionary = [NSMutableDictionary dictionary];
[EXIFDictionary setObject:[NSDate date] forKey:(NSString*)kCGImagePropertyExifDateTimeOriginal];
[EXIFDictionary setObject:[NSDate date] forKey:(NSString*)kCGImagePropertyExifDateTimeDigitized];
//add our modified EXIF data back into the image’s metadata
[metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];
[metadataAsMutable setObject:GPSDictionary forKey:(NSString *)kCGImagePropertyGPSDictionary];
CFStringRef UTI = CGImageSourceGetType(source);
NSMutableData *dest_data = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)dest_data, UTI, 1, NULL);
if(!destination)
dest_data = [[pngData mutableCopy] autorelease];
else
{
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);
BOOL success = CGImageDestinationFinalize(destination);
if(!success)
dest_data = [[pngData mutableCopy] autorelease];
}
if(destination)
CFRelease(destination);
CFRelease(source);
return dest_data;
}
//FOR PHOTO LIBRARY IMAGE
-(NSMutableData *)getImagedataPhotoLibrary:(NSDictionary *)pImgDictionary andImage:(UIImage *)pImage
{
NSData* data = UIImagePNGRepresentation(pImage);
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)data, NULL);
NSMutableDictionary *metadataAsMutable = [[pImgDictionary mutableCopy]autorelease];
CFStringRef UTI = CGImageSourceGetType(source);
NSMutableData *dest_data = [NSMutableData data];
//For Mutabledata
CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef)dest_data, UTI, 1, NULL);
if(!destination)
dest_data = [[data mutableCopy] autorelease];
else
{
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);
BOOL success = CGImageDestinationFinalize(destination);
if(!success)
dest_data = [[data mutableCopy] autorelease];
}
if(destination)
CFRelease(destination);
CFRelease(source);
return dest_data;
}
And we will retrieve that data like this:
//FOR CAMERA IMAGE
NSData *originalImgData = [self getImageWithMetaData:imgOriginal];
//FOR PHOTO LIBRARY IMAGE
[self getImagedataPhotoLibrary:[[myasset defaultRepresentation] metadata] andImage:imgOriginal];
For all of this you have to import AssetsLibrary.framework and ImageIO.framework.
I have used this method for getting the Exif data dictionary from an image; I hope this will also work for you.
-(void)getExifDataFromImage:(UIImage *)currentImage
{
NSData* jpegData = UIImageJPEGRepresentation(currentImage, 1.0);
CGImageSourceRef mySourceRef = CGImageSourceCreateWithData((CFDataRef)jpegData, NULL);
//CGImageSourceRef mySourceRef = CGImageSourceCreateWithURL((__bridge CFURLRef)myURL, NULL);
if (mySourceRef != NULL)
{
NSDictionary *myMetadata = (__bridge NSDictionary *)CGImageSourceCopyPropertiesAtIndex(mySourceRef,0,NULL);
NSDictionary *exifDic = [myMetadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
NSDictionary *tiffDic = [myMetadata objectForKey:(NSString *)kCGImagePropertyTIFFDictionary];
NSLog(#"exifDic properties: %#", myMetadata); //all data
float rawShutterSpeed = [[exifDic objectForKey:(NSString *)kCGImagePropertyExifExposureTime] floatValue];
int decShutterSpeed = (1 / rawShutterSpeed);
NSLog(#"Camera %#",[tiffDic objectForKey:(NSString *)kCGImagePropertyTIFFModel]);
NSLog(#"Focal Length %#mm",[exifDic objectForKey:(NSString *)kCGImagePropertyExifFocalLength]);
NSLog(#"Shutter Speed %#", [NSString stringWithFormat:#"1/%d", decShutterSpeed]);
NSLog(#"Aperture f/%#",[exifDic objectForKey:(NSString *)kCGImagePropertyExifFNumber]);
NSNumber *ExifISOSpeed = [[exifDic objectForKey:(NSString*)kCGImagePropertyExifISOSpeedRatings] objectAtIndex:0];
NSLog(#"ISO %ld",[ExifISOSpeed integerValue]);
NSLog(#"Taken %#",[exifDic objectForKey:(NSString*)kCGImagePropertyExifDateTimeDigitized]);
}
}
You need ALAssetsLibrary to actually retrieve the EXIF info from an image. The EXIF is added to an image only when it is saved to the Photo Library. Even if you use ALAssetsLibrary to get an image asset from the library, it will lose all EXIF info if you convert it to a UIImage.
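To make that concrete, a minimal sketch (assuming assetURL came from [info objectForKey:UIImagePickerControllerReferenceURL], as in the first answer above) of reading the metadata straight from the ALAsset instead of from a UIImage:
// Read EXIF from the ALAsset itself; converting to UIImage first would strip it.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];   // keep a strong reference until the blocks run
[library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
    NSDictionary *metadata = asset.defaultRepresentation.metadata;   // full metadata dictionary
    NSDictionary *exif = [metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    NSLog(@"EXIF: %@", exif);
} failureBlock:^(NSError *error) {
    NSLog(@"couldn't load asset: %@", error);
}];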
I have tried to insert GPS coordinates into the metadata of an image picked by the iPad camera, as suggested by Mehul.
It works, thank you for your post.
P.S.
Whoever intends to use that code, just substitute the two geolocations at the top of the function -(NSMutableData *)getImageWithMetaData:(UIImage *)pImage {
double currentLatitude = [locationManager location].coordinate.latitude;
double currentLongitude = [locationManager location].coordinate.longitude;
...
This supposes that you have already initialized locationManager somewhere in your code, like this:
locationManager = [[CLLocationManager alloc] init];
[locationManager setDesiredAccuracy:kCLLocationAccuracyBest];
[locationManager setDelegate:self]; // Not necessary in this case
[locationManager startUpdatingLocation]; // Not neccessary in this case
and that you import the CoreLocation/CoreLocation.h and ImageIO/ImageIO.h headers with their associated frameworks.

Skip over frames while processing video on iOS

I'm trying to process a local video file and simply do some analysis on the pixel data. Nothing is being output. My current code iterates through each frame of the video but I'd actually like to skip ~15 frames at a time to speed things up. Is there a way to skip over frames without decoding them?
In FFmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.
Thanks in advance! Here's my current code:
- (void) readMovie:(NSURL *)url
{
[self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];
startTime = [NSDate date];
AVURLAsset * asset = [AVURLAsset URLAssetWithURL:url options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:
^{
dispatch_async(dispatch_get_main_queue(),
^{
AVAssetTrack * videoTrack = nil;
NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
if ([tracks count] == 1)
{
videoTrack = [tracks objectAtIndex:0];
videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);
NSError * error = nil;
// _movieReader is a member variable
_movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
if (error)
NSLog(#"%#", error.localizedDescription);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_420YpCbCr8Planar];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
AVAssetReaderTrackOutput* output = [AVAssetReaderTrackOutput
assetReaderTrackOutputWithTrack:videoTrack
outputSettings:videoSettings];
output.alwaysCopiesSampleData = NO;
[_movieReader addOutput:output];
if ([_movieReader startReading])
{
NSLog(#"reading started");
[self readNextMovieFrame];
}
else
{
NSLog(#"reading can't be started");
}
}
});
}];
}
- (void) readNextMovieFrame
{
//NSLog(#"readNextMovieFrame called");
if (_movieReader.status == AVAssetReaderStatusReading)
{
//NSLog(#"status is reading");
AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
if (sampleBuffer)
{ // I'm guessing this is the expensive part that we can skip if we want to skip frames
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the image buffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
// Get information of the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// do my pixel analysis
// Unlock the image buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CFRelease(sampleBuffer);
[self readNextMovieFrame];
}
else
{
NSLog(#"could not copy next sample buffer. status is %d", _movieReader.status);
NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];
float scanMultiplier = videoDuration / scanDuration;
NSString* info = [NSString stringWithFormat:#"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];
[self performSelectorOnMainThread:#selector(updateInfo:) withObject:info waitUntilDone:YES];
}
}
else
{
NSLog(#"status is now %d", _movieReader.status);
}
}
- (void) updateInfo: (id)message
{
NSString* info = [NSString stringWithFormat:@"%@", message];
[infoTextView setText:info];
}
If you want less accurate frame processing (not frame by frame) you should use AVAssetImageGenerator.
This class returns a frame for a specified time that you ask for.
Specifically, build an array of times spanning the clip's duration with a 0.5 s difference between each one (the iPhone films at about 29.3 fps, so taking every 15th frame works out to roughly one frame every half second) and let the image generator return your frames.
For each frame you can see the time you requested and the actual time of the frame. The default tolerance is around 0.5 s from the time you asked for, but you can also change that through the properties:
requestedTimeToleranceBefore
and
requestedTimeToleranceAfter
I hope I answered your question,
Good luck.
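A minimal sketch of that approach (the 0.5 s step and the tolerance values are just examples; url is assumed to be your local video file):
// Sample roughly every 0.5 s with AVAssetImageGenerator instead of decoding every frame.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
generator.requestedTimeToleranceBefore = CMTimeMakeWithSeconds(0.25, 600);   // optional: tighten the tolerance
generator.requestedTimeToleranceAfter  = CMTimeMakeWithSeconds(0.25, 600);
NSMutableArray *times = [NSMutableArray array];
Float64 duration = CMTimeGetSeconds(asset.duration);
for (Float64 t = 0; t < duration; t += 0.5) {
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];
}
[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                    CMTime actualTime, AVAssetImageGeneratorResult result,
                                                    NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        // run your pixel analysis on image here
    }
}];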

Problem with CGImageDestination and file naming

I am capturing images from the camera using AVCapture, as I need speed and the standard kit stuff is way too slow.
I have a problem whereby the output file (an animated GIF) is having its file name mangled by the CGImageDestination functions...
When I output the NSURL (cast to a CFURLRef) to the log I get the path/filename I intended:
2011-09-04 20:40:25.914 Mover[3558:707] Path as string:.../Documents/91B2C5E8-F925-47F3-B539-15185F640828-3558-000003327A227485.gif
However, once the file is created and saved it actually lists the filename as this:
2011-09-04 20:40:25.960 Mover[3558:707] file: .91B2C5E8-F925-47F3-B539-15185F640828-3558-000003327A227485.gif-TtNT
See the difference? The period at the start and the 4-character suffix?
What's really weird is that it doesn't always do it; about 40% of the time it works OK. However, it's preventing the code from working further down the line, where I'm listing the files with previews in a table view.
Does anyone know why and how to stop it doing this?
Here's the code:
- (void)exportAnimatedGif{
NSString *guidPath = [[NSProcessInfo processInfo] globallyUniqueString];
NSString *tmpPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:guidPath];
NSString *path = [tmpPath stringByAppendingPathExtension:@"gif"];
NSLog(@"Path as string:%@", path);
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path], kUTTypeGIF, [captureArray count], NULL);
NSDictionary *frameProperties = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:testDisplay3.displayValue] forKey:(NSString *)kCGImagePropertyGIFDelayTime]
forKey:(NSString *)kCGImagePropertyGIFDictionary];
NSDictionary *gifProperties = [NSDictionary dictionaryWithObject:
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:2], (NSString *)kCGImagePropertyGIFLoopCount,
[NSNumber numberWithFloat:testDisplay3.displayValue], (NSString*)kCGImagePropertyGIFDelayTime,
[NSNumber numberWithFloat:testDisplay3.displayValue], (NSString*)kCGImagePropertyGIFUnclampedDelayTime,
nil]
forKey:(NSString *)kCGImagePropertyGIFDictionary];
for (int ii = 0; ii < [captureArray count]; ii++)
{
UIImage *tmpImg = [captureArray objectAtIndex:ii];
CGImageDestinationAddImage(destination, tmpImg.CGImage, (CFDictionaryRef)frameProperties);
}
CGImageDestinationSetProperties(destination, (CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
//TEST OUTPUT GENERATED FILES
NSArray *contents = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject] error:nil];
for (int xx = 0; xx < [contents count]; xx++)
{
NSLog(#"file: %#", [contents objectAtIndex:xx]);
}
//END TEST CODE
[captureArray removeAllObjects];
}
AFAIK this is a temporary file that CGImageDestinationFinalize makes; the reason you see them is that CGImageDestinationFinalize failed. I think that if you check the file sizes you'll see that the ones with mangled names have a file size of 0.
I started checking for success after I got these files :)
bool success = CGImageDestinationFinalize(destination);
CFRelease(destination);
if (success) {
NSLog(#"animated GIF file created at %#", path);
} else {
NSLog(#"failed to create gif at %#", path);
}

Can I get an audio session / apply audio units to playback from MPMusicPlayerController?

I'd like to take control of the audio coming from MPMusicPlayerController (i.e., playing from the iPod library). For example, I'd like to apply EQ to it or do DSP, reverb, that kind of thing.
Is this possible? Is there an audio session that I can grab a handle on? Or, perhaps is there some way to play back files from the iPod library using an AVAudioPlayer?
MPMusicPlayerController does not work "nicely" with the AV framework.
I managed to get some DSP by using the MPMusicPlayerController to get the media item, then getting the URL for that item, and then using AVURLAsset and AVAssetReader.
Something like this:
MPMediaItem *currentSong = [myMusicController nowPlayingItem];
NSURL *currentSongURL = [currentSong valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:currentSongURL options:nil];
NSError *error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
AVAssetTrack* track = [[songAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
NSMutableDictionary* audioReadSettings = [NSMutableDictionary dictionary];
[audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
forKey:AVFormatIDKey];
AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:audioReadSettings];
[reader addOutput:readerOutput];
[reader startReading];
CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
while( sample != NULL )
{
sample = [readerOutput copyNextSampleBuffer];
if( sample == NULL )
continue;
CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer( sample );
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sample);
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sample,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&buffer
);
for (int bufferCount=0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
SInt16* samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
for (int i=0; i < numSamplesInBuffer; i++) {
NSLog(#"%i", samples[i]);
}
}
//Release the buffer when done with the samples
//(retained by CMSampleBufferGetAudioBufferListWithRetainedblockBuffer)
CFRelease(buffer);
CFRelease( sample );