I am writing an application to show stats on the light conditions as seen by the iPhone camera. I take an image every second and then perform calculations on it.
To capture an image, I am using the following method:
-(void) captureNow
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
latestImage = [[UIImage alloc] initWithData:imageData];
}];
}
However, the captureStillImageAsynchronously... method causes the phone to play the 'shutter' sound, which is no good for my application, as it will be capturing images constantly.
I have read that it is not possible to disable this sound effect. Instead, I want to capture frames from the phone's video input:
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
and hopefully turn these into UIImage objects.
How would I achieve this? I don't know much about how AVFoundation works - I downloaded some example code and modified it for my purposes.
Don't use a still camera for this. Instead, grab frames from the device's video camera and process the data contained in the pixel buffers you receive as an AVCaptureVideoDataOutputSampleBufferDelegate.
You can set up a video connection using code like the following:
// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == AVCaptureDevicePositionBack)
{
backFacingCamera = device;
}
}
// Create the capture session
captureSession = [[AVCaptureSession alloc] init];
// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput])
{
[captureSession addInput:videoInput];
}
// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([captureSession canAddOutput:videoOutput])
{
[captureSession addOutput:videoOutput];
}
else
{
NSLog(#"Couldn't add video output");
}
// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
[captureSession startRunning];
};
You'll then need to process these frames in a delegate method that looks like the following:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(cameraFrame, 0);
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
// Process pixel buffer bytes here
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
The raw pixel bytes for your BGRA image will be contained within the array starting at CVPixelBufferGetBaseAddress(cameraFrame). You can iterate over those to obtain your desired values.
However, you'll find that any operation performed over the entire image on the CPU will be a little slow. You can use Accelerate to help with an average color operation, like you want here. I've used vDSP_meanv() in the past to average luminance values, once you have those in an array. For something like that, you might be best served to grab YUV planar data from the camera instead of the BGRA values I pull down here.
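As a rough sketch (an illustration only, assuming the 32BGRA output configured above and ordinary Rec. 601 luma weights), the averaging could look like this inside the delegate method:
// Requires Accelerate: #import <Accelerate/Accelerate.h>
CVPixelBufferLockBaseAddress(cameraFrame, 0);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);
// Convert each BGRA pixel to a single luminance value
float *luminance = (float *)malloc(width * height * sizeof(float));
for (size_t y = 0; y < height; y++)
{
    unsigned char *row = pixels + y * bytesPerRow;
    for (size_t x = 0; x < width; x++)
    {
        luminance[y * width + x] = 0.114f * row[4 * x] + 0.587f * row[4 * x + 1] + 0.299f * row[4 * x + 2];
    }
}
// Average the luminance values in one Accelerate call
float meanLuminance = 0.0f;
vDSP_meanv(luminance, 1, &meanLuminance, width * height);
free(luminance);
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);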
I've also written an open source framework to process video using OpenGL ES, although I don't yet have whole-image reduction operations in there like you'd need for the kind of image analysis here. My histogram generator is probably the closest I have to what you're trying to do.
I am developing a project, where the requirements are:
- User will open the camera through the application
- Upon capturing an Image, some data will be appended to the captured image's metadata.
I have gone through some of the forums and tried to code this logic. I think I have nearly reached the point, but something is missing, as I am not able to see the metadata that I am appending to the image.
My code is:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)image editingInfo:(NSDictionary *)dictionary
{
[picker dismissModalViewControllerAnimated:YES];
NSData *dataOfImageFromGallery = UIImageJPEGRepresentation (image,0.5);
NSLog(#"Image length: %d", [dataOfImageFromGallery length]);
CGImageSourceRef source;
source = CGImageSourceCreateWithData((CFDataRef)dataOfImageFromGallery, NULL);
NSDictionary *metadata = (NSDictionary *) CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
NSMutableDictionary *metadataAsMutable = [[metadata mutableCopy]autorelease];
[metadata release];
NSMutableDictionary *EXIFDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyExifDictionary]mutableCopy]autorelease];
NSMutableDictionary *GPSDictionary = [[[metadataAsMutable objectForKey:(NSString *)kCGImagePropertyGPSDictionary]mutableCopy]autorelease];
if(!EXIFDictionary)
{
//if the image does not have an EXIF dictionary (not all images do), then create one for us to use
EXIFDictionary = [NSMutableDictionary dictionary];
}
if(!GPSDictionary)
{
GPSDictionary = [NSMutableDictionary dictionary];
}
//Setup GPS dict -
//I am appending my custom data just to test the logic...
[GPSDictionary setValue:[NSNumber numberWithFloat:1.1] forKey:(NSString*)kCGImagePropertyGPSLatitude];
[GPSDictionary setValue:[NSNumber numberWithFloat:2.2] forKey:(NSString*)kCGImagePropertyGPSLongitude];
[GPSDictionary setValue:@"lat_ref" forKey:(NSString*)kCGImagePropertyGPSLatitudeRef];
[GPSDictionary setValue:@"lon_ref" forKey:(NSString*)kCGImagePropertyGPSLongitudeRef];
[GPSDictionary setValue:[NSNumber numberWithFloat:3.3] forKey:(NSString*)kCGImagePropertyGPSAltitude];
[GPSDictionary setValue:[NSNumber numberWithShort:4.4] forKey:(NSString*)kCGImagePropertyGPSAltitudeRef];
[GPSDictionary setValue:[NSNumber numberWithFloat:5.5] forKey:(NSString*)kCGImagePropertyGPSImgDirection];
[GPSDictionary setValue:@"_headingRef" forKey:(NSString*)kCGImagePropertyGPSImgDirectionRef];
[EXIFDictionary setValue:@"xml_user_comment" forKey:(NSString *)kCGImagePropertyExifUserComment];
//add our modified EXIF data back into the image’s metadata
[metadataAsMutable setObject:EXIFDictionary forKey:(NSString *)kCGImagePropertyExifDictionary];
[metadataAsMutable setObject:GPSDictionary forKey:(NSString *)kCGImagePropertyGPSDictionary];
CFStringRef UTI = CGImageSourceGetType(source);
NSMutableData *dest_data = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((CFMutableDataRef) dest_data, UTI, 1, NULL);
if(!destination)
{
NSLog(#"--------- Could not create image destination---------");
}
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef) metadataAsMutable);
BOOL success = NO;
success = CGImageDestinationFinalize(destination);
if(!success)
{
NSLog(#"-------- could not create data from image destination----------");
}
UIImage * image1 = [[UIImage alloc] initWithData:dest_data];
UIImageWriteToSavedPhotosAlbum (image1, self, nil, nil);
}
Kindly help me do this and get something positive.
Look at the last line: am I saving the image with my metadata in it?
The image is getting saved at that point, but the metadata that I am appending to it is not getting saved.
Thanks in advance.
Apple has updated their article addressing this issue (Technical Q&A QA1622). If you're using an older version of Xcode, you may still have the version of the article that says, more or less, tough luck, you can't do this without low-level parsing of the image data.
https://developer.apple.com/library/ios/#qa/qa1622/_index.html
I adapted the code there as follows:
- (void) saveImage:(UIImage *)imageToSave withInfo:(NSDictionary *)info
{
// Get the assets library
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Get the image metadata (EXIF & TIFF)
NSMutableDictionary * imageMetadata = [[info objectForKey:UIImagePickerControllerMediaMetadata] mutableCopy];
// add GPS data
CLLocation * loc = <•••>; // need a location here
if ( loc ) {
[imageMetadata setObject:[self gpsDictionaryForLocation:loc] forKey:(NSString*)kCGImagePropertyGPSDictionary];
}
ALAssetsLibraryWriteImageCompletionBlock imageWriteCompletionBlock =
^(NSURL *newURL, NSError *error) {
if (error) {
NSLog( #"Error writing image with metadata to Photo Library: %#", error );
} else {
NSLog( #"Wrote image %# with metadata %# to Photo Library",newURL,imageMetadata);
}
};
// Save the new image to the Camera Roll
[library writeImageToSavedPhotosAlbum:[imageToSave CGImage]
metadata:imageMetadata
completionBlock:imageWriteCompletionBlock];
[imageMetadata release];
[library release];
}
and I call this from
imagePickerController:didFinishPickingMediaWithInfo:
which is the delegate method for the image picker.
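For reference, a minimal sketch of that delegate method (using the standard UIImagePickerControllerOriginalImage key; adapt to your own picker setup) might look like this:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // info carries UIImagePickerControllerMediaMetadata, which saveImage:withInfo: reads
    [self saveImage:image withInfo:info];
    [picker dismissModalViewControllerAnimated:YES];
}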
I use a helper method (adapted from GusUtils) to build a GPS metadata dictionary from a location:
- (NSDictionary *) gpsDictionaryForLocation:(CLLocation *)location
{
CLLocationDegrees exifLatitude = location.coordinate.latitude;
CLLocationDegrees exifLongitude = location.coordinate.longitude;
NSString * latRef;
NSString * longRef;
if (exifLatitude < 0.0) {
exifLatitude = exifLatitude * -1.0f;
latRef = @"S";
} else {
latRef = @"N";
}
if (exifLongitude < 0.0) {
exifLongitude = exifLongitude * -1.0f;
longRef = @"W";
} else {
longRef = @"E";
}
NSMutableDictionary *locDict = [[NSMutableDictionary alloc] init];
[locDict setObject:location.timestamp forKey:(NSString*)kCGImagePropertyGPSTimeStamp];
[locDict setObject:latRef forKey:(NSString*)kCGImagePropertyGPSLatitudeRef];
[locDict setObject:[NSNumber numberWithFloat:exifLatitude] forKey:(NSString *)kCGImagePropertyGPSLatitude];
[locDict setObject:longRef forKey:(NSString*)kCGImagePropertyGPSLongitudeRef];
[locDict setObject:[NSNumber numberWithFloat:exifLongitude] forKey:(NSString *)kCGImagePropertyGPSLongitude];
[locDict setObject:[NSNumber numberWithFloat:location.horizontalAccuracy] forKey:(NSString*)kCGImagePropertyGPSDOP];
[locDict setObject:[NSNumber numberWithFloat:location.altitude] forKey:(NSString*)kCGImagePropertyGPSAltitude];
return [locDict autorelease];
}
So far this is working well for me on iOS4 and iOS5 devices.
Update: and iOS6/iOS7 devices. I built a simple project using this code:
https://github.com/5teev/MetaPhotoSave
The function UIImageWriteToSavedPhotosAlbum only writes the image data.
You need to read up on the ALAssetsLibrary.
The method you ultimately want to call, on an ALAssetsLibrary instance, is writeImageToSavedPhotosAlbum:metadata:completionBlock:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:cgImage metadata:metadata completionBlock:completionBlock];
For anyone who comes here trying to take a photo with the camera in your app and saving the image file to the camera roll with GPS metadata, I have a Swift solution that uses the Photos API since ALAssetsLibrary is deprecated as of iOS 9.0.
As mentioned by rickster on this answer, the Photos API does not embed location data directly into a JPG image file even if you set the .location property of the new asset.
Given a CMSampleBuffer sample buffer named buffer, some CLLocation location, and using Morty's suggestion to use CMSetAttachments in order to avoid duplicating the image, we can do the following. The gpsMetadata method extending CLLocation can be found here.
if let location = location {
// Get the existing metadata dictionary (if there is one)
var metaDict = CMCopyDictionaryOfAttachments(nil, buffer, kCMAttachmentMode_ShouldPropagate) as? Dictionary<String, Any> ?? [:]
// Append the GPS metadata to the existing metadata
metaDict[kCGImagePropertyGPSDictionary as String] = location.gpsMetadata()
// Save the new metadata back to the buffer without duplicating any data
CMSetAttachments(buffer, metaDict as CFDictionary, kCMAttachmentMode_ShouldPropagate)
}
// Get JPG image Data from the buffer
guard let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(buffer) else {
// There was a problem; handle it here
return
}
// Now save this image to the Camera Roll (will save with GPS metadata embedded in the file)
self.savePhoto(withData: imageData, completion: completion)
The savePhoto method is below. Note that the handy addResource(with:data:options:) method is available only in iOS 9. If you are supporting an earlier iOS version and want to use the Photos API, then you must write a temporary file and create an asset from the file at that URL if you want the GPS metadata properly embedded (PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL:)). Only setting PHAsset's .location will NOT embed your new metadata into the actual file itself.
func savePhoto(withData data: Data, completion: (() -> Void)? = nil) {
// Note that using the Photos API .location property on a request does NOT embed GPS metadata into the image file itself
PHPhotoLibrary.shared().performChanges({
if #available(iOS 9.0, *) {
// For iOS 9+ we can skip the temporary file step and write the image data from the buffer directly to an asset
let request = PHAssetCreationRequest.forAsset()
request.addResource(with: PHAssetResourceType.photo, data: data, options: nil)
request.creationDate = Date()
} else {
// Fallback on earlier versions; write a temporary file and then add this file to the Camera Roll using the Photos API
let tmpURL = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true).appendingPathComponent("tempPhoto").appendingPathExtension("jpg")
do {
try data.write(to: tmpURL)
let request = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: tmpURL)
request?.creationDate = Date()
} catch {
// Error writing the data; photo is not appended to the camera roll
}
}
}, completionHandler: { _ in
DispatchQueue.main.async {
completion?()
}
})
}
Aside:
If you are just wanting to save the image with GPS metadata to your temporary files or documents (as opposed to the camera roll/photo library), you can skip using the Photos API and directly write the imageData to a URL.
// Write photo to temporary files with the GPS metadata embedded in the file
let tmpURL = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true).appendingPathComponent("tempPhoto").appendingPathExtension("jpg")
do {
try data.write(to: tmpURL)
// Do more work here...
} catch {
// Error writing the data; handle it here
}
A piece of this involves generating the GPS metadata. Here's a category on CLLocation to do just that:
https://gist.github.com/phildow/6043486
Getting metadata from a camera-captured image within an application:
UIImage *pTakenImage = [info objectForKey:UIImagePickerControllerOriginalImage];
NSMutableDictionary *imageMetadata = [[NSMutableDictionary alloc] initWithDictionary:[info objectForKey:UIImagePickerControllerMediaMetadata]];
Now, to save the image to the library with the extracted metadata:
ALAssetsLibrary* library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[sourceImage CGImage] metadata:imageMetadata completionBlock:nil];
[library release];
Or, to save to a local directory, add the image from a CGImageSourceRef into a CGImageDestinationRef created for a file URL, together with the metadata:
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)imageMetadata);
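A fuller sketch of that local-directory path (an illustration only; outputURL is a hypothetical file URL of your choosing, and ImageIO must be imported):
NSData *jpegData = UIImageJPEGRepresentation(sourceImage, 0.8);
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)jpegData, NULL);
CGImageDestinationRef destination = CGImageDestinationCreateWithURL((CFURLRef)outputURL, CGImageSourceGetType(source), 1, NULL);
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)imageMetadata);
if (!CGImageDestinationFinalize(destination))
{
    NSLog(@"Could not write image with metadata to %@", outputURL);
}
CFRelease(destination);
CFRelease(source);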
The problem we are trying to solve is: the user has just taken a picture with the UIImagePickerController camera. What we get is a UIImage. How do we fold metadata into that UIImage as we save it into the camera roll (photo library), now that we don't have the AssetsLibrary framework?
The answer (as far as I can make out) is: use the ImageIO framework. Extract the JPEG data from the UIImage, use it as a source and write it and the metadata dictionary into the destination, and save the destination data as a PHAsset into the camera roll.
In this example, im is the UIImage and meta is the metadata dictionary:
let jpeg = UIImageJPEGRepresentation(im, 1)!
let src = CGImageSourceCreateWithData(jpeg as CFData, nil)!
let data = NSMutableData()
let uti = CGImageSourceGetType(src)!
let dest = CGImageDestinationCreateWithData(data as CFMutableData, uti, 1, nil)!
CGImageDestinationAddImageFromSource(dest, src, 0, meta)
CGImageDestinationFinalize(dest)
let lib = PHPhotoLibrary.shared()
lib.performChanges({
let req = PHAssetCreationRequest.forAsset()
req.addResource(with: .photo, data: data as Data, options: nil)
})
A good way to test — and a common use case — is to receive the photo metadata from the UIImagePickerController delegate info dictionary through the UIImagePickerControllerMediaMetadata key and fold it into the PHAsset as we save it into the photo library.
There are many frameworks that deal with images and metadata.
The Assets Library framework is deprecated and replaced by the Photos framework. If you implemented AVCapturePhotoCaptureDelegate to capture photos, you can do this:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
var metadata = photo.metadata
metadata[kCGImagePropertyGPSDictionary as String] = gpsMetadata
photoData = photo.fileDataRepresentation(withReplacementMetadata: metadata,
replacementEmbeddedThumbnailPhotoFormat: photo.embeddedThumbnailPhotoFormat,
replacementEmbeddedThumbnailPixelBuffer: nil,
replacementDepthData: photo.depthData)
...
}
The metadata is a dictionary of dictionaries, and you have to refer to CGImageProperties.
I wrote about this topic here.
Here is a slight variation of @matt's answer.
The following code uses only one CGImageDestination and, more interestingly, allows saving in HEIC format on iOS 11+.
Notice that the compression quality is added to the metadata before adding the image; 0.8 is roughly the compression quality of a native camera save.
//img is the UIImage and metadata the metadata received from the picker
NSMutableDictionary *meta_plus = metadata.mutableCopy;
//with CGimage, one can set compression quality in metadata
meta_plus[(NSString *)kCGImageDestinationLossyCompressionQuality] = @(0.8);
NSMutableData *img_data = [NSMutableData new];
NSString *type;
if (@available(iOS 11.0, *)) type = AVFileTypeHEIC;
else type = @"public.jpeg";
CGImageDestinationRef dest = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)img_data, (__bridge CFStringRef)type, 1, nil);
CGImageDestinationAddImage(dest, img.CGImage, (__bridge CFDictionaryRef)meta_plus);
CGImageDestinationFinalize(dest);
CFRelease(dest); //image is in img_data
//go for the PHPhotoLibrary change request
I'm trying to make a library for iPhone, so I'm trying to init the camera with just a call.
The problem comes when I call "self" in this declaration:
[captureOutput setSampleBufferDelegate:self queue:queue];
because the compiler says "self was not declared in this scope". What do I need to do to set the same class as an AVCaptureVideoDataOutputSampleBufferDelegate? At least point me in the right direction :P.
Thank you!!!
here is the complete function:
bool VideoCamera_Init(){
//Init capture from the camera and show the camera
/*We setup the input*/
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
error:nil];
/*We set up the output*/
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
/*While a frame is processes in -captureOutput:didOutputSampleBuffer:fromConnection: delegate methods no other frames are added in the queue.
If you don't want this behaviour set the property to NO */
captureOutput.alwaysDiscardsLateVideoFrames = YES;
/*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting
in the queue, because it can cause memory issues). It is similar to the inverse of the maximum framerate.
In this example we set a minimum frame duration of 1/20 second, so a maximum framerate of 20 fps. We say that
we are not able to process more than 20 frames per second.*/
captureOutput.minFrameDuration = CMTimeMake(1, 20);
/*We create a serial queue to handle the processing of our frames*/
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
variableconnombrealeatorio= [[VideoCameraThread alloc] init];
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
/*And we create a capture session*/
AVCaptureSession * captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset= AVCaptureSessionPresetMedium;
/*We add input and output*/
[captureSession addInput:captureInput];
[captureSession addOutput:captureOutput];
/*We start the capture*/
[captureSession startRunning];
return TRUE;
}
I also wrote the following class, but the buffer is empty:
#import "VideoCameraThread.h"
CMSampleBufferRef bufferCamera;
@implementation VideoCameraThread
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    bufferCamera = sampleBuffer;
}
@end
You are writing a C function, which has no concept of Objective-C classes, objects, or the self identifier. You will need to modify your function to take a parameter that accepts the sample buffer delegate you want to use:
bool VideoCamera_Init(id<AVCaptureVideoDataOutputSampleBufferDelegate> sampleBufferDelegate) {
...
[captureOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
...
}
Or you could write your library with an Objective-C object-oriented interface rather than a C-style interface.
You also have problems with memory management in this function. For instance, you are allocating an AVCaptureSession and assigning it to a local variable. After this function returns, you will have no way of retrieving that AVCaptureSession so that you can release it.
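As a sketch of that object-oriented approach (class and method names here are purely illustrative), a small wrapper object can own the session, act as the delegate itself, and release everything when it is done:
// Requires #import <AVFoundation/AVFoundation.h>
@interface VideoCameraController : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    AVCaptureSession *captureSession;
}
- (BOOL)start;
- (void)stop;
@end

@implementation VideoCameraController

- (BOOL)start
{
    captureSession = [[AVCaptureSession alloc] init];
    // ...configure the input and the AVCaptureVideoDataOutput exactly as in
    // VideoCamera_Init(), but pass self as the sample buffer delegate...
    [captureSession startRunning];
    return YES;
}

- (void)stop
{
    [captureSession stopRunning];
    [captureSession release]; // the C function version has no way to do this
    captureSession = nil;
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Process the frame here; the buffer is only valid for the duration of this callback,
    // so copy or CFRetain it if you need to keep it around.
}

@end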
So I set up an audio session:
AudioSessionInitialize(NULL, NULL, NULL, NULL);
AudioSessionSetActive(true);
UInt32 audioCategory = kAudioSessionCategory_MediaPlayback; //for output audio
OSStatus tErr = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,sizeof(audioCategory),&audioCategory);
Then I set up either an AudioQueue or RemoteIO to play back some audio straight from a file.
AudioQueueStart(mQueue, NULL);
Once my audio is playing, I can see the 'Play Icon' in the status bar of my app. I next set up an AVAssetReader.
AVAssetTrack* songTrack = [songURL.tracks objectAtIndex:0];
NSDictionary* outputSettingsDict = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
// [NSNumber numberWithInt:AUDIO_SAMPLE_RATE],AVSampleRateKey, /*Not Supported*/
// [NSNumber numberWithInt: 2],AVNumberOfChannelsKey, /*Not Supported*/
[NSNumber numberWithInt:16],AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO],AVLinearPCMIsNonInterleaved,
nil];
NSError* error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songURL error:&error];
// {
// AVAssetReaderTrackOutput* output = [[AVAssetReaderTrackOutput alloc] initWithTrack:songTrack outputSettings:outputSettingsDict];
// [reader addOutput:output];
// [output release];
// }
{
AVAssetReaderAudioMixOutput * readaudiofile = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:(songURL.tracks) audioSettings:outputSettingsDict];
[reader addOutput:readaudiofile];
[readaudiofile release];
}
return reader;
When I call [reader startReading], the audio stops playing. In both the RemoteIO and AudioQueue cases, the callback stops getting called.
If I add the mixing option:
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof (UInt32), &(UInt32) {0});
Then the 'Play Icon' no longer appears when the audio plays after AudioQueueStart is called. I am also locked out of other features, since the phone doesn't view me as the primary audio source.
Does anyone know a way I can use the AVAssetReader and still remain the Primary audio source?
As of iOS 5 this has been fixed. It still does not work on iOS 4 or below. MixWithOthers is not needed (it can be set to false), and the AudioQueue/RemoteIO will continue to receive callbacks even if an AVAssetReader is reading.
Exposure values from the camera can be acquired when you take a picture (without saving it to Saved Photos). A light meter application on the iPhone does this, probably by using some private API.
That application does it on the iPhone 3GS only, so I guess it may be somehow related to the EXIF data, which is populated with this information when the image is created.
This all applies to the 3GS.
Has anything changed with iPhone OS 4.0?
Is there a regular way to get these values now?
Does anyone have a working code example for getting these camera/photo setting values?
Thank you
If you want realtime* exposure information, you can capture video using AVCaptureVideoDataOutput. Each frame's CMSampleBuffer is full of interesting data describing the current state of the camera.
*up to 30 fps
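For example (a sketch only; the exact attachments can vary by device and format), the per-frame EXIF attachment can be read in the AVCaptureVideoDataOutputSampleBufferDelegate callback, where kCGImagePropertyExifBrightnessValue is one of the keys commonly present:
// Requires #import <ImageIO/CGImageProperties.h> alongside AVFoundation
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Each video frame carries metadata attachments describing the camera state
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(NULL, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    NSDictionary *metadata = (NSDictionary *)attachments;
    NSDictionary *exif = [metadata objectForKey:(NSString *)kCGImagePropertyExifDictionary];
    // A rough realtime brightness/exposure reading for this frame
    NSNumber *brightness = [exif objectForKey:(NSString *)kCGImagePropertyExifBrightnessValue];
    NSLog(@"Brightness: %@", brightness);
    if (attachments)
    {
        CFRelease(attachments);
    }
}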
With AVFoundation in iOS 4.0 you can mess with exposure; refer specifically to AVCaptureDevice, here is a link: AVCaptureDevice ref. Not sure if it's exactly what you want, but you can look around AVFoundation and probably find some useful stuff.
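As a small sketch of what AVCaptureDevice exposes (exposureMode and the KVO-observable adjustingExposure are real properties; note that nothing here returns an absolute exposure value, and "self" is assumed to be whatever object you use as observer):
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *configError = nil;
if ([camera lockForConfiguration:&configError])
{
    if ([camera isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure])
    {
        camera.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    }
    [camera unlockForConfiguration];
}
// adjustingExposure is key-value observable, so you can watch the camera settle on an exposure
[camera addObserver:self forKeyPath:@"adjustingExposure" options:NSKeyValueObservingOptionNew context:NULL];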
I think I finally found the lead to the real EXIF data. It'll be a while before I have actual code to post, but I figured this should be publicized in the meantime.
Google captureStillImageAsynchronouslyFromConnection. It's a method of AVCaptureStillImageOutput, and the following is an excerpt from its documentation (long sought after):
imageDataSampleBuffer -
The data that was captured.
The buffer attachments may contain metadata appropriate to the image data format. For example, a buffer containing JPEG data may carry a kCGImagePropertyExifDictionary as an attachment. See ImageIO/CGImageProperties.h for a list of keys and value types.
For an example of working with AVCaptureStillImageOutput see WWDC 2010 sample code, under AVCam.
Peace,
O.
Here is the complete solution. Don't forget to import the appropriate frameworks and headers.
In the exifAttachments variable in the captureNow method you'll find all the data you are looking for.
#import <AVFoundation/AVFoundation.h>
#import <ImageIO/CGImageProperties.h>
AVCaptureStillImageOutput *stillImageOutput;
AVCaptureSession *session;
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupCaptureSession];
// Do any additional setup after loading the view, typically from a nib.
}
-(void)captureNow{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
CFDictionaryRef exifAttachments = CMGetAttachment( imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
}];
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify a low resolution for the
// chosen device.
session.sessionPreset = AVCaptureSessionPreset352x288;
// Find a suitable AVCaptureDevice
AVCaptureDevice *device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
[device lockForConfiguration:nil];
device.whiteBalanceMode = AVCaptureWhiteBalanceModeLocked;
device.focusMode = AVCaptureFocusModeLocked;
[device unlockForConfiguration];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[session addInput:input];
stillImageOutput = [AVCaptureStillImageOutput new];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
if ([session canAddOutput:stillImageOutput])
[session addOutput:stillImageOutput];
// Start the session running to start the flow of data
[session startRunning];
[self captureNow];
}