Camera view as a subview in iOS - iPhone

I'll start by saying that I am new to Objective-C and iOS programming.
What I want to do is display the camera as part of a view, like a rectangle in the upper part of the screen. Where should I start?
(Which GUI component should I use for the "camera view": AVCamCaptureManager or UIImagePickerController?)

You can use AVFoundation to do that. A good starting point is to watch the WWDC videos (from 2011 onward) related to AVFoundation and the camera.
Apple's AVCam sample project is also a very good starting point.
Here's an example of what you can do.
First you need to instantiate a capture session:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset1280x720;
Then you must create the input and add it to the session in order to get images from your device camera:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    NSLog(@"Couldn't create video capture device input: %@", error);
}
[session addInput:input];
Then you use an AVCaptureVideoPreviewLayer to present the images from your device's camera in a layer:
AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
Finally, you just need to set the frame of that layer (the portion of the UI you want the camera to occupy), add it as a sublayer of the desired view, and start the session, as in the sketch below.
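A minimal sketch of that last step (assumption: self.cameraView is a hypothetical view hosting the rectangle in the upper part of the screen):
// Show the camera feed in the top part of the hosting view (self.cameraView is hypothetical).
newCaptureVideoPreviewLayer.frame = CGRectMake(0, 0, self.cameraView.bounds.size.width, 200);
newCaptureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.cameraView.layer addSublayer:newCaptureVideoPreviewLayer];
// Start the flow of data from the camera to the preview layer.
[session startRunning];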

Related

How can I use AVCaptureVideoDataOutput with low resolution preview profile and take photos (while previewing) with high resolution

I want to use the AVFoundation framework for previewing and capturing photos.
I created an AVCaptureSession and added an AVCaptureVideoDataOutput and an AVCaptureStillImageOutput to this session. I set the preset to AVCaptureSessionPresetLow.
Now I want to take a photo at full resolution, but within captureStillImageAsynchronouslyFromConnection the resolution is the same as in my preview delegate.
Here is my Code:
AVCaptureSession* cameraSession = [[AVCaptureSession alloc] init];
cameraSession.sessionPreset = AVCaptureSessionPresetLow;
AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
[cameraSession addOutput:output];
AVCaptureStillImageOutput* cameraStillImage = [[AVCaptureStillImageOutput alloc] init];
[cameraSession addOutput:cameraStillImage];
// delegation
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[cameraSession startRunning];
Take Photo:
//[cameraSession beginConfiguration];
//[cameraSession setSessionPreset:AVCaptureSessionPresetPhoto]; <-- slow
//[cameraSession commitConfiguration];
[cameraStillImage captureStillImageAsynchronouslyFromConnection:photoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
...
}];
I tried changing the preset to AVCaptureSessionPresetPhoto just before capturing the image, but this is very slow (it takes 2-3 seconds to change the preset), and I do not want such a big delay.
How can I do this? Thanks.

Live camera in UIImageView

Does anybody know how I can get a live camera feed into a UIImageView?
I have a custom UI where I need to show the camera feed (front-facing camera), so I cannot use UIImagePickerController.
You need to create a capture session, and start it running. Once that's done you can add the layer from the capture session to your view:
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    _captureSession = [[AVCaptureSession alloc] init];
    _captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    [_captureSession addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [_captureSession addOutput:output];

    // Configure your output.
    // If you want to subsequently use the frame data, implement the sample buffer delegate.
    dispatch_queue_t queue = dispatch_queue_create("myCameraOutputQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
}
Having done that, you can create a preview layer as follows:
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
[_captureSession startRunning];
And then add the preview layer to your view:
[_myView.layer addSublayer:_previewLayer];
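You would typically also set _previewLayer.frame = _myView.bounds before adding it. If you specifically need the frames as UIImages in a UIImageView, rather than just showing the preview layer, a minimal sketch of the sample buffer delegate is below. Assumptions: the output's videoSettings are set to deliver kCVPixelFormatType_32BGRA pixel buffers, _myImageView is a hypothetical UIImageView, and for the front-facing camera you would pick the AVCaptureDevice whose position is AVCaptureDevicePositionFront instead of the default device.
// Assumes the output was configured with:
// [output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
//                                                      forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Wrap the BGRA pixel buffer in a bitmap context and make a CGImage from it.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGImageRelease(quartzImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // UIKit must be touched on the main thread; the delegate runs on myCameraOutputQueue.
    dispatch_async(dispatch_get_main_queue(), ^{
        _myImageView.image = image;  // _myImageView is a hypothetical UIImageView outlet
    });
}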

AVFoundation Camera Preview Wrong Orientation

I'm making a custom camera with a small preview area on the iPad; however, the stream in that preview is rotated clockwise.
I have looked at both AVCam demo and the SquareCam demo on Apple and I don't see a solution in either. All of the AVFoundation orientation threads on StackOverflow are talking specifically about output orientation, not input orientation.
Here is the session code I'm using:
AVCaptureDevice *frontalCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:frontalCamera error:&error];
[_session addInput:input];
_output = [[AVCaptureStillImageOutput alloc] init];
[_output setOutputSettings:[[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil]];
[_session addOutput:_output];
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
previewLayer.frame = self.imageViewCamera.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.imageViewCamera.layer addSublayer:previewLayer];
_session.sessionPreset = AVCaptureSessionPreset640x480;
[_session commitConfiguration];
[_session startRunning];
Any help would be much appreciated!
The Apple docs for iOS 6 say to use videoOrientation (AVCaptureConnection) instead.
The layer’s orientation. (Deprecated in iOS 6.0. Use videoOrientation
(AVCaptureConnection) instead.)
so use (where previewLayer is your AVCaptureVideoPreviewLayer instance):
[[previewLayer connection] setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
or
previewLayer.connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
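A minimal sketch of putting that together with the previewLayer from the question (assumptions: iOS 6 or later, where the preview layer exposes its connection; the check guards against connections that don't support setting an orientation):
// Rotate the preview (not the captured output) via the layer's connection.
AVCaptureConnection *previewConnection = previewLayer.connection;
if ([previewConnection isVideoOrientationSupported])
{
    previewConnection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
}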
The orientation property has been deprecated, but you can still use it to rotate your previewLayer counter-clockwise:
previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight;
I'm not sure what the non-deprecated solution is though.

Getting a still image from the video output on the iPhone?

I am writing an application to show stats on the light conditions as seen by the iPhone camera. I take an image every second and then perform calculations on it.
To capture an image, I am using the following method:
- (void)captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        latestImage = [[UIImage alloc] initWithData:imageData];
    }];
}
However, the captureStillImageAsynchronouslyFromConnection:... method causes the 'shutter' sound to be played by the phone, which is no good for my application, as it will be capturing images constantly.
I have read that it is not possible to disable this sound effect. Instead, I want to capture frames from the phone's video input:
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
and hopefully turn these into UIImage objects.
How would I achieve this? I don't know much about how the AVFoundation stuff works - I downloaded some example code and modified it for my purposes.
Don't use a still camera for this. Instead, grab from the video camera of the device and process the data contained within the pixel buffer you get in response to being an AVCaptureVideoDataOutputSampleBufferDelegate.
You can set up a video connection using code like the following:
// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput])
{
    [captureSession addInput:videoInput];
}

// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
You'll then need to process these frames in a delegate method that looks like the following:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    // Process pixel buffer bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
The raw pixel bytes for your BGRA image will be contained within the array starting at CVPixelBufferGetBaseAddress(cameraFrame). You can iterate over those to obtain your desired values.
However, you'll find that any operation performed over the entire image on the CPU will be a little slow. You can use Accelerate to help with an average color operation, like you want here. I've used vDSP_meanv() in the past to average luminance values, once you have those in an array. For something like that, you might be best served to grab YUV planar data from the camera instead of the BGRA values I pull down here.
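As a rough illustration of that Accelerate approach (assumptions: the 32BGRA configuration from the code above, and using the green channel as a crude stand-in for luminance; a proper luminance average would work on Y-plane data from a YUV format instead):
// Requires: #import <Accelerate/Accelerate.h>
// This goes inside -captureOutput:didOutputSampleBuffer:fromConnection: above,
// between the lock and unlock calls on the pixel buffer.
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cameraFrame);
size_t width = CVPixelBufferGetWidth(cameraFrame);
size_t height = CVPixelBufferGetHeight(cameraFrame);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);

// Convert the green channel of each BGRA pixel to float (stride 4 skips the other
// bytes; the +1 offset selects the green byte), then average the whole array at once.
float *greenChannel = (float *)malloc(width * height * sizeof(float));
for (size_t row = 0; row < height; row++)
{
    vDSP_vfltu8(baseAddress + row * bytesPerRow + 1, 4, greenChannel + row * width, 1, width);
}
float meanBrightness = 0.0f;
vDSP_meanv(greenChannel, 1, &meanBrightness, width * height);
free(greenChannel);
NSLog(@"Approximate mean brightness (0-255): %f", meanBrightness);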
I've also written an open source framework to process video using OpenGL ES, although I don't yet have whole-image reduction operations in there like you'd need for the kind of image analysis here. My histogram generator is probably the closest I have to what you're trying to do.

Getting exposure values from camera on iPhone OS 4.0

Exposure values from the camera can be acquired when you take a picture (without saving it to Saved Photos). A light meter application on the iPhone does this, probably by using some private API.
That application does it on the iPhone 3GS only, so I guess it may be somehow related to the EXIF data which is populated with this information when the image is created.
This all applies to 3GS.
Has anything changed with iPhone OS 4.0?
Is there a regular way to get these values now?
Does anyone have a working code example for taking these camera/photo setting values?
Thank you
If you want realtime* exposure information, you can capture video using an AVCaptureVideoDataOutput. Each frame's CMSampleBuffer is full of interesting data describing the current state of the camera.
*up to 30 fps
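As a rough sketch of that approach (assumptions: this mirrors the CMGetAttachment pattern from the still-image answer further down; whether an EXIF dictionary is attached to video frames, and which keys it contains, varies by device and OS version):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Look for EXIF metadata attached to this video frame.
    CFDictionaryRef exifAttachments = CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
    if (exifAttachments)
    {
        NSLog(@"Frame EXIF: %@", exifAttachments);
    }
}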
With AVFoundation in iOS 4.0 you can mess with exposure; refer specifically to AVCaptureDevice (see the AVCaptureDevice class reference). I'm not sure if it's exactly what you want, but you can look around AVFoundation and probably find some useful stuff.
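For example, a minimal sketch of what AVCaptureDevice does expose for exposure control (assumption: device is the AVCaptureDevice already added to your session; this adjusts the exposure mode rather than reading back numeric exposure values):
NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    // Only switch to modes the hardware reports as supported.
    if ([device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure])
    {
        device.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    }
    [device unlockForConfiguration];
}
else
{
    NSLog(@"Could not lock the device for configuration: %@", error);
}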
I think I finally found the lead to the real EXIF data. It'll be a while before I have actual code to post, but I figured this should be publicized in the meantime.
Google captureStillImageAsynchronouslyFromConnection. It's a method of AVCaptureStillImageOutput, and the following is an excerpt from the (long sought for) documentation:
imageDataSampleBuffer -
The data that was captured.
The buffer attachments may contain metadata appropriate to the image data format. For example, a buffer containing JPEG data may carry a kCGImagePropertyExifDictionary as an attachment. See ImageIO/CGImageProperties.h for a list of keys and value types.
For an example of working with AVCaptureStillImageOutput see WWDC 2010 sample code, under AVCam.
Peace,
O.
Here is the complete solution. Don't forget to import the appropriate frameworks and headers.
In the exifAttachments variable in the captureNow method you'll find all the data you are looking for.
#import <AVFoundation/AVFoundation.h>
#import <ImageIO/CGImageProperties.h>
AVCaptureStillImageOutput *stillImageOutput;
AVCaptureSession *session;
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                   completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments)
        {
            // Do something with the attachments.
            NSLog(@"attachments: %@", exifAttachments);
        }
        else
        {
            NSLog(@"no attachments");
        }

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
    }];
}
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. Here we specify a low resolution (352x288)
    // for the chosen device.
    session.sessionPreset = AVCaptureSessionPreset352x288;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    [device lockForConfiguration:nil];
    device.whiteBalanceMode = AVCaptureWhiteBalanceModeLocked;
    device.focusMode = AVCaptureFocusModeLocked;
    [device unlockForConfiguration];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    stillImageOutput = [AVCaptureStillImageOutput new];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    if ([session canAddOutput:stillImageOutput])
        [session addOutput:stillImageOutput];

    // Start the session running to start the flow of data
    [session startRunning];

    [self captureNow];
}