I'm trying to build a replacement for UIImagePickerController, using AVCaptureSession with AVCaptureDeviceInput and AVCaptureStillImageOutput, as input/output respectively.
To preview the camera stream I'm using AVCaptureVideoPreviewLayer.
It's now working correctly for capturing and storing photos just like the default camera.
However, I found 3 problems I was unable to solve:
1. Photos captured don't have the same quality that the default camera provides.
2. The viewing/capture angle is narrower, just like when using video capture in the default camera app.
3. There is no way to control camera-specific options such as flash.
Is there any way to get to the level of UIImagePickerController using a more customizable approach (e.g. AVFoundation or anything else)?
Check out "Session 409 - Using the Camera with AV Foundation" in the WWDC 2010 videos. Based on the video, it looks like you can resolve all three of your issues with AVFoundation.
Hope this helps!
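For instance, here is a minimal sketch (not taken from the session itself; session and device are assumed to be your existing AVCaptureSession and back-camera AVCaptureDevice):
// The photo preset gives full-resolution stills and the wider field of view of the stills camera,
// which addresses the quality and capture-angle problems.
if ([session canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    session.sessionPreset = AVCaptureSessionPresetPhoto;
}
// Camera-specific options such as flash are configured on the device inside a configuration lock.
NSError *error = nil;
if ([device hasFlash] && [device lockForConfiguration:&error]) {
    device.flashMode = AVCaptureFlashModeAuto; // or AVCaptureFlashModeOn / AVCaptureFlashModeOff
    [device unlockForConfiguration];
}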
I'm building an app that lets the user record a video in-app by pressing a button on the main screen. I don't want the user to be taken to the Photos app, because the video will only be viewable inside my app (maximum of 15 seconds), and I can't quite get it working. Does anyone have code to do this? A good example of what I want the camera to do is the camera in the app Cinemagram. Thanks for any help.
If you plan on saving the movie to the user's photo library, then you can use UIImagePickerController. In particular, you should read the guide that accompanies the class.
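For instance, a minimal configuration for that route might look like this (a sketch only; it assumes self adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate):
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeMovie
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.mediaTypes = @[ (NSString *)kUTTypeMovie ]; // video capture only
picker.videoMaximumDuration = 15.0;                // enforces the 15-second cap
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];
The recorded movie's URL then arrives in imagePickerController:didFinishPickingMediaWithInfo: under the UIImagePickerControllerMediaURL key.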
However, if you only want the video to be temporary, then you will probably want to use AVFoundation. You would need to configure an AVCaptureSession with an AVCaptureMovieFileOutput to write the video to disk. Then, when you are ready to play the video, create an AVURLAsset with the file URL you just wrote, use that to create an AVPlayer, and add an AVPlayerLayer backed by that player to your view to display the video.
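A rough sketch of that flow (assuming session is an already configured AVCaptureSession with camera and microphone inputs, and self conforms to AVCaptureFileOutputRecordingDelegate):
#import <AVFoundation/AVFoundation.h>
// Recording side: write at most 15 seconds of video to a temporary file.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
movieOutput.maxRecordedDuration = CMTimeMakeWithSeconds(15, 600);
if ([session canAddOutput:movieOutput]) {
    [session addOutput:movieOutput];
}
NSURL *fileURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"clip.mov"]];
[movieOutput startRecordingToOutputFileURL:fileURL recordingDelegate:self];
// Playback side: wrap the written file in an asset, a player item, a player, and a player layer.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];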
Either way, I would recommend studying the examples that Apple provides.
AVCam and AVPlayerDemo should be more than enough to get you started (especially the AVCam example project).
I've been using UIImagePickerController to access the camera for photo and video capture, and I wanted to apply filters to those two sources. I succeeded in filtering the photos I had taken, but I'm having trouble finding a solution for the rest. All I need is access to the raw image data, i.e. the live image feed the camera is showing, so I can apply the filter and then display the filtered frames instead. Any help or advice will be appreciated.
UIImagePickerController doesn't give you low-level access to the camera buffer.
You should set up an AVCaptureSession with an AVCaptureVideoDataOutput and use its sample buffer delegate to process each CMSampleBufferRef (a minimal sketch follows the links below).
Take a look at the AVCam and SquareCam demos from Apple; they give a good introduction to video capture.
http://developer.apple.com/library/ios/#samplecode/AVCam/Introduction/Intro.html
http://developer.apple.com/library/ios/#samplecode/SquareCam/Introduction/Intro.html
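A minimal sketch of that setup (self is assumed to conform to AVCaptureVideoDataOutputSampleBufferDelegate; names like startCapture, _session, _ciContext and _previewImageView are placeholders, and the sepia filter is just an example):
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
// Build a session that delivers BGRA frames to the sample buffer delegate.
- (void)startCapture
{
    _session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:NULL];
    if (input) [_session addInput:input];
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL)];
    if ([_session canAddOutput:output]) [_session addOutput:output];
    _ciContext = [CIContext contextWithOptions:nil];
    [_session startRunning];
}
// Delegate callback: wrap each frame in a CIImage, filter it, and display the result.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:frame forKey:kCIInputImageKey];
    CIImage *filtered = sepia.outputImage;
    CGImageRef cgImage = [_ciContext createCGImage:filtered fromRect:filtered.extent];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    dispatch_async(dispatch_get_main_queue(), ^{
        _previewImageView.image = image; // UIKit work stays on the main thread
    });
}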
An easier solution is to use https://github.com/BradLarson/GPUImage
Thanks, Adam
I am a newbie trying to capture camera video images using AVFoundation, and I want to render the captured frames without using AVCaptureVideoPreviewLayer. I want a slider control to be able to slow down or speed up the rate at which the camera images are displayed.
Using other people's code as examples, I can capture images and, with an NSTimer whose interval my slider control adjusts on the fly, decide how often to display them, but I can't convert the image into something I can display. I want to move these images into a UIView or UIImageView and render them in the timer's fire function.
I have looked at Apple's AVCam app (which uses an AVCaptureVideoPreviewLayer), but because it has its own built-in AVCaptureSession, I can't adjust how often the images are displayed. (You can adjust the preview layer's frame rate, but not on the fly.)
I have looked at the AVFoundation Programming Guide, which talks about AVAssets, AVPlayer, etc., but I can't see how a camera image can be turned into an AVAsset. The guide, and the other demos that show how to define an AVAsset, only offer the choice of creating the asset from HTTP stream data or from a URL pointing to an existing file. I can't figure out how to turn my captured UIImage into an AVAsset, in which case I guess I could use an AVPlayer, AVPlayerItems and AVAssetTracks to show the image, with an observeValueForKeyPath function checking the status and then calling [myPlayer play]. (I also studied WWDC session 405, "Exploring AV Foundation," to see how that is done.)
I have tried code similar to WWDC session 409, "Using the Camera on iPhone." Like that session's myCone demo, I can set up the device, the input, the capture session, the output, and a callback that receives a CMSampleBuffer, and I can collect UIImages and size them, etc. At this point I want to send that image to a UIView or UIImageView. Session 409 just talks about doing it with CFShow(sampleBuffer). This wasn't explained, and I guess it just assumes a knowledge of Core Foundation I don't yet have. I think I am turning the captured output in the sample buffer into a UIImage, but I can't figure out how to render it. I created an IBOutlet UIImageView in my nib file, but when I try to stuff the image into that view, nothing gets displayed. Do I need an AVPlayerLayer?
I have looked at UIImagePickerController as an alternative way of controlling how often I display captured camera images, and I don't see that I can change the display interval on the fly with that controller either.
So, as you can see, I am learning this stuff from the Apple developer forums, the documentation, the WWDC videos, and various websites such as stackoverflow.com, but I have yet to see any example of getting camera frames onto the screen without using AVCaptureVideoPreviewLayer, UIImagePickerController, or an AVAsset that isn't already a file or an HTTP stream. Can anybody make a suggestion? Thanks in advance.
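(For reference, the conversion this question is circling around is the standard Core Video route from Apple's documentation, assuming the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA; property names such as latestFrame and imageView below are only placeholders.)
// Convert one BGRA sample buffer into a UIImage.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return image;
}
// In the capture delegate, stash the most recent frame: self.latestFrame = [self imageFromSampleBuffer:sampleBuffer];
// In the timer's fire method (whose interval the slider sets), on the main thread: self.imageView.image = self.latestFrame;
// No AVAsset, AVPlayer or AVPlayerLayer is needed for this.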
Please help me with my question.
Is there any way to get an image from the camera without UIImagePickerController?
I need to render the current camera image into an image view in my view and update it on a timer.
Maybe AVCaptureStillImageOutput? I didn't find any examples.
Any ideas?
Yes, you can do it easily using the AVCamCaptureManager and AVCamRecorder classes. Apple has a demo project named AVCam, built around them, on its developer site. In simple terms, when you tap to open the camera, it calls the classes and methods responsible for opening the iPhone's camera and recording video or capturing audio. These are the same classes that UIImagePickerController calls.
I hope it helps.
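To address the AVCaptureStillImageOutput idea from the question directly, a rough sketch (assuming session is a running AVCaptureSession with a camera input, and imageView is your own image view):
AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
stillOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
if ([session canAddOutput:stillOutput]) [session addOutput:stillOutput];
// Call this from your timer to grab the current camera image:
AVCaptureConnection *connection = [stillOutput connectionWithMediaType:AVMediaTypeVideo];
[stillOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (!imageDataSampleBuffer) return;
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [UIImage imageWithData:jpegData];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
}];
For a fast-firing timer, an AVCaptureVideoDataOutput with its sample buffer delegate would likely be cheaper, since still image capture has per-shot overhead.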
My app is currently using AVFoundation to take the raw camera data from the rear camera of an iPhone and display it on an AVCaptureVideoPreviewLayer in real time.
My goal is to conditionally apply simple image filters to the preview layer. The images aren't saved, so I do not need to capture the output. For example, I would like to toggle a setting that converts the video coming in on the preview layer to black and white.
I found a question here that seems to accomplish something similar by capturing the individual video frames in a buffer, applying the desired transformations, and then displaying each frame as a UIImage. For several reasons, this seems like overkill for my project, and I'd like to avoid any performance issues it may cause.
Is this the only way to accomplish my goal?
As I mentioned, I am not looking to capture any of the AVCaptureSession's video, merely preview it.
Probably the most performant way of handling this would be to use OpenGL ES for filtering and display of these video frames. You won't be able to do much with an AVCaptureVideoPreviewLayer directly, aside from adjusting its opacity when overlaid with another view or layer.
I have a sample application here where I grab frames from the camera and apply OpenGL ES 2.0 shaders to process the video in realtime for display. In this application (explained in detail here), I was using color-based filtering to track objects in the camera view, but others have modified this code to do some neat video processing effects. All GPU-based filters in this application that display to the screen run at 60 FPS on my iPhone 4.
The only iOS device out there that supports video, yet doesn't have an OpenGL ES 2.0 capable GPU, is the iPhone 3G. If you need to target that device as well, you might be able to take the base code for video capture and generation of OpenGL ES textures, and then use the filter code from Apple's GLImageProcessing sample application. That application is built around OpenGL ES 1.1, support for which is present on all iOS devices.
However, I highly encourage looking at the use of OpenGL ES 2.0 for this, because you can pull off many more kinds of effects using shaders than you can with the fixed-function OpenGL ES 1.1 pipeline.
(Edit: 2/13/2012) As an update on the above, I've now created an open source framework called GPUImage that encapsulates this kind of custom image filtering. It also handles capturing video and displaying it to the screen after being filtered, requiring as few as six lines of code to set all of this up. For more on the framework, you can read my more detailed announcement.
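For illustration, the few lines referred to look roughly like this, following the framework's README (the sepia filter is just an example):
#import "GPUImage.h"
// Camera -> filter -> on-screen view, all handled by the framework.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];
[videoCamera addTarget:filter];
[filter addTarget:filteredView];
[videoCamera startCameraCapture];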
I would recommend looking at the RosyWriter example from the iOS Developer Library. Brad Larson's GPUImage library is pretty awesome, but it seems a little overkill for this question.
If you are just interested in adding OpenGL shaders (a.k.a. filters) to an AVCaptureVideoPreviewLayer, the workflow is to send the output of the capture session to an OpenGL view for rendering instead.
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(_renderer.inputPixelFormat) };
[videoOut setSampleBufferDelegate:self queue:_videoDataOutputQueue];
Then, in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, send the sample buffer to the OpenGL renderer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
}
In the OpenGL renderer, attach sourcePixelBuffer to a texture, and then you can filter it in your OpenGL shaders. A shader is a program that runs on a per-pixel basis. The RosyWriter example also demonstrates filtering techniques other than OpenGL.
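A rough sketch of that texture-attachment step (assuming an OpenGL ES 2.0 EAGLContext in _oglContext and a BGRA sourcePixelBuffer; this is the general CVOpenGLESTextureCache pattern with illustrative names, not RosyWriter's exact code):
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
// One-time setup: a texture cache tied to your GL context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _oglContext, NULL, &textureCache);
// Per frame: expose the pixel buffer to the shaders as a GL texture.
CVOpenGLESTextureRef texture = NULL;
size_t width = CVPixelBufferGetWidth(sourcePixelBuffer);
size_t height = CVPixelBufferGetHeight(sourcePixelBuffer);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, sourcePixelBuffer, NULL,
                                             GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Draw a full-screen quad with a fragment shader sampling this texture, then release the frame:
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);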
Apple's example AVCamFilter does it all