Capture video without displaying the actual video feed - iPhone

So I have an application that can currently capture video with the front-facing iPhone camera and then do some processing on the video feed in real time. What I'm trying to do, however, is make this process run in the background and put other controls onscreen. So for example, say I'd like to run the camera and process the image feed, but I want the user to see a black screen with some buttons on it. Any ideas on how to do this?

Just so we get the terminology right: by "in the background" you mean running the camera capture while your application is in the foreground, but not displaying the actual video feed. This is possible, but I want to make clear that if you move your whole application into the background, you will no longer have access to the camera.
There are a few ways to do this, but the one that I've spent the most time with is grabbing frames of video (or photos) via AV Foundation. Using an AVCaptureDevice and AVCaptureSession, you can grab the frames of video and route them to an encoder for saving to disk or for processing using your own custom code. None of this requires the camera feed to be displayed onscreen, so you can put up whatever interface you like and do this video recording or photo capture without any onscreen indication.
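Roughly, that setup looks like the sketch below. This is only a sketch: the class, queue name, and preset are placeholders, and error handling is trimmed. No preview layer is ever created, so nothing from the camera appears on screen.

    #import <AVFoundation/AVFoundation.h>

    @interface HiddenCameraCapture : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
    @property (nonatomic, strong) AVCaptureSession *session;
    @end

    @implementation HiddenCameraCapture

    - (void)startCapture
    {
        self.session = [[AVCaptureSession alloc] init];
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Pick the front-facing camera.
        AVCaptureDevice *frontCamera = nil;
        for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
            if (device.position == AVCaptureDevicePositionFront) {
                frontCamera = device;
            }
        }
        if (!frontCamera) return;

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
        if (input) {
            [self.session addInput:input];
        }

        // Route frames to a data output instead of a preview layer.
        AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
        output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
        [output setSampleBufferDelegate:self
                                  queue:dispatch_queue_create("com.example.videoqueue", DISPATCH_QUEUE_SERIAL)];
        [self.session addOutput:output];

        [self.session startRunning];
    }

    // Called for every captured frame; process or encode it here without ever drawing it.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // ... run your own processing or encoding on pixelBuffer ...
    }

    @end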
I would caution that you should make it explicit to your users what you are doing, so that you do not run the risk of violating someone's privacy. Apple does not react kindly to those who do this (for good reason).
I encapsulate a lot of this within my open source GPUImage video and photo processing framework, so you could look at the code for the GPUImageVideoCamera class there to see how I configure the capture inputs. I hand the video frames off to OpenGL ES for the application of filters and other processing operations, but you could ignore that portion of it if you just wanted to do your own encoding or processing.
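For reference, a capture-without-display setup with that framework might look something like this. I'm assuming GPUImage's usual initializer and an arbitrary filter; with no GPUImageView attached as a target, nothing from the camera is drawn on screen.

    #import "GPUImage.h"

    // Front camera feeding a filter chain; no GPUImageView target is attached,
    // so the camera feed never appears onscreen.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionFront];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
    [videoCamera addTarget:filter];

    [videoCamera startCameraCapture];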

Here's some example code from Apple's documentation:
http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html
It also shows how to customize the camera interface.

Related

iOS: Core Animation to video

I'm making a photo gallery app. I use the photos as slides and show them one by one with transitions made by Core Animation.
Now I need to turn this process into a video so that I can share it on YouTube, but I don't know where to start or which references I should read.
To make a video on the iPhone you can use AVAssetWriter.
Also have a look at this post. You can capture your animation as images and make the video from them.
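A rough sketch of that approach, assuming your slides are already rendered as UIImages. PixelBufferFromCGImage is a placeholder for your own CGImage-to-CVPixelBuffer conversion, the frame duration is arbitrary, and error handling is trimmed.

    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    // Hypothetical helper (not shown): converts a CGImage into a CVPixelBuffer of the given size.
    static CVPixelBufferRef PixelBufferFromCGImage(CGImageRef image, CGSize size);

    // Appends one pixel buffer per slide image to a QuickTime movie at outputURL.
    static void WriteImagesToMovie(NSArray *images, NSURL *outputURL, CGSize size)
    {
        AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                         fileType:AVFileTypeQuickTimeMovie
                                                            error:nil];
        NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                    AVVideoWidthKey  : @(size.width),
                                    AVVideoHeightKey : @(size.height) };
        AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                       outputSettings:settings];
        AVAssetWriterInputPixelBufferAdaptor *adaptor =
            [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                             sourcePixelBufferAttributes:nil];
        [writer addInput:input];
        [writer startWriting];
        [writer startSessionAtSourceTime:kCMTimeZero];

        CMTime frameDuration = CMTimeMake(1, 30);   // 1/30 s per appended frame (arbitrary)
        CMTime presentationTime = kCMTimeZero;
        for (UIImage *image in images) {
            CVPixelBufferRef buffer = PixelBufferFromCGImage(image.CGImage, size);
            // Sketch only: production code should use requestMediaDataWhenReadyOnQueue:usingBlock:
            while (!input.isReadyForMoreMediaData) { }
            [adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
            CVPixelBufferRelease(buffer);
            presentationTime = CMTimeAdd(presentationTime, frameDuration);
        }

        [input markAsFinished];
        [writer finishWritingWithCompletionHandler:^{ /* movie is ready at outputURL */ }];
    }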
The AVAsset family of Apple APIs is what's used to do this sort of thing. But be warned, it is not easy to use. If you would like to check out a working example Xcode project, see AVDecodeEncode. This example shows decoding from h.264 and then encoding the result back to h.264. You could write all your own code to do this, but just be aware that it is not for the faint of heart.

How do I pause video at the exact moment I capture a photo?

I am using AVFoundation to display video in my UIView via an AVCaptureVideoPreviewLayer.
I then use AVCaptureStillImageOutput's -captureStillImageAsynchronouslyFromConnection:completionHandler: to capture a still image from the video with the AVCaptureSessionPresetPhoto preset.
I am freezing the video using AVCaptureSession's -stopRunning in the completion handler mentioned above. However, that is too late: the video continues running while the still image is taken, so the freeze happens a second or two later, and when I display the image there is a jump.
How can I freeze the video at the exact moment the photo is taken?
Almost a year later... Your approach is backwards. Instead of trying to pause the video at the precise moment the image is captured, why not pause the video and then capture that paused frame? To the user it makes no difference, and as the developer you don't have to worry about exact precision.
To reiterate: if you pause the video, flash a white overlay, and play a shutter click, the user will think you have just captured that frame whether you actually did or not. In effect, pausing the video is the same as capturing an image without saving it.
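A sketch of that idea, assuming existing previewLayer (AVCaptureVideoPreviewLayer) and stillImageOutput (AVCaptureStillImageOutput) properties on the same view controller; disabling the preview layer's connection is a commonly suggested way to freeze it on the last frame, but treat this as an approach to test rather than a guarantee.

    // Freeze the on-screen preview first, then grab the still image.
    - (void)takePhoto
    {
        self.previewLayer.connection.enabled = NO;   // preview stops updating; last frame stays visible

        AVCaptureConnection *connection =
            [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
        [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                           completionHandler:
         ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
             NSData *jpegData =
                 [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
             UIImage *stillImage = [UIImage imageWithData:jpegData];
             // ... show stillImage, flash white, play the shutter sound ...
             // Re-enable whenever you want the live preview back:
             // self.previewLayer.connection.enabled = YES;
         }];
    }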

How do I create an AVAsset with a UIImage captured from a camera?

I am a newbie trying to capture camera video images using AVFoundation and want to render the captured frames without using AVCaptureVideoPreviewLayer. I want a slider control to be able to slow down or speed up the rate at which camera images are displayed.
Using other people's code as examples, I can capture images and, using an NSTimer whose interval my slider control sets on the fly, decide how often to display them, but I can't convert the image into something I can display. I want to move these images into a UIView or UIImageView and render them in the timer's fire function.
I have looked at Apple's AVCam app (which uses an AVCaptureVideoPreviewLayer), but because it has its own built-in AVCaptureSession, I can't adjust how often the images are displayed. (Well, you can adjust the preview layer's frame rate, but that can't be done on the fly.)
I have looked at the AVFoundation Programming Guide, which talks about AVAssets and AVPlayer, etc., but I can't see how a camera image can be turned into an AVAsset. The guide, and the other demos I've seen that show how to define an AVAsset, only give me the choice of creating the asset from HTTP stream data or from a URL referring to an existing file. I can't figure out how to make my captured UIImage into an AVAsset, in which case I guess I could use an AVPlayer, AVPlayerItems and AVAssetTracks to show the image, with an observeValueForKeyPath: function checking status and calling [myPlayer play]. (I also studied the WWDC session 405, "Exploring AV Foundation", to see how that is done.)
I have tried code similar to WWDC Session 409, "Using the Camera on iPhone." Like that myCone demo, I can set up the device, the input, the capture session, the output, and a callback that receives a CMSampleBuffer, and I can collect UIImages and size them, etc. At this point I want to send that image to a UIView or UIImageView. Session 409 just talks about doing it with CFShow(sampleBuffer). This wasn't explained, and I guess it assumes a knowledge of Core Foundation I don't yet have. I think I am turning the captured output in the sample buffer into a UIImage, but I can't figure out how to render it. I created an IBOutlet UIImageView in my nib file, but when I try to stuff the image into that view, nothing gets displayed. Do I need an AVPlayerLayer?
I have looked at UIImagePickerController as an alternate way of controlling how often I display captured camera images, but I don't see that I can change the display timing on the fly with that controller either.
So, as you can see, I am learning this stuff from the Apple developer forums, the documentation, the WWDC videos, and various websites such as stackoverflow.com, but I have yet to see any example of going from camera to screen without using AVCaptureVideoPreviewLayer, UIImagePickerController, or an AVAsset that isn't already a file or HTTP stream.
Can anybody make a suggestion? Thanks in advance.
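For anyone hitting the same wall: the piece that usually goes missing here is converting the CMSampleBuffer to a UIImage and then updating the UIImageView on the main thread. A sketch along the lines of Apple's widely circulated sample-buffer-to-image code, assuming the video data output is configured for kCVPixelFormatType_32BGRA and imageView is the IBOutlet mentioned above:

    // AVCaptureVideoDataOutputSampleBufferDelegate callback.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        void *baseAddress  = CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width       = CVPixelBufferGetWidth(imageBuffer);
        size_t height      = CVPixelBufferGetHeight(imageBuffer);

        // Wrap the BGRA pixel data in a CGImage, then a UIImage.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                     colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        UIImage *image = [UIImage imageWithCGImage:cgImage];

        CGImageRelease(cgImage);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        // UIKit must be touched on the main thread, otherwise "nothing gets displayed".
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = image;
        });
    }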

iPhone Dev: Process each frame of a live recording movie for AR app?

I'm doing research into AR on the iPhone and am trying to figure out how people get at each frame of video. I want to do AR using computer vision (OpenCV): basically, I will have a pattern on a piece of paper that I will find with OpenCV and then place a graphic on top of the pattern.
I know about UIImagePickerController for capturing movies, but am unsure how you would go about getting at each frame.
Can someone point me in the right direction?
UIImagePickerController is the means for displaying a camera view and taking single pictures with a camera-like front end. It's not what you're looking for.
Instead you need to look into AVFoundation, particularly the classes surrounding AVCaptureSession. You'll want to acquire a meaningful AVCaptureDevice (which can be the front or back camera on the iPhone 4 and current iPod Touch), create an AVCaptureDeviceInput that references it and add that as an input to an AVCaptureSession. Then just create an AVCaptureVideoDataOutput and set it up with a meaningful delegate and a Grand Central Dispatch dispatch queue.
When you start the session going, you'll receive delegate callbacks on the queue you created providing CMSampleBufferRefs, from which you can pull a CVImageBufferRef and hence the pixel data.
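For illustration, that delegate callback might look roughly like this, assuming the output was configured for kCVPixelFormatType_32BGRA; the locked base address gives you the raw pixel data you would hand to OpenCV for marker detection.

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        unsigned char *pixels = CVPixelBufferGetBaseAddress(pixelBuffer);
        size_t width          = CVPixelBufferGetWidth(pixelBuffer);
        size_t height         = CVPixelBufferGetHeight(pixelBuffer);
        size_t bytesPerRow    = CVPixelBufferGetBytesPerRow(pixelBuffer);

        // ... hand pixels/width/height/bytesPerRow to your OpenCV marker-detection code ...

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    }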

How to record a video automatically in an iPhone app without user interaction

I am working on an iPhone app that needs to record a video automatically.
I used the MobileCoreServices framework, and with that I can bring up video mode; tapping the record button then starts capturing a video. But I want this to happen automatically: I should be able to record a video without tapping the record button, i.e. recording should start as soon as video mode comes up.
Could anyone help?
You can look at UIImagePickerController's startVideoCapture method, which is used to start taking video from the camera. It is meant to be used when you aren't using the standard camera controls and provide an overlay view instead; see the UIImagePickerController class reference. If this is not enough for you, you might want to look into the AVFoundation framework, which gives you a lot more control over the video capture process. Hope that helps.
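A minimal sketch of that first suggestion, assuming the picker is presented from a view controller that acts as its delegate and supplies its own overlay view (myOverlayView is a placeholder); recording begins as soon as the picker appears, without the user tapping anything.

    #import <UIKit/UIKit.h>
    #import <MobileCoreServices/MobileCoreServices.h>

    - (void)presentAutoRecordingCamera
    {
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.sourceType        = UIImagePickerControllerSourceTypeCamera;
        picker.mediaTypes        = @[ (NSString *)kUTTypeMovie ];              // video only
        picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModeVideo;
        picker.showsCameraControls = NO;                                       // required when using an overlay
        picker.cameraOverlayView = self.myOverlayView;                         // your own buttons, if any
        picker.delegate = self;  // conform to UIImagePickerControllerDelegate, UINavigationControllerDelegate

        [self presentViewController:picker animated:NO completion:^{
            // Start recording as soon as the camera UI is on screen.
            [picker startVideoCapture];
        }];
    }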