I am trying to develop an iPhone app that processes/filters and records video.
I have two sample apps that have aspects of what I need and am trying to combine them.
1. AVCamDemo from the WWDC10 sample code package (Apple Developer ID required), which deals with capturing/recording video.
2. Brad Larson's ColorTracking sample app, referenced here, which deals with live processing of video using OpenGL ES.
I get stuck when trying to combine the two.
What I have been trying to do is to use AVCaptureVideoDataOutput and the AVCaptureVideoDataOutputSampleBufferDelegate protocol to process/filter the video frames through OpenGL ES (as in 2), and at the same time somehow use AVCaptureMovieFileOutput to record the processed video (as in 1).
Is this approach possible? If so, how would I need to set up the connections within the AVCaptureSession?
Or do I need to use the AVCaptureVideoDataOutputSampleBufferDelegate protocol to process/filter the video and then recombine the individual frames back into a movie, without using AVCaptureMovieFileOutput to save the movie file?
Any suggestions for the best approach to accomplish this are much appreciated!
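For reference, here's a minimal sketch of the capture side of the approach described above, assuming iOS 4.1+. As far as I can tell, AVCaptureMovieFileOutput only records the unprocessed camera feed, so the filtered frames would instead be appended to an AVAssetWriter; error handling and memory management are omitted:

```objc
// Session with a video data output whose frames can be filtered and then
// written with an AVAssetWriter (iOS 4.1+) instead of AVCaptureMovieFileOutput.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset640x480;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
[session addInput:input];

// Ask for BGRA so the frames are easy to hand to OpenGL ES.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
dataOutput.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
    forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[dataOutput setSampleBufferDelegate:self
                              queue:dispatch_queue_create("videoQueue", NULL)];
[session addOutput:dataOutput];
[session startRunning];

// In captureOutput:didOutputSampleBuffer:fromConnection:, filter each frame
// (e.g. through OpenGL ES), then append the result to an
// AVAssetWriterInputPixelBufferAdaptor attached to an AVAssetWriter.
```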
I am new to iPhone development. I need to capture video, and while it's being captured it should also be displayed on a server, something like live streaming.
Does anyone have an idea of where I should start for this functionality?
Thanks in advance.
Your question seems similar to this one:
Xcode ios: Streaming of video file while recording
First Half Solution
Using AVFoundation you can get video buffers/frames while recording; a minimal sketch follows.
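This sketch assumes the session's AVCaptureVideoDataOutput was configured for 32BGRA; the delegate callback hands you each frame as a CMSampleBuffer while the session runs:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Raw bytes of this frame; this is what you would compress and push to
    // your server. The buffer is uncompressed, so sending it as-is over a
    // network is usually impractical.
    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t length = CVPixelBufferGetBytesPerRow(pixelBuffer)
                  * CVPixelBufferGetHeight(pixelBuffer);
    NSData *frameData = [NSData dataWithBytes:baseAddress length:length];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```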
Second Half
But for uploading I didn't find a ready-made solution.
There is an input stream option in the iOS APIs, but it needs a file path; since the video hasn't been recorded to a file yet, there is no path to pass.
Edit 1
Here is the best example for AVFoundation, provided by Apple; you can start with that.
I recommend you to use Wowza (https://www.wowza.com); it has all the features, from live streaming to video on demand.
On iOS, is it possible to get the user's audio stream in a decompressed format? For example, an MP3 returned as WAV-style PCM data that can be used for audio analysis? I'm relatively new to the iOS platform, and I remember seeing that this wasn't possible in older iOS versions. I read that iOS 4 brought in some more advanced APIs, but I'm not sure where I can find documentation/samples for these.
If you don't mind using APIs available only in iOS 4.1 and above, you could try the AVAssetReader class and friends. In this similar question you have a full example of how to extract video frames; I would expect the same approach to work for audio, and the nice thing is that the reader deals with all the details of decompression. You can even do composition with AVComposition to merge several streams.
These classes are part of the AV Foundation framework, which allows not only reading but also creating your own content.
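A sketch of the audio case (iOS 4.1+); the file path is a placeholder, and the reader handles the MP3 decoding for you:

```objc
NSURL *url = [NSURL fileURLWithPath:@"/path/to/track.mp3"];  // hypothetical path
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Ask for uncompressed 16-bit interleaved linear PCM.
NSDictionary *pcmSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    nil];

AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                               outputSettings:pcmSettings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef buffer;
while ((buffer = [output copyNextSampleBuffer]) != NULL) {
    // Each buffer holds raw PCM samples ready for analysis.
    CFRelease(buffer);
}
```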
Apple has an OpenAL example at http://developer.apple.com/library/mac/#samplecode/OpenALExample/Introduction/Intro.html where Scene.m should interest you.
The Apple documentation has a picture in which the Core Audio framework clearly shows that it gives you MP3 output. It also states that you can access audio units directly if you need lower-level control.
The same Core Audio document also gives some information about using MIDI, if that may help you.
Edit:
You're in luck today.
In this example an audio file is loaded and fed into an AudioUnit graph. You could fairly easily write an AudioUnit of your own to put into this graph, one which analyzes the PCM stream as you see fit. You can even do the analysis in the callback function, although that's probably not a good idea because callbacks are encouraged to be as simple as possible.
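For illustration, a sketch of what such a tap could look like as a render-notify callback. The AnalyzeAudio name is mine, and it assumes 16-bit interleaved PCM; it computes a rough RMS level per buffer and nothing more, in keeping with the advice to keep callbacks simple:

```objc
static OSStatus AnalyzeAudio(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    // Only look at the buffer after the unit has rendered into it.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
        float sum = 0.0f;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            float s = samples[i] / 32768.0f;
            sum += s * s;
        }
        float rms = sqrtf(sum / inNumberFrames);
        // Hand rms off through a lock-free mechanism; avoid Objective-C
        // messaging and memory allocation inside a render callback.
    }
    return noErr;
}

// Registered on an existing unit in the graph with:
// AudioUnitAddRenderNotify(someUnit, AnalyzeAudio, userData);
```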
I have a question about video stream processing. Is it possible to access and modify a real-time video stream during recording (e.g. to add some text to the video)? I can do this in the preview by processing separate frames, but I'm looking for a tool that will let me store the video with my text in the video frames.
Probably there are already some libraries/tools available for this, but I haven't found any yet.
Try the GPUImage library; it can help you.
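A minimal GPUImage sketch of the camera -> filter -> recorded-movie pipeline; the filter choice and file name here are arbitrary, and for burning text into the frames GPUImage's GPUImageUIElement can blend a UIView over the camera feed:

```objc
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Any GPUImage filter can sit here; sepia is just a placeholder.
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
[camera addTarget:filter];

NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"processed.m4v"]];
GPUImageMovieWriter *writer =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                             size:CGSizeMake(480.0, 640.0)];
[filter addTarget:writer];

// Record the filtered frames (with audio) to disk.
camera.audioEncodingTarget = writer;
[camera startCameraCapture];
[writer startRecording];
```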
You should check the AVCam sample code by Apple. That might be a good starting point.
Is it possible to record video on an iPhone 3G with iOS 4.1 using AVFoundation?
I'm able to record audio and capture still images using AVFoundation, but I'm not able to record video.
If you look at the AVFoundation reference documentation, it states:

You should typically use the highest-level abstraction available that allows you to perform the tasks you want. For example:

- If you simply want to play movies, you can use the Media Player framework (MPMoviePlayerController or MPMoviePlayerViewController), or for web-based media you could use a UIWebView object.
- To record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).
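For the UIImagePickerController route the documentation mentions, a sketch would look roughly like this; note the availability check, which is exactly what fails on devices whose cameras don't support video recording:

```objc
#import <MobileCoreServices/MobileCoreServices.h>  // for kUTTypeMovie

if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
    NSArray *mediaTypes = [UIImagePickerController
        availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];

    // On devices without video capture, kUTTypeMovie won't be in this list.
    if ([mediaTypes containsObject:(NSString *)kUTTypeMovie]) {
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        picker.mediaTypes = [NSArray arrayWithObject:(NSString *)kUTTypeMovie];
        picker.delegate = self;  // implements UIImagePickerControllerDelegate
        [self presentModalViewController:picker animated:YES];  // iOS 4-era API
    }
}
```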
As long as the device has a camera that supports video capture, you should be able to record using AVFoundation; check out the AVCam demo code from WWDC 2010 to see how you can do that.
We have a requirement to 'read' an LED pulse lamp using the iPhone video camera. The LED lamp emits light based on some load conditions.
Is there any related iPhone API to help achieve this goal?
Thanks much.
You can use the AVFoundation framework to read and process the live video stream from the camera. WWDC 2010 Session 405 gives you a good overview of AVFoundation.
There are iOS AV APIs to get raw pixel bitmaps from the video camera(s). Detecting a specific image or brightness level within these raw bitmaps has to be done in your own code; a sketch follows.
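As a sketch of the brightness side, assuming an AVCaptureVideoDataOutput configured for kCVPixelFormatType_32BGRA; the sampled region and the idea of thresholding the result to detect pulses are assumptions:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // Average the green channel as a cheap brightness estimate, sampling
    // every 4th pixel of the center region where the LED is expected.
    uint64_t total = 0, count = 0;
    for (size_t y = height / 4; y < 3 * height / 4; y += 4) {
        for (size_t x = width / 4; x < 3 * width / 4; x += 4) {
            total += pixels[y * bytesPerRow + x * 4 + 1];  // G of BGRA
            count++;
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    float brightness = (float)total / count;  // 0-255; threshold to detect pulses
}
```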