Looking at the GLCameraRipple example, AVCaptureVideoDataOutput is set up so that a callback (captureOutput) is called whenever a new frame arrives from the iPhone camera.
However, if I put a sleep(1) at the beginning of the drawInRect method (which is used for the OpenGL drawing), this callback gets called only once per second instead of 30 times per second.
Can anyone tell me why the frame rate of the iPhone camera is linked to the frame rate of the OpenGL draw call?
Update: Steps to reproduce
Download the GLCameraRipple sample from here: http://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Introduction/Intro.html
In RippleViewController.m => captureOutput, add an NSLog(@"Got Frame");. Running the app now produces a stream of "Got Frame" messages (about 30 per second).
In RippleViewController.m => drawInRect, add a sleep(1); at the very beginning of the method. Only one message per second appears now.
When AVCaptureVideoDataOutput calls the delegate method captureOutput:didOutputSampleBuffer:fromConnection: so that you can process or record the image from the camera, that method runs on whatever queue was handed to setSampleBufferDelegate:queue:, which in this sample is the main queue. Code that touches the user interface also has to run on the main thread, which is why OpenGL drawing and AVCaptureVideoDataOutput end up linked: the camera callback and the draw call execute on the same thread, so sleeping in drawInRect blocks the camera callbacks as well.
AVCaptureVideoDataOutput will also drop frames if the device cannot finish captureOutput:didOutputSampleBuffer:fromConnection: in time. If processing a frame takes longer than 1/30 of a second, the next frame is discarded; you can be notified of those drops through the captureOutput:didDropSampleBuffer:fromConnection: delegate method.
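A minimal change along those lines, as a sketch (the queue label is made up, and dataOutput stands for the AVCaptureVideoDataOutput the sample creates in its capture-setup code):

// Deliver camera frames on a private serial queue instead of the main queue,
// so a slow drawInRect: on the main thread no longer blocks the callbacks.
dispatch_queue_t captureQueue = dispatch_queue_create("com.example.capture", NULL);
[dataOutput setSampleBufferDelegate:self queue:captureQueue];

// Make sure late frames are discarded rather than queued up (this is the default).
dataOutput.alwaysDiscardsLateVideoFrames = YES;

Keep in mind that GLCameraRipple's callback feeds the camera frames into OpenGL textures, so if you move it off the main queue you still have to hand the results back to the thread that owns the GL context; and you can implement captureOutput:didDropSampleBuffer:fromConnection: (iOS 6 and later) if you want to be told when frames are discarded.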
Related
I am developing a game for the iPhone and iPad using cocos2d, and I need to be able to play a sound exactly when another one completes.
I have a soundtrack that is chopped up into smaller pieces, and there is no room for even the tiniest gap in playback when one piece finishes and the next one starts.
By the way, I cannot glue the sounds together into a single file and just play that, since the order of the files is rearranged at runtime.
How can I achieve this?
With CocosDenshion you can register a delegate with
[[CDAudioManager sharedManager] setBackgroundMusicCompletionListener:self
                                                             selector:@selector(musicDidFinish)];
CDAudioManager class reference
This delegate will be called whenever the background music ends. This of course only works if you play your sound files as background music (with the playBackgroundMusic method).
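A gapless-chain sketch along those lines might look like the following; trackNames and nextTrackIndex are hypothetical properties holding your runtime ordering, and playBackgroundMusic:loop: stands for whichever playBackgroundMusic variant you already use:

// Register once, e.g. when your scene is set up, then start the first track.
[[CDAudioManager sharedManager] setBackgroundMusicCompletionListener:self
                                                             selector:@selector(musicDidFinish)];
[self playNextTrack];

// Called by CocosDenshion whenever the current background track ends.
- (void)musicDidFinish
{
    [self playNextTrack];
}

- (void)playNextTrack
{
    NSString *file = [self.trackNames objectAtIndex:self.nextTrackIndex];
    self.nextTrackIndex = (self.nextTrackIndex + 1) % [self.trackNames count];
    [[CDAudioManager sharedManager] playBackgroundMusic:file loop:NO];
}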
If that doesn't work for you, have a look at ObjectAL. You'll have more options and greater flexibility. For example, with ALSource you can queue multiple ALBuffer objects which represent sound files. That means whenever the source's buffer count decreases to 1 you just queue the next buffer to achieve uninterrupted, sequential playback of multiple sound files (any format).
Because ObjectAL is so awesome (well, I think so :) ) it's included and ready to use in Kobold2D.
You can use a single Audio Queue or the RemoteIO Audio Unit, and just fill the callback buffers with raw/PCM audio samples from any file in any order.
I am creating an application for coaching, and I am stuck on marking up the video. I chose ffmpeg to convert the video into image frames, but that introduces delays as well as memory issues. I need to let the user play the video slowly, frame by frame. Is there another way to do that without converting to images? V1 Golf does this very quickly. Please help me.
I would try converting video frames in a separate thread, extracting a few frames ahead as images in the background once the user enters 'slow motion mode'.
Here is an example for one frame, so the rest should be quick to add: Video frame capture by NSOperation.
This should reduce delays, and frames can be converted while the user is still viewing the preceding ones.
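If you want to avoid ffmpeg entirely, a minimal sketch using AVFoundation's AVAssetImageGenerator can pull individual frames straight from the asset in the background; the file name and frame times below are just placeholders:

#import <AVFoundation/AVFoundation.h>

// Grab exact frames at the requested times without going through ffmpeg.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"swing" withExtension:@"mov"]; // placeholder file
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
generator.requestedTimeToleranceBefore = kCMTimeZero;  // ask for the exact frame,
generator.requestedTimeToleranceAfter  = kCMTimeZero;  // not the nearest keyframe

// Request the next few frames ahead of where the user is scrubbing (1/30 s apart here).
NSArray *times = [NSArray arrayWithObjects:
                  [NSValue valueWithCMTime:CMTimeMake(30, 30)],
                  [NSValue valueWithCMTime:CMTimeMake(31, 30)],
                  [NSValue valueWithCMTime:CMTimeMake(32, 30)], nil];

[generator generateCGImagesAsynchronouslyForTimes:times
                                 completionHandler:^(CMTime requestedTime, CGImageRef image,
                                                     CMTime actualTime,
                                                     AVAssetImageGeneratorResult result,
                                                     NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        UIImage *frame = [UIImage imageWithCGImage:image];
        // Cache the frame and hand it to the UI on the main thread.
    }
}];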
I'm doing research into AR on the iPhone and am trying to figure out how people get each frame of video. I want to do the AR using computer vision (OpenCV), so basically I will have a pattern on a piece of paper that I find with OpenCV and then place a graphic on top of the pattern.
I know about the UIImagePickerController class for movies, but am unsure how you would go about getting to each frame.
Can someone point me in the right direction?
UIImagePickerController is the means for displaying a camera view and taking single pictures with a camera-like front end. It's not what you're looking for.
Instead you need to look into AVFoundation, particularly the classes surrounding AVCaptureSession. You'll want to acquire a meaningful AVCaptureDevice (which can be the front or back camera on the iPhone 4 and current iPod Touch), create an AVCaptureDeviceInput that references it and add that as an input to an AVCaptureSession. Then just create an AVCaptureVideoDataOutput and set it up with a meaningful delegate and a Grand Central Dispatch dispatch queue.
When you start the session going, you'll receive delegate callbacks on the queue you created providing CMSampleBufferRefs, from which you can pull a CVImageBufferRef and hence the pixel data.
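A condensed sketch of that pipeline is below; the pixel format, queue label, and processFrame: are placeholders to replace with whatever your OpenCV code needs:

#import <AVFoundation/AVFoundation.h>

- (void)startCapture
{
    // In a real app, keep the session in an ivar/property so it stays alive.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // The default video device (back camera), wrapped as an input.
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    [session addInput:input];

    // Video data output delivering BGRA frames to a serial GCD queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary dictionaryWithObject:
                               [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    dispatch_queue_t queue = dispatch_queue_create("com.example.frames", NULL); // placeholder label
    [output setSampleBufferDelegate:self queue:queue];
    [session addOutput:output];

    [session startRunning];
}

// Delegate callback: pull the pixel buffer out of each sample buffer.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // The base address and bytes-per-row are what you would hand to OpenCV.
    // [self processFrame:pixelBuffer];  // processFrame: is hypothetical
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}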
My goal is to write a custom camera view controller that:
Can take photos in all four interface orientations with both the back and, when available, front camera.
Properly rotates and scales the preview "video" as well as the full resolution photo.
Allows a (simple) effect to be applied to BOTH the preview "video" and full resolution photo.
Implementation (on iOS 4.2 / Xcode 3.2.5):
Due to requirement (3), I needed to drop down to AVFoundation.
I started with Technical Q&A QA1702 and made these changes:
Changed the sessionPreset to AVCaptureSessionPresetPhoto.
Added an AVCaptureStillImageOutput as an additional output before starting the session.
The issue that I am having is with the performance of processing the preview image (a frame of the preview "video").
First, I get the UIImage result of imageFromSampleBuffer: on the sample buffer from captureOutput:didOutputSampleBuffer:fromConnection:. Then I scale and rotate it for the screen using a Core Graphics bitmap context.
At this point, the frame rate is already under the 15 FPS specified in the session's video output, and when I add in the effect it drops to around 10 or below. The app quickly crashes due to low memory.
I have had some success with dropping the frame rate to 9 FPS on the iPhone 4 and 8 FPS on the iPod Touch (4th gen).
I have also added in some code to "flush" the dispatch queue, but I am not sure how much it is actually helping. Basically, every 8-10 frames, a flag is set that signals captureOutput:didOutputSampleBuffer:fromConnection: to return right away rather than process the frame. The flag is reset after a sync operation on the output dispatch queue finishes.
At this point I don't even mind the low frame rates, but obviously we can't ship with the low memory crashes. Anyone have any idea how to take action to prevent the low memory conditions in this case (and/or a better way to "flush" the dispatch queue)?
To prevent the memory issues, simply create an autorelease pool in captureOutput:didOutputSampleBuffer:fromConnection:.
This makes sense since imageFromSampleBuffer: returns an autoreleased UIImage object. Plus it frees up any autoreleased objects created by image processing code right away.
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // < Add your code here that uses the image >

    [pool release];
}
My testing has shown that this will run without memory warnings on an iPhone 4 or iPod Touch (4th gen) even if requested FPS is very high (e.g. 60) and image processing is very slow (e.g. 0.5+ secs).
OLD SOLUTION:
As Brad pointed out, Apple recommends doing image processing on a background thread so as not to interfere with UI responsiveness. I didn't notice much lag in this case, but best practices are best practices, so use the solution above with the autorelease pool instead of running this on the main dispatch queue / main thread.
To prevent the memory issues, simply use the main dispatch queue instead of creating a new one.
This also means that you don't have to switch to the main thread in captureOutput:didOutputSampleBuffer:fromConnection: when you want to update the UI.
In setupCaptureSession, change FROM:
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
TO:
// we want our dispatch to be on the main thread
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
A fundamentally better approach would be to use OpenGL to handle as much of the image-related heavy lifting for you as possible (as I see you're trying in your latest attempt). However, even then you might have issues with frames building up to be processed.
While it seems strange that you'd be running into memory accumulation when processing frames (in my experience, you just stop getting them if you can't process them fast enough), Grand Central Dispatch queues can get jammed up if they are waiting on I/O.
Perhaps a dispatch semaphore would let you throttle the addition of new items to the processing queues. For more on this, I highly recommend Mike Ash's "GCD Practicum" article, where he looks at optimizing an I/O bound thumbnail processing operation using dispatch semaphores.
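As a sketch of that throttling idea (processingQueue and processFrame: are placeholders for your own queue and processing code):

// Created once, e.g. in init. The semaphore's initial value is the number of
// frames allowed to be "in flight" on the processing queue at any one time.
frameSemaphore = dispatch_semaphore_create(1);
processingQueue = dispatch_queue_create("com.example.processing", NULL);

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // If the previous frame is still being processed, drop this one instead
    // of letting work (and memory) pile up in the queue.
    if (dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;
    }

    CFRetain(sampleBuffer);
    dispatch_async(processingQueue, ^{
        [self processFrame:sampleBuffer];   // processFrame: is hypothetical
        CFRelease(sampleBuffer);
        dispatch_semaphore_signal(frameSemaphore);
    });
}

One caveat: retained sample buffers come from a small fixed pool, so hold on to no more than one or two at a time or the capture pipeline may stop delivering frames.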
I made an app that plays a song when you tap the image of an artist (see the attached image). Each artist image is a button, and tapping it calls a function that first downloads and then plays the song. I run this method on a thread, but the problem is that every time I tap an artist image a new thread starts, and multiple songs end up playing at the same time. How can I use NSOperation and NSOperationQueue so that only one song plays at a time? Please help.
Thanks in advance
NSOperation and NSOperationQueue aren't going to directly solve your problem.
If I were pursuing a dead simple approach, I would have a global AudioPlayer object that has a method startPlaying: whose argument is the song to play (represented however needed; URL, NSData, whatever you need).
In that method, I'd stop playing whatever is currently playing and start the new track.
If I remember correctly, I don't think you even need a thread for this; the audio APIs are generally quite adept at taking care of playback in the background.
In any case, if you do need a thread, then I'd hide that thread in my AudioPlayer object and let it take care of telling the music to stop/start playing in said thread. A queue of some kind -- operation or GCD -- could be used for that, yes.
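A minimal sketch of that "dead simple" player using AVAudioPlayer; the class name AudioPlayer and the sharedPlayer accessor are just illustrative (ARC assumed):

#import <AVFoundation/AVFoundation.h>

@interface AudioPlayer : NSObject
+ (AudioPlayer *)sharedPlayer;
- (void)startPlaying:(NSURL *)trackURL;
@end

@implementation AudioPlayer {
    AVAudioPlayer *_player;
}

+ (AudioPlayer *)sharedPlayer
{
    static AudioPlayer *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[AudioPlayer alloc] init]; });
    return shared;
}

- (void)startPlaying:(NSURL *)trackURL
{
    // Stop whatever is currently playing, then start the new track.
    [_player stop];
    _player = [[AVAudioPlayer alloc] initWithContentsOfURL:trackURL error:NULL];
    [_player play];
}

@end

Each artist button's action would then just call [[AudioPlayer sharedPlayer] startPlaying:songURL] (after the download finishes), and at most one song plays at any time.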