How to use AVCaptureSession to stream live preview video, then take a photo, then return to streaming - iPhone

I have an application that creates its own live preview prior to taking a still photo. The app needs to run some processing on the image data and thus is not able to rely on AVCaptureVideoPreviewLayer. Getting the initial stream to work is going quite well, using Apple's example code. The problem comes when I try to switch to the higher quality image to take the snapshot. In response to a button press I attempt to reconfigure the session for taking a full resolution photo. I've tried many variations but here is my latest example (which still does not work):
- (void)sessionSetupForPhoto
{
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetPhoto;
    AVCaptureStillImageOutput *output = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    for (AVCaptureOutput *output in [session outputs]) {
        [session removeOutput:output];
    }
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    } else {
        NSLog(@"Not able to add an AVCaptureStillImageOutput");
    }
    [session commitConfiguration];
}
I am consistently getting an error message just after the commitConfiguration line that looks like this:
(that is to say, I am getting an AVCaptureSessionRuntimeErrorNotification sent to my registered observer)
Received an error:
NSConcreteNotification 0x19d870 {name = AVCaptureSessionRuntimeErrorNotification; object = ; userInfo = {
    AVCaptureSessionErrorKey = "Error Domain=AVFoundationErrorDomain Code=-11800 \"The operation couldn\U2019t be completed. (AVFoundationErrorDomain error -11800.)\" UserInfo=0x19d810 {}";
The documentation in Xcode ostensibly provides more information for the error number (-11800): "AVErrorUnknown - Reason for the error is unknown."
Previously I had also tried calls to stopRunning and startRunning, but no longer do that after watching WWDC Session 409, where it is discouraged. When I was stopping and starting, I was getting a different error message -11819, which corresponds to "AVErrorMediaServicesWereReset - The operation could not be completed because media services became unavailable.", which is much nicer than simply "unknown", but not necessarily any more helpful.
It successfully adds the AVCaptureStillImageOutput (i.e., does NOT emit the log message).
I am testing on an iPhone 3G (with iOS 4.1) and an iPhone 4.
This call is happening in the main thread, which is also where my original AVCaptureSession setup took place.
How can I avoid the error? How can I switch to the higher resolution to take the photo?
Thank you!

Since you're processing the video data coming out of the AVCaptureSession, I'm assuming you have an AVCaptureVideoDataOutput connected to it prior to calling sessionSetupForPhoto.
If so, can you elaborate on what you're doing in captureOutput:didOutputSampleBuffer:? Without being able to see more, I'm guessing there may be a problem with removing the old outputs and subsequently setting the photo quality preset.
Also, the output variable you're using as an iterator when you remove your outputs is hiding the still image output. Not a problem, but it makes the code a little harder to read.
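If you do keep that structure, renaming the iterator is trivial. Here is a minimal sketch with illustrative variable names:
AVCaptureStillImageOutput *stillOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
// Use a distinct name for the loop variable so it doesn't shadow the new output
for (AVCaptureOutput *existingOutput in [session outputs]) {
    [session removeOutput:existingOutput];
}
if ([session canAddOutput:stillOutput]) {
    [session addOutput:stillOutput];
} else {
    NSLog(@"Not able to add an AVCaptureStillImageOutput");
}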

There is no need to switch session configurations. Just add an AVCaptureStillImageOutput to your session at initialization, then call the following when you are about to capture the image (here stillImageOutput and videoConnection stand for your still image output and its video connection) and use the CMSampleBufferRef accordingly:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    // use imageDataSampleBuffer here
}];
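As a rough sketch of how that fits together (variable names here are illustrative, not from the original post), the setup side and the connection lookup could look like this:
// One-time setup; `session` is your existing AVCaptureSession
AVCaptureStillImageOutput *stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
stillImageOutput.outputSettings = [NSDictionary dictionaryWithObject:AVVideoCodecJPEG forKey:AVVideoCodecKey];
if ([session canAddOutput:stillImageOutput]) {
    [session addOutput:stillImageOutput];
}

// Finding the video connection to pass to captureStillImageAsynchronouslyFromConnection:
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in connection.inputPorts) {
        if ([port.mediaType isEqualToString:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) break;
}

// Inside the completion handler you can turn the sample buffer into JPEG data:
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];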

iOS - CMSampleBufferRef is not being released from captureOutput:didOutputSampleBuffer:fromConnection

I am capturing frames from the camera using the code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    if (delegate && [delegate respondsToSelector:@selector(captureManagerCapturedFrame:withFrameImage:withFrameBuffer:)]) {
        [delegate captureManagerCapturedFrame:self withFrameImage:image withFrameBuffer:sampleBuffer];
    }
}
I am doing this because in the delegate method captureManagerCapturedFrame:withFrameImage:withFrameBuffer: I have a flag which tells the app to use either the returned UIImage or the returned sample buffer.
The delegate method is:
- (void)captureManagerCapturedFrame:(AVCamCaptureManager *)captureManager
                     withFrameImage:(UIImage *)image
                    withFrameBuffer:(CMSampleBufferRef)frameBuffer {
    if (_screen1) {
        NSLog(@"Only display camera image\n");
    }
    else if (_screen2) {
        //Enable IR
        NSLog(@"Display AND Process camera image\n");
        [self imageReconigitionProcessFrame:frameBuffer];
    }
}
where imageReconigitionProcessFrame: is:
- (void)imageReconigitionProcessFrame:(CMSampleBufferRef)frameBuffer {
    //CFRetain(frameBuffer);
    MSImage *qry = [[MSImage alloc] initWithBuffer:frameBuffer orientation:AVCaptureVideoOrientationPortrait]; //MEMORY LEAK HERE???
    qry = nil;
    //CFRelease(frameBuffer);
}
This code effectively works. But here is my problem. When this code is run and profiled in Instruments, I see a rapid increase in the overall bytes used, but the allocations profiler doesn't appear to increase. Nor do I see any 'leaks' using the Leaks tool. But clearly, there is a rapid memory gain each time imageReconigitionProcessFrame: is called, and the app crashes after a few seconds. When I set frameBuffer to nil, there is NO increase in memory (of course, I then also don't have the frame buffer to do any processing with).
I have tried transferring ownership of frameBuffer using CFRetain and CFRelease (commented out in the above code), but these don't seem to do anything either.
Does anyone have any idea where I could be leaking memory inside this function???
The [[MSImage alloc] initWithBuffer:] method is from a third-party SDK (Moodstocks, which is an awesome image recognition SDK) and it works just fine in their demos, so I don't think the problem is inside this function.
First of all, thanks for mentioning Moodstocks (I work for them): we're happy that you find our SDK useful!
To answer your question, I guess your code does indeed contain a leak: at the end of the imageReconigitionProcessFrame method, you should call [qry release]. The rule in Obj-C is quite simple: whenever you manually call alloc on an object, it should also be manually released!
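For instance, a minimal sketch of the fix (manual reference counting, matching the alloc in the original code):
- (void)imageReconigitionProcessFrame:(CMSampleBufferRef)frameBuffer {
    MSImage *qry = [[MSImage alloc] initWithBuffer:frameBuffer orientation:AVCaptureVideoOrientationPortrait];
    // ... hand `qry` to the scanner / run your recognition query here ...
    [qry release]; // balances the alloc above; without this the frame data leaks
}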
That's BTW what is done in the Moodstocks SDK wrapper: if you look at the [MSScannerSession session: didOutputSampleBuffer:] method, you'll see that we do manually release the MSImage object after it's been processed.
As to why the profiler doesn't find this leak, I guess that it's due to the fact that leaks are analyzed every 10 seconds by default: in this case, the memory leak is so heavy (1280x720 frames, at 15+ FPS if you're on an iPhone 5, for 10 seconds: at least 130 MB leaked) that the code must crash before the first 10 seconds are reached.
Hope this helps!

Geocoding address into coordinates in iPhone

I am trying to geocode an address into coordinates using the following code:
CLGeocoder *geocoder = [[CLGeocoder alloc] init];
[geocoder geocodeAddressString:@"6138 Bollinger Road, San Jose, United States" completionHandler:^(NSArray *placemarks, NSError *error) {
    for (CLPlacemark *aPlacemark in placemarks)
    {
        // Process the placemark.
        NSString *latDest1 = [NSString stringWithFormat:@"%.4f", aPlacemark.location.coordinate.latitude];
        NSString *lngDest1 = [NSString stringWithFormat:@"%.4f", aPlacemark.location.coordinate.longitude];
        lblDestinationLat.text = latDest1;
        lblDestinationLng.text = lngDest1;
    }
}];
I have tried it many times but the debugger never enters the block and I am not able to get the location. What can I try next?
All right, I found my mistake.
The code is correct and works perfectly. The whole time I was working on it through the debugger, trying to figure out why it never entered the block. But I have now found out that the debugger does not enter the block at that moment; it takes a little while to get the location values. The geocoding is done asynchronously, so I was not able to see it, and I was getting a crash because there were no values immediately after the call. I have moved the code that followed the block to inside the block, and everything works fine for me now.
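In other words, anything that depends on the coordinates has to run inside the completion handler, roughly like this (a sketch; updateUIWithPlacemark: is a hypothetical helper):
[geocoder geocodeAddressString:@"6138 Bollinger Road, San Jose, United States"
             completionHandler:^(NSArray *placemarks, NSError *error) {
    if ([placemarks count] > 0) {
        CLPlacemark *aPlacemark = [placemarks objectAtIndex:0];
        lblDestinationLat.text = [NSString stringWithFormat:@"%.4f", aPlacemark.location.coordinate.latitude];
        lblDestinationLng.text = [NSString stringWithFormat:@"%.4f", aPlacemark.location.coordinate.longitude];
        // Code that previously ran after the geocodeAddressString: call goes here,
        // because the coordinates only exist once this block has fired.
        [self updateUIWithPlacemark:aPlacemark]; // hypothetical helper
    }
}];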
I just ran that exact code and it worked as expected. Make sure that you have an active internet connection.
Try adding an NSLog on the strings and see if it gets called:
NSLog(@"lat: %@, lng: %@", latDest1, lngDest1);
Are you running it in the simulator or the device?
Blocks are a feature added to Objective-C from iOS 4.0 onwards. You can think of a block as something like a delegate method that lives inline in the same function: just as a delegate method is invoked later, depending on some condition, the block executes its code when the geocoding work completes. You can read more about blocks in Apple's documentation or at http://www.raywenderlich.com/9438/how-to-use-blocks-in-ios-5-tutorial-part-2.
You can also have a look at my repository on GitHub: https://github.com/Mangesh20/geocoding

Calling ALAssetRepresentation's metadata method several hundred times will fail

I am writing a tiny iPhone app to retrieve metadata, such as EXIF info, for all photos stored on the iPhone, and ran into a weird issue when calling the Assets Library Framework API. Basically, if I call ALAssetRepresentation's metadata method (http://developer.apple.com/library/ios/documentation/AssetsLibrary/Reference/ALAssetRepresentation_Class/Reference/Reference.html#//apple_ref/occ/instm/ALAssetRepresentation/metadata) several hundred times (even on the same ALAssetRepresentation object), the API reports an error and returns null instead of the photo's metadata.
Here is the code to reproduce this issue:
ALAsset *photo = ... // fetch a photo asset via Assets Library Framework
int i = 0;
ALAssetRepresentation *representation = [photo defaultRepresentation];
NSDictionary *metadata;
while (i < 600) {
    i++;
    metadata = [representation metadata];
    NSLog(@"photo %d indexed %@", i, metadata);
}
Here is the output for the code above. In the beginning of the output everything is okay, but after 500+ calls the metadata API reports an error like "ImageIO: CGImageSourceCreateWithData data parameter is nil".
...
2011-12-29 21:46:17.106 MyApp[685:707] photo 578 indexed {
ColorModel = RGB;
DPIHeight = 72;
DPIWidth = 72;
...
}
...
ImageIO: <ERROR> CGImageSourceCreateWithData data parameter is nil
2011-12-29 21:46:17.151 MyApp[685:707] photo 579 indexed (null)
ImageIO: <ERROR> CGImageSourceCreateWithData data parameter is nil
2011-12-29 21:46:17.177 MyApp[685:707] photo 580 indexed (null)
I am testing on an iPhone 3GS with iOS 5.0.1, and I am developing with Xcode 4.2 with ARC (automatic reference counting) enabled. I can only reproduce this issue when deploying the app to the iPhone 3GS device; I cannot reproduce it in the iOS Simulator with the same code (at least not after calling the API over 1800 times in the Simulator).
Any help is appreciated. Thanks.
It is possible that you are running out of memory. The method [representation metadata] returns an autoreleased object and possibly creates more autoreleased objects when it executes. All these instances are added to the autorelease pool, waiting to be finally released (and their memory freed) when the ARP gets the chance to drain itself.
The problem is that this won't happen until your code returns control to the run loop. So for the duration of your loop, at least 600 large dictionaries (and possibly many more objects) end up being allocated and not deallocated. Depending on the size of these objects, memory usage can increase tremendously.
This is true whether you are using ARC or not.
To avoid this issue, try creating a fresh autorelease pool on every iteration of the loop. That way, the ARP gets drained on every iteration:
while (i < 600) {
    @autoreleasepool {
        i++;
        metadata = [representation metadata];
        NSLog(@"photo %d indexed %@", i, metadata);
    }
}
This is not necessarily the best solution from a performance perspective but at least it will tell you whether the problem is memory related.
PS: Your code doesn't make much sense at the moment. Why retrieve the metadata for the same asset 600 times in a row?
Make sure you retain the ALAssetsLibrary until you are finished accessing related assets. From Apple's docs:
The lifetimes of objects you get back from a library instance are tied
to the lifetime of the library instance.
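A minimal sketch of that advice (class and property names are illustrative): keep the library in a property that lives at least as long as the assets you read from it:
@interface PhotoIndexer : NSObject
// Keep the library alive for as long as any ALAsset / ALAssetRepresentation
// obtained from it is still in use; letting it deallocate invalidates them.
@property (nonatomic, strong) ALAssetsLibrary *assetsLibrary;
@end

@implementation PhotoIndexer

- (void)logMetadataForAssetAtURL:(NSURL *)assetURL
{
    if (!self.assetsLibrary) {
        self.assetsLibrary = [[ALAssetsLibrary alloc] init];
    }
    [self.assetsLibrary assetForURL:assetURL resultBlock:^(ALAsset *asset) {
        NSDictionary *metadata = [[asset defaultRepresentation] metadata];
        NSLog(@"metadata: %@", metadata);
    } failureBlock:^(NSError *error) {
        NSLog(@"could not load asset: %@", error);
    }];
}

@end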

GLKTextureLoader fails when loading a certain texture the first time, but succeeds the second time

I'm making an iPhone application with OpenGL ES 2.0 using the GLKit. I'm using GLKTextureLoader to load textures synchronously.
The problem is that for a certain texture, it fails to load it the first time. It gives this error:
The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)
For this error code, the apple documentation says the following:
GLKTextureLoaderErrorUncompressedTextureUpload
An uncompressed texture could not be uploaded.
Available in iOS 5.0 and later.
Declared in GLKTextureLoader.h.
(not very much).
Could I be trying to load the texture while the opengl context is in some busy state or something like that?
Notes:
Before getting to load this texture I load other textures and those work on the first try.
Also, the exact same texture file will load ok on the second try.
There should be enough free video memory as I have only a couple of textures loaded before this one.
The texture is an uncompressed PNG with alpha, but I also tried with TGA (24bit & 32bit) with no luck.
Any ideas are welcomed, thanks
EDIT:
More info:
the opengl context is shared between all my screens. I'm doing this to keep my shaders and textures loaded between screens.
the problem above happens when I go to my second screen. In the first screen I draw textured stuff with no problems (other textures though).
The problem above happens when I load my content (game entities) in the game world. Each entity tries to load the texture. I have a simple caching system that loads the texture only once and then returns the same id for all other entities. I'm loading the entities synchronously, in one method. The first entity fails to load the texture then comes the second and succeeds and then the third one gets the cached id.
I am calling the load entities method in viewDidAppear and I've tried to add a sleep for 2 seconds before I load any entities but nothing changed.
EDIT:
Texture loading code:
- (GLKTextureInfo *)loadTextureAtPath:(NSString *)path ofType:(NSString *)type withKey:(NSString *)key
{
    GLKTextureInfo *tex = [self textureWithKey:key];
    if (tex)
        return tex;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:NO], GLKTextureLoaderOriginBottomLeft,
                             nil];
    NSError *error = nil;
    NSString *bundlepath = [[NSBundle mainBundle] pathForResource:path ofType:type];
    tex = [GLKTextureLoader textureWithContentsOfFile:bundlepath options:options error:&error];
    if (tex == nil)
        DLOG_LOCAL(@"Error loading texture: %@", [error localizedDescription]);
    else
        [textures setObject:tex forKey:key];
    return tex;
}
I was also getting
The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)
when loading a texture late in runtime while several previous textures had loaded successfully closer to launch. I was able to solve the problem by inserting the following line of code before the GLKTextureLoader call:
NSLog(#"GL Error = %u", glGetError());
Sure enough, GL was reporting an error, but it did not require me to address the error in order for GLKTextureLoader to work. Merely getting the GL Error was enough.
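Since GL errors can queue up, a slightly more thorough variant (just a sketch) is to drain them all right before the GLKTextureLoader call:
// Drain any pending OpenGL errors so GLKTextureLoader starts from a clean slate.
GLenum pendingError;
while ((pendingError = glGetError()) != GL_NO_ERROR) {
    NSLog(@"Pending GL error before texture load: 0x%x", pendingError);
}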
I got this when enabling textures before loading the texture. Simply moved glEnable(GL_TEXTURE) after the loading and the issue was gone.
Maybe you've resolved this, but are you using multiple contexts? Maybe you should be loading your texture asynchronously with a sharegroup.
so instead of using tex = [GLKTextureLoader textureWithContentsOfFile:bundlepath options:options error:&error];
use something like:
GLKTextureLoader *textureloader = [[GLKTextureLoader alloc] initWithSharegroup:self.eaglContext.sharegroup];
__block GLKTextureInfo *myTexture = nil; // __block so the completion handler can assign to it
[textureloader textureWithCGImage:_currentForegroundImage.CGImage options:nil queue:nil completionHandler:^(GLKTextureInfo *textureInfo, NSError *error) {
    myTexture = textureInfo;
    if (error) {
        // log stuff
    }
    // do something
}];
I had a similar problem. It was caused by a texture whose width/height were not a power of 2. GLKTextureLoader failed to load this image and the following ones. Checking glGetError() after each texture load revealed the troublemakers :-).
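If you suspect that, a quick check along those lines (a sketch; textureImage is a placeholder for the UIImage you are about to load):
CGImageRef cgImage = textureImage.CGImage; // textureImage is illustrative
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
// A dimension is a power of two iff it has exactly one bit set.
BOOL widthIsPOT  = (width  != 0) && ((width  & (width  - 1)) == 0);
BOOL heightIsPOT = (height != 0) && ((height & (height - 1)) == 0);
if (!widthIsPOT || !heightIsPOT) {
    NSLog(@"Texture is %zux%zu; non-power-of-two sizes can be a problem (e.g. with mipmapping or on older hardware)", width, height);
}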
Okay, I'll try this one again, as I ran into the error again. What appears to happen is that if there is some other glError that has not been processed, the texture will fail to load the first time.
Before you load the texture that fails, check for a glError and track down where that error occurred. Or you can capture an OpenGL frame just prior to where the texture is loaded and see whether a glError is being thrown earlier. This happened to me both times I ran into error 8, and both times the error disappeared once I fixed the error that had occurred earlier.
I ran into the same problem. I'm not exactly sure why it occurred, other than that there appeared to be multiple file operations going on at the same time. For example, performing a file load (for model data) right AFTER using the texture loader for the first time would cause error 8 to pop up. I fixed it in my program by having some other operations occur after the texture loader is called for the first time.
I've also found that you get this error when trying to create a 2D texture from an image larger than the maximum texture size. For the max size you can see Apple's OpenGL ES Platform Notes, although those do not appear to be correct for newer devices, so the best bet is to query the value directly.
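Querying it directly is a one-liner (sketch; requires a current GL context):
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize); // e.g. 2048 or 4096 depending on the device
NSLog(@"Maximum texture size: %d", maxTextureSize);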
I had a very similar problem, and it was solved by calling setCurrentContext:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:self.context];

AVAudioPlayer - Metering - Want to build a waveform (graph)

I need to build a visual graph that represents voice levels (dB) in a recorded file. I tried to do it this way:
NSError *error = nil;
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:self.recording.fileName] error:&error];
if (error) {
    _lcl_logger(lcl_cEditRecording, lcl_vError, @"Cannot initialize AVAudioPlayer with file %@ due to: %@ (%@)", self.recording.fileName, error, error.userInfo);
} else {
    [meterPlayer prepareToPlay];
    meterPlayer.meteringEnabled = YES;

    for (NSTimeInterval i = 0; i <= meterPlayer.duration; ++i) {
        meterPlayer.currentTime = i;
        [meterPlayer updateMeters];
        float averagePower = [meterPlayer averagePowerForChannel:0];
        _lcl_logger(lcl_cEditRecording, lcl_vTrace, @"Second: %f, Level: %f dB", i, averagePower);
    }
}
[meterPlayer release];
It would be cool if this worked, but it didn't: I always get -160 dB. Any other ideas on how to implement this?
UPD: Here is what I got finally:
(Final waveform image: http://img22.imageshack.us/img22/5778/waveform.png)
I just want to help others who have come to this same question and spent a lot of time searching. To save your time, I am posting my answer. I dislike how some people here treat this as some kind of secret...
After searching around articles about Extended Audio File Services, Audio Queues, and AVFoundation, I realised that I should use AVFoundation. The reason is simple: it is the most recent framework, and it is Objective-C rather than C++ in style.
So the steps are not complicated:
Create an AVAsset from the audio file.
Create an AVAssetReader from the AVAsset.
Get an AVAssetTrack from the AVAsset.
Create an AVAssetReaderTrackOutput from the AVAssetTrack.
Add the AVAssetReaderTrackOutput to the AVAssetReader and start reading out the audio data.
From the AVAssetReaderTrackOutput you can copyNextSampleBuffer one by one (it is a loop to read all the data out).
Each copyNextSampleBuffer gives you a CMSampleBufferRef, from which you can get an AudioBufferList using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. An AudioBufferList is an array of AudioBuffers, and an AudioBuffer is a chunk of audio data stored in its mData field.
You can implement the above with Extended Audio File Services as well, but I think the AVFoundation approach is easier.
So, next question: what to do with the mData? Note that when you create the AVAssetReaderTrackOutput, you can specify its output format, so we specify linear PCM (LPCM) as the output.
Then the mData you finally get is actually an array of amplitude values in float format.
Easy, right? Though it took me a lot of time to organise this from pieces here and there.
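Here is a rough sketch of that pipeline (error handling trimmed; assetURL is a placeholder, and the LPCM settings below are one reasonable choice, not the only one). It needs AVFoundation, CoreMedia, and AudioToolbox:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

// Ask for 32-bit float, interleaved linear PCM so mData is an array of float amplitudes.
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithInt:32],                    AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:YES],                  AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO],                   AVLinearPCMIsBigEndianKey,
    [NSNumber numberWithBool:NO],                   AVLinearPCMIsNonInterleaved,
    nil];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
[reader addOutput:output];
[reader startReading];

while (reader.status == AVAssetReaderStatusReading) {
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
    if (!sampleBuffer) break;

    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &bufferList,
                                                            sizeof(bufferList), NULL, NULL, 0, &blockBuffer);

    float *samples = (float *)bufferList.mBuffers[0].mData;
    NSUInteger sampleCount = bufferList.mBuffers[0].mDataByteSize / sizeof(float);
    for (NSUInteger i = 0; i < sampleCount; i++) {
        // samples[i] is one amplitude value; accumulate it into your waveform model here.
    }

    CFRelease(blockBuffer);
    CFRelease(sampleBuffer);
}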
Two useful resources to share:
Read this article to know basic terms and conceptions: https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html
Sample code: https://github.com/iluvcapra/JHWaveform
You can copy most of the above-mentioned code from this sample directly and use it for your own purposes.
I haven't used it myself, but Apple's avTouch iPhone sample has bar graphs powered by AVAudioPlayer, and you can easily check to see how they do it.
I don't think you can use AVAudioPlayer based on your constraints. Even if you could get it to "start" without actually playing the sound file, it would only help you build a graph as fast as the audio file would stream. What you're talking about is doing static analysis of the sound, which will require a much different approach. You'll need to read in the file yourself and parse it manually. I don't think there's a quick solution using anything in the SDK.
OK guys, it seems I'm going to answer my own question again: http://www.supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/ Not a lot of concrete detail, but at least you will know which Apple docs to read.