Configure iOS camera parameters to avoid banding in video preview - ios5

I have an OpenCV-based application that processes images.
I need a clean image to process the data. When I'm in an area with fluorescent light, some banding appears in the image. On Android I solved the problem by setting the camera parameter ANTIBANDING_50HZ (here is the reference), and the image looks and processes correctly.
But in Apple's reference I cannot find a way to avoid this problem. I have been adjusting some options to improve the image, but they do not remove the banding.
My camera is configured using this code:
- (BOOL)setupCaptureSessionParameters
{
    NSLog(@"--- Configure Camera options...");
    /*
     * Create capture session with the optimal size for OpenCV processing
     */
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    AVCaptureDevice *cameraBack = [self videoDeviceWithPosition:AVCaptureDevicePositionBack];
    if ([cameraBack lockForConfiguration:nil])
    {
        NSLog(@"lockForConfiguration...");
        // No autofocus
        if ([cameraBack isFocusModeSupported:AVCaptureFocusModeLocked])
        {
            cameraBack.focusMode = AVCaptureFocusModeLocked;
        }
        // Always focus on the center of the image
        if ([cameraBack isFocusPointOfInterestSupported])
        {
            cameraBack.focusPointOfInterest = CGPointMake(0.5, 0.5);
        }
        // Auto-expose in case of sudden lighting changes
        if ([cameraBack isExposurePointOfInterestSupported])
        {
            cameraBack.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
        }
        // Auto-adjust white balance in case the user aims at a reflective surface
        if ([cameraBack isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance])
        {
            cameraBack.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
        }
        // Only focus far
        if ([cameraBack isAutoFocusRangeRestrictionSupported])
        {
            cameraBack.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionFar;
        }
        // Choose the best frame rate for this preset
        AVCaptureDeviceFormat *bestFormat = nil;
        AVFrameRateRange *bestFrameRateRange = nil;
        for (AVCaptureDeviceFormat *format in [cameraBack formats])
        {
            for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges)
            {
                if (range.maxFrameRate > bestFrameRateRange.maxFrameRate)
                {
                    bestFormat = format;
                    bestFrameRateRange = range;
                }
            }
        }
        if (bestFormat)
        {
            cameraBack.activeFormat = bestFormat;
            cameraBack.activeVideoMinFrameDuration = bestFrameRateRange.minFrameDuration;
            cameraBack.activeVideoMaxFrameDuration = bestFrameRateRange.maxFrameDuration;
        }
        [cameraBack unlockForConfiguration];
        NSLog(@"unlockForConfiguration!");
        return YES;
    }
    return NO;
}

Follow Apple's documentation about CIFilter.
Android and iOS handle camera parameters very differently. Android allows you to customise camera parameters to avoid banding; on iOS, by contrast, there is no such setting, and you need to work with the CIFilter class instead. Android and iOS simply work in different directions here.
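For illustration, here is a minimal sketch of post-processing each captured frame with Core Image before handing it to OpenCV. The helper name, the choice of CIGaussianBlur and the radius are assumptions for the example; a mild blur only softens visible banding, it does not stop the flicker at the sensor:
// Sketch only: smooth a captured frame with Core Image before OpenCV processing.
// Render the returned CIImage with a CIContext wherever you consume it.
- (CIImage *)filteredImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *input = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // A gentle blur as a stand-in "clean-up" filter; tune or replace as needed.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@1.5 forKey:kCIInputRadiusKey];

    // Crop back to the original extent, since CIGaussianBlur expands the image edges.
    return [blur.outputImage imageByCroppingToRect:input.extent];
}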

Related

ARKit plane visualization

I want to be able to visualize the planes that my ARKit app detects. How do I do that?
This is what I want to be able to do
Create a new AR project in Xcode with SceneKit and Obj-C, then add these to ViewController.m:
//as a class or global variable:
NSMapTable *planes;

//add to viewWillAppear:
configuration.planeDetection = ARPlaneDetectionHorizontal;

//add to viewDidLoad:
planes = [NSMapTable mapTableWithKeyOptions:NSMapTableStrongMemory
                               valueOptions:NSMapTableWeakMemory];

//new functions:
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
    if ([anchor isKindOfClass:[ARPlaneAnchor class]]) {
        [planes setObject:anchor forKey:node];
        ARPlaneAnchor *pa = (ARPlaneAnchor *)anchor;
        SCNNode *pn = [SCNNode node];
        [node addChildNode:pn];
        pn.geometry = [SCNPlane planeWithWidth:pa.extent.x height:pa.extent.z];
        SCNMaterial *m = [SCNMaterial material];
        m.emission.contents = UIColor.blueColor;
        m.transparency = 0.1;
        pn.geometry.materials = @[m];
        // Set the rotation first, then the position: setting transform replaces the
        // node's whole transform, so a position set earlier would be overwritten.
        pn.transform = SCNMatrix4MakeRotation(-M_PI / 2.0, 1, 0, 0);
        pn.position = SCNVector3Make(pa.center.x, -0.002, pa.center.z);
    }
}

- (void)renderer:(id<SCNSceneRenderer>)renderer didUpdateNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor {
    if ([anchor isKindOfClass:[ARPlaneAnchor class]]) {
        [planes setObject:anchor forKey:node];
        ARPlaneAnchor *pa = (ARPlaneAnchor *)anchor;
        SCNNode *pn = [node childNodes][0];
        SCNPlane *pg = (SCNPlane *)pn.geometry;
        pg.width = pa.extent.x;
        pg.height = pa.extent.z;
        pn.position = SCNVector3Make(pa.center.x, -0.002, pa.center.z);
    }
}

- (void)renderer:(id<SCNSceneRenderer>)renderer didRemoveNode:(nonnull SCNNode *)node forAnchor:(nonnull ARAnchor *)anchor {
    [planes removeObjectForKey:node];
}
You'll see translucent planes; give m.emission.contents a texture if you like.
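For example (the "PlaneGrid" asset name here is hypothetical, just any image in your bundle):
// Replace the flat blue emission with a texture image instead.
m.emission.contents = [UIImage imageNamed:@"PlaneGrid"];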
Alternatively, get the example app from Apple, written in Swift.

Convert CVImageBufferRef to CVPixelBufferRef

I am new to iOS programming and multimedia, and I was going through a sample project named RosyWriter provided by Apple at this link. There I saw a function named captureOutput:didOutputSampleBuffer:fromConnection, given below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

    if ( connection == videoConnection ) {
        // Get framerate
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
        [self calculateFramerateAtTimestamp:timestamp];

        // Get frame dimensions (for onscreen display)
        if (self.videoDimensions.width == 0 && self.videoDimensions.height == 0)
            self.videoDimensions = CMVideoFormatDescriptionGetDimensions( formatDescription );

        // Get buffer type
        if ( self.videoType == 0 )
            self.videoType = CMFormatDescriptionGetMediaSubType( formatDescription );

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Synchronously process the pixel buffer to de-green it.
        [self processPixelBuffer:pixelBuffer];

        // Enqueue it for preview. This is a shallow queue, so if image processing is taking too long,
        // we'll drop this frame for preview (this keeps preview latency low).
        OSStatus err = CMBufferQueueEnqueue(previewBufferQueue, sampleBuffer);
        if ( !err ) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CMSampleBufferRef sbuf = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(previewBufferQueue);
                if (sbuf) {
                    CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(sbuf);
                    [self.delegate pixelBufferReadyForDisplay:pixBuf];
                    CFRelease(sbuf);
                }
            });
        }
    }

    CFRetain(sampleBuffer);
    CFRetain(formatDescription);
    dispatch_async(movieWritingQueue, ^{
        if ( assetWriter ) {
            BOOL wasReadyToRecord = (readyToRecordAudio && readyToRecordVideo);

            if (connection == videoConnection) {
                // Initialize the video input if this is not done yet
                if (!readyToRecordVideo)
                    readyToRecordVideo = [self setupAssetWriterVideoInput:formatDescription];
                // Write video data to file
                if (readyToRecordVideo && readyToRecordAudio)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
            }
            else if (connection == audioConnection) {
                // Initialize the audio input if this is not done yet
                if (!readyToRecordAudio)
                    readyToRecordAudio = [self setupAssetWriterAudioInput:formatDescription];
                // Write audio data to file
                if (readyToRecordAudio && readyToRecordVideo)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
            }

            BOOL isReadyToRecord = (readyToRecordAudio && readyToRecordVideo);
            if ( !wasReadyToRecord && isReadyToRecord ) {
                recordingWillBeStarted = NO;
                self.recording = YES;
                [self.delegate recordingDidStart];
            }
        }
        CFRelease(sampleBuffer);
        CFRelease(formatDescription);
    });
}
Here a function named pixelBufferReadyForDisplay is called, which expects a parameter of type CVPixelBufferRef.
Prototype of pixelBufferReadyForDisplay:
- (void)pixelBufferReadyForDisplay:(CVPixelBufferRef)pixelBuffer;
But in the code above, the call passes the variable pixBuf, which is of type CVImageBufferRef.
So my question is: isn't it required to use some function or a typecast to convert a CVImageBufferRef to a CVPixelBufferRef, or is this handled implicitly by the compiler?
Thanks.
If you do a search on CVPixelBufferRef in the Xcode docs, you'll find the following:
typedef CVImageBufferRef CVPixelBufferRef;
So a CVImageBufferRef is a synonym for a CVPixelBufferRef. They are interchangeable.
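To see this in practice, the buffer returned by CMSampleBufferGetImageBuffer can be passed straight to CVPixelBuffer calls with no cast (a small sketch, not taken from RosyWriter):
// CMSampleBufferGetImageBuffer is declared as returning CVImageBufferRef...
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// ...but since CVPixelBufferRef is just a typedef of CVImageBufferRef,
// CVPixelBuffer functions accept it directly, no cast or conversion needed.
size_t width  = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
NSLog(@"Frame is %zu x %zu", width, height);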
You are looking at some pretty gnarly code. RosyWriter, and another sample app called "Chromakey" do some pretty low-level processing on pixel buffers. If you're new to iOS development AND new to multimedia you might not want to dig so deep, so fast. It's a bit like a first year medical student trying to perform a heart-lung transplant.

iOS Accelerometer-Based Gesture Recognition

I want to create a project that reads the user's gesture (accelerometer-based) and recognises it. I searched a lot, but everything I found was too old. I don't have problems with classification or recognition (I will use the $1 recognizer or an HMM); I just want to know how to read the user's gesture using the accelerometer.
Is the accelerometer data (x, y, z values) enough, or should I also use other data such as attitude (roll, pitch, yaw), gyro data or magnetometer data? I don't really understand any of them, so an explanation of what these sensors do would be useful.
Thanks in advance!
Finally I did it. I used the userAcceleration data, which is the acceleration of the device excluding gravity. I found that a lot of people use the raw acceleration data and do a lot of math to remove gravity from it; that's already done for you by iOS 6 in userAcceleration.
I used the $1 recognizer, which is a 2D recognizer (i.e. points like (5, 10), no Z). Here's a link for the $1 recognizer; there's a C++ version of it in the downloads section.
Here are the steps of my code:
1. Read the userAcceleration data at a frequency of 50 Hz.
2. Apply a low-pass filter to it.
3. Take a point into consideration only if its x or y value is greater than 0.05, to reduce noise. (Note: the next steps depend on your code and on the recognizer you use.)
4. Save the x and y points into an array.
5. Create a 2D path from this array.
6. Send this path to the recognizer, either to train it or to recognize the gesture.
Here's my code:
@implementation MainViewController {
    double previousLowPassFilteredAccelerationX;
    double previousLowPassFilteredAccelerationY;
    double previousLowPassFilteredAccelerationZ;
    CGPoint position;
    int numOfTrainedGestures;
    GeometricRecognizer recognizer;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    previousLowPassFilteredAccelerationX = previousLowPassFilteredAccelerationY = previousLowPassFilteredAccelerationZ = 0.0;
    recognizer = GeometricRecognizer();
    // Note: I let the user train his own gestures, so I start up each time with 0 gestures
    numOfTrainedGestures = 0;
}

#define kLowPassFilteringFactor 0.1
#define MOVEMENT_HZ 50
#define NOISE_REDUCTION 0.05

- (IBAction)StartAccelerometer
{
    // SharedMotionManager is a custom singleton accessor; CoreMotion itself does not provide one.
    CMMotionManager *motionManager = [CMMotionManager SharedMotionManager];
    if ([motionManager isDeviceMotionAvailable])
    {
        [motionManager setDeviceMotionUpdateInterval:1.0 / MOVEMENT_HZ];
        [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                           withHandler:^(CMDeviceMotion *motion, NSError *error)
        {
            CMAcceleration lowpassFilterAcceleration, userAcceleration = motion.userAcceleration;
            // Simple low-pass filter to smooth the raw user acceleration
            lowpassFilterAcceleration.x = (userAcceleration.x * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationX * (1.0 - kLowPassFilteringFactor));
            lowpassFilterAcceleration.y = (userAcceleration.y * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationY * (1.0 - kLowPassFilteringFactor));
            lowpassFilterAcceleration.z = (userAcceleration.z * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationZ * (1.0 - kLowPassFilteringFactor));

            // Ignore tiny movements to reduce noise
            if (lowpassFilterAcceleration.x > NOISE_REDUCTION || lowpassFilterAcceleration.y > NOISE_REDUCTION)
                [self.points addObject:[NSString stringWithFormat:@"%.2f,%.2f", lowpassFilterAcceleration.x, lowpassFilterAcceleration.y]];

            previousLowPassFilteredAccelerationX = lowpassFilterAcceleration.x;
            previousLowPassFilteredAccelerationY = lowpassFilterAcceleration.y;
            previousLowPassFilteredAccelerationZ = lowpassFilterAcceleration.z;

            // Just showing the values to the user
            self.XLabel.text = [NSString stringWithFormat:@"X : %.2f", lowpassFilterAcceleration.x];
            self.YLabel.text = [NSString stringWithFormat:@"Y : %.2f", lowpassFilterAcceleration.y];
            self.ZLabel.text = [NSString stringWithFormat:@"Z : %.2f", lowpassFilterAcceleration.z];
        }];
    }
    else NSLog(@"DeviceMotion is not available");
}

- (IBAction)StopAccelerometer
{
    [[CMMotionManager SharedMotionManager] stopDeviceMotionUpdates];

    // Show all the collected points to the user
    self.pointsTextView.text = [NSString stringWithFormat:@"%lu\n\n%@", (unsigned long)self.points.count, [self.points componentsJoinedByString:@"\n"]];

    // There must be more than one trained gesture, because recognition returns the closest one by distance
    if (numOfTrainedGestures > 1) {
        Path2D path = [self createPathFromPoints]; // A method to create a 2D path from the points array
        if (path.size()) {
            RecognitionResult recognitionResult = recognizer.recognize(path);
            self.recognitionLabel.text = [NSString stringWithFormat:@"%s detected with probability %.2f!",
                                          recognitionResult.name.c_str(), recognitionResult.score];
        } else self.recognitionLabel.text = @"Not enough points for gesture!";
    }
    else self.recognitionLabel.text = @"Not enough templates!";

    [self releaseAllVariables];
}

AVCaptureSession specify custom resolution

Does anybody know how to set a custom resolution in a custom iOS camera using AVFoundation (AVCaptureStillImageOutput)?
I know you can select a preset using AVCaptureSession, but I need an output resolution of 920x920 (not provided by any preset). Currently I use AVCaptureSessionPresetHigh and resize the UIImage afterwards, but that seems wasteful and adds a lot of unnecessary processing.
You can't, but what you can do is iterate through all the AVCaptureDeviceFormats looking for the one closest to your resolution.
To get a complete list of all the available formats, just query the capture device using the -formats property.
This example from Apple shows how to pick the best frame rate:
- (void)configureCameraForHighestFrameRate:(AVCaptureDevice *)device
{
    AVCaptureDeviceFormat *bestFormat = nil;
    AVFrameRateRange *bestFrameRateRange = nil;
    for ( AVCaptureDeviceFormat *format in [device formats] ) {
        for ( AVFrameRateRange *range in format.videoSupportedFrameRateRanges ) {
            if ( range.maxFrameRate > bestFrameRateRange.maxFrameRate ) {
                bestFormat = format;
                bestFrameRateRange = range;
            }
        }
    }
    if ( bestFormat ) {
        if ( [device lockForConfiguration:NULL] == YES ) {
            device.activeFormat = bestFormat;
            // Min and max are both set to the shortest duration to lock in the highest frame rate.
            device.activeVideoMinFrameDuration = bestFrameRateRange.minFrameDuration;
            device.activeVideoMaxFrameDuration = bestFrameRateRange.minFrameDuration;
            [device unlockForConfiguration];
        }
    }
}
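The same loop can be adapted to pick the format whose dimensions come closest to a target size such as 920x920. A rough sketch (the scoring by pixel-count difference is just one possible heuristic, and you would still crop to an exact 920x920 square afterwards):
// Sketch: choose the device format whose dimensions are closest to a target size.
- (AVCaptureDeviceFormat *)formatClosestTo:(CMVideoDimensions)target forDevice:(AVCaptureDevice *)device
{
    AVCaptureDeviceFormat *bestFormat = nil;
    int64_t bestDelta = INT64_MAX;
    for ( AVCaptureDeviceFormat *format in [device formats] ) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        // Score by how far the pixel count is from the target's pixel count.
        int64_t delta = llabs((int64_t)dims.width * dims.height - (int64_t)target.width * target.height);
        if ( delta < bestDelta ) {
            bestDelta = delta;
            bestFormat = format;
        }
    }
    return bestFormat;
}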
Please check out one of my old questions that I answered myself; it may help you:
Capture picture from video using AVFoundation

How do you set the framerate when recording video on the iPhone?

I would like to write a camera application where you record video using the iPhone's camera, but I can't find a way to alter the framerate of the recorded video. For example, I'd like to record at 25 frames per second instead of the default 30.
Is it possible to set this framerate in any way, and if yes how?
You can use AVCaptureConnection's videoMaxFrameDuration and videoMinFrameDuration properties. See http://developer.apple.com/library/ios/#DOCUMENTATION/AVFoundation/Reference/AVCaptureConnection_Class/Reference/Reference.html#//apple_ref/doc/uid/TP40009522
Additionally, there is an SO question that addresses this (with a good code example):
I want to throttle video capture frame rate in AVCapture framework
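For reference, a minimal sketch of the connection-based approach (videoOutput is a placeholder for your capture output; note that Apple later deprecated these AVCaptureConnection properties in favour of activeVideoMin/MaxFrameDuration on AVCaptureDevice, which the code further down uses):
// Sketch: throttle the capture connection to 25 fps, if the connection supports it.
AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
CMTime frameDuration = CMTimeMake(1, 25);   // 1/25 s per frame = 25 fps
if (connection.supportsVideoMinFrameDuration)
    connection.videoMinFrameDuration = frameDuration;
if (connection.supportsVideoMaxFrameDuration)
    connection.videoMaxFrameDuration = frameDuration;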
As far as I can tell, you can't set the FPS for recording. Look at the WWDC 2010 video on AVFoundation; it seems to suggest that you can, but again, as far as I can tell, that only works for capturing frame data.
I'd love to be proven wrong, but I'm pretty sure that you can't. Sorry!
You will need AVCaptureDevice.h. The code below uses self as the capture device, so it appears to be written as a method on AVCaptureDevice (for example in a category). Here is working code:
- (void)attemptToConfigureFPS
{
    NSError *error;
    if (![self lockForConfiguration:&error]) {
        NSLog(@"Could not lock device %@ for configuration: %@", self, error);
        return;
    }

    AVCaptureDeviceFormat *format = self.activeFormat;
    double epsilon = 0.00000001;
    int desiredFrameRate = 30;

    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"Pre Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
        if (range.minFrameRate <= (desiredFrameRate + epsilon) &&
            range.maxFrameRate >= (desiredFrameRate - epsilon)) {
            NSLog(@"Setting Frame Rate.");
            self.activeVideoMaxFrameDuration = (CMTime){
                .value = 1,
                .timescale = desiredFrameRate,
                .flags = kCMTimeFlags_Valid,
                .epoch = 0,
            };
            self.activeVideoMinFrameDuration = (CMTime){
                .value = 1,
                .timescale = desiredFrameRate,
                .flags = kCMTimeFlags_Valid,
                .epoch = 0,
            };
            // self.activeVideoMinFrameDuration = self.activeVideoMaxFrameDuration;
            // NSLog(@"Post Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
            break;
        }
    }

    [self unlockForConfiguration];

    // Audit the changes
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"Post Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
    }
}
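Assuming the method above lives in a category on AVCaptureDevice (its use of self suggests so; the category name below is hypothetical), calling it might look like this:
// Hypothetical category header, e.g. AVCaptureDevice+FrameRate.h
@interface AVCaptureDevice (FrameRate)
- (void)attemptToConfigureFPS;
@end

// Usage, once you have picked your capture device:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
[device attemptToConfigureFPS];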