Is it possible to catch the error "ImageIO: JPEGMaximum supported image dimension is 65500 pixels"?
I have written a try/catch block around the corresponding method; however, the catch block never catches the ImageIO error.
The catch block in the code below is never reached. Any help on this is appreciated.
-(void)captureToPhotoAlbum {
    @try {
        UIImage *image = [self glToUIImage]; //to get an image
        if (image != nil) {
            UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
            [self eraseView];
            [self showAnAlert:@"Saved Successfully" Message:@""];
        }
        else {
            [self showAnAlert:@"Can't Save!!" Message:@"An error occurred "];
        }
    }
    @catch (NSException *ex) {
        [self showAnAlert:@"Can't Save!!" Message:@"An error occurred"];
    }
}
-(UIImage *) glToUIImage {
    UIImage *myImage = nil;
    @try {
        NSInteger myDataLength = 800 * 960 * 4;
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, 800, 960, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y < 960; y++)
        {
            for(int x = 0; x < 800 * 4; x++)
            {
                buffer2[(959 - y) * 800 * 4 + x] = buffer[y * 4 * 800 + x];
            }
        }
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * 800;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        // make the cgimage
        CGImageRef imageRef = CGImageCreate(800, 444444444, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        // then make the uiimage from that
        myImage = [UIImage imageWithCGImage:imageRef];
    }
    @catch (NSException *ex) {
        @throw;
    }
    @finally {
        return myImage;
    }
}
The method does not throw an exception, so there is nothing to catch. If you want to receive an error, pass a callback as the third argument:
UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
Then implement the following method, which will be called once the image has been saved:
- (void) image: (UIImage *) image
didFinishSavingWithError: (NSError *) error
contextInfo: (void *) contextInfo;
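A minimal sketch of that callback, assuming you just want to reuse the asker's showAnAlert:Message: helper to surface the outcome:
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error != nil) {
        // The ImageIO / save failure arrives here as an NSError rather than an exception.
        [self showAnAlert:@"Can't Save!!" Message:[error localizedDescription]];
    } else {
        [self showAnAlert:@"Saved Successfully" Message:@""];
    }
}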
Related
I want to compare two images taken with the iPhone camera in my project. I am using OpenCV for that. Is there a better way to do it?
It would be great if I could get a % similarity.
I am using the following OpenCV code for the image comparison:
-(void)opencvImageCompare{
    NSMutableArray *valuesArray=[[NSMutableArray alloc]init];
    IplImage *img = [self CreateIplImageFromUIImage:imageView.image];
    // always check camera image
    if(img == 0) {
        printf("Cannot load camera img");
    }
    IplImage *res;
    CvPoint minloc, maxloc;
    double minval, maxval;
    double values;
    UIImage *imageTocompare = [UIImage imageNamed:@"MyImageName"];
    IplImage *imageTocompareIpl = [self CreateIplImageFromUIImage:imageTocompare];
    // always check server image
    if(imageTocompareIpl == 0) {
        printf("Cannot load serverIplImageArray image");
    }
    if(img->width-imageTocompareIpl->width<=0 && img->height-imageTocompareIpl->height<=0){
        int balWidth=imageTocompareIpl->width-img->width;
        int balHeight=imageTocompareIpl->height-img->height;
        img->width=img->width+balWidth+100;
        img->height=img->height+balHeight+100;
    }
    CvSize size = cvSize(
        img->width - imageTocompareIpl->width + 1,
        img->height - imageTocompareIpl->height + 1
    );
    res = cvCreateImage(size, IPL_DEPTH_32F, 1);
    // CV_TM_SQDIFF CV_TM_SQDIFF_NORMED
    // CV_TM_CCORR CV_TM_CCORR_NORMED
    // CV_TM_CCOEFF CV_TM_CCOEFF_NORMED
    cvMatchTemplate(img, imageTocompareIpl, res, CV_TM_CCOEFF);
    cvMinMaxLoc(res, &minval, &maxval, &minloc, &maxloc, 0);
    printf("\n value %f", maxval-minval);
    values=maxval-minval;
    NSString *valString=[NSString stringWithFormat:@"%f",values];
    [valuesArray addObject:valString];
    weedObject.values=[valString doubleValue];
    printf("\n------------------------------");
    cvReleaseImage(&imageTocompareIpl);
    cvReleaseImage(&res);
    cvReleaseImage(&img);
}
For the same image I am getting a non-zero result (14956...), and if I pass a different image it crashes.
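One note on the template-matching code above, offered as a sketch rather than the asker's approach: the un-normalized CV_TM_CCOEFF score depends on the image content and is not bounded, so it is hard to read as a similarity measure. The _NORMED variant returns a score in [-1, 1] that maps naturally to a percentage:
// Sketch: normalized correlation gives a bounded score that can be read as a percentage.
cvMatchTemplate(img, imageTocompareIpl, res, CV_TM_CCOEFF_NORMED);
cvMinMaxLoc(res, &minval, &maxval, &minloc, &maxloc, 0);
double similarityPercent = maxval * 100.0; // maxval is 1.0 for a perfect match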
Try this code; it compares the images bit by bit, i.e. a 100% match:
UIImage *img1 = // Some photo;
UIImage *img2 = // Some photo;
NSData *imgdata1 = UIImagePNGRepresentation(img1);
NSData *imgdata2 = UIImagePNGRepresentation(img2);
if ([imgdata1 isEqualToData:imgdata2])
{
    NSLog(@"Same Image");
}
Try this code - it compares the images pixel by pixel:
-(void)removeindicator :(UIImage *)image
{
    for (int i =0; i < [imageArray count]; i++)
    {
        CGFloat width =100.0f;
        CGFloat height=100.0f;
        CGSize newSize1 = CGSizeMake(width, height); // whatever size
        UIGraphicsBeginImageContext(newSize1);
        [[UIImage imageNamed:[imageArray objectAtIndex:i]] drawInRect:CGRectMake(0, 0, newSize1.width, newSize1.height)];
        UIImage *newImage1 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageView *imageview_camera=(UIImageView *)[self.view viewWithTag:-3];
        CGSize newSize2 = CGSizeMake(width, height); // whatever size
        UIGraphicsBeginImageContext(newSize2);
        [[imageview_camera image] drawInRect:CGRectMake(0, 0, newSize2.width, newSize2.height)];
        UIImage *newImage2 = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        float numDifferences = 0.0f;
        float totalCompares = width * height;
        NSArray *img1RGB = nil;
        NSArray *img2RGB = nil;
        for (int yCoord = 0; yCoord < height; yCoord += 1)
        {
            for (int xCoord = 0; xCoord < width; xCoord += 1)
            {
                img1RGB = [self getRGBAsFromImage:newImage1 atX:xCoord andY:yCoord];
                img2RGB = [self getRGBAsFromImage:newImage2 atX:xCoord andY:yCoord];
                if (([[img1RGB objectAtIndex:0]floatValue] - [[img2RGB objectAtIndex:0]floatValue]) == 0 || ([[img1RGB objectAtIndex:1]floatValue] - [[img2RGB objectAtIndex:1]floatValue]) == 0 || ([[img1RGB objectAtIndex:2]floatValue] - [[img2RGB objectAtIndex:2]floatValue]) == 0)
                {
                    // at least one pixel component matches exactly, so count it toward similarity
                    numDifferences++;
                }
            }
        }
        // show the result as a percentage at the end
        CGFloat percentage_similar=((numDifferences*100)/totalCompares);
        NSString *str=nil;
        if (percentage_similar>=10.0f)
        {
            str=[[NSString alloc]initWithString:[NSString stringWithFormat:@"%i%@ Identical", (int)((numDifferences*100)/totalCompares),@"%"]];
            UIAlertView *alertview=[[UIAlertView alloc]initWithTitle:@"i-App" message:[NSString stringWithFormat:@"%@ Images are same",str] delegate:nil cancelButtonTitle:@"Ok" otherButtonTitles:nil];
            [alertview show];
            break;
        }
        else
        {
            str=[[NSString alloc]initWithString:[NSString stringWithFormat:@"Result: %i%@ Identical",(int)((numDifferences*100)/totalCompares),@"%"]];
            UIAlertView *alertview=[[UIAlertView alloc]initWithTitle:@"i-App" message:[NSString stringWithFormat:@"%@ Images are not same",str] delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
            [alertview show];
        }
    }
}
-(NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy
{
    //NSArray *result = [[NSArray alloc]init];
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    // for (int ii = 0 ; ii < count ; ++ii)
    // {
    CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
    CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
    //CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
    byteIndex += 4;
    // UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
    //}
    free(rawData);
    NSArray *result = [NSArray arrayWithObjects:
                       [NSNumber numberWithFloat:red],
                       [NSNumber numberWithFloat:green],
                       [NSNumber numberWithFloat:blue],nil];
    return result;
}
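A side note on the pixel-by-pixel answer above: getRGBAsFromImage: redraws the whole image into a fresh bitmap for every pixel it is asked about, so the 100 x 100 comparison renders each image 10,000 times. A hedged sketch of a faster variant that renders each image once and then indexes into the buffer (the helper name rawRGBADataFromImage:width:height: is made up for illustration):
// Render the image once into a tightly packed RGBA8888 buffer; the caller must free() it.
- (unsigned char *)rawRGBADataFromImage:(UIImage *)image width:(NSUInteger)width height:(NSUInteger)height
{
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);
    CGContextRelease(context);
    return rawData;
}
// Usage inside the nested loops: fetch both buffers once, then index per pixel.
// unsigned char *data1 = [self rawRGBADataFromImage:newImage1 width:100 height:100];
// unsigned char *data2 = [self rawRGBADataFromImage:newImage2 width:100 height:100];
// NSUInteger idx = (yCoord * 100 + xCoord) * 4; // red = data1[idx], green = data1[idx + 1], blue = data1[idx + 2]
// free(data1); free(data2);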
I'm trying to call UIImagePNGRepresentation on a thread other than the main thread, but I'm only getting an EXC_BAD_ACCESS exception.
Here is the code:
UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(taskQ,
^{
    NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
    if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
        SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
    };
});
The exception occurs when the imageToSave variable is accessed.
EDIT:
I want to save an OpenGL scene to a PNG file; here is the code of the whole function:
- (void)saveImage:(NSString *)destinationFilePath {
    NSUInteger width = (uint)self.frame.size.width;
    NSUInteger height = (uint)self.frame.size.height;
    NSUInteger myDataLength = width * height * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width * 4; x++) {
            buffer2[((int)height - 1 - y) * (int)width * 4 + x] = buffer[y * 4 * (int)width + x];
        }
    }
    JFFree(buffer);
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        bitsPerComponent,
                                        bitsPerPixel,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        NO,
                                        renderingIntent);
    UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
    dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(taskQ,
    ^{
        NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
        if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
            SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
        };
    });
    // release resources
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    CGImageRelease(imageRef);
    JFFree(buffer2);
}
Leave out the CGImageRelease call, because the UIImage doesn't retain the CGImageRef.
See imageWithCGImage and memory.
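A minimal sketch of that ordering, using the asker's SCASSERT and JFFree helpers: keep the CGImage and its backing buffer2 alive (the data provider does not copy the bytes) until the background block has produced the PNG, and only release them afterwards:
UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(taskQ, ^{
    NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
    if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
        SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
    }
    // Only now is it safe to drop the CGImage and the pixel buffer behind it.
    CGImageRelease(imageRef);
    JFFree(buffer2);
});
// The color space and data provider are retained by the CGImage, so these can go now.
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);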
How can I save a 3D-transformed image to the photo album? I am using CATransform3DRotate to change the transform, but I am not able to save the image. The saving code:
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
Is it possible to save the 3D-transformed image? Please help me. Thanks in advance.
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 480; y++)
    {
        for(int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    return myImage;
}
-(void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Also see the full tutorial on saving an OpenGL image to the photo album from this link, and also see my blog post on this: captureimagescreenshot-of-view.
2. You can also use ALAssetsLibrary to save the image:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error){
    if (error) {
        // TODO: error handling
    } else {
        // TODO: success handling
    }
}];
[library release];
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo);
You only need completionTarget, completionSelector and contextInfo if you want to be notified when the image is done saving, otherwise you can pass in nil.
OK, then try it like this:
UIGraphicsBeginImageContext(YOUR_VIEW.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
It will capture your view like a screenshot and save it to the photo album.
YOUR_VIEW = pass your edited image's superview.
I am making a video of the screen, but it crashes on this line:
CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);
Note: it will work only if the pixel buffer is contiguous and has the same bytesPerRow as the input data.
So I am providing my code that grabs frames from the camera - maybe it will help. After grabbing the data, I put it on a queue for further processing. I had to remove some of the code because it was not relevant to you, so what you see here has pieces you should be able to use.
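For that contiguity caveat: a hedged sketch of copying the pixels row by row, which also works when bytesPerRow contains padding (the helper name copyPixelBufferBytes is made up for illustration):
static uint8_t *copyPixelBufferBytes(CVPixelBufferRef pixelBuffer, size_t bytesPerPixel)
{
    // Copy a non-planar pixel buffer into a tightly packed block; the caller must free() it.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t dstBytesPerRow = width * bytesPerPixel; // no row padding in the copy
    uint8_t *dst = (uint8_t *)malloc(dstBytesPerRow * height);
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, dstBytesPerRow);
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return dst;
}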
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        //NSLog(@"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);
        /*Lock the image buffer*/
        CVPixelBufferLockBaseAddress(imageBuffer,0);
        NSRange captureRange;
        /*Get information about the image*/
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        // Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
        CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format
        assert(newContext);
        uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
        size_t offset = captureRange.location * bytesPerRow;
        memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // when it was taken?
        //CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // now?
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSValue valueWithBytes:&saveState objCType:@encode(saveImages)], kState,
                              [NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
                              [NSValue valueWithBytes:&prStamp objCType:@encode(CMTime)], kPresTime,
                              nil ];
        dispatch_async(imageQueue, ^
        {
            // could be on any thread now
            OSAtomicDecrement32(&queueDepth);
            if(!isCancelled) {
                saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
                CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
                CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];
                CGImageRef newImageRef = CGBitmapContextCreateImage(context);
                CGContextRelease(context);
                UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
                UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage: UIImageOrientationUp UIImageOrientationLeft
                CGImageRelease(newImageRef);
                NSData *data = UIImagePNGRepresentation(image);
                // NSLog(@"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);
                {
                    NSString *name = [NSString stringWithFormat:@"%d.png", num];
                    NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
                    BOOL ret = [data writeToFile:path atomically:NO];
                    //NSLog(@"WROTE %d err=%d w/time %f path:%@", num, ret, (double)stamp.value/(double)stamp.timescale, path);
                    if(!ret) {
                        ++errors;
                    } else {
                        dispatch_async(dispatch_get_main_queue(), ^
                        {
                            if(num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
                        } );
                    }
                    ++num;
                }
            } else NSLog(@"CANCELLED");
        } );
    }
}
I use AVCaptureSessionPhoto to allow the user to take high-resolution photos. Upon taking a photo, I use the captureOutput:didOutputSampleBuffer:fromConnection: method to retrieve a thumbnail at the time of capture. However, although I try to do minimal work in the delegate method, the app becomes sort of laggy (I say sort of because it is still usable). Also, the iPhone tends to run hot.
Is there some way of reducing the amount of work the iPhone has to do?
I set up the AVCaptureVideoDataOutput by doing the following:
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
// Specify the pixel format
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
Here's my captureOutput:didOutputSampleBuffer:fromConnection: (and the assisting imageRefFromSampleBuffer: method):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (videoDataOutputConnection == nil) {
        videoDataOutputConnection = connection;
    }
    if (getThumbnail > 0) {
        getThumbnail--;
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer];
        UIImage *image;
        if (self.prevLayer.mirrored) {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored];
        }
        else {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight];
        }
        [self.cameraThumbnailArray insertObject:image atIndex:0];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.freezeCameraView.image = image;
        });
        CFRelease(tempThumbnail);
    }
    sampleBuffer = nil;
    [pool release];
}
-(CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
minFrameDuration is deprecated, this may work:
AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
To improve, we should set up our AVCaptureVideoDataOutput with:
output.minFrameDuration = CMTimeMake(1, 10);
We specify a minimum duration for each frame (play with these settings to avoid having too many frames waiting in the queue, because that can cause memory issues). This is effectively the inverse of the maximum frame rate: here we set a minimum frame duration of 1/10 second, so a maximum frame rate of 10 fps. We are saying that we cannot process more than 10 frames per second.
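On newer SDKs the connection's videoMinFrameDuration is deprecated as well; a sketch of the device-level equivalent (iOS 7 and later), assuming device is your AVCaptureDevice:
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Minimum frame duration of 1/10 s caps capture at roughly 10 fps.
    device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
    [device unlockForConfiguration];
}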
Hope that helps!