Crash when converting cv::Mat to UIImage (OpenCV 2.4, iPhone)

I'm attempting to convert a UIImage to a cv::Mat and then back to a UIImage and insert that UIImage into a UIImageView.
This is the code I'm using to convert:
UIImage * imageFromMat(const cv::Mat& cvMat) {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    UIImage *image = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}
It converts fine, but as soon as I insert the result into a UIImageView I get a crash, so I'm assuming the problem lies there.
I've noticed that if I retain the original image (the one before converting to cv::Mat) the crash doesn't happen, but then I get a leak.
Any thoughts on what the problem could be?

Turns out I was over-releasing the CGImageRef.
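For reference, here is a minimal sketch of the ownership rule that was being violated (manual reference counting assumed; the names are illustrative, not the exact fix): every Core Graphics object obtained from a Create function must be released exactly once, and the CGImage backing the returned UIImage must not be released a second time elsewhere.

static UIImage *MakeImageOnce(CGDataProviderRef provider, size_t width, size_t height, size_t bytesPerRow)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace,
                                        kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef]; // UIImage retains the CGImage

    // Release each Create-d object exactly once here; releasing imageRef again later
    // (the "over release") leaves the UIImageView drawing freed memory and crashes.
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    return image;
}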

Related

iPhone: Converting IplImage to UIImage, images become bluish

For face detection, I first converted the UIImage to an IplImage using this code:
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef;
    CGColorSpaceRef colorSpaceRef;
    CGContextRef context;
    IplImage *iplImage;
    IplImage *returnImage;

    imageRef = image.CGImage;
    colorSpaceRef = CGColorSpaceCreateDeviceRGB();

    iplImage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    context = CGBitmapContextCreate(iplImage->imageData,
                                    iplImage->width,
                                    iplImage->height,
                                    iplImage->depth,
                                    iplImage->widthStep,
                                    colorSpaceRef,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpaceRef);

    returnImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 3);
    cvCvtColor(iplImage, returnImage, CV_RGBA2BGR);
    cvReleaseImage(&iplImage);

    // Caller must cvReleaseImage(&returnImage) when done with it.
    return returnImage;
}
And then, after detecting the facial features, I converted the IplImage back to a UIImage using this code:
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateDeviceGray();
    // Allocating the buffer for CGImage
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    // Creating CGImage from chunk of IplImage
    CGImageRef imageRef = CGImageCreate(image->width, image->height, image->depth,
                                        image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    // Getting UIImage from CGImage
    UIImage *ret = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
But when it is shown in the image view, the image appears bluish. And if I use the color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
then the whole image comes out gray. Either way the original colors are completely lost.
Thanks in advance.

Changing the UIImageFromIplImage function as follows solved the bluish/grayish problem. The blue tint came from the channel order: the IplImage holds BGR data (the result of CV_RGBA2BGR above), but Core Graphics interprets the buffer as RGB, so the red and blue channels were swapped; converting with CV_BGR2RGB before creating the CGImage fixes it.
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace;
    if (image->nChannels == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        cvCvtColor(image, image, CV_BGR2RGB);
    }
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width,
                                        image->height,
                                        image->depth,
                                        image->depth * image->nChannels,
                                        image->widthStep,
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
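Note that cvCvtColor(image, image, CV_BGR2RGB) converts the IplImage in place, so the caller's buffer is modified; copy the image first if you still need the BGR data afterwards.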

Image size on device is different after uploading image to server

I am doing the following operation before uploading an image. If I compare the size of the image before and after uploading, it has doubled (i.e. if I upload a 2 MB image, I see a 4 MB image on the server).
NSTimeInterval timeInterval = [assetDate timeIntervalSince1970];
ALAssetRepresentation *rep = [temp defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
StrPath = [StrPath stringByAppendingFormat:@"%d.%@", (int)timeInterval, strImageType];
UIImage *image = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
NSData *dataObj = nil;
dataObj = UIImageJPEGRepresentation(image, 1.0);
NSString* StrFileData = [Base64 encode:dataObj];
NSString* strFileHash = [dataObj md5Test];
Use the following method to resize an image to a specific width and height:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio  = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;

    // Scale by the smaller ratio so the aspect ratio is preserved
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
This method returns the new, resized image with its aspect ratio preserved, so it will not be distorted :)
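For example, a hypothetical call (the class name and target size are illustrative only):

UIImage *resized = [ImageUtils resizeImage:originalImage withWidth:800 withHeight:600];
NSData *dataObj  = UIImageJPEGRepresentation(resized, 0.8); // smaller payload than full resolution at quality 1.0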
I have a few suggestions which should help you:
You can get the ALAsset image NSData directly from the ALAsset defaultRepresentation's
getBytes:fromOffset:length:error: method (see the sketch after these suggestions).
Use the ALAsset aspect-ratio thumbnail instead of retrieving the full-resolution image:
CGImageRef iref = [myasset aspectRatioThumbnail];
The ALAssetsLibrary block executes on a separate thread, so dispatch the server upload explicitly:
dispatch_async(dispatch_get_global_queue(0, 0), ^{
// do the server upload
});
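A minimal sketch of the first suggestion, assuming rep is the ALAssetRepresentation from the question. Reading the asset's original, already-encoded bytes avoids re-compressing with UIImageJPEGRepresentation at quality 1.0, which is what inflates the upload size:

long long assetSize = [rep size];
uint8_t *rawBuffer = (uint8_t *)malloc((size_t)assetSize);
NSError *error = nil;
NSUInteger bytesRead = [rep getBytes:rawBuffer fromOffset:0 length:(NSUInteger)assetSize error:&error];
// Wrap the buffer without copying; NSData will free() it when released.
NSData *dataObj = [NSData dataWithBytesNoCopy:rawBuffer length:bytesRead freeWhenDone:YES];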

How to save the image to the photo album after performing a 3D transform?

How can I save the 3D-transformed image to the photo album? I am using CATransform3DRotate to change the transform, but I am not able to save the transformed image. Image-saving code:
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
Is it possible to save the 3D-transformed image? Please help me. Thanks in advance.
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed

    // make data provider with data.
    // Note: with a NULL release callback, buffer2 is never freed; pass a callback
    // that calls free() if you need to avoid that leak.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that, and release what we created
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    return myImage;
}
-(void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Also see the full tutorial on saving an OpenGL image to the photo album from this link,
and also see my blog post on this: captureimagescreenshot-of-view.
2. You can also use ALAssetsLibrary to save the image:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage]
                          orientation:(ALAssetOrientation)[image imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        // TODO: error handling
    } else {
        // TODO: success handling
    }
}];
[library release];
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo);
You only need completionTarget, completionSelector and contextInfo if you want to be notified when the image is done saving, otherwise you can pass in nil.
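If you do want to be notified, the completion selector must have the signature UIKit documents for UIImageWriteToSavedPhotosAlbum; a minimal sketch (the method name and parameter list are fixed by UIKit, the body is illustrative):

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    if (error) {
        NSLog(@"Save failed: %@", error);
    } else {
        NSLog(@"Image saved to the photo album");
    }
}

// And pass it like this:
UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(image:didFinishSavingWithError:contextInfo:), NULL);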
OK, then try it like this:
UIGraphicsBeginImageContext(YOUR_VIEW.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
This captures your view like a screenshot and saves it to the photo album.
YOUR_VIEW = pass your edited image's superview.

Loading retina/normal images using Core Graphics

+ (UIImage *)createCGImageFromFile:(NSString *)inName
{
    if (!inName || [inName length] == 0)
        return NULL;

    // use the data provider to get a CGImage; release the data provider
    CGImageRef image = NULL;
    CGFloat scale = 1.0f;
    CGDataProviderRef dataProvider = NULL;
    NSString *fileName = nil;
    NSString *path = nil;

    if ([Utilities isRetinaDevice])
    {
        NSString *extension = [inName pathExtension];
        NSString *stringWithoutPath = [inName stringByDeletingPathExtension];
        fileName = [NSString stringWithFormat:@"/Images/%@@2x.%@", stringWithoutPath, extension];
        path = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:fileName];
        dataProvider = CGDataProviderCreateWithFilename([path UTF8String]);
        if (dataProvider)
        {
            image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
            CGDataProviderRelease(dataProvider);
            if (image)
            {
                scale = 2.0f;
            }
        }
    }

    if (image == NULL) // Try normal image
    {
        fileName = [NSString stringWithFormat:@"/Images/%@", inName];
        path = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:fileName];
        dataProvider = CGDataProviderCreateWithFilename([path UTF8String]);
        if (dataProvider)
        {
            image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO, kCGRenderingIntentDefault);
            CGDataProviderRelease(dataProvider);
        }
    }

    // make a bitmap context of a suitable size to draw to, forcing decode
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    unsigned char *imageBuffer = (unsigned char *)malloc(width * height * 4);

    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext =
        CGBitmapContextCreate(imageBuffer, width, height, 8, width * 4, colourSpace,
                              kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colourSpace);

    // draw the image to the context, release it
    CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);

    // now get an image ref from the context
    CGImageRef cgImageRef = CGBitmapContextCreateImage(imageContext);
    CGContextRelease(imageContext);
    free(imageBuffer);

    UIImage *outImage = nil;
    if (cgImageRef)
    {
        outImage = [[UIImage alloc] initWithCGImage:cgImageRef scale:scale orientation:UIImageOrientationUp];
        CGImageRelease(cgImageRef);
    }
    return outImage; // Caller is responsible for releasing (alloc/init, not autoreleased)
}
Earlier the function was not loading retina images, since I was not appending @2x to the image path; I thought Apple itself would take care of it.
Now I have modified the code to load @2x images and to create the UIImage with the scale factor. Am I missing something in the above code? Is it fine?
P.S. Currently I am not facing any problem with this code, but I am not 100% sure.
I would use UIImage to load it (it will resolve the @2x path for you), which will greatly simplify your code.
Then get a CGImage from it (UIImage is basically just a thin wrapper around CG, so that's no real overhead) and proceed with your code; it looks OK to me.
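A minimal sketch of that suggestion, assuming the images live in an Images folder inside the bundle as in the original code (UIImage picks the @2x variant and sets the scale for you):

NSString *path = [[[NSBundle mainBundle] resourcePath]
                  stringByAppendingPathComponent:[@"Images" stringByAppendingPathComponent:inName]];
UIImage *loaded = [UIImage imageWithContentsOfFile:path];
CGImageRef cgImage = loaded.CGImage;   // thin wrapper; no copy is made
CGFloat scale = loaded.scale;          // 2.0 when the @2x file was used
// ...then draw cgImage into the bitmap context exactly as in the method above to force decoding.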

UIImage to byte array

I'm creating an app that uploads an image to a server. It must send the byte array inside an XML document. How do I get the byte array into an NSString?
Thanks!
You can convert the UIImage to an NSData object and then extract the byte array from there. Here is some sample code:
UIImage *image = [UIImage imageNamed:@"image.png"];
NSString *byteArray = [UIImagePNGRepresentation(image) base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
If you are using a PNG Image you can use the UIImagePNGRepresentation function as shown above or if you are using a JPEG Image, you can use the UIImageJPEGRepresentation function. Documentation is available on the UIImage Class Reference
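For example, the JPEG variant (the 0.8 compression quality and the asset name are illustrative):

UIImage *image = [UIImage imageNamed:@"photo.jpg"];
NSData *jpegData = UIImageJPEGRepresentation(image, 0.8f);   // 0.0 = maximum compression, 1.0 = best quality
NSString *byteArray = [jpegData base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];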
Here is a simple function for iOS to convert from a UIImage to an unsigned char * byte array:
+ (unsigned char *)UIImageToByteArray:(UIImage *)image {
    // Caller is responsible for free()ing the returned buffer.
    unsigned char *imageData = (unsigned char *)malloc(4 * image.size.width * image.size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = [image CGImage];
    CGContextRef bitmap = CGBitmapContextCreate(imageData,
                                                image.size.width,
                                                image.size.height,
                                                8,
                                                image.size.width * 4,
                                                colorSpace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    return imageData;
}
Using NSData *data = UIImagePNGRepresentation(image); you can convert the image into NSData, and then convert that data to bytes using getBytes:length: or getBytes:range:.
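A minimal sketch of that approach (the buffer handling is illustrative):

NSData *data = UIImagePNGRepresentation(image);
NSUInteger length = [data length];
uint8_t *bytes = (uint8_t *)malloc(length);
[data getBytes:bytes length:length];   // copy the PNG bytes into the C buffer
// ... use bytes/length here, e.g. base64-encode them for the XML payload ...
free(bytes);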