UIImage to byte array - iPhone

I'm creating an app that uploads an image to a server. It must send the byte array in XML. How do I get the byte array into an NSString?
Thanks!

You can convert the UIImage to an NSData object and then extract the byte array from there. Here is some sample code:
UIImage *image = [UIImage imageNamed:@"image.png"];
NSString *byteArray = [UIImagePNGRepresentation(image) base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
If you are using a PNG image you can use the UIImagePNGRepresentation function as shown above; if you are using a JPEG image, you can use the UIImageJPEGRepresentation function. Documentation is available in the UIImage Class Reference.
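For JPEG the call is analogous; a minimal sketch (the 0.8 compression quality and the zero options value, which yields a base64 string without line breaks, are arbitrary choices):
UIImage *image = [UIImage imageNamed:@"photo.jpg"];
NSData *jpegData = UIImageJPEGRepresentation(image, 0.8); // quality from 0.0 to 1.0
NSString *byteString = [jpegData base64EncodedStringWithOptions:0]; // single-line base64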

Here is a simple function for iOS that converts a UIImage to an unsigned char * byte array:
+ (unsigned char *)UIImageToByteArray:(UIImage *)image {
    // Caller is responsible for free()ing the returned buffer.
    unsigned char *imageData = (unsigned char *)malloc(4 * image.size.width * image.size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = [image CGImage];
    CGContextRef bitmap = CGBitmapContextCreate(imageData,
                                                image.size.width,
                                                image.size.height,
                                                8,                    // bits per component
                                                image.size.width * 4, // bytes per row
                                                colorSpace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    return imageData;
}
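The buffer is malloc'd inside the method and never freed there, so the caller owns it. A minimal usage sketch (ImageUtils is a hypothetical name for whatever class declares the method):
unsigned char *bytes = [ImageUtils UIImageToByteArray:image]; // hypothetical host class
// ... use the 4-bytes-per-pixel data ...
free(bytes); // the caller releases the buffer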

Using NSData *data = UIImagePNGRepresentation(image); you can convert the image into data; you can then copy the raw bytes out with getBytes:length: or getBytes:range:.
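A minimal sketch of the getBytes:length: route:
NSData *data = UIImagePNGRepresentation(image);
NSUInteger length = [data length];
uint8_t *buffer = malloc(length);
[data getBytes:buffer length:length]; // copies the PNG bytes into the buffer
// ... use the bytes ...
free(buffer);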

Related

iPhone: converting IplImage to UIImage, images become bluish

For face detection, I first converted the UIImage to an IplImage using this code:
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 4-channel RGBA buffer that Quartz can draw into directly
    IplImage *iplImage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef context = CGBitmapContextCreate(iplImage->imageData,
                                                 iplImage->width,
                                                 iplImage->height,
                                                 iplImage->depth,
                                                 iplImage->widthStep,
                                                 colorSpaceRef,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpaceRef);
    // Drop the alpha channel and reorder for OpenCV's 3-channel BGR layout
    IplImage *returnImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 3);
    cvCvtColor(iplImage, returnImage, CV_RGBA2BGR);
    cvReleaseImage(&iplImage);
    return returnImage;
}
Then, after detecting the facial features, I converted the IplImage back to a UIImage using this code:
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // or CGColorSpaceCreateDeviceGray()
    // Allocating the buffer for CGImage
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    // Creating CGImage from the IplImage's pixel data
    CGImageRef imageRef = CGImageCreate(image->width, image->height, image->depth,
                                        image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    // Getting UIImage from CGImage
    UIImage *ret = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}
But when it is shown in the image view, it looks bluish. And if I use the color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
then the whole image becomes gray. The original colors are lost either way.
Thanks in advance.
Changing the UIImageFromIplImage function as below solved the bluish/grayish problem. OpenCV stores color channels in BGR order while Quartz expects RGB, so the channels have to be swapped before creating the CGImage:
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace;
    if (image->nChannels == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        cvCvtColor(image, image, CV_BGR2RGB); // OpenCV is BGR, Quartz wants RGB
    }
    NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width,
                                        image->height,
                                        image->depth,
                                        image->depth * image->nChannels,
                                        image->widthStep,
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    UIImage *ret = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return ret;
}

Image size on device is different after uploading image to server

I am doing the following operations before uploading an image. If I check the size of the image before and after uploading, it doubles (i.e. if I upload a 2 MB image, I see a 4 MB image on the server).
NSTimeInterval timeInterval = [assetDate timeIntervalSince1970];
ALAssetRepresentation *rep = [temp defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
StrPath = [StrPath stringByAppendingFormat:@"%d.%@", (int)timeInterval, strImageType];
UIImage *image = [UIImage imageWithCGImage:iref scale:[rep scale] orientation:(UIImageOrientation)[rep orientation]];
NSData *dataObj = nil;
dataObj = UIImageJPEGRepresentation(image, 1.0);
NSString *StrFileData = [Base64 encode:dataObj];
NSString *strFileHash = [dataObj md5Test];
Use the following method to fit an image to a specific width and height:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;
    // Scale by the smaller ratio so the aspect ratio is preserved
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns the new image with the aspect ratio preserved, so it will not be distorted. :)
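For example (ImageUtils is a hypothetical name for the class declaring the method, and photo an existing UIImage):
UIImage *scaled = [ImageUtils resizeImage:photo withWidth:320 withHeight:240]; // fits photo within 320 x 240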
I have a few suggestions; noting these should help you:
You can get the ALAsset image NSData directly from the ALAsset defaultRepresentation's
getBytes:fromOffset:length:error: method (see the sketch after these suggestions).
Use the ALAsset thumbnail image instead of retrieving the full-resolution image:
CGImageRef iref = [myasset aspectRatioThumbnail];
The ALAssetsLibrary block may execute on a separate thread, so dispatch the server upload explicitly:
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    // do the server upload
});
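Reading the original file bytes straight from the representation avoids re-encoding with UIImageJPEGRepresentation(image, 1.0), which typically produces a larger file than the source and is the likely cause of the size doubling. A minimal sketch (variable names are illustrative):
ALAssetRepresentation *rep = [asset defaultRepresentation];
long long size = [rep size];
uint8_t *buffer = (uint8_t *)malloc((size_t)size);
NSError *error = nil;
NSUInteger bytesRead = [rep getBytes:buffer fromOffset:0 length:(NSUInteger)size error:&error];
// hand ownership of the buffer to NSData; it is free()d when fileData is deallocated
NSData *fileData = [NSData dataWithBytesNoCopy:buffer length:bytesRead freeWhenDone:YES];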

How can I handle a UIImage using a pointer?

I'm studying iOS programming and I have to handle an image.
I must write the image out as a BMP file, but iOS doesn't support creating BMP files, so I thought that with a pointer I could build the BMP file myself.
I made a BMP header and put it in an NSData object. Now I'm left with saving the image data into that NSData (I'll call it bmpData).
Here is where my trouble starts.
I need an image whose size is 128 x 64. I get a UIImage from a context (I'll call it image1). I make a CGSize of 128 x 64, create a context2 with that size, and draw image1 into it. Now I get a UIImage from context2, which is a 128 x 64 resized image1 (I'll call it image2).
I make a UIImageView using image2 and it works fine, so image2 is made correctly.
So I declared a pointer, an unsigned char *:
unsigned char *bmpDataPointer = (unsigned char *)image2;
And I use a for loop:
for (int i = 0; i < 64; ++i)
{
    for (int j = 0; j < 128; ++j)
    {
        // dataObj and bmpData are different NSData objects;
        // dataObj just contains bitmap data to check whether my pointer works
        [dataObj appendBytes:&(bmpDataPointer[i * 1 + j]) length:sizeof(char)];
    }
}
Then I make a UIImage to check that the data is valid, and it fails:
UIImage *createdImageUsedByaPointer = [UIImage imageWithData:dataObj];
if (createdImageUsedByaPointer == nil)
{
    NSLog(@"nil!");
}
When I run this, the string "nil!" is printed.
Why is that? I made the context 128 x 64, so I loop 128 x 64 times, but it doesn't work.
How can I fix it? How can I handle a UIImage through a pointer?
If anybody knows, please help me.
I don't know why you need the .bmp format, but ImageMagick will handle the export. Use the ImageMagick for iOS library and the code below:
UIImage *img9 = [UIImage imageNamed:@"00009_face.jpg"];
MagickWand *bmpWand = NewMagickWand();
NSData *bmpObj = UIImagePNGRepresentation(img9);
MagickSetFormat(bmpWand, "bmp");
MagickReadImageBlob(bmpWand, [bmpObj bytes], [bmpObj length]);
size_t bmp_size;
unsigned char *bmp_image = MagickGetImagesBlob(bmpWand, &bmp_size);
NSData *bmpData = [[[NSData alloc] initWithBytes:bmp_image length:bmp_size] autorelease];
free(bmp_image);
DestroyMagickWand(bmpWand);
NSArray *pathsArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentDirectory = [pathsArray objectAtIndex:0];
NSString *bmpPath = [documentDirectory stringByAppendingPathComponent:@"test2.bmp"];
[bmpData writeToFile:bmpPath atomically:NO];
NSLog(@"bmpPath %@", bmpPath);
Why are you going into so much complexity?
You can create a bitmap context as follows:
// width, height, bitsPerComponent, bytesPerRow, and data describe your pixel buffer
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, width, height,
                                             bitsPerComponent,
                                             bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:imageRef];
Here data points at your image's raw pixel bytes.
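Going the other way, which is what the question needs before a BMP header can be prepended, here is a minimal sketch of copying raw RGBA bytes out of a UIImage by drawing it into your own buffer (the function name is illustrative):
static unsigned char *copyPixelBytes(UIImage *image, size_t width, size_t height)
{
    unsigned char *buffer = (unsigned char *)calloc(width * height * 4, 1); // RGBA, 4 bytes per pixel
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    return buffer; // caller frees; these bytes can follow the BMP header
}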

Crash when converting cv::Mat to UIImage with OpenCV 2.4

I'm attempting to convert a UIImage to a cv::Mat and then back to a UIImage and insert that UIImage into a UIImageView.
This is the code I'm using to convert:
UIImage *imageFromMat(const cv::Mat &cvMat)
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    UIImage *image = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}
It converts fine, but as soon as I insert it into a UIImageView I get a crash, so I'm assuming the problem lies there.
I've noticed that if I retain the original image (the one before converting to cv::Mat) the crash doesn't happen, but then I get a leak.
Any thoughts on what the problem could be?
It turns out I was over-releasing the CGImageRef.

NSData -> UIImage -> NSData

I have an NSData object which contains RGB values for an image. I want to turn that into a UIImage (given the width and the height), and then I want to convert that UIImage back into an NSData object identical to the one I started with.
Please help me; I've been trying for hours now.
Here are some things I've looked at and tried, though I probably didn't do them right because they didn't work:
CGImageCreate
CGBitmapContextCreateWithData
CGBitmapContextGetData
CGDataProviderCopyData(CGImageGetDataProvider(imageRef))
Thanks!
Here is my current code:
NSMutableData *rgb; //made earlier
double len = (double)[rgb length];
len /= 3;
len += 0.5;
len = (int)len;
int diff = len*3-[rgb length];
NSString *str = @"a";
NSData *a = [str dataUsingEncoding:NSUTF8StringEncoding];
for (int i = 0; i < diff; i++) {
    [toEncode appendData:a]; // so if my data is RGBRGBR it will turn into RGBRGBR(97)(97)
}
size_t width = (size_t)len;
size_t height = 1;
CGContextRef ctx;
CFDataRef m_DataRef;
m_DataRef = (__bridge CFDataRef)toEncode;
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
vImage_Buffer src;
src.data = m_PixelBuf;
src.width = width;
src.height = height;
src.rowBytes = 3 * width;
vImage_Buffer dst;
dst.width = width;
dst.height = height;
dst.rowBytes = 4 * width;
vImageConvert_RGB888toARGB8888(&src, NULL, 0, &dst, NO, kvImageNoFlags);
// free(m_PixelBuf);
// m_PixelBuf = dst.data;
// NSUInteger lenB = len * (4/3);
/*
UInt8 * m_Pixel = malloc(sizeof(UInt8) * lenB);
int z = 0;
for(int i = 0; i < lenB; i++) {
if(i % 4==0) {
m_Pixel[i] = 0;
} else {
m_Pixel[i] = m_PixelBuf[z];
z++;
}
}*/
// Byte tmpByte;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
/*
ctx = CGBitmapContextCreate(m_PixelBuf,
width,
height,
8,
4*width,
colorSpace,
kCGImageAlphaPremultipliedFirst );
*/
size_t w = (size_t)len;
ctx = CGBitmapContextCreate(dst.data,
                            w,
                            height,
                            8,
                            4 * width,
                            colorSpace,
                            kCGImageAlphaNoneSkipFirst);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
I get this error: <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 324 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst.
To UIImage from NSData:
[UIImage imageWithData:]
More on UIImage
To NSData from UIImage:
UIImage *img = [UIImage imageNamed:@"some.png"];
NSData *dataObj = UIImageJPEGRepresentation(img, 1.0);
More on UIImageJPEGRepresentation()
The basic procedure would be to create a bitmap context using CGBitmapContextCreateWithData and then create a CGImageRef from that with CGBitmapContextCreateImage. The parameters for creating the bitmap context depend on how your raw data is laid out in memory; not all kinds of raw data are supported by Quartz.
The documentation on CGBitmapContextCreateWithData is quite detailed, and this is the most challenging part; getting the CGImageRef from the context and wrapping it in a UIImage (imageWithCGImage:) is trivial afterwards.
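A minimal sketch of that procedure, assuming a 32-bit RGBA buffer (pixelBuffer, width, and height are placeholders for your own data):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreateWithData(pixelBuffer, width, height,
                                                 8,          // bits per component
                                                 width * 4,  // bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                                 NULL,       // data release callback
                                                 NULL);      // callback info
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);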
If your data is in RGB format, you will want to create the bitmap using CGBitmapContextCreate with CGColorSpaceCreateDeviceRGB. Note that bitmap contexts do not support 24-bit RGB with kCGImageAlphaNone, so in practice you need a 32-bit layout such as kCGImageAlphaNoneSkipFirst or kCGImageAlphaNoneSkipLast.
The error states that rowBytes needs to be at least 324; dividing that by 4 gives 81, which implies that width is smaller than w and that w = 81. The two values should match.
Try replacing width and w with a small number like 5 to validate this. I would also note that you should allocate dst.data via malloc prior to calling vImageConvert_RGB888toARGB8888; a sketch follows.
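A minimal sketch of that allocation, assuming width and height describe the source buffer:
vImage_Buffer dst;
dst.width = width;
dst.height = height;
dst.rowBytes = 4 * width;                 // ARGB8888 is 4 bytes per pixel
dst.data = malloc(dst.rowBytes * height); // must exist before the convert writes into it
vImageConvert_RGB888toARGB8888(&src, NULL, 0, &dst, NO, kvImageNoFlags);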
Consider using CGImageCreate() instead of creating a bitmap context:
// this will automatically free() dst.data when destData is deallocated
NSData *destData = [NSData dataWithBytesNoCopy:dst.data length:4*width*height];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)destData);
CGImageRef imageRef = CGImageCreate(width,
height,
8, //bits per component
8*4, //bits per pixel
4*width, //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNoneSkipFirst,
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);