Save dynamically created pixels and retrieve them back successfully - iPhone

I am trying to insert pixels dynamically into a PNG image file and then retrieve them back without any alteration, using the code below. But I'm not successful in doing so. Can someone help me point out where the problem is? Thanks.
// Original pixels for debugging only
NSString *sPixels = @"12345678";
const char *cpPixels8 = sPixels.UTF8String;
char *cpPixelsStore = calloc(sPixels.length + 1, 1);
strncpy(cpPixelsStore, cpPixels8, sPixels.length);
unsigned int r,g,b,a;
for(int j = 0; j < sPixels.length; j += 4)
{
r = cpPixelsStore[j+0];
g = cpPixelsStore[j+1];
b = cpPixelsStore[j+2];
a = cpPixelsStore[j+3];
printf("r:0x%X g:0x%X b:0x%X a:0x%X\n", r, g, b, a);
}
int width = 2;
int height = 1;
int bytesPerRow = 4 * width;
int bitsPerComponent = 8;
int bitsPerPixel = 32;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, cpPixelsStore, (bytesPerRow * height), NULL);
CGImageRef imRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGDataProviderRelease(provider);
UIImage *imNewTemp = [UIImage imageWithCGImage:imRef];
NSData *datPNG = UIImagePNGRepresentation(imNewTemp);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *sFile = [documentsDirectory stringByAppendingPathComponent:@"Pic.png"];
[datPNG writeToFile:sFile atomically:YES];
CGImageRelease(imRef);
// Cross verify save
UIImage *imTemp = [UIImage imageWithContentsOfFile:sFile];
NSData *datImagePixels = (__bridge NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imTemp.CGImage));
unsigned char *ucpPixelBytes = (unsigned char *)[datImagePixels bytes];
for(int j = 0; j < datImagePixels.length; j += 4)
{
r = ucpPixelBytes[j+0];
g = ucpPixelBytes[j+1];
b = ucpPixelBytes[j+2];
a = ucpPixelBytes[j+3];
printf("r:0x%X g:0x%X b:0x%X a:0x%X\n", r, g, b, a);
}
Initial printf returns this during creation:
r:0x31 g:0x32 b:0x33 a:0x34
r:0x35 g:0x36 b:0x37 a:0x38
printf after saving and retrieving the file gives this output:
r:0xA g:0xA b:0xA a:0x31
r:0xC g:0xB b:0xB a:0x35
I'm lost in translation. Please help.

NSData is immutable; you need an NSMutableData object, then use mutableBytes to get the pointer. NSMutableData *nData = [NSMutableData dataWithData:datImagePixels];
Now you have the data to create a NEW image with your modifications. You can use the Quartz method CGImageCreate() to get a CGImageRef, and from that a UIImage.

Related

Try to save image in background

I'm trying to call UIImagePNGRepresentation on a thread other than the main thread, but I'm getting an EXC_BAD_ACCESS exception.
Here is the code:
UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(taskQ,
^{
NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
};
});
The exception occurs when trying to access the imageToSave variable.
EDIT:
I want to save an OpenGL scene to a PNG file; here is the code of the whole function:
- (void)saveImage:(NSString *)destinationFilePath {
NSUInteger width = (uint)self.frame.size.width;
NSUInteger height = (uint)self.frame.size.height;
NSUInteger myDataLength = width * height * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
for (int y = 0; y <height; y++) {
for (int x = 0; x <width * 4; x++) {
buffer2[((int)height - 1 - y) * (int)width * 4 + x] = buffer[y * 4 * (int)width + x];
}
}
JFFree(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
UIImage *imageToSave = [UIImage imageWithCGImage:imageRef];
dispatch_queue_t taskQ = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(taskQ,
^{
NSData *binaryImage = UIImagePNGRepresentation(imageToSave);
if (![binaryImage writeToFile:destinationFilePath atomically:YES]) {
SCASSERT(assCodeCannotCreateFile, @"Write of image file failed!");
};
});
// release resources
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
JFFree(buffer2);
}
Leave out the CGImageRelease call, because the UIImage doesn't retain the CGImageRef.
See imageWithCGImage and memory.

AVCaptureSession Display is White (no Video)

I am using an AVCaptureSession with an output setting of:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
My AVCaptureVideoPreviewLayer is displaying fine but I need more than this since I have had no success getting a screen shot using the AVCaptureVideoPreviewLayer. So when creating a CGContextRef within the captureOutput delegate, I am using these settings
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
I am no longer receiving an 'unsupported parameter combination' warning, but the display is just plain white.
I should add that when I change
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
to
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
Everything works fine. What is my problem?
Take a look at the following code (it uses full video range instead), which converts a bi-planar video frame into RGB "by hand".
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
uint8_t *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);
NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);
uint8_t *rgbBuffer = malloc(width * height * 3);
uint8_t *yBuffer = baseAddress + yOffset;
uint8_t *cbCrBuffer = baseAddress + cbCrOffset;
for(int y = 0; y < height; y++)
{
uint8_t *rgbBufferLine = &rgbBuffer[y * width * 3];
uint8_t *yBufferLine = &yBuffer[y * yPitch];
uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];
for(int x = 0; x < width; x++)
{
int16_t yVal = yBufferLine[x]; // renamed to avoid shadowing the outer loop's y
int16_t cb = cbCrBufferLine[x & ~1] - 128;
int16_t cr = cbCrBufferLine[x | 1] - 128;
uint8_t *rgbOutput = &rgbBufferLine[x * 3];
// from ITU-R BT.601, rounded to integers and clamped to 0..255
int r = (298 * (yVal - 16) + 409 * cr + 128) >> 8;
int g = (298 * (yVal - 16) - 100 * cb - 208 * cr + 128) >> 8;
int b = (298 * (yVal - 16) + 516 * cb + 128) >> 8;
rgbOutput[0] = (uint8_t)(r < 0 ? 0 : r > 255 ? 255 : r);
rgbOutput[1] = (uint8_t)(g < 0 ? 0 : g > 255 ? 255 : g);
rgbOutput[2] = (uint8_t)(b < 0 ? 0 : b > 255 ? 255 : b);
}
}
The following link may be useful as well to better understand this video format:
http://blog.csdn.net/yiheng_l/article/details/3790219#yuvformats_nv12
Take a look at InvasiveCode's tutorial. It shows how to use the Accelerate and Core Image frameworks to process the Y channel.

Crash: UIImage style crash with code for iOS 6?

Here is my code for styling an image. The code works fine on iOS 4.3 and above, but it crashes on iOS 6.
-(UIImage *)grayImage:(UIImage *)image
{
CGImageRef img= image.CGImage;//imageSelected.CGImage;//self.originalPhoto.CGImage;
CFDataRef dataref=CGDataProviderCopyData(CGImageGetDataProvider(img));
int length=CFDataGetLength(dataref);
UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref);
for(int index=0;index<length;index+=4){
Byte grayScale =
(Byte)(data[index+3]*.11 +
data[index + 2] * .59 +
data[index + 1] * .3);
//set the new image's pixel to the grayscale version
data[index+1] = grayScale; // Code crashes here with SIGABRT (EXC_BAD_ACCESS)
data[index+2] = grayScale;
data[index+3] = grayScale;
}
// .. Take image attributes
size_t width=CGImageGetWidth(img);
size_t height=CGImageGetHeight(img);
size_t bitsPerComponent=CGImageGetBitsPerComponent(img);
size_t bitsPerPixel=CGImageGetBitsPerPixel(img);
size_t bytesPerRow=CGImageGetBytesPerRow(img);
// .. Do the pixel manipulation
CGColorSpaceRef colorspace=CGImageGetColorSpace(img);
CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img);
CFDataRef newData=CFDataCreate(NULL,data,length);
CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData);
// .. Get the Image out of this raw data
CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault);
// .. Prepare the image from raw data
UIImage* rawImage = [[UIImage alloc] initWithCGImage:newImg] ;
// .. done with all,so release the references
CFRelease(newData);
CGImageRelease(newImg);
CGDataProviderRelease(provider);
CFRelease(dataref);
return rawImage;
}
What is wrong in this code?
Use CFMutableDataRef in place of CFDataRef, as below.
Instead of this:
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);
write this (and release the intermediate copy to avoid a leak):
CFDataRef copiedData = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
CFMutableDataRef m_DataRef = CFDataCreateMutableCopy(0, 0, copiedData);
CFRelease(copiedData);
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetMutableBytePtr(m_DataRef);

NSData -> UIImage -> NSData

I have an NSData object, which contains RGB values for an image. I want to turn that into a UIImage (given the width and the height). Then I want to convert that UIImage back into an NSData object identical to the one I started with.
Please help me; I've been trying for hours now.
Here are some things I've looked at/tried, though I probably didn't do them right because it didn't work:
CGImageCreate
CGBitmapContextCreateWithData
CGBitmapContextGetData
CGDataProviderCopyData(CGImageGetDataProvider(imageRef))
Thanks!
Here is my current code:
NSMutableData *rgb; //made earlier
double len = (double)[rgb length];
len /= 3;
len += 0.5;
len = (int)len;
int diff = len*3-[rgb length];
NSString *str = @"a";
NSData *a = [str dataUsingEncoding:NSUTF8StringEncoding];
for(int i =0; i < diff; i++) {
[toEncode appendData:a]; //so if my data is RGBRGBR it will turn into RGBRGBR(97)(97)
}
size_t width = (size_t)len;
size_t height = 1;
CGContextRef ctx;
CFDataRef m_DataRef;
m_DataRef = (__bridge CFDataRef)toEncode;
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
vImage_Buffer src;
src.data = m_PixelBuf;
src.width = width;
src.height = height;
src.rowBytes = 3 * width;
vImage_Buffer dst;
dst.width = width;
dst.height = height;
dst.rowBytes = 4 * width;
vImageConvert_RGB888toARGB8888(&src, NULL, 0, &dst, NO, kvImageNoFlags);
// free(m_PixelBuf);
// m_PixelBuf = dst.data;
// NSUInteger lenB = len * (4/3);
/*
UInt8 * m_Pixel = malloc(sizeof(UInt8) * lenB);
int z = 0;
for(int i = 0; i < lenB; i++) {
if(i % 4==0) {
m_Pixel[i] = 0;
} else {
m_Pixel[i] = m_PixelBuf[z];
z++;
}
}*/
// Byte tmpByte;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
/*
ctx = CGBitmapContextCreate(m_PixelBuf,
width,
height,
8,
4*width,
colorSpace,
kCGImageAlphaPremultipliedFirst );
*/
size_t w = (size_t)len;
ctx = CGBitmapContextCreate(dst.data,
w,
height,
8,
4*width,
colorSpace,
kCGImageAlphaNoneSkipFirst );
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
I get this error: <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 324 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst.
To UIImage from NSData:
[UIImage imageWithData:]
More on UIImage
To NSData from UIImage:
UIImage *img = [UIImage imageNamed:@"some.png"];
NSData *dataObj = UIImageJPEGRepresentation(img, 1.0);
More on UIImageJPEGRepresentation()
The basic procedure would be to create a bitmap context using CGBitmapContextCreateWithData and then creating a CGImageRef from that with CGBitmapContextCreateImage. The parameters for creating the bitmap context depend on how your raw data is laid out in memory. Not all kinds of raw data are supported by Quartz.
The documentation on CGBitmapContextCreateWithData is quite detailed, and this is the most challenging part, getting the CGImageRef from the context and wrapping that in a UIImage (imageWithCGImage:) is trivial afterwards.
If your data is in RGB format, you will want to create a bitmap using CGBitmapContextCreate with CGColorSpaceCreateDeviceRGB and using kCGImageAlphaNone.
The error states that rowBytes needs to be at least 324; dividing that by 4 gives 81, which implies that width is smaller than w, and that w = 81. The two values should match.
Try replacing width and w with a small number like 5 to validate this. Also note that you should allocate dst.data via malloc before calling vImageConvert_RGB888toARGB8888.
Consider using CGImageCreate() instead of creating a bitmapcontext:
// this will automatically free() dst.data when destData is deallocated
NSData *destData = [NSData dataWithBytesNoCopy:dst.data length:4*width*height];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)destData);
CGImageRef imageRef = CGImageCreate(width,
height,
8, //bits per component
8*4, //bits per pixel
4*width, //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNoneSkipFirst,
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);

Setting the contrast of an image in iPhone

I'm using the following code to set the contrast of an image based on a slider value. The slider range is from 0.0f to 2.0f. It runs fine in the simulator but crashes on the device due to low memory. Can anyone help me with what's wrong in this code?
Thanks in advance....
-(void)contrast:(float)value{
CGImageRef img=refImage.CGImage;
CFDataRef dataref=CopyImagePixels(img);
UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref);
int length=CFDataGetLength(dataref);
for(int index=0;index<length;index+=4){
int alphaCount = data[index+0];
int redCount = data[index+1];
int greenCount = data[index+2];
int blueCount = data[index+3];
alphaCount = ((alphaCount-128)*value ) + 128;
if (alphaCount < 0) alphaCount = 0; if (alphaCount>255) alphaCount =255;
data[index+0] = (Byte) alphaCount;
redCount = ((redCount-128)*value ) + 128;
if (redCount < 0) redCount = 0; if (redCount>255) redCount =255;
data[index+1] = (Byte) redCount;
greenCount = ((greenCount-128)*value ) + 128;
if (greenCount < 0) greenCount = 0; if (greenCount>255) greenCount =255;
data[index+2] = (Byte) greenCount;
blueCount = ((blueCount-128)*value ) + 128;
if (blueCount < 0) blueCount = 0; if (blueCount>255) blueCount =255;
data[index+3] = (Byte) blueCount;
}
size_t width=CGImageGetWidth(img);
size_t height=CGImageGetHeight(img);
size_t bitsPerComponent=CGImageGetBitsPerComponent(img);
size_t bitsPerPixel=CGImageGetBitsPerPixel(img);
size_t bytesPerRow=CGImageGetBytesPerRow(img);
CGColorSpaceRef colorspace=CGImageGetColorSpace(img);
CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img);
CFDataRef newData=CFDataCreate(NULL,data,length);
CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData);
CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault);
[ImgView setImage:[UIImage imageWithCGImage:newImg]];
CGImageRelease(newImg);
CGDataProviderRelease(provider);
}
You might have some memory leaks.
Any function that is CF...Create() will need to have corresponding CFRelease() called on it. The following has no release:
CFDataRef newData=CFDataCreate(NULL,data,length);
I think you need to clean up after copying as well:
CFDataRef dataref=CopyImagePixels(img);
You cleaned up after newImg okay. I can't see any other leaks, but check each of your Create/Copy calls and make sure you clean up the memory afterwards.