Changing the color of a pixel - iphone

I am able to read a particular pixel at a given CGPoint, but I have been trying to change the color of a pixel, and I would appreciate it if anyone could help me out with a code snippet.
My Code is:
unsigned char *data = CGBitmapContextGetData(cgctx);
if (data != NULL) {
    int offset = 4 * ((w * round(point.y)) + round(point.x));
    int alpha = data[offset];
    int red   = data[offset+1];
    int green = data[offset+2];
    int blue  = data[offset+3];
    color = [UIColor colorWithRed:(red/255.0f)
                            green:(green/255.0f)
                             blue:(blue/255.0f)
                            alpha:(alpha/255.0f)];
}

It's not clear what you are trying to do. If you want to change the pixel's color in the original CGImageRef then you would use something like:
// Set the color of the pixel to 50% grey + 50% alpha
data[offset+0] = 128;
data[offset+1] = 128;
data[offset+2] = 128;
data[offset+3] = 128;
// Create a bitmap context that wraps the modified pixel buffer
// (inImage is the original CGImageRef)
CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height,
    CGImageGetBitsPerComponent(inImage), width * 4,
    CGImageGetColorSpace(inImage), kCGImageAlphaPremultipliedFirst);
// Make an image from the bitmap context and draw it back into your original context
CGImageRef modified = CGBitmapContextCreateImage(bitmapContext);
CGContextDrawImage(cgctx, CGRectMake(...), modified);
CGImageRelease(modified);
CGContextRelease(bitmapContext);
You should make all of your changes to the data buffer at once and then write the modified bitmap back to the original context.

Related

convert image to pixels format and back to UIImage format changes image in iPhone

I first convert the image to raw pixels and then convert the pixels back to a UIImage. After the round trip the image's colors change and it also becomes somewhat transparent. I have tried a lot but am not able to find the problem. Here is my code:
-(UIImage*)markPixels:(NSMutableArray*)pixels OnImage:(UIImage*)image{
CGImageRef inImage = image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
int r = 3;
int p = 2*r+1;
unsigned char* data = CGBitmapContextGetData (cgctx);
int i = 0;
while (data[i]&&data[i+1]) {
// NSLog(@"%d",pixels[i]);
i++;
}
NSLog(@"%d %zd %zd",i,w,h);
NSLog(@"%ld",sizeof(CGBitmapContextGetData (cgctx)));
for(int i = 0; i< pixels.count-1 ; i++){
NSValue*touch1 = [pixels objectAtIndex:i];
NSValue*touch2 = [pixels objectAtIndex:i+1];
NSArray *linePoints = [self returnLinePointsBetweenPointA:[touch1 CGPointValue] pointB:[touch2 CGPointValue]];
for(NSValue *touch in linePoints){
NSLog(@"point = %@",NSStringFromCGPoint([touch CGPointValue]));
CGPoint location = [touch CGPointValue];
for(int i = -r ; i<p ;i++)
for(int j= -r; j<p;j++)
{
if (location.y + i < 0 || location.x + j < 0 ||
    location.y + i >= h || location.x + j >= w)
    continue;
NSInteger index = (location.y + i) * w * 4 + (location.x + j) * 4;
data[index + 3] = 125;
}
}
}
// When finished, release the context
CGContextRelease(cgctx);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, data, w*h*4, NULL);
CGImageRef img = CGImageCreate(w, h, 8, 32, 4*w, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dp, NULL, NO, kCGRenderingIntentDefault);
UIImage* ret_image = [UIImage imageWithCGImage:img];
CGImageRelease(img);
CGColorSpaceRelease(colorSpace);
// Free image data memory for the context
if (data) { free(data); }
return ret_image;
}
The first one is the original image; the second is after applying this code.
You have to ask the CGImageRef whether it uses alpha or not, and what the component order per pixel is - look at all the CGImageGet... functions. Most likely the image is not ARGB but BGRA.
I often create and render pure green images, then print out the first pixel to ensure I got it right (BGRA -> 0 255 0 255), etc. It really gets confusing with host byte order and alpha-first versus alpha-last (is the alpha position applied before or after the byte order?).
EDIT: You told CGDataProviderCreateWithData to use 'kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big', but I don't see you asking the original image how it's configured. My guess is that changing 'kCGBitmapByteOrder32Big' to 'kCGBitmapByteOrder32Little' will fix your problem, but the alpha may be wrong too.
Images can have different values for alpha and byte order, so you really need to ask the original image how it's configured, then adapt to that (or remap the bytes in memory to whatever format you want).

Manipulating individual pixels of a UIImage

I'm currently attempting to port a Java program over to iOS which utilizes BufferedImage's .setRGB method to individually manipulate images' pixels, like so:
image.setRGB(x, y, color);
In Objective-C I'm using UIImage in place of BufferedImage, and what I'd like to know is whether there is an equivalent way of doing Java's setRGB, or must I convert the image to a byte array, manipulate the pixels, and then use imageWithData:?
You have to convert the image into a bitmap, use setRGB-style writes on the bitmap, then create an image with the bitmap and draw it or assign the new image to your image view's contents.
Why? The pixels are likely in an opaque format on the other side of the GPU, where an app's ARM code can't access them.
You can use following Method to access pixels and manipulating them
-(UIImage*) applyFilterContext:(void*)context
{
CGImageRef inImage = self.CGImage;
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i=0; i<length; i+=4)
{
filterGreyScale(m_PixelBuf, i, context);
}
//Create Context
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
CGImageGetWidth(inImage),
CGImageGetHeight(inImage),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage),
CGImageGetColorSpace(inImage),
CGImageGetBitmapInfo(inImage)
);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CFRelease(m_DataRef);
return finalImage;
}
// manipulate pixels to greyscale values
void filterGreyScale (UInt8 *pixelBuf, UInt32 offset, void *context)
{
int r = offset;
int g = offset+1;
int b = offset+2;
int red = pixelBuf[r];
int green = pixelBuf[g];
int blue = pixelBuf[b];
uint32_t gray = 0.3 * red + 0.59 * green + 0.11 * blue;
pixelBuf[r] = gray;
pixelBuf[g] = gray;
pixelBuf[b] = gray;
}
Here is an example of how to get the pixel data from an image:
http://developer.apple.com/library/mac/#qa/qa1509/_index.html
Then you could change the color of the data this way:
// Set the color of the pixel to 50% grey + 50% alpha
data[offset+0] = 128;
data[offset+1] = 128;
data[offset+2] = 128;
data[offset+3] = 128;
Here is the code I ended up using to set a pixel
-(void)setPixel:(UInt8*)buffer width:(int)width x:(int)x y:(int)y r:(int)r g:(int)g b:(int)b
{
buffer[x*4 + y*(width*4)] = r;
buffer[x*4 + y*(width*4)+1] = g;
buffer[x*4 + y*(width*4)+2] = b;
}

Why am I getting these weird results when editing the image?

I'm trying to do something very simple:
1) Draw a UIImage into a CG bitmap context
2) Get a pointer to the data of the image
3) iterate over all pixels and just set all R G B components to 0 and alpha to 255. The result should appear pure black.
This is the original image I am using. 200 x 40 pixels, PNG-24 ARGB premultiplied alpha (All alpha values == 255):
This is the result (screenshot from Simulator), when I do not modify the pixels. Looks good:
This is the result when I do the modifications! It looks as if the modification was incomplete. But the for-loops went over EVERY single pixel, and the counter proves it: the console reports modifiedPixels = 8000, which is exactly 200 x 40 pixels. It always looks exactly the same.
Note: The PNG image I use has no alpha < 255. So no transparent pixels.
This is how I create the context. Nothing special...
int bitmapBytesPerRow = (width * 4);
int bitmapByteCount = (bitmapBytesPerRow * height);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc(bitmapByteCount);
bitmapContext = CGBitmapContextCreate(bitmapData,
                                      width,
                                      height,
                                      8, // bits per component
                                      bitmapBytesPerRow,
                                      colorSpace,
                                      kCGImageAlphaPremultipliedFirst);
Next, I draw the image into that bitmapContext, and obtain the data like this:
void *data = CGBitmapContextGetData(bitmapContext);
This is the code which iterates over the pixels to modify them:
size_t bytesPerRow = CGImageGetBytesPerRow(img);
NSInteger modifiedPixels = 0;
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
long int offset = bytesPerRow * y + 4 * x;
// ARGB
unsigned char alpha = data[offset];
unsigned char red = data[offset+1];
unsigned char green = data[offset+2];
unsigned char blue = data[offset+3];
data[offset] = 255;
data[offset+1] = 0;
data[offset+2] = 0;
data[offset+3] = 0;
modifiedPixels++;
}
}
When done, I obtain a new UIImage from the bitmap context and display it in a UIImageView, to see the result:
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *img = [[UIImage alloc] initWithCGImage:imageRef];
Question:
What am I doing wrong?
Is this happening because I modify the data while iterating over it? Must I duplicate it?
Use CGBitmapContextGetBytesPerRow(bitmapContext) to get bytesPerRow, instead of getting it from the image (the image may have only 3 bytes per pixel if it has no alpha channel, and its rows may be padded).
You might be getting the wrong height or width. Note, though, that 200 x 40 is exactly 8000 pixels, so the reported counter does match the image size; the more likely culprit is a bytes-per-row mismatch as described above.

split UIImage by colors and create 2 images

I have looked through "replacing colors in an image" but cannot get it to work the way I need, because I am trying to do it with every color but one, as well as handling transparency.
What I am looking for is a way to take an image and split out a color (say, all the pure black) from it, then make a new image containing only the split-out portion on a transparent background.
(Here is an example of the idea: say I want to take a screenshot of this page, make every color other than pure black transparent, and save that new image to the library, or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask; when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
CGImageRef cgImage = image.CGImage;
// create a premultiplied ARGB context with 32bpp
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bpc = 8; // bits per component
size_t bpp = bpc * 4 / 8; // bytes per pixel
size_t bytesPerRow = bpp * width;
uint8_t *data = malloc(bytesPerRow * height);
// default (big-endian) component order gives A,R,G,B in memory,
// matching the offsets checked in the loop below
CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedFirst;
CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
CGColorSpaceRelease(colorspace);
if (ctx == NULL) {
// couldn't create the context - double-check the parameters?
free(data);
return nil;
}
// draw the image into the context
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
// replace all non-black pixels with transparent
// preserve existing transparency on black pixels
for (size_t y = 0; y < height; y++) {
size_t rowStart = bytesPerRow * y;
for (size_t x = 0; x < width; x++) {
size_t pixelOffset = rowStart + x*bpp;
// check the RGB components of the pixel
if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
// this pixel contains non-black. zero it out
memset(&data[pixelOffset], 0, 4);
}
}
}
// create our new image and release the context data
CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
free(data);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
CGImageRelease(newCGImage);
return newImage;
}

Is there a way to measure skin tone of an object from an iphone app

I'm trying to find a way to program (or buy) an app that uses the iPhone to detect someone's skin tone against an objective scale, using the RGB values from a photo they take of themselves. Anyone got any pointers?
I think the objections to this are correct - it's going to be pretty hard work to calibrate. However, any solution is going to rely on being able to get the color of a specific pixel in an image (in this case a UIImageView)...
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
This code is from a colourPicker class which carries the following copyright:
// Created by markj on 3/6/09.
// Copyright 2009 Mark Johnson. All rights reserved.
The full article is here
http://www.markj.net/iphone-uiimage-pixel-color/