Sprite kit and colorWithPatternImage - sprite-kit

Do we have any way of repeating an image across an area, like an SKSpriteNode? SKColor's colorWithPatternImage doesn't work here, unfortunately.
Edit:
I wrote the following categories; they seem to work so far. I'm on Mac and haven't tested on iOS, so they likely need some adjustment for iOS.
// Add to SKSpriteNode category or something.
+(SKSpriteNode*)patternWithImage:(NSImage*)image size:(const CGSize)SIZE;
// Add to SKTexture category or something.
+(SKTexture*)patternWithSize:(const CGSize)SIZE image:(NSImage*)image;
And the implementations. Put in respective files.
+(SKSpriteNode*)patternWithImage:(NSImage*)imagePattern size:(const CGSize)SIZE {
    SKTexture* texturePattern = [SKTexture patternWithSize:SIZE image:imagePattern];
    SKSpriteNode* sprite = [SKSpriteNode spriteNodeWithTexture:texturePattern];
    return sprite;
}
+(SKTexture*)patternWithSize:(const CGSize)SIZE image:(NSImage*)image {
    // Hopefully this function will be platform independent one day.
    SKColor* colorPattern = [SKColor colorWithPatternImage:image];
    // Is this the correct way to find the scale?
    DLog(@"backingScaleFactor: %f", [[NSScreen mainScreen] backingScaleFactor]);
    const CGFloat SCALE = [[NSScreen mainScreen] backingScaleFactor];
    const size_t WIDTH_PIXELS = SIZE.width * SCALE;
    const size_t HEIGHT_PIXELS = SIZE.height * SCALE;
    CGContextRef cgcontextref = MyCreateBitmapContext(WIDTH_PIXELS, HEIGHT_PIXELS);
    NSAssert(cgcontextref != NULL, @"Failed creating context!");
    CALayer* layer = CALayer.layer;
    layer.frame = CGRectMake(0, 0, SIZE.width, SIZE.height);
    layer.backgroundColor = colorPattern.CGColor;
    [layer renderInContext:cgcontextref];
    CGImageRef imageref = CGBitmapContextCreateImage(cgcontextref);
    SKTexture* texture1 = [SKTexture textureWithCGImage:imageref];
    DLog(@"size of pattern texture: %@", NSStringFromSize(texture1.size));
    CGImageRelease(imageref);
    CGContextRelease(cgcontextref);
    return texture1;
}
This helper is needed as well. It likely only works on Mac.
CGContextRef MyCreateBitmapContext(const size_t pixelsWide, const size_t pixelsHigh) {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void* bitmapData;
    size_t bitmapBytesPerRow;
    bitmapBytesPerRow = (pixelsWide * 4);
    colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    bitmapData = NULL; // let the OS handle the memory
    // According to http://stackoverflow.com/a/18921840/129202 it should be safe to just cast
    CGBitmapInfo bitmapinfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast;
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    bitmapinfo);
    if (context == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
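For reference, here is a minimal usage sketch of the categories above, written as if called from inside an SKScene. The image name "tile.png" and the 1024x768 size are assumptions, not part of the original code.
NSImage *tile = [NSImage imageNamed:@"tile.png"];
SKSpriteNode *background = [SKSpriteNode patternWithImage:tile size:CGSizeMake(1024, 768)];
// Centre the tiled background in the scene.
background.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
[self addChild:background];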

iOS working code:
CGRect textureSize = CGRectMake(0, 0, 488, 650);
CGImageRef backgroundCGImage = [UIImage imageNamed:@"background.png"].CGImage;
UIGraphicsBeginImageContext(self.level.worldSize); // use WithOptions to set scale for retina display
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawTiledImage(context, textureSize, backgroundCGImage);
UIImage *tiledBackground = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
SKTexture *backgroundTexture = [SKTexture textureWithCGImage:tiledBackground.CGImage];
SKSpriteNode *backgroundNode = [SKSpriteNode spriteNodeWithTexture:backgroundTexture];
[self addChild:backgroundNode];
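If you need the tiled texture rendered at Retina resolution (as the comment in the snippet hints), a hedged variant of the context-creation line would be:
UIGraphicsBeginImageContextWithOptions(self.level.worldSize, NO, 0.0); // scale 0.0 means "use the device's screen scale"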

I found that the SpriteKit shader linked above did not work with Xcode 10, so I rolled my own. Here is the shader code:
void main(void) {
    vec2 offset = sprite_size - fmod(node_size, sprite_size) / 2;
    vec2 pixel = v_tex_coord * node_size + offset;
    vec2 target = fmod(pixel, sprite_size) / sprite_size;
    vec4 px = texture2D(u_texture, target);
    gl_FragColor = px;
}
Note that the offset variable is only required if you want the pattern centralised - you can omit it, and its addition in the following line, if you want your tile pattern to start with the tile texture's bottom left corner.
Also note that you will need to manually add the node_size and sprite_size variables to the shader (and update them if they change), as neither of these has a standard representation any more.
// The sprite node's texture will be used as a single tile.
let node = SKSpriteNode(imageNamed: "TestTile")
let tileShader = SKShader(fileNamed: "TileShader.fsh")
// The shader needs to know the tile size and the node's final size.
tileShader.attributes = [
    SKAttribute(name: "sprite_size", type: .vectorFloat2),
    SKAttribute(name: "node_size", type: .vectorFloat2)
]
// At this point, the node's size is equal to its texture's size,
// so we can use it as the sprite (tile) size in the shader.
let spriteSize = vector_float2(
    Float(node.size.width),
    Float(node.size.height)
)
// Replace this with the desired size of the node;
// we will set it as the node's size later.
let size = CGSize(width: 512, height: 256)
let nodeSize = vector_float2(
    Float(size.width),
    Float(size.height)
)
node.setValue(
    SKAttributeValue(vectorFloat2: spriteSize),
    forAttribute: "sprite_size"
)
node.setValue(
    SKAttributeValue(vectorFloat2: nodeSize),
    forAttribute: "node_size"
)
node.shader = tileShader
node.size = size

Yes, it is possible to implement this with a call to CGContextDrawTiledImage(), but that wastes a lot of memory for medium and large nodes. A significantly improved approach is described in the spritekit_repeat_shader blog post, which provides example GLSL code along with BSD-licensed source.
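For reference, a rough Objective-C sketch of wiring up such a tiling shader. This is not the blog's code; the shader file name, uniform names and sizes are assumptions, and uniformWithName:vectorFloat2: requires iOS 10 / macOS 10.12. Unlike the attribute-based Swift example above, this passes the sizes as shader uniforms, which is fine for a single background node.
SKSpriteNode *node = [SKSpriteNode spriteNodeWithImageNamed:@"TestTile"];
SKShader *tileShader = [SKShader shaderWithFileNamed:@"TileShader.fsh"];
tileShader.uniforms = @[
    [SKUniform uniformWithName:@"sprite_size"
                  vectorFloat2:(vector_float2){ (float)node.size.width, (float)node.size.height }],
    [SKUniform uniformWithName:@"node_size"
                  vectorFloat2:(vector_float2){ 1024.0f, 768.0f }]
];
node.shader = tileShader;
node.size = CGSizeMake(1024, 768); // must match the node_size uniform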

Related

Image Circular Wrap in iOS

I have a problem - I want to create a circular wrap function which will wrap an image as depicted below:
This is available in OS X, but it is not available on iOS.
My logic so far has been:
Split the image up into x sections and for each section:
Rotate alpha degrees
Scale the image in the x axis to create a diamond shaped 'warped' effect of the image
Rotate back 90 - atan((h / 2) / (w / 2))
Translate the offset
My problem is that this seems inaccurate and I have been unable to mathematically figure out how to do this correctly - any help would be massively appreciated.
Link to OSX docs for CICircularWrap:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CICircularWrap
Since CICircularWrap is not supported on iOS (EDIT: it is now - check the answer below), you have to code your own effect for now. Probably the simplest way is to compute the transformation from polar to Cartesian coordinates and then interpolate from the source image. I've come up with this simple (and frankly quite slow - it can be much optimised) algorithm:
#import <QuartzCore/QuartzCore.h>
CGContextRef CreateARGBBitmapContext (size_t pixelsWide, size_t pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void * bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);
    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc( bitmapByteCount );
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        CGColorSpaceRelease( colorSpace );
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (bitmapData,
                                     pixelsWide,
                                     pixelsHigh,
                                     8, // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free (bitmapData);
        fprintf (stderr, "Context not created!");
    }
    // Make sure to release the color space before returning.
    CGColorSpaceRelease( colorSpace );
    return context;
}
CGImageRef circularWrap(CGImageRef inImage, CGFloat bottomRadius, CGFloat topRadius, CGFloat startAngle, BOOL clockWise, BOOL interpolate)
{
    if(topRadius < 0 || bottomRadius < 0) return NULL;
    // Create the bitmap contexts
    int w = (int)CGImageGetWidth(inImage);
    int h = (int)CGImageGetHeight(inImage);
    // result image side size (always a square image)
    int resultSide = 2*MAX(topRadius, bottomRadius);
    CGContextRef cgctx1 = CreateARGBBitmapContext(w,h);
    CGContextRef cgctx2 = CreateARGBBitmapContext(resultSide,resultSide);
    if (cgctx1 == NULL || cgctx2 == NULL)
    {
        return NULL;
    }
    // Get image width, height. We'll use the entire image.
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx1, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    int *data1 = CGBitmapContextGetData (cgctx1);
    int *data2 = CGBitmapContextGetData (cgctx2);
    int resultImageSize = resultSide*resultSide;
    double temp;
    for(int *p = data2, pos = 0; pos < resultImageSize; p++, pos++)
    {
        *p = 0;
        int x = pos%resultSide-resultSide/2;
        int y = -pos/resultSide+resultSide/2;
        CGFloat phi = modf(((atan2(x, y)+startAngle)/2.0/M_PI+0.5),&temp);
        if(!clockWise) phi = 1-phi;
        phi *= w;
        CGFloat r = ((sqrtf(x*x+y*y))-topRadius)*h/(bottomRadius-topRadius);
        if(phi >= 0 && phi < w && r >= 0 && r < h)
        {
            if(!interpolate || phi >= w-1 || r >= h-1)
            {
                // pick the closest pixel
                *p = data1[(int)r*w+(int)phi];
            }
            else
            {
                double dphi = modf(phi, &temp);
                double dr = modf(r, &temp);
                int8_t* c00 = (int8_t*)(data1+(int)r*w+(int)phi);
                int8_t* c01 = (int8_t*)(data1+(int)r*w+(int)phi+1);
                int8_t* c10 = (int8_t*)(data1+(int)r*w+w+(int)phi);
                int8_t* c11 = (int8_t*)(data1+(int)r*w+w+(int)phi+1);
                // interpolate the components separately
                for(int component = 0; component < 4; component++)
                {
                    double avg = ((*c00 & 0xFF)*(1-dphi)+(*c01 & 0xFF)*dphi)*(1-dr)+((*c10 & 0xFF)*(1-dphi)+(*c11 & 0xFF)*dphi)*dr;
                    *p += (((int)(avg))<<(component*8));
                    c00++; c10++; c01++; c11++;
                }
            }
        }
    }
    CGImageRef result = CGBitmapContextCreateImage(cgctx2);
    // When finished, release the contexts
    CGContextRelease(cgctx1);
    CGContextRelease(cgctx2);
    // Free the image data memory that was allocated for the contexts
    if (data1) free(data1);
    if (data2) free(data2);
    return result;
}
Use the circularWrap function with these parameters:
CGImageRef inImage - the source image
CGFloat bottomRadius - the bottom edge of the source image will be transformed into a circle with this radius
CGFloat topRadius - the same for the top edge of the source image; this can be larger or smaller than the bottom radius (which results in wrapping around the top/bottom of the image)
CGFloat startAngle - the angle at which the left and right sides of the source image will end up
BOOL clockWise - the direction of rendering
BOOL interpolate - a simple anti-aliasing algorithm; only the inside of the image is interpolated
some samples (top left is the source image):
with code:
image1 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,0,300,0,YES,NO)];
image2 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,100,300,M_PI_2,NO,YES)];
image3 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,300,200,M_PI_4,NO,YES)];
image4 = [UIImage imageWithCGImage:circularWrap(sourceImage.CGImage,250,300,0,NO,NO)];
enjoy! :)
Apple have added CICircularWrap to iOS 9
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CICircularWrap
Wraps an image around a transparent circle.
Localized Display Name
Circular Wrap Distortion
Availability
Available in OS X v10.5 and later and in iOS 9 and later.
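For completeness, a hedged Objective-C sketch of applying the built-in filter on iOS 9 or later. The center, radius and angle values are arbitrary examples, and sourceImage is assumed to be a UIImage.
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];
CIFilter *wrap = [CIFilter filterWithName:@"CICircularWrap"];
[wrap setValue:input forKey:kCIInputImageKey];
[wrap setValue:[CIVector vectorWithX:input.extent.size.width / 2.0
                                   Y:input.extent.size.height / 2.0] forKey:kCIInputCenterKey];
[wrap setValue:@150 forKey:kCIInputRadiusKey];
[wrap setValue:@0 forKey:kCIInputAngleKey];
CIContext *ciContext = [CIContext contextWithOptions:nil];
// You may want to crop outputImage.extent if it is larger than expected.
CGImageRef outputCG = [ciContext createCGImage:wrap.outputImage fromRect:wrap.outputImage.extent];
UIImage *wrapped = [UIImage imageWithCGImage:outputCG];
CGImageRelease(outputCG);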

How to change the skin color of a face from the source image in iOS?

My code is below.
How do I manage the RGB values for different shades of a face, and how do I apply them?
This code changes the color of the face along with the hair, but I want
only the face to be colored, excluding the hair.
-(void)changeSkinColorValue:(float)value WithImage:(UIImage*)needToModified
{
    CGContextRef ctx;
    CGImageRef imageRef = needToModified.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //unsigned char *rawData = malloc(firstImageV.image.size.height * firstImageV.image.size.width * 10);
    CFMutableDataRef m_DataRef = CFDataCreateMutableCopy(0, 0, CGDataProviderCopyData(CGImageGetDataProvider(firstImageV.image.CGImage)));
    UInt8 *rawData = (UInt8 *)CFDataGetMutableBytePtr(m_DataRef);
    int length = CFDataGetLength(m_DataRef);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * firstImageV.image.size.width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context1 = CGBitmapContextCreate(rawData, firstImageV.image.size.width, firstImageV.image.size.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context1, CGRectMake(0, 0, firstImageV.image.size.width, firstImageV.image.size.height), imageRef);
    NSLog(@"%lu::%lu", (unsigned long)width, (unsigned long)height);
    //for(int ii = 0 ; ii < 250 ; ii+=4)
    //{
    for(int ii = 0 ; ii < length ; ii+=4)
    {
        //NSLog(@"Raw data %s",rawData);
        int R = rawData[ii];
        int G = rawData[ii+1];
        int B = rawData[ii+2];
        //NSLog(@"%d %d %d", R, G, B);
        //if( ( (R>60)&&(R<237) ) || ((G>10)&&(G<120))||((B>4) && (B<120)))
        //if( ( (R>100)&&(R<186) ) || ((G>56)&&(G<130))||((B>30) && (B<120)))
        //if( ( (R>188)&&(R<228) ) || ((G>123)&&(G<163))||((B>85) && (B<125)))
        //if( ( (R>95)&&(R<260) ) || ((G>40)&&(G<210))||((B>20) && (B<170)))
        //new code......
        if( ( (R>0)&&(R<260) ) || ((G>0)&&(G<210))||((B>0) && (B<170)))
        {
            rawData[ii+1]=R;//13;
            rawData[ii+2]=G;//43;
            rawData[ii+3]=value;//63
        }
    }
    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth( imageRef ),
                                CGImageGetHeight( imageRef ),
                                8,
                                CGImageGetBytesPerRow( imageRef ),
                                CGImageGetColorSpace( imageRef ),
                                kCGImageAlphaPremultipliedLast );
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
    //UIImageView *ty=[[UIImageView alloc]initWithFrame:CGRectMake(100, 200, 400, 400)];
    //ty.image=rawImage;
    //[self.view addSubview:ty];
    [secondImageV setImage:rawImage];
    CGContextRelease(context1);
    CGContextRelease(ctx);
    free(rawData);
}
Try using the GPUImage framework by Brad Larson:
https://github.com/BradLarson/GPUImage
This is a very powerful framework for working with images.
Read this question:
GPUImage create a custom Filter that change selected colors
Hope it helps!
The quick answer is to change your ORs:
if( ( (R>0)&&(R<260) ) || ((G>0)&&(G<210)) || ((B>0) && (B<170)))
to ANDs:
if( ( (R>0)&&(R<260) ) && ((G>0)&&(G<210)) && ((B>0) && (B<170)))
Then adjust your RGB ranges to approximate the range of skin tones in the image (try with some sliders in your UI).
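For example, something along these lines inside the pixel loop (a sketch; rMin/rMax and friends are hypothetical values you would feed from your sliders, not from the original code):
int rMin = 95,  rMax = 240;   // fed from sliders
int gMin = 40,  gMax = 210;
int bMin = 20,  bMax = 170;
if (R >= rMin && R <= rMax &&
    G >= gMin && G <= gMax &&
    B >= bMin && B <= bMax)
{
    // pixel is (probably) skin - recolour it here
}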
By the way, I presume you know that this:
rawData[ii+1]=R;//13;
rawData[ii+2]=G;//43;
rawData[ii+3]=value;//63
is assigning the red channel to the green channel, the green channel to the blue channel, and value to the ALPHA channel. I don't expect that is what you want.
Also your needToModified image is probably intended to be the same image as firstImageV.image, which is not reflected in your code as it is now.
This approach will only work if your identified colour ranges are wholly and only present in flesh regions of the image.
A long answer could look at more sophisticated selection of colour ranges, alternative ways of selecting image regions, the use of CIFilter or the openCV framework...
Try masking the image; it will help you change the specific range of colors defined in the mask.
Code:
- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
    const float brownsMask[6] = {85, 255, 85, 255, 85, 255};
    UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height));
    CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
    CGRect imageRect = CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height);
    CGContextTranslateCTM(context, 0, imageView.image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    //CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
    CGContextDrawImage(context, imageRect, brownRef);
    //[originalImage drawInRect:CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height)];
    CGImageRelease(brownRef);
    //CGImageRelease(whiteRef);
    UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return doctoredImage;
}
CFDataGetBytePtr: according to the docs, it returns a read-only pointer to the bytes of a CFData object. Are you sure you're not looking for CFDataGetBytes, which copies the byte contents of a CFData object to an external buffer? In that case you'll have to allocate your own buffer of width * height * bytesPerPixel bytes. Once you have this copy, you can manipulate it any way you want to create the new picture.
Pixel selection: from your question, I understand that you want to change the skin color, for example from white to black. Your current code iterates over every pixel and changes its color. Instead, evaluate the "distance" between each pixel's color and the color you're looking for, and only process the pixel if that distance is below a certain threshold. It might be easier to perform the operation in HSV than by dealing with RGB colors.
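A rough sketch of that HSV idea (the conversion helper and the threshold values below are illustrative assumptions, not part of the original answer):
#include <math.h>
// Convert an 8-bit RGB pixel to hue (0..360), saturation (0..1) and value (0..1).
static void RGBToHSV(int r, int g, int b, float *h, float *s, float *v) {
    float rf = r / 255.0f, gf = g / 255.0f, bf = b / 255.0f;
    float maxc = fmaxf(rf, fmaxf(gf, bf));
    float minc = fminf(rf, fminf(gf, bf));
    float delta = maxc - minc;
    *v = maxc;
    *s = (maxc > 0.0f) ? delta / maxc : 0.0f;
    if (delta == 0.0f) { *h = 0.0f; return; }
    if (maxc == rf)      *h = fmodf((gf - bf) / delta, 6.0f);
    else if (maxc == gf) *h = (bf - rf) / delta + 2.0f;
    else                 *h = (rf - gf) / delta + 4.0f;
    *h *= 60.0f;
    if (*h < 0.0f) *h += 360.0f;
}
// Inside the pixel loop, instead of the raw RGB range test:
float h, s, v;
RGBToHSV(R, G, B, &h, &s, &v);
BOOL isSkinTone = (h <= 50.0f) && (s >= 0.15f && s <= 0.70f); // rough, adjustable thresholds
if (isSkinTone) {
    // recolour this pixel
}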
1) Get a CGPath of the face region where you want to apply your algorithm.
2) Get the width of the CGImage.
int width = (int)CGImageGetWidth(image);
3) Get the CGPoint of the pixel currently being processed, check that it lies inside the face path, and only then apply your condition. That's it. Here is the sample portion of code:
CGFloat pointY = (int)(ii/4)/(int)width;
CGFloat pointX = (int)(ii/4)%(int)width;
CGPoint point = CGPointMake(pointX, pointY);
if (CGPathContainsPoint(yourFacePath, NULL, point, NO))
{
    if( ( (R>0)&&(R<260) ) || ((G>0)&&(G<210))||((B>0) && (B<170)))
    {
        rawData[ii+1]=R;//13;
        rawData[ii+2]=G;//43;
        rawData[ii+3]=value;//63
    }
}
Make the face and the hair separate objects. Each object has a color and a method to change it. Then, in your main code, create both objects (hair and face) and specify exactly what you want to color. This is the right approach, in my view. Hope it helps!

Bad access on CGContextClearRect

I am getting the error EXC_BAD_ACCESS on the line CGContextClearRect(c, self.view.bounds) below. I can't seem to figure out why. This is in a UIViewController class. Here is the function I am in during the crash.
- (void)level0Func {
    printf("level0Func\n");
    frameStart = [NSDate date];
    UIImage *img;
    CGContextRef c = startContext(self.view.bounds.size);
    printf("address of context: %x\n", c);
    /* drawing/updating code */ {
        CGContextClearRect(c, self.view.bounds); // crash occurs here
        CGContextSetFillColorWithColor(c, [UIColor greenColor].CGColor);
        CGContextFillRect(c, self.view.bounds);
        CGImageRef cgImg = CGBitmapContextCreateImage(c);
        img = [UIImage imageWithCGImage:cgImg]; // this sets the image to be passed to the view for drawing
        //CGImageRelease(cgImg);
    }
    endContext(c);
}
Here are my createContext(), startContext() and endContext():
CGContextRef createContext(int width, int height) {
    CGContextRef r = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int byteCount;
    int bytesPerRow;
    bytesPerRow = width * 4;
    byteCount = width * height;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    printf("allocating %i bytes for bitmap data\n", byteCount);
    bitmapData = malloc(byteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "could not allocate memory when creating context");
        //free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    r = CGBitmapContextCreate(bitmapData, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return r;
}
CGContextRef startContext(CGSize size) {
    CGContextRef r = createContext(size.width, size.height);
    // UIGraphicsPushContext(r); wait a second, we don't need to push anything b/c we can draw to an offscreen context
    return r;
}
void endContext(CGContextRef c) {
    free(CGBitmapContextGetData(c));
    CGContextRelease(c);
}
What I am basically trying to do is draw to a context that I am not pushing onto the stack so I can create a UIImage out of it. Here is my output:
wait_fences: failed to receive reply: 10004003
level0Func
allocating 153600 bytes for bitmap data
address of context: 68a7ce0
Any help would be appreciated. I am stumped.
You're not allocating enough memory. Here are the relevant lines from your code:
bytesPerRow = width * 4;
byteCount = width * height;
bitmapData = malloc(byteCount);
When you compute bytesPerRow, you (correctly) multiply the width by 4 because each pixel requires 4 bytes. But when you compute byteCount, you do not multiply by 4, so you act as though each pixel only requires 1 byte.
Change it to this:
bytesPerRow = width * 4;
byteCount = bytesPerRow * height;
bitmapData = malloc(byteCount);
OR, don't allocate any memory and Quartz will allocate the correct amount for you, and free it for you. Just pass NULL as the first argument of CGBitmapContextCreate:
r = CGBitmapContextCreate(NULL, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
Check that the values of self.view.bounds.size are valid. This method could be called before the view's size is properly set up.
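A defensive sketch of that check (assuming the drawing can be retried later, e.g. from viewDidLayoutSubviews):
CGSize size = self.view.bounds.size;
if (size.width < 1 || size.height < 1) {
    return; // view not laid out yet; try again once the size is known
}
CGContextRef c = startContext(size);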

split UIImage by colors and create 2 images

I have looked through replacing colors in an image, but cannot get it to work the way I need, because I am trying to do it with every color but one, as well as with transparency.
What I am looking for is a way to take in an image and split out a color (say, all the pure black) from that image, then take that split-out portion and make a new image consisting of a transparent background and the split-out portion.
(Here is an example of the idea: say I want to take a screenshot of this page, make every color other than pure black transparent, and save that new image to the library, or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask - but when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    // create a premultiplied ARGB context with 32bpp
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bpc = 8; // bits per component
    size_t bpp = bpc * 4 / 8; // bytes per pixel
    size_t bytesPerRow = bpp * width;
    uint8_t *data = malloc(bytesPerRow * height);
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host;
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
    CGColorSpaceRelease(colorspace);
    if (ctx == NULL) {
        // couldn't create the context - double-check the parameters?
        free(data);
        return nil;
    }
    // draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    // replace all non-black pixels with transparent ones,
    // preserving existing transparency on black pixels.
    // With host byte order on iOS the in-memory pixel layout is B,G,R,A,
    // so the color components are at offsets 0, 1 and 2.
    for (size_t y = 0; y < height; y++) {
        size_t rowStart = bytesPerRow * y;
        for (size_t x = 0; x < width; x++) {
            size_t pixelOffset = rowStart + x*bpp;
            // check the color components of the pixel
            if (data[pixelOffset] != 0 || data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0) {
                // this pixel contains non-black. zero it out (fully transparent)
                memset(&data[pixelOffset], 0, 4);
            }
        }
    }
    // create our new image and release the context data
    CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(data);
    UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newCGImage);
    return newImage;
}
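Usage would be along these lines (the image name and the imageView property are placeholders):
UIImage *blackOnly = [self imageWithBlackPixels:[UIImage imageNamed:@"screenshot.png"]];
self.imageView.image = blackOnly; // or save it to the photo library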

Create a mask from difference between two images (iPhone)

How can I detect the difference between 2 images, creating a mask of the area that's different in order to process the area that's common to both images (gaussian blur for example)?
EDIT: I'm currently using this code to get the RGBA value of pixels:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0 ; ii < count ; ++ii)
    {
        CGFloat red   = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
The problem is that the images are captured from the iPhone's camera, so they are not in exactly the same position. I need to create areas of a couple of pixels and extract the general color of each area (maybe by adding up the RGBA values and dividing by the number of pixels?). How could I do this and then translate it into a CGMask?
I know this is a complex question, so any help is appreciated.
Thanks.
I think the simplest way to do this would be to use a difference blend mode. The following code is based on code I use in CKImageAdditions.
+ (UIImage *)differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];
    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));
    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if(colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if(context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }
    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);
    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage * image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);
    return image;
}
There are three reasons pixels will change from one iPhone photo to the next: the subject changed, the iPhone moved, and random noise. I assume for this question that you're most interested in the subject changes, and you want to process out the effects of the other two. I also assume the app intends the user to keep the iPhone reasonably still, so iPhone movement changes are less significant than subject changes.
To reduce the effects of random noise, just blur the image a little. A simple averaging blur, where each pixel in the resulting image is an average of the original pixel with its nearest neighbors should be sufficient to smooth out any noise in a reasonably well lit iPhone image.
To address iPhone movement, you can run a feature detection algorithm on each image (look up feature detection on Wikipedia for a start). Then calculate the transforms needed to align the least changed detected features.
Apply that transform to the blurred images, and find the difference between the images. Any pixels with a sufficient difference will become your mask. You can then process the mask to eliminate any islands of changed pixels. For example, a subject may be wearing a solid colored shirt. The subject may move from one image to the next, but the area of the solid colored shirt may overlap resulting in a mask with a hole in the middle.
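As a rough illustration of just that last step, here is a sketch that thresholds the difference image from the first answer into a black-and-white mask. It assumes differenceOfImage:withImage: has been added to a UIImage category; photo1, photo2 and the threshold of 30 are placeholders. To tolerate small camera shifts, you could also average small blocks of pixels, as the question suggests, before thresholding.
UIImage *diff = [UIImage differenceOfImage:photo1 withImage:photo2];
CGImageRef diffRef = diff.CGImage;
size_t w = CGImageGetWidth(diffRef), h = CGImageGetHeight(diffRef);
size_t bytesPerRow = w * 4;
uint8_t *px = calloc(h * bytesPerRow, 1);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(px, w, h, 8, bytesPerRow, cs,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(cs);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), diffRef);
for (size_t i = 0; i < h * bytesPerRow; i += 4) {
    // average the R, G and B differences for this pixel
    int d = (px[i] + px[i + 1] + px[i + 2]) / 3;
    uint8_t m = (d > 30) ? 255 : 0; // changed area -> white, common area -> black
    px[i] = px[i + 1] = px[i + 2] = m;
    px[i + 3] = 255;
}
CGImageRef maskImage = CGBitmapContextCreateImage(ctx); // black-and-white mask of the changed area
CGContextRelease(ctx);
free(px);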
In other words, this is a significant and difficult image processing problem. You won't find the answer in a stackoverflow.com post. You will find the answer in a digital image processing textbook.
Can't you just subtract the pixel values of the two images, and process the pixels where the difference is 0?
Every pixel which does not have a suitably similar pixel in the other image within a certain radius can be deemed to be part of the mask. It's slow (though there's not much that would be faster), but it works fairly simply.
Go through the pixels and copy the ones that are different in the lower image to a new (non-opaque) image.
Blur the upper image completely, then show the new image above it.