Color replacement in an image for an iPhone paint application

Basically I want to implement a color replacement feature for my paint application.
Below are the original and the expected output.
Original:
After changing the wall color selected by the user, along with some threshold for replacement:
I have tried two approaches but could not get either of them working as expected.
Approach 1:
Queue-based flood-fill algorithm for color replacement,
but with it I got the output below; it is terribly slow and the wall shadow has not been preserved.
Approach 2:
So I tried to look at another option and found the post below on SO:
How to change a particular color in an image?
but I could not understand the logic, and I am not sure about my code implementation from step 3 onwards.
Please find my code for each step below, along with my understanding.
1) Convert the image from RGB to HSV using cvCvtColor (we only want to
change the hue).
IplImage *mainImage=[self CreateIplImageFromUIImage:[UIImage imageNamed:@"original.jpg"]];
IplImage *hsvImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
IplImage *threshImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
cvCvtColor(mainImage,hsvImage,CV_RGB2HSV);
2) Isolate a color with cvThreshold specifying a
certain tolerance (you want a range of colors, not one flat color).
cvThreshold(hsvImage, threshImage, 0, 100, CV_THRESH_BINARY);
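(Editorial note on step 2: cvThreshold applied to the whole 3-channel HSV image with a fixed 0..100 range does not isolate a hue band. A rough sketch of what the linked answer seems to intend, with placeholder hue values that are my assumption and not from the post, would select pixels whose hue lies within a tolerance of the wall color, e.g. with cvInRangeS:)
// Sketch only: isolate hues near a target wall hue (OpenCV hue range is 0..180).
// targetHue and tol are placeholders; wide S/V bounds keep shadowed wall pixels in.
int targetHue = 30;   // placeholder - hue of the wall color picked by the user
int tol = 10;         // placeholder - hue tolerance
IplImage *mask = cvCreateImage(cvGetSize(hsvImage), IPL_DEPTH_8U, 1);
cvInRangeS(hsvImage,
           cvScalar(targetHue - tol, 30, 30, 0),
           cvScalar(targetHue + tol, 255, 255, 0),
           mask);   // mask is 255 where the pixel colour is close to the wall colour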
3) Discard areas of color below a minimum size using a blob detection
library like cvBlobsLib. This will get rid of dots of the similar
color in the scene. Do I need to pass the original image or the thresholded image here?
CBlobResult blobs = CBlobResult(threshImage, NULL, 0);
blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10);
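(Editorial note on step 3: cvBlobsLib works on the binary image, i.e. the mask from step 2, not the original photo. If pulling in cvBlobsLib is inconvenient, a morphological open on the mask is a simpler way to drop small specks of similar color; a sketch, assuming the single-channel mask from the previous sketch and a kernel size that would need tuning:)
// Sketch: remove small isolated blobs from the mask with a morphological open.
// The 5x5 ellipse is a guess; tune it to the image resolution.
IplConvKernel *kernel = cvCreateStructuringElementEx(5, 5, 2, 2, CV_SHAPE_ELLIPSE, NULL);
cvMorphologyEx(mask, mask, NULL, kernel, CV_MOP_OPEN, 1);
cvReleaseStructuringElement(&kernel);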
4) Mask the color with cvInRangeS and use the
resulting mask to apply the new hue.
I am not sure how this function helps with color replacement, and I do not understand what arguments should be provided.
5) cvMerge the new image with the
new hue with an image composed by the saturation and brightness
channels that you saved in step one.
I understand that cvMerge will merge the three H, S and V channels, but how can I use the output of the three steps above?
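(Editorial note on step 5: my reading of the linked answer, as a sketch only under the placeholder names introduced above and not verified on the poster's images, is to split the HSV image, overwrite the hue channel only where the mask is set, merge the untouched S and V back so the shading survives, then convert back to RGB:)
// Sketch: recombine the channels, replacing hue only under the mask.
int newHue = 120;                            // placeholder: hue chosen by the user (0..180)
IplImage *h = cvCreateImage(cvGetSize(hsvImage), IPL_DEPTH_8U, 1);
IplImage *s = cvCreateImage(cvGetSize(hsvImage), IPL_DEPTH_8U, 1);
IplImage *v = cvCreateImage(cvGetSize(hsvImage), IPL_DEPTH_8U, 1);
cvSplit(hsvImage, h, s, v, NULL);
cvSet(h, cvScalar(newHue), mask);            // change hue only where the mask is set
cvMerge(h, s, v, NULL, hsvImage);            // S and V are untouched, so shadows remain
cvCvtColor(hsvImage, mainImage, CV_HSV2RGB); // back to the colour space of step 1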
So basically I am stuck with the OpenCV implementation.
If possible, please guide me on the OpenCV implementation, or suggest any other solution to try out.

Finally I am able to achieve some of the desired output using the JavaCV code below (the same has been ported to OpenCV too).
This solution has two problems:
1. There is no edge detection; I think I can achieve it using contours.
2. The replaced color has a flat hue and saturation, which should instead be set based on each source pixel's hue/saturation difference, but I am not sure how to achieve that. Maybe cvAddS instead of cvSet (see the sketch after the code below).
IplImage image = cvLoadImage("sample.png");
CvSize cvSize = cvGetSize(image);
IplImage hsvImage = cvCreateImage(cvSize, image.depth(), image.nChannels());
IplImage hChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage sChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage vChannel = cvCreateImage(cvSize, image.depth(), 1);
cvCvtColor(image, hsvImage, CV_BGR2HSV);               // convert to HSV before splitting the channels
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
IplImage cvInRange = cvCreateImage(cvSize, image.depth(), 1);
CvScalar source = new CvScalar(72/2, 0.07*255, 66, 0); // source color (HSV) to replace
CvScalar from = getScaler(source, false);
CvScalar to = getScaler(source, true);
cvInRangeS(hsvImage, from, to, cvInRange);             // mask of pixels close to the source color
IplImage dest = cvCreateImage(cvSize, image.depth(), image.nChannels());
IplImage temp = cvCreateImage(cvSize, IPL_DEPTH_8U, 2); // holds only H and S
cvMerge(hChannel, sChannel, null, null, temp);
cvSet(temp, new CvScalar(45, 255, 0, 0), cvInRange);    // destination hue and sat, applied under the mask
cvSplit(temp, hChannel, sChannel, null, null);
cvMerge(hChannel, sChannel, vChannel, null, dest);      // original V channel is kept, preserving shading
cvCvtColor(dest, dest, CV_HSV2BGR);
cvSaveImage("output.png", dest);
Method for calculating the threshold range:
CvScalar getScaler(CvScalar seed, boolean plus) {
    // 'thresold' is assumed to be a fractional tolerance, e.g. 0.2 for +/-20%
    if (plus) {
        return CV_RGB(seed.red() + (seed.red() * thresold),
                      seed.green() + (seed.green() * thresold),
                      seed.blue() + (seed.blue() * thresold));
    } else {
        return CV_RGB(seed.red() - (seed.red() * thresold),
                      seed.green() - (seed.green() * thresold),
                      seed.blue() - (seed.blue() * thresold));
    }
}
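For problem 2, a possible direction, just a sketch in OpenCV C API form mirroring the JavaCV names above and not something verified on the original wall images, is to add a constant hue offset under the mask instead of overwriting hue and saturation with cvSet, so each pixel keeps its own variation:
// Shift the hue of masked pixels by a constant offset; 45 is the destination
// hue and 72/2 the source hue from the code above.
int hueShift = 45 - 72/2;                                    // destination hue minus source hue
cvAddS(hChannel, cvScalar(hueShift), hChannel, cvInRange);   // shift only the masked pixels
// note: OpenCV hue wraps at 180, so a negative or overflowing shift needs extra handling;
// then merge hChannel, sChannel, vChannel and convert HSV->BGR exactly as before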

I know this answer will be useful to someone someday.
Try this out in your viewDidLoad() override method for iOS.
image in the code snippet below should come from your UIImageView.
seed is also fixed here; you can make it dynamic based on a user tap event.
// mask must be 2 pixels larger than the image for cv::floodFill
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
imageView.image = [self UIImageFromCVMat:image];   // show the unmodified image first (optional)
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
try {
    if (seed.x > 0 && seed.y > 0) {
        // seed, seed2 and seed3 are hard-coded seed points; loDiff/upDiff of (2,2,2)
        // controls how far the fill spreads from each seed's colour
        cv::floodFill(image, mask, seed,  cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed2, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed3, cv::Scalar(50, 155, 0),  0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
    }
} catch (cv::Exception &ex) {
    // ignore fill failures (e.g. a seed outside the image)
}
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [self UIImageFromCVMat:image];
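The hard-coded replacement colour above flattens the shading. One way around that, as a sketch only (seed point and target hue are placeholders, and this has not been tested on the poster's images), is to let floodFill build only a mask and then shift the hue under that mask in HSV space, leaving the V channel (shadows) untouched:
// 'image' is a CV_8UC3 BGR cv::Mat.
cv::Point seed(100, 100);                       // placeholder: user-tapped point
int targetHue = 45;                             // placeholder: new hue, OpenCV range 0..180
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
int flags = 4 | (255 << 8) | cv::FLOODFILL_MASK_ONLY;   // fill the mask with 255, leave the image alone
cv::floodFill(image, mask, seed, cv::Scalar(), 0,
              cv::Scalar(5, 5, 5), cv::Scalar(5, 5, 5), flags);
cv::Mat hsv;
cv::cvtColor(image, hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> ch;
cv::split(hsv, ch);
ch[0].setTo(cv::Scalar(targetHue), mask(cv::Rect(1, 1, image.cols, image.rows)));
cv::merge(ch, hsv);
cv::cvtColor(hsv, image, cv::COLOR_HSV2BGR);    // brightness is preserved, so shadows remain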

Related

Combining image channels in CImg

In CImg, I have split an RGBA image apart into multiple single-channel images, with code like:
CImg<unsigned char> input("foo.png");
CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1), b = input.get_channel(2), a = input.get_channel(3);
Then I try to swizzle the channel order:
CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
output.channel(0) = g;
output.channel(1) = b;
output.channel(2) = r;
output.channel(3) = a;
When I save the image out, however, it turns out grayscale, apparently based on the alpha channel value; for example, this input:
becomes this output:
How do I specify the image color format so that CImg saves into the correct color space?
Simply copying a channel does not work like that; a better approach is to copy the pixel data with std::copy:
std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
This results in an output image like:

Using HoughCircles to detect and measure pupil and iris

I'm trying to use OpenCV, more specifically its HoughCircles, to detect and measure the pupil and iris. Currently I've been playing with some of the parameters of the function, because it either returns 0 circles or an excessive number. Below are the code and test image I'm using.
Code for measuring iris:
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::vector<cv::Vec3f> circles;
cv::cvtColor(eye2, eye1, CV_RGBA2GRAY);
cv::morphologyEx(eye1, eye1, 4, cv::getStructuringElement(cv::MORPH_RECT,cv::Size(3, 3)));
cv::threshold(eye1, eye1, 0, 255, cv::THRESH_OTSU);
eye1 = [self circleCutOut:eye1 Size:50];
cv::GaussianBlur(eye1, eye1, cv::Size(7, 7), 0);
cv::HoughCircles(eye1, circles, CV_HOUGH_GRADIENT, 2, eye1.rows/4);
Code for measuring pupil:
eye1 = [self increaseBlackPupil:eye1];
cv::Mat eye2 = cv::Mat::zeros(eye1.rows, eye1.cols, CV_8UC3);
eye1 = [self increaseIn:eye1 Contrast:2 andBrightness:0];
cv::cvtColor(eye1, eye1, CV_RGBA2RGB);
cv::bilateralFilter(eye1, eye2, 75, 100, 100);
cv::threshold(eye2, eye1, 25, 255, CV_THRESH_BINARY);
cv::SimpleBlobDetector::Params params;
params.minDistBetweenBlobs = 75.0f;
params.filterByInertia = false;
params.filterByConvexity = false;
params.filterByCircularity = false;
params.filterByArea = true;
params.minArea = 50;
params.maxArea = 500;
cv::Ptr<cv::FeatureDetector> blob_detector = new cv::SimpleBlobDetector(params);
blob_detector->create("SimpleBlob");
cv::vector<cv::KeyPoint> keypoints;
blob_detector->detect(eye1, keypoints);
I know the image is rough; I've also been trying to find a way to clean it up and make the edges clearer.
So my question, to put it plainly: what can I do, either by adjusting the parameters of HoughCircles or by changing the images, to get the iris and pupil detected?
Thanks
Ok, without experimenting too much, what I understand is that you've only applied a bilateral filter to the image before using the Hough circle detector.
In my opinion, you need to include a thresholding step into the process.
I took your sample image that you provided in the post and made it undergo the following steps:
conversion to greyscale
morphological gradient
thresholding
hough circle detection.
after the thresholding step, I got the following image for the left eye only:
here's the code:
greyscale:
cv::cvtColor(im_rgb, im_rgb, CV_RGB2GRAY);
morphology:
cv::morphologyEx(im_rgb,im_rgb,4,cv::getStructuringElement(cv::MORPH_RECT,cv::Size(size,size)));
thresholding:
cv::threshold(im_rgb, im_rgb, low, high, cv::THRESH_OTSU);
hough circle detection:
cv::vector<cv::Vec3f> circles;
cv::HoughCircles(im_rgb, circles, CV_HOUGH_GRADIENT, 2, im_rgb.rows/4);
Now when I print:
NSLog(@"Found %ld circles", circles.size());
I get:
"Found 1 circles"
Hope this helps.
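As for the parameters themselves: the optional Canny/accumulator thresholds and the radius bounds are usually what stop HoughCircles from returning either nothing or far too many circles. A sketch, using the im_rgb image from the code above; all the numbers are guesses that would need tuning against these particular eye images:
std::vector<cv::Vec3f> circles;
cv::HoughCircles(im_rgb, circles, CV_HOUGH_GRADIENT,
                 2,                  // dp: inverse ratio of accumulator resolution
                 im_rgb.rows / 4,    // minDist between detected centres
                 100,                // param1: upper Canny threshold used internally
                 40,                 // param2: accumulator threshold - lower it for more circles
                 im_rgb.rows / 8,    // minRadius: ignore tiny speckle circles
                 im_rgb.rows / 2);   // maxRadius: ignore circles larger than the eye region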

Converting PIL image to GTK pixmap with alpha

So I need to take an image I made in PIL and convert it to a pixmap to be displayed in a drawable.
How do I convert from PIL to pixmap and keep the image's alpha?
Currently I have this code written:
def gfx_draw_tem2(self, r, x, y):
    #im = Image.open("TEM/TEM cropped.png")
    im = Image.new("RGBA", (r*2, r*2), (255, 255, 255, 255))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 50))  # alpha at 255 for test2.png
    im.save("test.png")
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    pixmap2, mask = pixbuf.render_pixmap_and_mask()
    self.pixmap.draw_drawable(self.white_gc, pixmap2, 0, 0, x-r, y-r, -1, -1)
Here are the images I created from im.save("test.png"):
http://imgur.com/43spsBG,lqowten#0
Notice that the first picture has an alpha of 255 (full) and the second has an alpha of 50.
However, when I convert the images to a pixmap with my current code, I lose the transparency effect.
Thanks for your help,
Ian
EDIT: I have narrowed it down a little bit with more testing. I am losing the alpha of my image when converting the pixbuf to a pixmap.
Okay, figured it out.
The trick here is to not convert the pixbuf to a pixmap using pixbuf.render_pixmap_and_mask().
Instead I took the self.pixmap that I draw onto my drawable and called draw_pixbuf() on it.
Here is the new code I used.
def gfx_draw_tem2(self, r, x, y):
    im = Image.new("RGBA", (r*2, r*2), (1, 1, 1, 0))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 140))
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    self.pixmap.draw_pixbuf(self.white_gc, pixbuf, 0, 0, x, y, -1, -1, gdk.RGB_DITHER_NORMAL, 0, 0)
Hope this helps someone.

OpenGL ES 1.1: How to change texture color without losing luminance?

I have particles that I want to be able to change the color of in code, so any color can be used. So I have only one texture that basically has luminance.
I've been using glColor4f(1f, 0f, 0f, 1f); to apply the color.
Every blendfunc I've tried that has come close to working ends up like the last picture below. I still want to preserve luminance, like in the middle picture. (This is like the Overlay or Soft Light filters in Photoshop, if the color layer was on top of the texture layer.)
Any ideas for how to do this without programmable shaders? Also, since these are particles, I don't want a black box behind it, I want it to add onto the scene.
Here is a solution that might be close to what you're looking for:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
// unit 0: multiply the texture by the tint colour
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
// unit 1: add the original texture back on top of the tinted result
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);
What it does is multiply the original texture by the specified color and then adds the pixels values of the original texture on top:
final_color.rgba = original_color.rgba * color.rgba + original_color.rgba;
This will result in a brighter image than what you've asked for but might be good enough with some tweaking.
Should you want to preserve the alpha value of the texture, you'll need to use GL_COMBINE instead of GL_ADD (+ set GL_COMBINE_RGB and GL_COMBINE_ALPHA properly).
Here are some results using this technique on your texture.
NONSENSE! You don't have to use multi-texturing. Just premultiply your alpha.
If you premultiply alpha on the image after you load it in and before you create the GL texture for it then you only need one texture unit for the GL_ADD texture env mode.
If you're on iOS then Apple's libs can premultiply for you. See the example Texture2D class and look for the kCGImageAlphaPremultipliedLast flag.
If you're not using an image loader that supports premultiply then you have to do it manually after loading the image. Pseudo code:
uint8* LoadRGBAImage(const char* pImageFileName) {
Image* pImage = LoadImageData(pImageFileName);
if (pImage->eFormat != FORMAT_RGBA)
return NULL;
// allocate a buffer to store the pre-multiply result
// NOTE that in a real scenario you'll want to pad pDstData to a power-of-2
uint8* pDstData = (uint8*)malloc(pImage->rows * pImage->cols * 4);
uint8* pSrcData = pImage->pBitmapBytes;
uint32 bytesPerRow = pImage->cols * 4;
for (uint32 y = 0; y < pImage->rows; ++y) {
uint8* pSrc = pSrcData + y * bytesPerRow;
uint8* pDst = pDstData + y * bytesPerRow;
for (uint32 x = 0; x < pImage->cols; ++x) {
// modulate src rgb channels with alpha channel
// store result in dst rgb channels
uint8 srcAlpha = pSrc[3];
*pDst++ = Modulate(*pSrc++, srcAlpha);
*pDst++ = Modulate(*pSrc++, srcAlpha);
*pDst++ = Modulate(*pSrc++, srcAlpha);
// copy src alpha channel directly to dst alpha channel
*pDst++ = *pSrc++;
}
}
// don't forget to free() the pointer!
return pDstData;
}
uint8 Modulate(uint8 u, uint8 uControl) {
// fixed-point multiply the value u with uControl and return the result
return ((uint16)u * ((uint16)uControl + 1)) >> 8;
}
Personally, I'm using libpng and premultiplying manually.
Anyway, after you premultiply, just bind the byte data as an RGBA OpenGL texture. Using glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD); with a single texture unit should be all you need after that. You should get exactly (or pretty damn close to) what you want. You might have to use glBlendFunc(GL_SRC_ALPHA, GL_ONE); as well if you really want to make the thing look shiny, btw.
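For reference, a minimal sketch of that single-unit setup (assuming spriteTexture already holds premultiplied RGBA data; the tint colour is just an example, and this is not taken from the answer's own code):
// Fixed-function ES 1.1 state for tinted, additive particles.
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);             // tint colour applied via GL_ADD
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, spriteTexture);    // premultiplied-alpha texture
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);              // additive blend, no black box behind the particle
// ... draw the particle quads here ...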
This is subtly different from the Ozirus method. He's never "reducing" the RGB values of the texture by premultiplying, so the RGB channels get added too much and look sort of washed out/overly bright.
I suppose the premultiply method is more akin to Overlay whereas the Ozirus method is Soft Light.
For more, see:
http://en.wikipedia.org/wiki/Alpha_compositing
Search for "premultiplied alpha"

CGPathRef intersection

Is there a way to find out whether two CGPathRefs intersect or not? In my case all the CGPaths are closed paths.
For example, I have two paths: one path is a rectangle which is rotated by some angle, and the other is a curved path. The origins of the two paths will be changing frequently. At some point they may intersect. I want to know when they intersect. Please let me know if you have any solution.
Thanks in advance
Make one path the clipping path, draw the other path, then search for pixels that survived the clipping process:
// initialise and erase context
CGContextAddPath(context, path1);
CGContextClip(context);
// set fill colour to intersection colour
CGContextAddPath(context, path2);
CGContextFillPath(context);
// search for pixels that match intersection colour
This works because clipping = intersecting.
Don't forget that intersection depends on the definition of interiority, of which there are several. This code uses the winding-number fill rule, you might want the even odd rule or something else again. If interiority doesn't keep you up at night, then this code should be fine.
My previous answer involved drawing transparent curves to an RGBA context. This solution is superior to the old one because it is
simpler
uses a quarter of the memory as an 8bit greyscale context suffices
obviates the need for hairy, difficult-to-debug transparency code
Who could ask for more?
I guess you could ask for a complete implementation, ready to cut'n'paste, but that would spoil the fun and obfuscate an otherwise simple answer.
OLDER, HARDER TO UNDERSTAND AND LESS EFFICIENT ANSWER
Draw both CGPathRefs separately at 50% transparency into a zeroed, CGBitmapContextCreate-ed RGBA memory buffer and check for any pixel values > 128. This works on any platform that supports CoreGraphics (i.e. iOS and OSX).
In pseudocode
// zero memory
CGContextRef context;
context = CGBitmapContextCreate(memory, wide, high, 8, wide*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaPremultipliedLast);
CGContextSetRGBFillColor(context, 1, 1, 1, 0.5); // now everything you draw will be at 50%
// draw your path 1 to context
// draw your path 2 to context
// for each pixel in memory buffer
if(*p > 128) return true; // curves intersect
else p+= 4; // keep looking
Let the resolution of the rasterised versions be your precision and choose the precision to suit your performance needs.
1) There isn't any CGPath API to do this. But, you can do the math to figure it out. Take a look at this wikipedia article on Bezier curves to see how the curves in CGPath are implemented.
2) This is going to be slow on the iPhone, I would expect, but you could fill both paths into a buffer in different colors (say, red and blue, with alpha=0.5) and then iterate through the buffer to find any pixels that occur at intersections. This will be extremely slow.
For iOS, the alpha blend seems to be ignored.
Instead, you can do a color blend, which will achieve the same effect, but doesn't need alpha:
CGContextSetBlendMode(context, kCGBlendModeColorDodge);
CGFloat semiTransparent[] = { .5,.5,.5,1};
Pixels in the output image will be:
RGB = 0,0,0 = (0.0f) ... no path
RGB = 64,64,64 = (0.25f) ... one path, no intersection
RGB = 128,128,128 = (0.5f) ... two paths, intersection found
Complete code for drawing:
-(void) drawFirst:(CGPathRef)first second:(CGPathRef)second into:(CGContextRef)context
{
    /** setup the context for DODGE (everything gets lighter if it overlaps) */
    CGContextSetBlendMode(context, kCGBlendModeColorDodge);
    CGFloat semiTransparent[] = { .5, .5, .5, 1 };
    CGContextSetStrokeColor(context, semiTransparent);
    CGContextSetFillColor(context, semiTransparent);

    CGContextAddPath(context, first);
    CGContextFillPath(context);
    CGContextStrokePath(context);

    CGContextAddPath(context, second);
    CGContextFillPath(context);
    CGContextStrokePath(context);
}
Complete code for checking output:
[self drawFirst:YOUR_FIRST_PATH second:YOUR_SECOND_PATH into:context];
// Now we can get a pointer to the image data associated with the bitmap
// context.
BOOL result = FALSE;
unsigned char* data = CGBitmapContextGetData(context);
if (data != NULL) {
    for (int i = 0; i < width && !result; i++) {
        for (int k = 0; k < width; k++)   // NB: assumes a square context; use the context height here otherwise
        {
            // offset locates the pixel (i, k) in the data:
            // 4 bytes of data per pixel, width is the length of one row of pixels.
            int offset = 4 * ((width * k) + i);
            int alpha = data[offset];
            int red   = data[offset + 1];
            int green = data[offset + 2];
            int blue  = data[offset + 3];
            if (red > 254)
            {
                result = TRUE;
                break;
            }
        }
    }
}
And, finally, here is slightly modified code from another SO answer ... complete code for creating an RGB bitmap context on iOS 4 / iOS 5 that will support the above functions:
- (CGContextRef) createARGBBitmapContextWithFrame:(CGRect)frame
{
    /** NB: this requires iOS 4 or above - it uses the auto-allocating behaviour of Apple's method, to reduce a potential memory leak in the original StackOverflow version */
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = frame.size.width;
    size_t pixelsHigh = frame.size.height;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(NULL,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,    // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst
                                    //kCGImageAlphaFirst
                                    );
    if (context == NULL)
    {
        fprintf(stderr, "Context not created!");
    }

    // Make sure and release colorspace before returning
    CGColorSpaceRelease(colorSpace);

    return context;
}