How to fill an irregular image shape with a gradient? - iphone

I have a PNG image and I want to fill it with a gradient. Is this possible?
That is, when the user touches part of the image on the iPad, that part should be filled with a gradient,
just like a flood fill fills a region with a plain color (R+G+B), except that I want the fill to be a gradient.
Thanks in advance for any help.
This is my code for finding the irregular shape and filling it with a color on touch; I need to modify it.
- (unsigned char*)rawDataFromImage:(UIImage*)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSLog(@"w=%lu,h=%lu", (unsigned long)width, (unsigned long)height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    return rawData;
}
If anyone has a different solution, please share it.
I don't want to simply paint a color over the image; I want to touch the image and have the touched region fill, but with a gradient.

-(void)drawRect:(CGRect)rect
{
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGGradientRef glossGradient;
    CGColorSpaceRef rgbColorspace;
    size_t num_locations = 2;
    CGFloat locations[2] = { 0.0, 1.0 };
    CGFloat components[8] = { 1.0, 1.0, 1.0, 0.35,   // Start color
                              1.0, 1.0, 1.0, 0.06 }; // End color
    rgbColorspace = CGColorSpaceCreateDeviceRGB();
    glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
    CGRect currentBounds = self.bounds;
    CGPoint topCenter = CGPointMake(CGRectGetMidX(currentBounds), 0.0f);
    CGPoint midCenter = CGPointMake(CGRectGetMidX(currentBounds), CGRectGetMidY(currentBounds));
    CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, midCenter, 0);
    CGGradientRelease(glossGradient);
    CGColorSpaceRelease(rgbColorspace);
}
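
One possible approach, not from the original post but a minimal sketch: instead of writing pixels by hand, clip the drawing context to the PNG's alpha channel with CGContextClipToMask and draw the gradient through the clip. The method name gradientFilledImage: and the red-to-blue colors are illustrative only. (Note also that the buffer returned by rawDataFromImage: above belongs to the caller and must eventually be free()d.)

#import <UIKit/UIKit.h>

// Sketch: fill only the non-transparent pixels of `image` with a gradient.
- (UIImage *)gradientFilledImage:(UIImage *)image
{
    CGRect bounds = CGRectMake(0, 0, image.size.width, image.size.height);
    UIGraphicsBeginImageContextWithOptions(bounds.size, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Flip to Core Graphics coordinates so the mask lines up with the image.
    CGContextTranslateCTM(ctx, 0, bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    // Use the image itself as a mask: only its opaque pixels accept paint.
    CGContextClipToMask(ctx, bounds, image.CGImage);

    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGFloat locations[2] = { 0.0, 1.0 };
    CGFloat components[8] = { 1.0, 0.0, 0.0, 1.0,   // opaque red
                              0.0, 0.0, 1.0, 1.0 }; // opaque blue
    CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, components, locations, 2);
    CGContextDrawLinearGradient(ctx, gradient,
                                CGPointMake(0, 0),
                                CGPointMake(0, bounds.size.height), 0);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(rgb);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

For the touch-driven case, you could first run the flood fill you already have to build a mask CGImageRef covering just the touched region, then pass that mask to CGContextClipToMask instead of the whole image.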

Related

CGContextShowTextAtPoint producing non-Retina output on Retina device

I'm trying to add text to a UIImage and I'm getting a pixelated drawing.
I have tried some other answers with no success:
Drawing on the retina display using CoreGraphics - Image pixelated
Retina display core graphics font quality
Drawing with Core Graphics looks chunky on Retina display
My code:
-(UIImage *)addText:(UIImage *)imgV text:(NSString *)text1
{
    int w = self.frame.size.width * 2;
    int h = self.frame.size.height * 2;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate
        (NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), imgV.CGImage);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
    char* text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
    CGContextSelectFont(context, "Arial", 24, kCGEncodingMacRoman);
    // Adjust text: measure it invisibly, then center it horizontally.
    CGContextSetTextDrawingMode(context, kCGTextInvisible);
    CGContextShowTextAtPoint(context, 0, 0, text, strlen(text));
    CGPoint pt = CGContextGetTextPosition(context);
    float posx = (w/2 - pt.x)/2.0;
    float posy = 54.0;
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0); // components are 0.0 to 1.0
    CGContextShowTextAtPoint(context, posx, posy, text, strlen(text));
    CGImageRef imageMasked = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked); // avoid leaking the CGImageRef
    return result;
}
Like Peter said, use UIGraphicsBeginImageContextWithOptions. You might want to pass image.scale as the scale parameter (where image.scale is the scale of one of the images you're drawing), or simply use [UIScreen mainScreen].scale.
This code can be made simpler overall. Try something like:
// Create the image context
UIGraphicsBeginImageContextWithOptions(_baseImage.size, NO, _baseImage.scale);
// Draw the image
CGRect rect = CGRectMake(0, 0, _baseImage.size.width, _baseImage.size.height);
[_baseImage drawInRect:rect];
// Get a vertically centered rect for the text drawing
rect.origin.y = (rect.size.height - FONT_SIZE) / 2 - 2;
rect = CGRectIntegral(rect);
rect.size.height = FONT_SIZE;
// Draw the text
UIFont *font = [UIFont boldSystemFontOfSize:FONT_SIZE];
[[UIColor whiteColor] set];
[text drawInRect:rect withFont:font lineBreakMode:NSLineBreakByClipping alignment:NSTextAlignmentCenter];
// Get and return the new image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
Where text is an NSString object.
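For what it's worth, the reason this fixes the pixelation: passing 0.0 for the scale parameter makes the context adopt the main screen's scale, so on a Retina device the backing bitmap is created at double resolution and text renders crisply:

// A scale of 0.0 means "use the device's main screen scale" (2.0 on Retina),
// so everything drawn into this context is rendered at native resolution.
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);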
I was facing the exact same problem and posted a question last night to which no-one replied. I have combined fnf's suggested changes with Clif Viegas's answer in the following question:
CGContextDrawImage draws image upside down when passed UIImage.CGImage
to come up with the solution. My addText method is slightly different from yours:
+(UIImage *)addTextToImage:(UIImage *)img text:(NSString *)text1{
    int w = img.size.width;
    int h = img.size.height;
    CGSize size = CGSizeMake(w, h);
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(size);
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, h);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);
    char* text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding]; // e.g. "05/05/09"
    CGContextSelectFont(context, "Times New Roman", 14, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 0, 0, 0, 1);
    // Rotate the text.
    CGContextSetTextMatrix(context, CGAffineTransformMakeRotation( -M_PI/8 ));
    CGContextShowTextAtPoint(context, 70, 88, text, strlen(text));
    CGColorSpaceRelease(colorSpace);
    UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext(); // capture the result before ending the context
    UIGraphicsEndImageContext();
    return newImg;
}
In summary, what I have done is add
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
} else {
UIGraphicsBeginImageContext(size);
}
Remove
// CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
and put
CGContextRef context = UIGraphicsGetCurrentContext();
instead. But this causes the image to be upside down. Hence I put
CGContextTranslateCTM(context, 0,h);
CGContextScaleCTM(context, 1.0, -1.0);
just before
CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);
This way you can use Core Graphics functions instead of drawInRect: and the like.
Don't forget UIGraphicsEndImageContext() at the end. Hope this helps.

How to set pixels transparent and later non-transparent again

I have code that changes the pixels in a UIImage. I need to make the pixels at the touched locations transparent, and then, when I touch there again, non-transparent. How can I do this?
Code
// Creates a new image with the changed pixels.
- (UIImage*)fromImage:(UIImage*)source
          colorBuffer:(NSArray *)colors erase:(BOOL)eraser
{
    CGContextRef ctx;
    CGImageRef imageRef = [source CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    int byteIndex = 0;
    for (int ii = 0 ; ii < width * height ; ++ii)
    {
        if (eraser)
        {
            // SET PIXEL TRANSPARENT
        }
        else
        {
            // SET PIXEL NON-TRANSPARENT
        }
        byteIndex += bytesPerPixel; // advance to the next RGBA pixel
    }
    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth( imageRef ),
                                CGImageGetHeight( imageRef ),
                                8,
                                bytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedLast );
    CGColorSpaceRelease(colorSpace);
    imageRef = CGBitmapContextCreateImage (ctx);
    UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(ctx);
    free(rawData);
    return rawImage;
}
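
One way to fill in the two placeholders, as a sketch: zero the four RGBA bytes to make a pixel transparent, and restore them from a saved copy to make it opaque again. The originalData buffer below is hypothetical; you would memcpy it once from rawData before any erasing, because a premultiplied pixel that has been zeroed loses its color information and cannot be recovered from rawData alone.

// Hedged sketch: `originalData` is a hypothetical saved copy of the image's
// untouched bytes (memcpy'd from rawData before the first erase).
int byteIndex = 0;
for (int ii = 0; ii < width * height; ++ii)
{
    if (eraser)
    {
        // Transparent: zero all four premultiplied RGBA bytes.
        rawData[byteIndex]     = 0;
        rawData[byteIndex + 1] = 0;
        rawData[byteIndex + 2] = 0;
        rawData[byteIndex + 3] = 0;
    }
    else
    {
        // Non-transparent again: restore the pixel from the saved copy.
        memcpy(&rawData[byteIndex], &originalData[byteIndex], 4);
    }
    byteIndex += 4;
}

In practice you would restrict the loop to pixels within some radius of the touch point rather than toggling the whole image.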

Detect pixel collision/overlapping between two images

I have two UIImageViews that contain images with some transparent areas. Is there any way to check whether the non-transparent areas of the two images collide?
Thanks.
[UPDATE]
So this is what I have up until now; unfortunately it still isn't working, and I can't figure out why.
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");
UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];
float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;
unsigned char *rawData = calloc(width * height * 4, sizeof(*rawData)); // 4 bytes per RGBA pixel
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);
int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    CGFloat alpha = rawData[byteIndex + 3];
    if (alpha > 128)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
You can draw the alpha channels of both images into a single bitmap context and then look through the data for any transparent pixels. Take a look at the clipRectToPath() code in Clipping CGRect to a CGPath. It's solving a different problem, but the approach is the same. Rather than using CGContextFillPath() to draw into the context, just draw both of your images.
Here's the flow:
Create an alpha-only bitmap context (kCGImageAlphaOnly)
Draw everything you want to compare into it
Walk the pixels looking at the value. In my example, it considers < 128 to be "transparent." If you want fully transparent, use == 0.
When you find a transparent pixel, the example just makes a note of what column it was in. In your problem, you might just return YES, or you might use that data to form another mask.
Not easily; you basically have to read in the raw bitmap data and walk the pixels.
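
Here is a minimal sketch of that flow; it is a slight variant of the steps above, intersecting the two alpha channels with a destination-in blend instead of a mask clip. The function name viewsCollide is illustrative, and it assumes untransformed views in the same superview whose images are drawn at their frame size.

#import <UIKit/UIKit.h>

static BOOL viewsCollide(UIImageView *v1, UIImageView *v2)
{
    CGRect overlap = CGRectIntersection(v1.frame, v2.frame);
    if (CGRectIsNull(overlap)) return NO;

    size_t w = (size_t)ceil(overlap.size.width);
    size_t h = (size_t)ceil(overlap.size.height);
    unsigned char *alpha = calloc(w * h, 1);

    // Alpha-only context: one byte per pixel, no color space needed.
    CGContextRef ctx = CGBitmapContextCreate(alpha, w, h, 8, w, NULL, kCGImageAlphaOnly);

    // Convert a UIKit frame into this context's bottom-left-origin space.
    CGRect (^toCG)(CGRect) = ^(CGRect f) {
        return CGRectMake(f.origin.x - overlap.origin.x,
                          CGRectGetMaxY(overlap) - CGRectGetMaxY(f),
                          f.size.width, f.size.height);
    };

    // Draw the first image's alpha, then keep it only where the second image
    // is also opaque: destination-in multiplies the two alpha channels.
    CGContextDrawImage(ctx, toCG(v1.frame), v1.image.CGImage);
    CGContextSetBlendMode(ctx, kCGBlendModeDestinationIn);
    CGContextDrawImage(ctx, toCG(v2.frame), v2.image.CGImage);
    CGContextRelease(ctx);

    BOOL hit = NO;
    for (size_t i = 0; i < w * h; i++) {
        if (alpha[i] > 128) { hit = YES; break; } // both images opaque here
    }
    free(alpha);
    return hit;
}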

Drawing into CGImageRef

I want to create a CGImageRef and draw points onto it.
Which context should I use to create an empty CGImageRef and be able to draw onto it:
CGContextRef or CGBitmapContextRef?
If you can provide code that creates an empty CGImageRef and draws to it, I would appreciate it.
#define HIRESDEVICE (((int)rintf([[[UIScreen mainScreen] currentMode] size].width/[[UIScreen mainScreen] bounds].size.width )>1))

- (CGImageRef) blerg
{
    CGFloat imageScale = (CGFloat)1.0;
    CGFloat width = (CGFloat)180.0;
    CGFloat height = (CGFloat)180.0;
    if ( HIRESDEVICE )
    {
        imageScale = (CGFloat)2.0;
    }
    // Create a bitmap graphics context of the given size
    //
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width * imageScale, height * imageScale, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    // Draw ...
    //
    CGContextSetRGBFillColor(context, (CGFloat)0.0, (CGFloat)0.0, (CGFloat)0.0, (CGFloat)1.0 );
    // …
    // Get your image
    //
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    return cgImage;
}
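
One caveat worth adding: the returned CGImageRef comes from CGBitmapContextCreateImage, so under Core Foundation's Create rule the caller owns it and must release it. A hedged usage sketch:

// Wrap the returned CGImageRef in a UIImage at the matching scale,
// then release it; -blerg returns a +1 (caller-owned) reference.
CGFloat scale = HIRESDEVICE ? (CGFloat)2.0 : (CGFloat)1.0;
CGImageRef cgImage = [self blerg];
UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:scale
                               orientation:UIImageOrientationUp];
CGImageRelease(cgImage);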

Issue with Transparency

We have an issue with transparency: when we draw an image with a gradient into a context, unwanted transparency gets applied, and we are not sure why. We need the context to contain ONLY the gradient, with no transparency.
Attaching a snippet of the code for reference.
- (UIImage *)ReflectImage:(CGFloat)refFract {
    int reflectionHeight = self.size.height * refFract;
    CGImageRef gradientMaskImage = NULL;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, 1, reflectionHeight,
                                                               8, 0, colorSpace, kCGImageAlphaNone);
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);
    CGPoint gradientStartPoint = CGPointMake(0, reflectionHeight);
    CGPoint gradientEndPoint = CGPointZero;
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);
    CGContextSetGrayFillColor(gradientBitmapContext, 0.0, 0.5);
    CGContextFillRect(gradientBitmapContext, CGRectMake(0, 0, 1, reflectionHeight));
    gradientMaskImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);
    CGImageRef reflectionImage = CGImageCreateWithMask(self.CGImage, gradientMaskImage);
    CGImageRelease(gradientMaskImage);
    CGSize size = CGSizeMake(self.size.width, self.size.height + reflectionHeight);
    UIGraphicsBeginImageContext(size);
    [self drawAtPoint:CGPointZero];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, CGRectMake(0, self.size.height, self.size.width, reflectionHeight), reflectionImage);
    UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(reflectionImage);
    return result;
}
So, can someone please explain why this happens? Any help in resolving this would be greatly appreciated.
Thanks!
I didn't try running any of this, but you do seem to be passing an alpha value (0.5) to CGContextSetGrayFillColor, and then filling the entire mask rect with it.
Also, the use of device gray is generally discouraged. You might want to double-check that the color space you get back has the same number of components as you expect.
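
Following up on that with a hedged guess at a fix: the 0.5-alpha gray fill is blended over the gradient and uniformly shifts the whole mask, which shows up as unwanted transparency across the reflection. If only the gradient fade is wanted, the mask could be built from the gradient alone:

// Hedged sketch: build the reflection mask from the gradient only;
// the half-alpha fill that shifted the whole mask is omitted.
CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient,
                            gradientStartPoint, gradientEndPoint,
                            kCGGradientDrawsAfterEndLocation);
CGGradientRelease(grayScaleGradient);
// (no CGContextSetGrayFillColor / CGContextFillRect pair here)
gradientMaskImage = CGBitmapContextCreateImage(gradientBitmapContext);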