How to draw an outline/stroke around a UIImage (*.png) - Example inside - iPhone

I have tried many ways to draw a black outline around an image.
This is an example of the result I want:
Can someone please let me know how I should do it, or give me an example?
Edit: I am stuck here; can someone please help me finish it?
What I did was make another shape, in black, underneath the white one with a shadow, and then fill it all in black so it acts like an outline. But I can't figure out the last and most important part: rendering the shadow and filling it so it is all black.
- (IBAction)addStroke:(id)sender {
    [iconStrokeTest setImage:[self makeIconStroke:icon.imageView.image]];
}

- (UIImage *)makeIconStroke:(UIImage *)image {
    CGImageRef originalImage = [image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                       CGImageGetWidth(originalImage),
                                                       CGImageGetHeight(originalImage),
                                                       8,
                                                       CGImageGetWidth(originalImage) * 4,
                                                       colorSpace,
                                                       kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // release: the context retains what it needs
    CGContextDrawImage(bitmapContext,
                       CGRectMake(0, 0, CGBitmapContextGetWidth(bitmapContext), CGBitmapContextGetHeight(bitmapContext)),
                       originalImage);
    CGImageRef finalMaskImage = [self createMaskWithImageAlpha:bitmapContext];
    UIImage *result = [UIImage imageWithCGImage:finalMaskImage];
    CGContextRelease(bitmapContext);
    CGImageRelease(finalMaskImage);

    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(result.size);
    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the fill color
    [[UIColor blackColor] setFill];
    // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
    CGContextTranslateCTM(context, 0, result.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, result.size.width, result.size.height);
    CGContextDrawImage(context, rect, result.CGImage);
    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, result.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);
    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // return the color-burned image
    return coloredImg;
}
// Rounds N up to the nearest multiple of S (needed for the row stride below).
#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S))

- (CGImageRef)createMaskWithImageAlpha:(CGContextRef)originalImageContext {
    UInt8 *data = (UInt8 *)CGBitmapContextGetData(originalImageContext);
    float width = CGBitmapContextGetBytesPerRow(originalImageContext) / 4;
    float height = CGBitmapContextGetHeight(originalImageContext);
    int strideLength = ROUND_UP((int)width * 1, 4);
    unsigned char *alphaData = (unsigned char *)calloc(strideLength * height, 1);
    CGContextRef alphaOnlyContext = CGBitmapContextCreate(alphaData,
                                                          width,
                                                          height,
                                                          8,
                                                          strideLength,
                                                          NULL,
                                                          kCGImageAlphaOnly);
    // Copy the inverted alpha channel of the original image into the new context.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned char val = data[y * (int)width * 4 + x * 4 + 3];
            val = 255 - val;
            alphaData[y * strideLength + x] = val;
        }
    }
    CGImageRef alphaMaskImage = CGBitmapContextCreateImage(alphaOnlyContext);
    CGContextRelease(alphaOnlyContext);
    free(alphaData);
    // Make a mask
    CGImageRef finalMaskImage = CGImageMaskCreate(CGImageGetWidth(alphaMaskImage),
                                                  CGImageGetHeight(alphaMaskImage),
                                                  CGImageGetBitsPerComponent(alphaMaskImage),
                                                  CGImageGetBitsPerPixel(alphaMaskImage),
                                                  CGImageGetBytesPerRow(alphaMaskImage),
                                                  CGImageGetDataProvider(alphaMaskImage), NULL, false);
    CGImageRelease(alphaMaskImage);
    return finalMaskImage;
}

Well, there's no built-in API for that. You'll have to do it yourself or find a library for it. But you can "fake" the effect by drawing the image with a shadow. Note that shadows can be any color; they don't have to look like shadows. This would be the easiest way.
Other than that, you could vectorize the raster image and stroke the resulting path. Core Image's edge-detection filter would help with that, but it could turn out to be hard to accomplish.
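For example, here is a rough sketch of that shadow trick; it assumes an existing UIImage named icon, and the name, blur radius, and repeat count are all illustrative:

// Draw the image with a zero-offset black shadow so the blur spreads
// evenly around the edges, which reads as an outline.
UIGraphicsBeginImageContextWithOptions(icon.size, NO, icon.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetShadowWithColor(ctx, CGSizeZero, 4.0, [UIColor blackColor].CGColor);
// A single pass gives a soft halo; drawing a few times darkens it
// into something closer to a solid stroke.
for (int i = 0; i < 3; i++) {
    [icon drawAtPoint:CGPointZero];
}
UIImage *outlined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Note that the blur is clipped at the canvas edges here; if the shape touches the border, begin the context a few points larger and offset the draw point accordingly.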

Related

Crop image using border frame

I am trying to crop an image using a rectangular frame, but somehow I am not able to do it the way it is required.
Here is what I am trying:
Here is the result I want:
What I need is: when the user taps Done, the image should be cropped to exactly the rectangular shape placed over it. I have tried a few things, such as masking and drawing the image using the mask image's rect, but no success yet.
Here is my code, which is not working:
CALayer *mask = [CALayer layer];
mask.contents = (id)[imgMaskImage.image CGImage];
mask.frame = imgMaskImage.frame;
imgEditedImageView.layer.mask = mask;
imgEditedImageView.layer.masksToBounds = YES;
Can anyone suggest a better way to implement it?
I have tried so many other things and wasted a lot of time, so any help would be greatly appreciated.
Thanks.
- (UIImage *)croppedPhoto {
    // For dealing with Retina displays as well as non-Retina, we need to check
    // the scale factor, if it is available. Note that we use the size of the
    // cropping rect passed in, and not the size of the view we are taking a
    // screenshot of.
    CGRect croppingRect = CGRectMake(imgMaskImage.frame.origin.x,
                                     imgMaskImage.frame.origin.y,
                                     imgMaskImage.frame.size.width,
                                     imgMaskImage.frame.size.height);
    imgMaskImage.hidden = YES;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(croppingRect.size, YES,
                                               [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(croppingRect.size);
    }
    // Create a graphics context and translate it to the view we want to crop so
    // that even in grabbing (0,0), that origin point now represents the actual
    // cropping origin desired:
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, -croppingRect.origin.x, -croppingRect.origin.y);
    [self.view.layer renderInContext:ctx];
    // Retrieve a UIImage from the current image context:
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Return the image:
    return snapshotImage;
}
Here is the way you do it:
+ (UIImage *)maskImage:(UIImage *)image andMaskingImage:(UIImage *)maskingImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskingImage CGImage];
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL,
        maskingImage.size.width, maskingImage.size.height, 8, 0,
        colorSpace, kCGImageAlphaPremultipliedLast);
    if (mainViewContentContext == NULL) {
        CGColorSpaceRelease(colorSpace); // don't leak the color space on failure
        return NULL;
    }
    // Scale the image so it fills the mask (aspect fill).
    CGFloat ratio = maskingImage.size.width / image.size.width;
    if (ratio * image.size.height < maskingImage.size.height) {
        ratio = maskingImage.size.height / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskingImage.size.width, maskingImage.size.height}};
    // CHANGE THIS RECT ACCORDING TO YOUR NEEDS
    CGRect rect2 = {{-((image.size.width * ratio) - maskingImage.size.width) / 2,
                     -((image.size.height * ratio) - maskingImage.size.height) / 2},
                    {image.size.width * ratio, image.size.height * ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    CGColorSpaceRelease(colorSpace);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
You need a mask image like this.
Note that
the mask image cannot have ANY transparency. Instead, "transparent" areas must be white or some value between black and white; the closer a pixel is to black, the less transparent it becomes.
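A hypothetical call site for the method above, assuming it lives in a helper class named ImageUtils and that photo.png and mask.png exist in the bundle (all three names are illustrative):

// ImageUtils, photo.png, and mask.png are assumed names;
// imgEditedImageView comes from the question's code.
UIImage *photo = [UIImage imageNamed:@"photo.png"];
UIImage *maskShape = [UIImage imageNamed:@"mask.png"];
imgEditedImageView.image = [ImageUtils maskImage:photo andMaskingImage:maskShape];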

Detect pixel collision/overlapping between two images

I have two UIImageViews that contain images with some transparent areas. Is there any way to check whether the non-transparent areas of the two images collide?
Thanks.
[UPDATE]
So this is what I have up until now; unfortunately it still isn't working, and I can't figure out why.
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");
UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];
float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;
// The buffer must hold 4 bytes per pixel (RGBA), not 1.
unsigned char *rawData = calloc(bytesPerRow * height, sizeof(*rawData));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Clip to the second image's alpha, then draw the first; only pixels where
// both images are non-transparent end up with alpha in the buffer.
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);
int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    CGFloat alpha = rawData[byteIndex + 3];
    if (alpha > 128)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
You can draw both the alpha channels of both images into a single bitmap context and then look through the data for any transparent pixels. Take a look at the clipRectToPath() code in Clipping CGRect to a CGPath. It's solving a different problem, but the approach is the same. Rather than using CGContextFillPath() to draw into the context, just draw both of your images.
Here's the flow (a rough sketch follows the list):
Create an alpha-only bitmap context (kCGImageAlphaOnly)
Draw everything you want to compare into it
Walk the pixels looking at the value. In my example, it considers < 128 to be "transparent." If you want fully transparent, use == 0.
When you find a transparent pixel, the example just makes a note of what column it was in. In your problem, you might just return YES, or you might use that data to form another mask.
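A compact sketch of that flow, assuming the two CGImageRefs, their rects on a shared canvas, and the canvas size have already been worked out (all names are illustrative). It follows the question's approach of letting CGContextClipToMask use a regular image's alpha channel as the clip:

// Returns YES if any pixel is non-transparent in both images.
static BOOL imagesOverlap(CGImageRef imgA, CGImageRef imgB,
                          CGRect rectA, CGRect rectB,
                          size_t width, size_t height)
{
    // 1. Alpha-only context: one byte per pixel, no color space needed.
    unsigned char *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height,
                                             8, width, NULL, kCGImageAlphaOnly);
    // 2. Clip to the first image's alpha, then draw the second; only
    //    pixels where both images are non-transparent survive.
    CGContextClipToMask(ctx, rectA, imgA);
    CGContextDrawImage(ctx, rectB, imgB);
    CGContextRelease(ctx);
    // 3. Walk the pixels; treat anything above 128 as a collision.
    for (size_t i = 0; i < width * height; i++) {
        if (alpha[i] > 128) {
            free(alpha);
            return YES;
        }
    }
    free(alpha);
    return NO;
}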
Not easily; you basically have to read in the raw bitmap data and walk the pixels.

iPhone: taking a grayscale screenshot?

I'm wondering if there is a SIMPLE way to take a grayscale screenshot. I know I can take a color screenshot like this:
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
Now, what do I need to add to these lines of code to make the UIImage grayscale? Thank you for reading.
Just convert your image to grayscale.
Read this post. Good luck.
Here is the method:
#pragma mark -
#pragma mark Grayscale
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    // Return the new grayscale image
    return newImage;
}
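Tying it back to the question's snippet, the conversion would slot in just before producing the PNG data (assuming the method above lives in the same class):

UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Convert the screenshot to grayscale before producing the PNG data.
UIImage *grayImage = [self convertImageToGrayScale:image];
NSData *data = UIImagePNGRepresentation(grayImage);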
Based on Cam's code with the ability to deal with the scale for Retina displays.
// Channel indices for a kCGBitmapByteOrder32Little RGBA context:
// each 32-bit pixel reads A, B, G, R in memory.
typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)toGrayscale
{
    // Create image rectangle with current image width/height, in pixels
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    int width = imageRect.size.width;
    int height = imageRect.size.height;
    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    // paint the bitmap to our context, which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *)&pixels[y * width + x];
            // convert to grayscale using recommended method:
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);
    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale
                                           orientation:UIImageOrientationUp];
    // we're done with image now too
    CGImageRelease(image);
    return resultUIImage;
}

iPhone image resolution increases when emailed

In my app I capture an image and then add a frame to it.
I also have a feature to add custom text on the final image (original image + frame). I am using the following code to draw the text:
- (UIImage *)addText:(UIImage *)img text:(NSString *)textInput
{
    CGFloat imageWidth = img.size.width;
    CGFloat imageHeigth = img.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, imageWidth, imageHeigth, 8,
                                                 4 * imageWidth, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeigth), img.CGImage);
    CGContextSetCMYKFillColor(context, 0.0, 0.0, 0.0, 1.0, 1);
    CGContextSetFont(context, customFont);
    UIColor *strokeColor = [UIColor blackColor];
    CGContextSetFillColorWithColor(context, strokeColor.CGColor);
    CGContextSetFontSize(context, DISPLAY_FONT_SIZE * DisplayToOutputScale);
    // Create an array of glyphs the size of the text that will be drawn.
    CGGlyph textToPrint[[textInput length]];
    for (int i = 0; i < [textInput length]; ++i)
    {
        // Store each letter in a glyph and subtract the magic number to get the appropriate value.
        textToPrint[i] = [textInput characterAtIndex:i] + 3 - 32;
    }
    // First pass is drawn invisible; it is used to calculate the length of the text in glyphs.
    CGContextSetTextDrawingMode(context, kCGTextInvisible);
    CGContextShowGlyphsAtPoint(context, 0, 0, textToPrint, [textInput length]);
    CGPoint endPoint = CGContextGetTextPosition(context);
    // Calculate position of text on white border frame
    CGFloat xPos = (imageWidth / 2.0f) - (endPoint.x / 2.0f);
    CGFloat yPos = 30 * DisplayToOutputScale;
    // Toggle off invisible mode; we are ready to draw the text
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextShowGlyphsAtPoint(context, xPos, yPos, textToPrint, [textInput length]);
    // Extract resulting image
    CGImageRef imageMasked = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked); // avoid leaking the CGImageRef
    return result;
}
I email the image by attaching the data from UIImageJPEGRepresentation.
When I email the image without adding custom text, the image size increases from 1050 x 1275 to 2100 x 2550, which is strange.
But when I email the image with text added, the image size remains unchanged.
Can anyone explain why this happens?
I think it has something to do with converting from UIImage to NSData.
Thanks
I had the same problem. I fixed it by starting the context with a scale of 1:
UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
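For instance, a sketch of re-rendering an image through a scale-1 context before emailing it, so the JPEG keeps the point size instead of the 2x Retina pixel size (the variable names are illustrative):

// Redraw at scale 1.0 so the bitmap matches img.size in pixels.
UIGraphicsBeginImageContextWithOptions(img.size, YES, 1.0);
[img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *jpegData = UIImageJPEGRepresentation(flattened, 0.9);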

UIImage Shadow Trouble

I'm trying to add a small shadow to an image, much like the icon shadows in the App Store. Right now I'm using the following code to round the corners of my images. Does anyone know how I can adapt it to add a small shadow?
- (UIImage *)roundCornersOfImage:(UIImage *)source height:(int)height width:(int)width {
    int w = width;
    int h = height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextBeginPath(imageContext);
    CGRect rect = CGRectMake(0, 0, w, h);
    addRoundedRectToPath(imageContext, rect, 10, 10);
    CGContextClosePath(imageContext);
    CGContextClip(imageContext);
    CGContextDrawImage(imageContext, CGRectMake(0, 0, w, h), source.CGImage);
    CGImageRef imageMasked = CGBitmapContextCreateImage(imageContext);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked); // avoid leaking the CGImageRef
    return result;
}
addRoundedRectToPath refers to another method that obviously rounds the corners.
First, here's a link to the documentation:
http://developer.apple.com/iPhone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_shadows/dq_shadows.html#//apple_ref/doc/uid/TP30001066-CH208-TPXREF101
Next, try adding something like this right before the call to CGContextDrawImage(...):
CGFloat components[4] = {0.0, 0.0, 0.0, 1.0};
CGColorRef shadowColor = CGColorCreate(colorSpace, components);
CGContextSetShadowWithColor(imageContext, CGSizeMake(3, 3), 2, shadowColor);
CGColorRelease(shadowColor);
After the call to CGContextSetShadowWithColor(...), everything will draw with a shadow that is offset by (3, 3) points and blurred with a 2.0-point radius. You'll probably want to tweak the opacity of the black color (the fourth component in components) and adjust the shadow parameters.
If you'd like to stop drawing with a shadow at some point, save the graphics state before calling CGContextSetShadowWithColor, and restore it when you want the shadow to stop applying.
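A short sketch of that save/restore bracket, using the names from the method above:

CGContextSaveGState(imageContext);
CGFloat components[4] = {0.0, 0.0, 0.0, 1.0};
CGColorRef shadowColor = CGColorCreate(colorSpace, components);
CGContextSetShadowWithColor(imageContext, CGSizeMake(3, 3), 2, shadowColor);
CGColorRelease(shadowColor);
// Everything drawn here gets the shadow...
CGContextDrawImage(imageContext, CGRectMake(0, 0, w, h), source.CGImage);
CGContextRestoreGState(imageContext);
// ...and drawing from here on is shadow-free again.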