CGImageCreateWithImageInRect causing distortion - iPhone

I'm using CGImageCreateWithImageInRect to do a magnifying effect, and it works beautifully, except when I get close to the edges of my view. In that case, clipping causes the image to be distorted. Right now I grab a 72x72 chunk of the view, apply a round mask to it, and then draw the masked image, and a circle on top.
When the copied chunk is near the edge of the view, it winds up smaller than 72x72 because of the clipping, and then when it's drawn in the magnifying glass it gets stretched out.
When the touch point is close to the left edge, for example, I would like to create an image where the left part is filled with a solid color, and the right half contains part of the view that's being magnified. Then apply the mask to that image and add the overlay on top.
Here's what I'm doing now. imageRef is the image being magnified, mask is a round mask, and overlay is a circle to mark the edges of the magnified region.
CGImageRef subImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(touchPoint.x - 36, touchPoint.y - 36, 72, 72));
CGImageRef xMaskedImage = CGImageCreateWithMask(subImage, mask);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform xform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0);
CGContextConcatCTM(context, xform);
CGRect area = CGRectMake(touchPoint.x - 84, -touchPoint.y, 170, 170);
CGRect area2 = CGRectMake(touchPoint.x - 80, -touchPoint.y + 4, 160, 160);
CGContextDrawImage(context, area2, xMaskedImage);
CGContextDrawImage(context, area, overlay);

I solved this by using CGBitmapContextCreate() to create a bitmap context. Then I drew the captured area into a smaller area of this context, and created an image from it with CGBitmapContextCreateImage(). That was the missing piece of the puzzle.
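For anyone hitting the same issue, a rough sketch of that approach looks something like this (the grab size, padding color, and variable names are assumptions based on the question, not the exact code):

// Render the (possibly clipped) capture into a fixed-size 72x72 bitmap
// context so the magnified image never gets stretched.
CGRect grabRect = CGRectMake(touchPoint.x - 36, touchPoint.y - 36, 72, 72);
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
CGRect clipped = CGRectIntersection(grabRect, imageBounds);
CGImageRef subImage = CGImageCreateWithImageInRect(imageRef, clipped);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, 72, 72, 8, 0, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// Pad the whole context with a solid color first.
CGContextSetRGBFillColor(bitmapContext, 0.9, 0.9, 0.9, 1.0);
CGContextFillRect(bitmapContext, CGRectMake(0, 0, 72, 72));
// Place the clipped capture where it belongs inside the 72x72 square.
// (The bitmap context origin is bottom-left, the image origin is top-left.)
CGFloat dx = clipped.origin.x - grabRect.origin.x;
CGFloat dyTop = clipped.origin.y - grabRect.origin.y;
CGRect destRect = CGRectMake(dx, grabRect.size.height - dyTop - clipped.size.height, clipped.size.width, clipped.size.height);
CGContextDrawImage(bitmapContext, destRect, subImage);
// This full-size image can now be masked and drawn without distortion.
CGImageRef paddedImage = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
CGImageRelease(subImage);

From there, paddedImage takes the place of subImage in the masking and drawing code above.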

Related

Filling Colors inside edges

I am new to iPhone development.
Currently I am making a coloring app.
I am using Apple's paint sample app as a reference to create my app.
I have successfully created an app where you can color on the screen with a given texture image.
What I did is:
I created a custom UIView backed by OpenGL, and I detect touches on it and draw accordingly.
I also kept a background UIImageView that contains the outline images, so it feels like you are drawing on top of that image.
Everything works fine,
but I want to fill color inside the black edges.
For example, if an image has four squares with black edges and blank interiors, and I touch any square, it should fill that square with the selected color (mostly I am working with irregular shapes).
Can anyone tell me how I can fill colors inside such a square?
A flood fill algorithm looks too slow, since I have some large images that would take a long time to fill.
So is there an easier method to fill the color?
Sample code would be very helpful as I am new to iPhone development.
I implemented this kind of feature in a recent project. The difference is that I fill color in the border only.
Check my code below; it might be helpful to you.
// apply color to only border & return an image
+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color
{
// load the image
UIImage *img = [UIImage imageNamed:name];
// begin a new image context, to draw our colored image onto
UIGraphicsBeginImageContext(img.size);
// get a reference to that context we created
CGContextRef context = UIGraphicsGetCurrentContext();
// set the fill color
[color setFill];
// translate/flip the graphics context (for transforming from CG* coords to UI* coords)
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// set the blend mode to color burn, then draw the original image
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
// set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
// generate a new UIImage from the graphics context we drew onto
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return the color-burned image
return coloredImg;
}
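A call site might look something like this (the class name, image name, and image view here are placeholders, not part of the original code):

// Hypothetical usage: tint the outline image and display it.
UIImage *tinted = [MyImageUtils imageNamed:@"outline" withColor:[UIColor redColor]];
self.outlineImageView.image = tinted;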
Enjoy programming!

gradient cut from top to bottom

This is a follow-up question to: gradient direction from left to right
In this Apple reflection sample code,
Apple Reflection Example
when the size slider is moved, the image is cut from bottom to top. How can I cut it from top to bottom when the slider is moved? I am trying to understand this tutorial better.
//I know the code is in this section here but I can't figure out what to change
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
...
}
//it probably has something to do with this code.
//I think this tells it how much to cut.
//Though I can't figure out how it knows where the (0,0) of the image is, and why
// it keeps (0,0) of the image at the top. I am assuming this is where it hinges its
//point and cuts the image from bottom to top.
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create the bitmap context
CGContextRef bitmapContext = CGBitmapContextCreate (NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace,(kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
CGColorSpaceRelease(colorSpace);
return bitmapContext;
}
What if the image to reflect were at the top? In order to show it properly, I would need to reveal it from the top down, not the bottom up. That's the effect I am trying to achieve. In this case I just moved the UIImageViews around in their storyboard example. Now you see my dilemma.
It's very similar to @Putz1103's answer. You should create a new method starting from the previous - (UIImage *)reflectedImage:(UIImageView *)fromImage withWidth:(NSUInteger)width.
- (UIImage *)reflectedImage:(UIImageView *)fromImage withWidth:(NSUInteger)width andHeight:(NSUInteger)height
{
....
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
....
}
Then in slideAction method, use something like:
self.reflectionView.image = [self reflectedImage:self.imageView withWidth:self.imageView.bounds.size.width andHeight:reflectionHeight];
Good luck!
My guess would be this line:
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, fromImage.bounds.size.width, height), gradientMaskImage);
If you change it to this it should do the opposite:
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, fromImage.bounds.size.height - height, fromImage.bounds.size.width, height), gradientMaskImage);
Basically you need to set the clipping rectangle to the bottom of the image instead of the top of the image. This will invert what your slider does, but that is easy to resolve and I'll leave that for your exercise.
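If you do want to compensate for that, one simple option is to feed the method the complement of the slider value. A rough sketch, assuming the sample computes the reflection height from the slider inside slideAction (the property names and method signature here are assumptions):

- (IBAction)slideAction:(UISlider *)sender
{
    CGFloat maxHeight = self.imageView.bounds.size.height;
    // Use the complement of the slider value so moving the slider still
    // grows the reflection in the direction you expect.
    NSUInteger reflectionHeight = (NSUInteger)(maxHeight * (1.0f - sender.value));
    self.reflectionView.image = [self reflectedImage:self.imageView withHeight:reflectionHeight];
}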

Changing color space on an image

I'm creating a mask based on a DeviceGray color space image.
Basically, what I want to do is change all gray pixels (besides black) into white and leave black pixels as they are.
So I want my image to consist of only black and white pixels.
Any idea how to achieve that using Core Graphics?
Please don't suggest looping over all the pixels.
Use CGImageCreateWithMaskingColors and CGContextSetRGBFillColor together like this:
CGImageRef myColorMaskedImage;
const CGFloat myMaskingColors[6] = { 0, 124, 0, 68, 0, 0 };
myColorMaskedImage = CGImageCreateWithMaskingColors(image, myMaskingColors);
CGContextSetRGBFillColor(context, 0.6373, 0.6373, 0, 1);
CGContextFillRect(context, rect);
CGContextDrawImage(context, rect, myColorMaskedImage);
By the way, the fill color is mentioned in the CGImageCreateWithMaskingColors documentation.
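Applied to the DeviceGray case in the question, a minimal sketch might look like this. It assumes an 8-bit grayscale image with no alpha channel (a requirement of CGImageCreateWithMaskingColors); grayImage, context and rect are placeholder names:

// Make every gray value except pure black transparent, then let a white
// fill show through, so the result contains only black and white pixels.
const CGFloat maskingRange[2] = { 1, 255 };   // gray values 1..255 are masked out
CGImageRef blackOnlyImage = CGImageCreateWithMaskingColors(grayImage, maskingRange);
CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0);   // white background
CGContextFillRect(context, rect);
CGContextDrawImage(context, rect, blackOnlyImage);        // only black pixels survive
CGImageRelease(blackOnlyImage);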

How to animate a circle which is drawn in OpenGL?

I'm drawing several circles within my view in the drawRect: method.
I'd like to have my circles pop up (scale up to 1.2, then back down to 1.0).
I've used Core Animation in the past, but using OpenGL requires different functions.
Here's a snippet of my code which draws a circle in my view:
//calling draw function
CGContextRef contextRef = UIGraphicsGetCurrentContext();
//setting a specific fill color (green; RGB components are in the 0.0 - 1.0 range)
CGContextSetRGBFillColor(contextRef, 0.0, 1.0, 0.0, 1.0);
//drawing the circle with a specific width, height and x,y location
CGContextFillEllipseInRect(contextRef, CGRectMake(30, 30, 20, 20));
How can I animate this circle so that it "pops up"?
"Pops up" is not telling me that much.
if you want it to go from small to big modify width and height of rect.use a global int variable
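Expanding on that idea, here is a rough sketch of driving the "pop" with a scale variable and a CADisplayLink, redrawing on every tick. The class name, ivars and timing constants are assumptions, not code from the question:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface CircleView : UIView
@end

@implementation CircleView {
    CGFloat _scale;       // current scale of the circle
    CGFloat _popTime;     // seconds elapsed since the pop started
    CADisplayLink *_link;
}

- (void)pop
{
    _popTime = 0;
    [_link invalidate];
    _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link
{
    _popTime += link.duration;
    CGFloat duration = 0.3;                     // total pop duration in seconds
    CGFloat t = MIN(_popTime / duration, 1.0);  // normalized progress 0..1
    // Scale rises to 1.2 at the halfway point, then settles back to 1.0.
    _scale = (t < 0.5) ? 1.0 + 0.4 * t : 1.2 - 0.4 * (t - 0.5);
    if (t >= 1.0) {
        _scale = 1.0;
        [link invalidate];
        _link = nil;
    }
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(contextRef, 0.0, 1.0, 0.0, 1.0);
    // Grow the 20x20 circle around its center (40,40) by the current scale.
    CGFloat size = 20.0 * (_scale > 0 ? _scale : 1.0);
    CGContextFillEllipseInRect(contextRef, CGRectMake(40 - size / 2, 40 - size / 2, size, size));
}
@end

Calling [circleView pop] then scales the circle up and back down over about a third of a second.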

iOS: 2 Step Image Processing with CoreGraphics

Using CoreGraphics (inside my drawRect method), I'm trying to apply a blend mode to an image (transparent png), and then adjust the alpha of the result. I'm assuming that this needs to be done in two steps, but I could be wrong. Here's what I have so far (which works fine):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
//SET COLOR - EDIT... added a more practical color example
CGContextSetRGBFillColor(context, 0.0, 1.0, 0.0, 1);
//flips drawing context (apparently this is necessary)
CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);//flip context
//DRAW PIN IMAGE
UIImage *pin = [UIImage imageNamed:@"pin"];
CGRect pinrect = CGRectMake(12, 17, 25, 25);
CGContextDrawImage(context, pinrect, pin.CGImage);//draws image in context
//Apply blend mode
CGContextSetBlendMode(context, kCGBlendModeColor);
CGContextClipToMask(context, pinrect, pin.CGImage); // restricts drawing to within alpha channel
//fills context with mask, applying blend mode
CGContextFillRect(context, pinrect);
CGContextRestoreGState(context);
// -- Do something here to make result 50% transparent ?? --
I'm assuming that I need to draw all this into some kind of separate context somewhere, call CGContextSetAlpha(...), and then re-draw it back to my original context, but I'm not sure how. Setting the alpha before my final CGContextFillRect will just change the amount that the blend mode was applied, not the alpha of the entire image.
EDIT: screenshot posted
Thanks in advance.
Using transparency layers, you can apply the blend to an image drawn at 100% and display the result at 50%. The result looks like this:
I used the textured background so that you could clearly see that the lower image is 50% transparent to everything, instead of just the other image as was the case in my previous attempt. Here is the code:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);//flip context
CGRect fullImageRect = (CGRect){42,57,100,100};
CGRect transparentImageRect = (CGRect){12,17,100,100};
CGContextSetRGBFillColor(context, 0.0, 1.0, 0.0, 1);
// Draw image at 100%
UIImage *testImage = [UIImage imageNamed:@"TestImage"];
CGContextDrawImage(context,fullImageRect,testImage.CGImage);
// Set 50% transparency and begin a transparency layer. Inside the transparency layer, the alpha is automatically reset to 1.0
CGContextSetAlpha(context,0.5);
CGContextBeginTransparencyLayer(context, NULL);
// Draw the image. It is viewed at 100% within the transparency layer and 50% outside the transparency layer.
CGContextDrawImage(context, transparentImageRect, testImage.CGImage);
// Draw blend on top of image
CGContextClipToMask(context, transparentImageRect, testImage.CGImage);
CGContextSetBlendMode(context, kCGBlendModeColor);
CGContextFillRect(context, transparentImageRect);
// Exit transparency layer, causing the image and blend to be composited at 50%.
CGContextEndTransparencyLayer(context);
Edit: Old content removed as it took a lot of space and wasn't helpful. Look in the revision history if you want to see it.