This is a follow-up question to: gradient direction from left to right.
In this Apple reflection sample code,
Apple Reflection Example
when the size slider is moved, the image is cut from bottom to top. How can I cut it from top to bottom instead when the slider is moved? I am trying to understand this tutorial better.
// I know the code is in this section, but I can't figure out what to change
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
...
}
// It probably has something to do with this code.
// I think this tells it how much to cut.
// Though I can't figure out how it knows where the (0,0) of the image is, and why
// (0,0) of the image is kept at the top. I am assuming this is the point it hinges
// on when it cuts the image from bottom to top.
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8, 0, colorSpace,
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
What if the image to reflect were on top? In order to show it properly, I would need to reveal it from the top down, not the bottom up. That's the effect I am trying to achieve. In this case I just moved the UIImageViews around in their storyboard example. You now see my dilemma.
It's very similar to @Putz1103's answer. You should create a new method, starting from the previous - (UIImage *)reflectedImage:(UIImageView *)fromImage withWidth:(NSUInteger)width.
- (UIImage *)reflectedImage:(UIImageView *)fromImage withWidth:(NSUInteger)width andHeight:(NSUInteger)height
{
....
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
....
}
Then, in the slideAction method, use something like:
self.reflectionView.image = [self reflectedImage:self.imageView withWidth:self.imageView.bounds.size.width andHeight:reflectionHeight];
Good luck!
My guess would be this line:
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, fromImage.bounds.size.width, height), gradientMaskImage);
If you change it to this, it should do the opposite:
CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, fromImage.bounds.size.height - height, fromImage.bounds.size.width, height), gradientMaskImage);
Basically, you need to set the clipping rectangle to the bottom of the image instead of the top. This will invert what your slider does, but that is easy to resolve; I'll leave that as an exercise.
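For example, if the new clipping makes the slider feel reversed, one way to compensate is to invert the slider value before computing the height. A minimal sketch of a hypothetical slider callback (sliderValueChanged:, reflectionView, and imageView are placeholder names, and the slider is assumed to run from 0.0 to 1.0):
- (IBAction)sliderValueChanged:(UISlider *)sender
{
    CGFloat maxHeight = self.imageView.bounds.size.height;
    // Invert the value so a larger slider value still reveals more of the image,
    // now growing downward from the top edge.
    NSUInteger reflectionHeight = (NSUInteger)(maxHeight - sender.value * maxHeight);
    self.reflectionView.image = [self reflectedImage:self.imageView withHeight:reflectionHeight];
}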
Related
When I draw a UIImage* image on a UIView using the code below, the image is mirrored horizontally. For example, if I draw a 4, it comes out reversed.
CGRect rect = CGRectMake(x, y, imageWidth, imageHeight);
CGContextDrawImage((CGContextRef) g, rect, ((UIImage*)image).CGImage);
What is the problem? Am I doing something wrong? If somebody knows how to fix it, please let me know. I really appreciate any help in advance.
Thanks a lot.
See: CGContextDrawImage draws image upside down when passed UIImage.CGImage
Use [image drawInRect:rect] instead of CGContextDrawImage.
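For instance, using the same variables as in the question (and assuming this runs where the image's target is the current context, such as drawRect:), the call might become:
// Let the UIImage draw itself; it uses UIKit's top-left origin, so no manual flip is needed.
CGRect rect = CGRectMake(x, y, imageWidth, imageHeight);
[(UIImage *)image drawInRect:rect];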
You can turn the picture the right way around using:
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
//draw image in the context
CGContextDrawImage(context, rect, ((UIImage*)image).CGImage);
Using [image drawInRect:rect] uses the default context, i.e. the screen; you cannot give it your current context, e.g. if you want to put it as part of a button's image.
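If other drawing has to follow in the same context, one option is to bracket the flip with a graphics-state save/restore so it does not affect later calls; a minimal sketch reusing the names from the snippet above:
CGContextSaveGState(context);
// Flip the coordinate system only for this draw.
CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0, height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
CGContextDrawImage(context, rect, ((UIImage *)image).CGImage);
// Restore the unflipped coordinate system for whatever is drawn next.
CGContextRestoreGState(context);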
I'm drawing a custom UIView that is sitting inside a xib whose ViewController is pushed onto a NavigationController.
Essentially the problem is that in the call to drawRect:(CGRect)rect, rect has origin at (0,0) when it should have origin at (0, nav_bar_height). Therefore, the following code, which draws an image, ends up in the wrong place:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    // set up the image
    UIImage *img = [UIImage imageWithData:someData];
    // flip the image to the correct orientation
    CGContextTranslateCTM(context, 0, rect.size.height + rect.origin.y);
    CGContextScaleCTM(context, 1, -1);
    // draw the image
    CGContextDrawImage(context, rect, [img CGImage]);
    UIGraphicsPopContext();
}
This will cut off the top 30 pixels or so of the image and leave the bottom 30 empty.
How can I account for the navigation bar height?
Without knowing more about your view structure, I risk being unhelpful, but let me know if the following helps at all: instead of using the rect parameter, which can be somewhat unpredictable, try using the bounds of your custom view. In reality, I can't imagine why you'd be having this problem unless your view is being overlapped by the navigation bar; I'd suggest checking to be sure this isn't so, in any case. Best of luck!
Update
Looks like that didn't help. Just offset your y parameter by self.navigationController.navigationBar.bounds.size.height. So your code should look like:
//...
CGFloat dy = self.navigationController.navigationBar.bounds.size.height;
CGRect r = CGRectMake(rect.origin.x,rect.origin.y + dy, rect.size.width, rect.size.height - dy);
//...
CGContextDrawImage(context,r,img.CGImage);
//...
I hope that was more helpful.
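Putting it together, a drawRect: sketch for the question's code might look like this; how the view gets the bar height is an assumption here (a plain UIView has no navigationController property, so navBarHeight below stands for a value handed to the view, e.g. by its view controller):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *img = [UIImage imageWithData:someData];

    // navBarHeight is a hypothetical property set by the view controller.
    CGFloat dy = self.navBarHeight;
    CGRect r = CGRectMake(rect.origin.x, rect.origin.y + dy,
                          rect.size.width, rect.size.height - dy);

    CGContextSaveGState(context);
    // Flip the context so the CGImage is not drawn upside down.
    CGContextTranslateCTM(context, 0, CGRectGetMaxY(r));
    CGContextScaleCTM(context, 1, -1);
    CGContextDrawImage(context, CGRectMake(r.origin.x, 0, r.size.width, r.size.height), img.CGImage);
    CGContextRestoreGState(context);
}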
I'm using CGImageCreateWithImageInRect to do a magnifying effect, and it works beautifully, except when I get close to the edges of my view. In that case, clipping causes the image to be distorted. Right now I grab a 72x72 chunk of the view, apply a round mask to it, and then draw the masked image, and a circle on top.
When the copied chunk is near the edge of the view, it winds up smaller than 72x72 because of clipping, and then when it's drawn in the magnifying glass it gets stretched out.
When the touch point is close to the left edge, for example, I would like to create an image where the left part is filled with a solid color, and the right half contains part of the view that's being magnified. Then apply the mask to that image and add the overlay on top.
Here's what I'm doing now. imageRef is the image being magnified, mask is a round mask, and overlay is a circle to mark the edges of the magnified region.
CGImageRef subImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(touchPoint.x - 36, touchPoint.y - 36, 72, 72));
CGImageRef xMaskedImage = CGImageCreateWithMask(subImage, mask);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform xform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0);
CGContextConcatCTM(context, xform);
CGRect area = CGRectMake(touchPoint.x - 84, -touchPoint.y, 170, 170);
CGRect area2 = CGRectMake(touchPoint.x - 80, -touchPoint.y + 4, 160, 160);
CGContextDrawImage(context, area2, xMaskedImage);
CGContextDrawImage(context, area, overlay);
I solved this by using CGBitmapContextCreate() to create a bitmap context. Then I drew the captured area into a smaller area of this context, and created an image from it with CGBitmapContextCreateImage(). That was the missing piece of the puzzle.
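For reference, a sketch of that approach; captureRect (the rect actually available after clamping to the view bounds), requestedRect (the full 72x72 rect around the touch point), and the padding colour are assumptions for illustration:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 72, 72, 8, 0, colorSpace,
                                            (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
CGColorSpaceRelease(colorSpace);

// Fill the whole tile with a solid colour so a clipped strip shows as padding
// instead of a stretched image.
CGContextSetFillColorWithColor(bitmap, [UIColor lightGrayColor].CGColor);
CGContextFillRect(bitmap, CGRectMake(0, 0, 72, 72));

// Draw the clipped capture at its proper horizontal offset inside the tile
// (vertical clipping near the top/bottom would be handled the same way on the y axis).
CGFloat dx = captureRect.origin.x - requestedRect.origin.x;
CGContextDrawImage(bitmap,
                   CGRectMake(dx, 0, captureRect.size.width, captureRect.size.height),
                   subImage);

CGImageRef paddedImage = CGBitmapContextCreateImage(bitmap);
CGContextRelease(bitmap);
// paddedImage is always 72x72, so the round mask and overlay can be applied
// exactly as before; CGImageRelease(paddedImage) when finished.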
It's really a pain, but whenever I draw a UIImage in -drawRect:, it's upside down.
When I flip the coordinates, the image draws correctly, but at the cost of all other CG functions drawing "wrong" (flipped).
What's your strategy when you have to draw images and other things? Is there any rule of thumb how to not get stuck in this problem over and over again?
Also, one nasty thing when I flip the y-axis is that the CGRect from the UIImageView's frame is wrong. Instead of the origin appearing at (10,10) in the upper left as expected, it appears at the bottom.
At the same time, all the normal line-drawing functions of CGContext take correct coordinates: drawing a line in -drawRect: with origin (10,10) upper left really does start at the upper left. That's strange, because Core Graphics actually has a flipped coordinate system with y = 0 at the bottom.
So it seems like something is really inconsistent there. Drawing with CGContext functions takes coordinates as "expected" (come on, nobody thinks in coordinates starting from the bottom left, that's silly), while drawing any kind of image still works the "wrong" way.
Do you use helper methods to draw images? Or is there anything useful that makes image drawing not a pain in the butt?
Problem: Origin is at lower-left corner; positive y goes upward (negative y goes downward).
Goal: Origin at upper-left corner; positive y going downward (negative y going upward).
Solution:
Move origin up by the view's height.
Negate (multiply by -1) the y axis.
The way to do this in code is to translate up by the view bounds' height and scale by (1, -1), in that order.
There are a couple of portions of the Quartz 2D Programming Guide that are relevant to this topic, including “Drawing to a Graphics Context on iPhone OS” and the whole chapter on Transforms. Of course, you really should read the whole thing.
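As a minimal sketch of those two steps in code (inside a view's drawRect:, with context from UIGraphicsGetCurrentContext()):
CGContextRef context = UIGraphicsGetCurrentContext();
// Move the origin up by the view's height, then negate the y axis,
// converting between a lower-left and an upper-left origin.
CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);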
You can do that by applying an affine transform to the point you want to convert into UIKit-related coordinates. The following is an example.
// Create an affine transform object
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
// First, translate by the image view's height so the flip is anchored correctly
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);
// Then, whenever you need a point or rect in UIKit-related coordinates, apply this transform to it.
// To get a transformed point:
CGPoint newPointForUIKit = CGPointApplyAffineTransform(oldPointInCGKit, transform);
// To get a transformed rect:
CGRect newRectForUIKit = CGRectApplyAffineTransform(oldRectInCGKit, transform);
The better answer to this problem is to use the UIImage method drawInRect: to draw your image. I'm assuming you want the image to span the entire bounds of your view. This is what you'd type in your drawRect: method.
Instead of:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGImageRef img = [myImage CGImage];
CGRect bounds = [self bounds];
CGContextDrawImage(ctx, bounds, img);
Write this:
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGRect bounds = [self bounds];
[myImage drawInRect:bounds];
It's really a pain, but whenever I draw a UIImage in -drawRect:, it's upside down.
Are you telling the UIImage to draw, or getting its CGImage and drawing that?
As noted in “Drawing to a Graphics Context on iPhone OS”, UIImages are aware of the difference in co-ordinate spaces and should draw themselves correctly without you having to flip your co-ordinate space yourself.
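In other words, a minimal sketch inside drawRect: (myImage standing in for the UIImage in question):
// Letting the UIImage draw itself respects UIKit's top-left coordinate space:
[myImage drawInRect:self.bounds];
// whereas dropping down to the CGImage would require flipping the CTM first:
// CGContextDrawImage(context, self.bounds, myImage.CGImage);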
CGImageRef flip (CGImageRef im) {
    CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
    UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
    CGContextDrawImage(UIGraphicsGetCurrentContext(),
                       CGRectMake(0, 0, sz.width, sz.height), im);
    // Note: the returned CGImageRef is owned by an autoreleased UIImage,
    // so it is only valid until the current autorelease pool drains.
    CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
    UIGraphicsEndImageContext();
    return result;
}
Call the above method using the code below:
This code gets the left half of an image from an existing UIImageView and sets the resulting image on a new image view, imgViewLeft:
// sz is the size of the source image; leftReference is a CGImageRef of its left half.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width / 2.0, sz.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
                   CGRectMake(0, 0, sz.width / 2.0, sz.height),
                   flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
I am trying to do custom drawing of an image (with alpha) and eventually apply different color tints to it. Right now I am just trying to draw it correctly. This might be a simple problem, but the result is that the alpha on my image is black. The image is a .png created in Photoshop with correct alpha.
- (void) drawRect:(CGRect)area
{
    [imgView.image drawInRect:area blendMode:blendMode alpha:alpha]; // draw image
}
blendMode is normal and alpha is 1.0. The image looks fine except for the alpha being black. Any help is appreciated.
I tried another method of drawing, but it also shows black alpha, and the image is upside down (I don't care about it being upside down right now, though).
- (void) drawRect:(CGRect)area
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    // Draw the picture first
    CGContextDrawImage(context, area, imgView.image.CGImage);
    CGContextRestoreGState(context);
}
[imagename drawInRect:CGRectMake(x, y, width, height)];
or
[imagename drawAtPoint:CGPointMake(x, y)];
I believe that when you use drawInRect:blendMode:alpha:, the alpha that you pass in overrides the alpha in your image. The documentation I just looked at wasn't quite clear.