I have a CAShapeLayer which contains a CGMutablePath with a stroke drawn around it. In my app, I transform this CAShapeLayer to increase or decrease its size at certain times. I'm noticing that when I transform the CAShapeLayer, the stroke gets transformed as well. Ideally I'd like to keep the lineWidth of the stroke at 3 at all times, even when the CAShapeLayer is transformed.
I tried shutting off the stroke before transforming, then re-adding it afterwards, but it didn't work:
subLayerShapeLayer.lineWidth = 0;
subLayerShapeLayer.strokeColor = nil;
self.layer.sublayerTransform = CATransform3DScale(self.layer.sublayerTransform, graphicSize.width / self.graphic.size.width, graphicSize.height / self.graphic.size.height, 1);
shapeLayer.strokeColor = [UIColor colorWithRed:0 green:0 blue:0 alpha:1].CGColor;
shapeLayer.lineWidth = 3;
Does anyone know how I might accomplish this? It seems as though I should be able to redraw the stroke after transforming somehow.
Transform the CGPath itself and not its drawn representation (the CAShapeLayer).
Have a close look at CGPathCreateMutableCopyByTransformingPath - CGPath Reference
CGPathCreateMutableCopyByTransformingPath
Creates a mutable copy of a graphics path transformed by a transformation matrix.

CGMutablePathRef CGPathCreateMutableCopyByTransformingPath(
    CGPathRef path,
    const CGAffineTransform *transform
);

Parameters
path — The path to copy.
transform — A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to all elements of the new path.

Return Value
A new, mutable copy of the specified path transformed by the transform parameter. You are responsible for releasing this object.

Availability
Available in iOS 5.0 and later.

Declared In
CGPath.h
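For example, here is a minimal sketch that applies the question's scale to the path instead of the layer (the names shapeLayer, graphicSize, and self.graphic are borrowed from the question's code):

// Scale the path itself; the layer is never transformed, so the
// stroke is re-drawn at its original lineWidth.
CGFloat sx = graphicSize.width / self.graphic.size.width;
CGFloat sy = graphicSize.height / self.graphic.size.height;
CGAffineTransform scale = CGAffineTransformMakeScale(sx, sy);
CGMutablePathRef scaledPath =
    CGPathCreateMutableCopyByTransformingPath(shapeLayer.path, &scale);
shapeLayer.path = scaledPath;
shapeLayer.lineWidth = 3; // stays 3 at every size
CGPathRelease(scaledPath); // Create rule: we own the copy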
I'm using CABasicAnimation for layer animations. In the convenience initializer init(keyPath:) I specify which values I want to animate, but I mostly do it intuitively. I mean, I know that it should animate the layer's position.x, for example, so I use that value. But where can I find the complete list of values? I checked the documentation for both the initializer and CABasicAnimation and found only some examples of values.
The resource you are looking for is the Key-Value Coding Extensions page of the Core Animation Programming Guide.
There are additions for properties of the types CGPoint, CGSize, CGRect, and CATransform3D.
CGPoint
For point properties you can use .x and .y. For example:
"position.x" // use a number
CGSize
For size properties you can use .width and .height. For example:
"shadowOffset.height" // use a number
CGRect
For rectangle properties you can use origin and size, as well as the point and size additions on those. For example:
"bounds.origin.x" // use a number
"frame.size.width" // use a number
"frame.origin" // use a point
CATransform3D
Core Animation transform properties have additions for scale (.x, .y, .z), rotation (.x, .y, .z), and translation (.x, .y, .z). For example:
"transform.rotation.z" // use a number
"transform.translation.x" // use a number
You can also just use .scale as a number that scales uniformly on all axes, .rotation as a number for the rotation around the z-axis (same as .rotation.z), and .translation as a size that translates along the x- and y-axes.
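As a quick sketch of how these key paths are used (the layer variable is an assumption), animating a layer 100 points to the right looks like this:

// Animate along x only, using one of the key paths listed above
CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position.x"];
slide.fromValue = @(layer.position.x);       // a number, per the list above
slide.toValue   = @(layer.position.x + 100);
slide.duration  = 0.3;
[layer addAnimation:slide forKey:@"slide"];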
I want to draw a UIImage with a CGAffineTransform, but I get the wrong output with CGContextConcatCTM.
I have tried the code below:
CGAffineTransform t = CGAffineTransformMake(1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129); // transformation of uiimageview
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawImage(imageContext, dragView.frame, dragView.image.CGImage);
CGContextConcatCTM(imageContext, t);
NSLog(@"\n%@\n%@", NSStringFromCGAffineTransform(t), NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
Output:
[1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129] // imageview transformation
[1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871] // drawn image transformation
CGAffineTransform CGAffineTransformMake(
    CGFloat a,
    CGFloat b,
    CGFloat c,
    CGFloat d,
    CGFloat tx,
    CGFloat ty
);
Parameters b, d, and ty changed. How do I solve this?
There is no problem to solve. Your log output is correct.
Comparing the two matrices, the second is your matrix concatenated onto a vertical flip:
b, d, and ty are negated (a vertical scale by -1)
the context's height is then added to ty (a vertical translate by 768): -209.129 + 768 = 558.871
That 768 is exactly the height you gave the context.
A vertical scale and translate is how you would flip a context. This context has gone from lower-left origin to upper-left origin, or vice versa.
That happened before you concatenated your (unexplained, hard-coded) matrix in. Assuming you didn't flip the context yourself, it probably came that way (I would guess as a UIKit implementation detail).
Concatenation (as in CGContextConcatCTM) does not replace the old transformation matrix with the new one; it is matrix multiplication. The matrix you have afterward is the product of the matrix you started with and the one you concatenated onto it. The resulting matrix is flipped and then… whatever your matrix does.
You can see this for yourself by simply getting the CTM before you concatenate your matrix onto it, and logging that. You should see this:
[1, 0, 0, -1, 0, 768]
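A minimal sketch of that check, using the context size from the question:

UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Before concatenating anything, the CTM is the flip UIKit installs;
// expect [1, 0, 0, -1, 0, 768]
NSLog(@"%@", NSStringFromCGAffineTransform(CGContextGetCTM(ctx)));
UIGraphicsEndImageContext();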
See also “The Math Behind the Matrices” in the Quartz 2D Programming Guide.
I'm trying to add perspective to a view by using CATransform3D. Currently, this is what I'm getting:
And this is what I want to get:
I'm having a hard time doing that. I'm completely lost here. Here's my code:
CATransform3D t = CATransform3DIdentity;
t.m11 = 0.8;
t.m21 = 0.1;
t.m31 = -0.1;
t.m41 = 0.1;
[[viewWindow layer] setTransform:t];
Matrix element .m34 is responsible for perspective. It's not discussed much in the documentation, so you'll have to toy with it. This answer talks a little bit about how to use it: https://stackoverflow.com/a/7596326/1228525
To actually see the effects of that matrix you need to do two things:
1. Apply that perspective matrix to the parent view's sublayer transform
2. Rotate the child view (the one on which you want perspective) - otherwise it will remain flat and you won't be able to tell it now has a 3D perspective.
The numbers are arbitrary; make them whatever looks best:
CATransform3D t = CATransform3DIdentity;
t.m34 = .005;
parentView.layer.sublayerTransform = t;
childView.layer.transform = CATransform3DMakeRotation(M_PI_4, 1, 0, 0); // 45°; the angle is in radians
The perspective will look different depending on where the child is in the parent view. If the child is in the center of the parent it will be like you are looking at the child view in 3D straight on. The further from the center it is, the more it will be like you are viewing from a glancing angle.
This is what I got using the above code and centering the child view: (apparently I'm not allowed to post pictures since I'm new, so you'll have to see the link) http://i.stack.imgur.com/BiYCS.png
It's very hard to tell what you're going for based on those pictures; a bit more explanation might be helpful if my answer isn't what you want. From what I can tell from the picture, the bottom one isn't perspective...
I was able to easily achieve the right CATransform3D using AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
// Move the anchor point to zero, translating to compensate so the
// view does not appear to move
view.layer.anchorPoint = CGPointZero;
view.layer.transform = CATransform3DMakeTranslation(-view.layer.bounds.size.width * .5, -view.layer.bounds.size.height * .5, 0);
// setting a trapezoid transform
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x -= 10; // shift the top-left x-value by 10 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied
// Make half-transparent grey the background color for the layer
UIColor *Light_Grey = [UIColor colorWithRed:110/255.0
                                      green:110/255.0
                                       blue:110/255.0
                                      alpha:0.5];
// Get a CGColor object with the same color values
CGColorRef cgLight_Grey = [Light_Grey CGColor];
[boxLayer setBackgroundColor:cgLight_Grey];
// Create a UIImage
UIImage *layerImage = [UIImage imageNamed:@"Test.png"];
// Get the underlying CGImage
CGImageRef image = [layerImage CGImage];
// Put the CGImage on the layer
[boxLayer setContents:(id)image];
Consider the above sample code segment.
UIColor *Light_Grey is set with an alpha value of 0.5. My question is: is there any way I can set the alpha value of the CGImageRef image?
The reason I ask is that even though the alpha value of boxLayer is 0.5, any image set on top of boxLayer seems to have a default alpha value of 1, which covers up anything lying directly underneath the image.
Hope that somebody knowledgeable on this can help.
It looks like you can make a copy using CGImageCreate and use the decode array to rescale the alpha (e.g. 0.0–0.5):
decode
The decode array for the image. If you do not want to allow remapping of the image's color values, pass NULL for the decode array. For each color component in the image's color space (including the alpha component), a decode array provides a pair of values denoting the upper and lower limits of a range. For example, the decode array for a source image in the RGB color space would contain six entries total, consisting of one pair each for red, green, and blue. When the image is rendered, Quartz uses a linear transform to map the original component value into a relative number within your designated range that is appropriate for the destination color space.
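A sketch of that idea, assuming an RGBA image whose alpha pair comes last in the decode array (layerImage and boxLayer are the question's variables):

CGImageRef original = [layerImage CGImage];
// Remap alpha from [0, 1] to [0, 0.5]; color components pass through
const CGFloat decode[] = { 0.0, 1.0,   // red
                           0.0, 1.0,   // green
                           0.0, 1.0,   // blue
                           0.0, 0.5 }; // alpha capped at 0.5
CGImageRef faded = CGImageCreate(CGImageGetWidth(original),
                                 CGImageGetHeight(original),
                                 CGImageGetBitsPerComponent(original),
                                 CGImageGetBitsPerPixel(original),
                                 CGImageGetBytesPerRow(original),
                                 CGImageGetColorSpace(original),
                                 CGImageGetBitmapInfo(original),
                                 CGImageGetDataProvider(original),
                                 decode, true, kCGRenderingIntentDefault);
[boxLayer setContents:(id)faded];
CGImageRelease(faded); // the layer retains its contents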
I have an array of CGPoints, and I'd like to fill the whole screen with colours, the colour of each pixel depending on the total distance to each of the points in the array. The natural way to do this is to, for each pixel, compute the total distance, and turn that into a colour. Questions follow:
1) How can I colour a single pixel in Quartz? I've been thinking of making 1 by 1 rectangles.
2) Are there better, more efficient ways to achieve this effect?
You don't need to draw it pixel by pixel. You can use radial gradients:
CGPoint points[count];
/* set the points */

CGContextSaveGState(context);
CGContextBeginTransparencyLayer(context, NULL);
CGContextSetAlpha(context, 0.5);
CGContextSetBlendMode(context, kCGBlendModeXOR);
CGContextClipToRect(context, myFrame);

// Big enough to cover the whole frame
CGFloat radius = myFrame.size.height + myFrame.size.width;

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CFArrayRef colors;
const CGFloat *locations;
/* create the colors and locations for the gradient */

// The gradient is the same for every point, so create it once
CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, colors, locations);
for (NSUInteger i = 0; i < count; i++) {
    CGContextDrawRadialGradient(context, gradient, points[i], 0.0, points[i], radius, 0);
}
CGGradientRelease(gradient);
CGColorSpaceRelease(colorSpace);

CGContextSetAlpha(context, 1.0);
CGContextEndTransparencyLayer(context);
CGContextRestoreGState(context);
Most of the code is clear, but here are some points:
kCGBlendModeXOR basically adds the values of the back- and foreground if both have the same alpha and the alpha is not 1.0. You might also be able to use kCGBlendModeColorBurn without needing to play with transparency. Check the reference.
radius is big enough to cover the whole frame. You can set a different value.
Note that locations values should be between 0.0 and 1.0. You need to calibrate your color values depending on the radius.
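For completeness, here is one way to build the colors and locations the snippet above leaves undefined — a simple two-stop grey gradient; the concrete values are assumptions:

CGColorSpaceRef greySpace = CGColorSpaceCreateDeviceGray();
const CGFloat whiteComps[] = { 1.0, 1.0 }; // grey, alpha
const CGFloat blackComps[] = { 0.0, 1.0 };
CGColorRef start = CGColorCreate(greySpace, whiteComps);
CGColorRef end   = CGColorCreate(greySpace, blackComps);
const void *colorValues[] = { start, end };
CFArrayRef colors = CFArrayCreate(NULL, colorValues, 2, &kCFTypeArrayCallBacks);
const CGFloat locations[] = { 0.0, 1.0 }; // each between 0.0 and 1.0
// ... create and draw the gradient as above, then clean up:
CFRelease(colors);
CGColorRelease(start);
CGColorRelease(end);
CGColorSpaceRelease(greySpace);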
This has been asked before:
How do I draw a point using Core Graphics?
In Quartz, filling a 1x1 rectangle would do what you want, but it is certainly not very efficient.
You are better off creating a memory buffer, calculating your point distances, and writing into the array directly within your processing loop. Then, to display the result, create a CGImage from the buffer and render it into your screen context.
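A hedged sketch of that buffer approach (width, height, points, and count are assumed to be defined elsewhere; the distance-to-grey mapping is an arbitrary choice):

// One 8-bit grey component per pixel
size_t bytesPerRow = width;
uint8_t *buffer = calloc(height, bytesPerRow);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        CGFloat total = 0;
        for (NSUInteger i = 0; i < count; i++) {
            CGFloat dx = x - points[i].x, dy = y - points[i].y;
            total += sqrt(dx * dx + dy * dy);
        }
        // Turn the total distance into a grey value however looks best
        buffer[y * bytesPerRow + x] = (uint8_t)MIN(total / 10.0, 255.0);
    }
}
CGColorSpaceRef grey = CGColorSpaceCreateDeviceGray();
CGContextRef bitmap = CGBitmapContextCreate(buffer, width, height, 8,
                                            bytesPerRow, grey, kCGImageAlphaNone);
CGImageRef image = CGBitmapContextCreateImage(bitmap);
// ... render `image` into your screen context, then clean up
CGContextRelease(bitmap);
CGColorSpaceRelease(grey);
CGImageRelease(image);
free(buffer);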