I want to draw a UIImage with a CGAffineTransform, but I get the wrong output from CGContextConcatCTM.
I have tried the code below:
CGAffineTransform t = CGAffineTransformMake(1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129); // transformation of uiimageview
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawImage(imageContext, dragView.frame, dragView.image.CGImage);
CGContextConcatCTM(imageContext, t);
NSLog(#"\n%#\n%#", NSStringFromCGAffineTransform(t),NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
Output:
[1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129] // imageview transformation
[1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871] // drawn image transformation
CGAffineTransform CGAffineTransformMake (
CGFloat a,
CGFloat b,
CGFloat c,
CGFloat d,
CGFloat tx,
CGFloat ty
);
Parameters b, d, and ty changed. How can I solve this?
There is no problem to solve. Your log output is correct.
Comparing the two matrices, the difference between the two is this:
scale vertically by -1 (which flips the signs of the b and d members)
translate vertically by 768 (which affects the last member)
That 768 is exactly the height you passed to UIGraphicsBeginImageContext, and that's no accident:
A vertical scale and translate is how you would flip a context. This context has gone from lower-left origin to upper-left origin, or vice versa.
That happened before you concatenated your (unexplained, hard-coded) matrix in. Assuming you didn't flip the context yourself, it probably came that way (I would guess as a UIKit implementation detail).
Concatenation (as in CGContextConcatCTM) does not replace the old transformation matrix with the new one; it is matrix multiplication. The matrix you have afterward is the product of both the matrix you started with and the one you concatenated onto it. The resulting matrix is both flipped and then… whatever your matrix does.
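Concretely, if the CTM you started with is the flip F = [1, 0, 0, -1, 0, 768], then concatenating computes t · F, and the members that change work out to:

b' = -b = 1.38952
d' = -d = -1.67822
ty' = 768 - ty = 768 - 209.129 = 558.871

which is exactly the second matrix you logged.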
You can see this for yourself by simply getting the CTM before you concatenate your matrix onto it, and logging that. You should see this:
[1, 0, 0, -1, 0, 768]
See also “The Math Behind the Matrices” in the Quartz 2D Programming Guide.
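As a minimal sketch of that check (reusing the setup and the matrix t from your question; only the extra NSLog before CGContextConcatCTM is new):

UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
// Log the CTM UIKit hands you before touching it.
NSLog(@"base CTM: %@", NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
// Expected: [1, 0, 0, -1, 0, 768], the flip for a 768-point-tall context.
CGContextConcatCTM(imageContext, t);
NSLog(@"after concat: %@", NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
UIGraphicsEndImageContext();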
I'm trying to figure out what all these arguments do. When I draw my bullet image it appears as a solid block instead of a sprite that alternates between solid color and empty portions (i.e. instead of 10101 it's 11111, with the 0s being empty parts of the texture).
Before, I was using batch.draw(texture, float x, float y) and it displayed the texture correctly. However, I was playing around with rotation, and this is the version of draw that seemed most suitable:
batch.draw(texture, x, y, originX, originY, width, height, scaleX, scaleY, rotation, srcX, srcY, srcWidth, srcHeight, flipX, flipY)
I can figure out the obvious ones, those being originX and originY (the location to draw the image from, its upper-left pixel, I believe); however, I then don't know what the x, y coordinates right after texture are for.
scaleX, scaleY, rotation, flipX, and flipY I know what to do with, but what are srcX and srcY, along with srcWidth and srcHeight, for?
Edit: I played around and figured out what srcX, srcY, srcWidth, and srcHeight do. I cannot figure out what originX and originY do, though I'm guessing they are the center point of the image. Since I don't want to play around with this one anyway, should I leave it as 0,0?
What would be common uses for manipulating the centerpoint of images?
Answering the main question:
srcX, srcY, srcWidth, and srcHeight determine which part (rectangle) of the source texture you want to draw. For example, say your source image is 100x100 pixels and you want to draw only the 60x60 part in the middle of it:
batch.draw(texture, x, y, 20, 20, 60, 60);
Answering your edited question:
The origin is the pivot point for rotation and scale transformations. So if you want your sprite to scale and rotate around its center point, you should set the origin values like this:
float originX = width * 0.5f;
float originY = height * 0.5f;
If you don't care about rotation and scaling, you don't need to specify these params (leave them 0).
And keep in mind that the origin does not determine the image's drawing position (this is the most common mistake). That means the next two method calls draw the image at the same position (the fourth and fifth params are originX and originY):
batch.draw(image, x, y, 0, 0, width, height, ...);
batch.draw(image, x, y, 50, 50, width, height, ...);
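To make the full parameter list concrete, here is a hedged sketch of the long overload with the origin at the sprite's center, so rotation spins it in place (the 64x64 texture size, the rotation angle, and the other values are assumptions, not taken from your code):

float width = 64f, height = 64f;
// Pivot at the sprite's center so rotation and scale happen around its middle.
float originX = width * 0.5f;
float originY = height * 0.5f;
batch.draw(texture, x, y, originX, originY, width, height,
        1f, 1f,        // scaleX, scaleY: no scaling
        45f,           // rotation in degrees, counter-clockwise
        0, 0, 64, 64,  // srcX, srcY, srcWidth, srcHeight: the whole 64x64 texture
        false, false); // flipX, flipY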
According to the documentation, the parameters are as defined:
srcX - the x-coordinate in texel space
srcY - the y-coordinate in texel space
srcWidth - the source width in texels
srcHeight - the source height in texels
I'm trying to add perspective to a view by using CATransform3D. Currently, this is what I'm getting:
And this is what I want to get:
I'm having a hard time doing that. I'm completely lost here. Here's my code:
CATransform3D t = CATransform3DIdentity;
t.m11 = 0.8;
t.m21 = 0.1;
t.m31 = -0.1;
t.m41 = 0.1;
[[viewWindow layer] setTransform:t];
Matrix element .m34 is responsible for perspective. It's not discussed much in the documentation, so you'll have to toy with it. This answer talks a little bit about how to use it: https://stackoverflow.com/a/7596326/1228525
To actually see the effects of that matrix you need to do two things:
1. Apply that perspective matrix to the parent view's sublayer transform
2. Rotate the child view (the one on which you want perspective) - otherwise it will remain flat and you won't be able to tell it now has a 3D perspective.
The numbers are arbitrary, make them whatever looks best:
CATransform3D t = CATransform3DIdentity;
t.m34 = .005;
parentView.layer.sublayerTransform = t;
childView.layer.transform = CATransform3DMakeRotation(M_PI_4, 1, 0, 0); // 45 degrees, expressed in radians
The perspective will look different depending on where the child is in the parent view. If the child is in the center of the parent it will be like you are looking at the child view in 3D straight on. The further from the center it is, the more it will be like you are viewing from a glancing angle.
This is what I got using the above code and centering the child view: http://i.stack.imgur.com/BiYCS.png
It's very hard to tell what you're going for based on those pictures; a bit more explanation might be helpful if my answer isn't what you want. From what I can tell from the picture, the bottom one isn't perspective...
I was able to easily achieve the right CATransform3D using AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
// setting anchorPoint to zero
view.layer.anchorPoint = CGPointZero;
view.layer.transform = CATransform3DMakeTranslation(-view.layer.bounds.size.width * .5, -view.layer.bounds.size.height * .5, 0);
// setting a trapezoid transform
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x -= 10; // shift the top-left x-value by 10 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied
I basically have a pie chart with lines coming out of each segment. When a line comes out of the circle to the left and I draw my text there, the text is reversed: "100%" looks like "%001". (Note that the 1 and the % sign are drawn mirrored too, so the little overhang on top of the 1 points to the right rather than the left.)
I tried reading through Apple's docs for the AffineTransform, but it doesn't make complete sense to me. I tried making this transformation matrix to start:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, 0, 0);
This does flip the text around a vertical axis, so the text now looks correct on the left side of the circle. However, the text is now on the line, rather than at the end of the line like it originally was. So I thought I could translate it along the x-axis by changing the tx value in the matrix. So instead of using the above matrix, I used this:
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, -strlen(t1AsChar), 0);
However, the text just stays where it's at. What am I doing wrong? Thanks.
strlen() doesn't give you the size of the rendered text box, it just gives you the length of the string itself (how many characters that string has). If you're using a UITextField you can use textField.frame.size.width instead.
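As a hedged sketch, assuming the label is drawn with NSString's drawing APIs (the label string and font below are placeholders, not from your code), you could measure the rendered width and use it as the translation:

NSString *label = @"100%";
UIFont *font = [UIFont systemFontOfSize:12];
// Measure the rendered width of the text, not its character count.
CGFloat textWidth = [label sizeWithAttributes:@{NSFontAttributeName: font}].width;
// Mirror across a vertical axis, then shift right so the text stays in place.
CGAffineTransform transform1 = CGAffineTransformMake(-1, 0, 0, 1, textWidth, 0);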
I have some 3D models that I render in OpenGL in a 3D space, and I'm having some headaches moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine user input or data from a GPS+compass device), and each event is either a rotation OR a translation.
I've written this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees{
[super startDrawingFrame];
if (degrees != 0)
{
glRotatef(degrees, 0, 0, 1);
}
if (translatedLat != 0)
{
glTranslatef(translatedLat, -translatedLong, 0);
}
[self redrawView];
}
Then in redrawView I'm actually drawing the scene and my models. It is something like:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
MD2Object * mdobj = [models objectAtIndex:i];
glPushMatrix();
double * deltas = calloc(2, sizeof(double));
deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
glTranslatef(deltas[0], -deltas[1], 0);
free(deltas);
[mdobj setupForRenderGL];
[mdobj renderGL];
[mdobj cleanupAfterRenderGL];
glPopMatrix();
}
[super drawView];
The problem arises when translation and rotation events are called one after the other: for example, when I rotate incrementally for some iterations (still around the origin), then translate, and finally rotate again, the last rotation does not occur around the current (translated) position but around the old origin. I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world was given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but instead storing the current values for your transformation internally and then transforming only once; I don't know if this is the behaviour that you want, though.
Pseudocode:
someEvent(lat, long, deg)
{
currentLat += lat;
currentLong += long;
currentDeg += deg;
}
redraw()
{
glClear()
glRotatef(currentDeg, 0, 0, 1);
glTranslatef(currentLat, -currentLong, 0);
... // draw stuff
}
It sounds like there are a couple of things happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹(0) (the origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than you really need to. What you might want to consider instead is using gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it, keep track of where your camera is located (call that vector p), a vector n (for normal; it indicates the direction you're looking), and u (your up vector). It will make things easier for more advanced features if n and u are orthonormal vectors (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = n x u (your 'right' vector), which will be a unit vector orthogonal to the other two. You then 'look at' p+n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate your rotations (I've come to really appreciate the quaternion approach for a variety of reasons).
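A minimal sketch of the gluLookAt approach, assuming p, n, and u are stored as three-element double arrays that your event handler updates:

/* Rebuild the modelview matrix from the stored camera state each frame,
   instead of accumulating glRotatef/glTranslatef calls. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(p[0], p[1], p[2],                      /* eye position */
          p[0] + n[0], p[1] + n[1], p[2] + n[2], /* look at p + n */
          u[0], u[1], u[2]);                     /* up vector */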
I have a custom UIView which is drawn using its -[drawRect:] method.
The problem is that the anti-aliasing acts very weird: horizontal or vertical black lines are drawn very blurry.
If I disable anti-aliasing with CGContextSetAllowsAntialiasing, everything is drawn as expected.
Anti-aliasing:
http://dustlab.com/stuff/antialias.png
No anti-aliasing (which looks like the expected result with AA):
http://dustlab.com/stuff/no_antialias.png
The line width is exactly 1, and all coordinates are integral values.
The same happens if I draw a rectangle using CGContextStrokeRect, but not if I draw exactly the same CGRect with UIRectStroke.
Since a stroke expands an equal amount to both sides, a line of one pixel width must not be placed on an integer coordinate, but at a 0.5-pixel offset.
Calculate correct coordinates for stroked lines like this:
pos = CGPointMake(floorf(pos.x) + 0.5f, floorf(pos.y) + 0.5f); // pos already holds the unaligned point
BTW: Don't cast your values to int and back to float to get rid of the decimal part. There's a function for this in C called floor.
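For example, a minimal sketch inside drawRect: (the rawY value is a hypothetical unaligned coordinate):

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 1.0f);
// Align to a half-pixel boundary so the 1-point stroke covers exactly
// one row of pixels instead of bleeding into two.
CGFloat rawY = 20.3f; // hypothetical unaligned coordinate
CGFloat y = floorf(rawY) + 0.5f;
CGContextMoveToPoint(ctx, 0.5f, y);
CGContextAddLineToPoint(ctx, 99.5f, y);
CGContextStrokePath(ctx);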
In your view frames, you probably have float values that are not integers. While the frames are precise enough to do fractions of a pixel (float), you will get blurriness unless you cast to an int:
CGRect frame = CGRectMake((int)self.frame.origin.x, (int)self.frame.origin.y, (int)self.frame.size.width, (int)self.frame.size.height);