Skewing a UIImageView using CGAffineTransform - iPhone

I am trying to skew a rectangle so that the two vertical sides are slanted but parallel while the top and bottom stay horizontal.
I am trying to use CGAffineTransform and have found this code, but I can't figure out what to put in the various parts.
imageView.layer.somethingMagic.imageRightTop = (CGPoint){ 230, 30 };
imageView.layer.somethingMagic.imageRightBottom = (CGPoint){ 300, 150 };
#define CGAffineTransformDistort(t, x, y) (CGAffineTransformConcat(t, CGAffineTransformMake(1, y, x, 1, 0, 0)))
#define CGAffineTransformMakeDistort(x, y) (CGAffineTransformDistort(CGAffineTransformIdentity, x, y))
Although this is said to be easy, I don't know what to put in the different places.
I assume imageView would be the image I want to change, but what goes into somethingMagic, imageRightTop, and imageRightBottom?
Also, how do I define t?
If there is a more thorough explanation, I would appreciate it, since in most cases this was the only explanation I found of how to skew a rectangle.
Thanks

Let's assume you have a variable named imageView holding a reference to your UIImageView.
I wrote a little sample to demonstrate how you could get this behavior. What this code does is create a new CGAffineTransform matrix. This matrix has the same values as the identity transform matrix, with one exception: the value at location [2,1]. This value is controlled by the c parameter of the CGAffineTransformMake function and controls the shearing along the x-axis. You can change the amount of shearing by setting shearValue.
The code:
Objective-C
CGFloat shearValue = 0.3f; // You can change this to anything you want
CGAffineTransform shearTransform = CGAffineTransformMake(1.f, 0.f, shearValue, 1.f, 0.f, 0.f);
[imageView setTransform:shearTransform];
Swift 5
let shearValue = CGFloat(0.3) // You can change this to anything you want
let shearTransform = CGAffineTransform(a: 1, b: 0, c: shearValue, d: 1, tx: 0, ty: 0)
imageView.transform = shearTransform
And here's what the shearTransform matrix looks like:
[1 0 0]
[0.3 1 0]
[0 0 1]
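Since the shear maps (x, y) to (x + c·y, y), horizontal lines stay horizontal while the vertical edges are slanted by atan(c), which is exactly the parallelogram described in the question. If you would rather think in terms of a slant angle, here is a minimal sketch (the 15-degree value and the imageView name are just examples):
import UIKit

let slantAngle: CGFloat = 15 * .pi / 180 // hypothetical slant angle
let shearValue = tan(slantAngle)         // c = tan(angle) slants the vertical edges by that angle
let shearTransform = CGAffineTransform(a: 1, b: 0, c: shearValue, d: 1, tx: 0, ty: 0)
imageView.transform = shearTransform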

Related

SceneKit – Rotate and animate a SCNNode

I'm trying to display a pyramid that points along the z-axis and then rotates on itself around z too.
As my camera is on the z-axis, I'm expecting to see the pyramid from above. I managed to rotate the pyramid to see it this way, but when I add the animation it seems to rotate around multiple axes.
Here is my code:
// The following creates the pyramid and places it how I want
let pyramid = SCNPyramid(width: 1.0, height: 1.0, length: 1.0)
let pyramidNode = SCNNode(geometry: pyramid)
pyramidNode.position = SCNVector3(x: 0, y: 0, z: 0)
pyramidNode.rotation = SCNVector4(x: 1, y: 0, z: 0, w: Float(M_PI / 2))
scene.rootNode.addChildNode(pyramidNode)
// But the animation seems to rotate around 2 axes and not just z
var spin = CABasicAnimation(keyPath: "rotation")
spin.byValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 2*Float(M_PI)))
spin.duration = 3
spin.repeatCount = HUGE
pyramidNode.addAnimation(spin, forKey: "spin around")
Trying to both manually set and animate the same property can cause issues. Using a byValue animation makes the problem worse -- that concatenates to the current transform, so it's harder to keep track of whether the current transform is what the animation expects to start with.
Instead, separate the fixed orientation of the pyramid (its apex is in the -z direction) from the animation (it spins around the axis it points in). There are two good ways to do this:
Make pyramidNode the child of another node that gets the one-time rotation (π/2 around the x-axis), and apply the spin animation directly to pyramidNode. (In this case, the apex of the pyramid will still point in the +y direction of its local space, so you'll want to spin around that axis instead of the z-axis.) A sketch of this approach follows after the list.
Use the pivot property to transform the local space of pyramidNode's contents, and animate pyramidNode relative to its containing space.
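A minimal sketch of the first approach, assuming the same scene setup as the question (the containerNode name is illustrative):
let pyramid = SCNPyramid(width: 1.0, height: 1.0, length: 1.0)
let pyramidNode = SCNNode(geometry: pyramid)
// The container gets the one-time orientation; pyramidNode stays unrotated inside it
let containerNode = SCNNode()
containerNode.rotation = SCNVector4(x: 1, y: 0, z: 0, w: Float(M_PI_2))
containerNode.addChildNode(pyramidNode)
scene.rootNode.addChildNode(containerNode)
// Spin around the local y-axis, since the apex still points toward +y in pyramidNode's space
let spin = CABasicAnimation(keyPath: "rotation")
spin.fromValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 1, z: 0, w: 0))
spin.toValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 1, z: 0, w: Float(2 * M_PI)))
spin.duration = 3
spin.repeatCount = .infinity
pyramidNode.addAnimation(spin, forKey: "spin around")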
Here's some code to show the second approach:
let pyramid = SCNPyramid(width: 1.0, height: 1.0, length: 1.0)
let pyramidNode = SCNNode(geometry: pyramid)
pyramidNode.position = SCNVector3(x: 0, y: 0, z: 0)
// Point the pyramid in the -z direction
pyramidNode.pivot = SCNMatrix4MakeRotation(CGFloat(M_PI_2), 1, 0, 0)
scene.rootNode.addChildNode(pyramidNode)
let spin = CABasicAnimation(keyPath: "rotation")
// Use from-to to explicitly make a full rotation around z
spin.fromValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 0))
spin.toValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: CGFloat(2 * M_PI)))
spin.duration = 3
spin.repeatCount = .infinity
pyramidNode.addAnimation(spin, forKey: "spin around")
Some unrelated changes to improve code quality:
Use CGFloat when explicit conversion is required to initialize an SCNVector component; using Float or Double specifically will break on either 32-bit or 64-bit architectures (CGFloat is Float on 32-bit and Double on 64-bit, so hard-coding one breaks the other).
Use .infinity instead of the legacy BSD math constant HUGE. This type-infers to whatever the type of spin.repeatCount is, and uses a constant value that's defined for all floating-point types.
Use M_PI_2 for π/2 to be pedantic about precision.
Use let instead of var for the animation, since we never assign a different value to spin.
More on the CGFloat error business: in Swift, numeric literals have no type until the expression they appear in needs one. That's why you can write spin.duration = 3 -- even though duration is a floating-point value, Swift lets you pass an "integer literal". But if you write let d = 3; spin.duration = d, you get an error. Why? Because variables and constants have explicit types, and Swift doesn't do implicit type conversion. The 3 is typeless, but when it gets assigned to d, type inference defaults to Int because you haven't specified anything else.
If you're seeing type conversion errors, you probably have code that mixes literals, constants, and/or values returned from functions. You can probably make the errors go away by converting everything in the expression to CGFloat (or whatever type the expression is ultimately passed as). Of course, that'll make your code unreadable and ugly, so once it works you might remove conversions one at a time until you find the one that does the job.
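To illustrate with the animation above (the constant d is hypothetical):
let spin = CABasicAnimation(keyPath: "rotation")
spin.duration = 3                  // fine: the typeless literal 3 becomes a CFTimeInterval
let d = 3                          // type inference picks Int for the constant
// spin.duration = d               // error: 'Int' is not convertible to the duration's type
spin.duration = CFTimeInterval(d)  // an explicit conversion fixes it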
SceneKit includes animation helpers which are much simpler & shorter to use than CAAnimations. This is ObjC but gets across the point:
[pyramidNode runAction:
[SCNAction repeatActionForever:
[SCNAction rotateByX:0 y:0 z:2*M_PI duration:3]]];
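In Swift of the same era as the snippets above, the equivalent would presumably be:
pyramidNode.runAction(SCNAction.repeatActionForever(
    SCNAction.rotateByX(0, y: 0, z: CGFloat(2 * M_PI), duration: 3)))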
I changed byValue to toValue and this worked for me. So change the line...
spin.byValue = NSValue(SCNVector4: SCNVector4(...
Change it to...
spin.toValue = NSValue(SCNVector4: SCNVector4(x: 0, y: 0, z: 1, w: 2 * Float(M_PI)))

Robot arm programming, transformation of coordinate system

I have a project in which I need to make a MATLAB-based simulation of a robotic arm.
Image: http://img845.imageshack.us/img845/4512/l5mx.png
The first part sits at the origin and can rotate around the Z axis of the world coordinate system. This joint is called joint1. The next joint, joint2, is displaced 0.8 m in the Z direction of the coordinate system of part 1; it rotates around the Y axis of the coordinate system of part 2. Joint3 is displaced 0.6 m in the Z direction of the coordinate system of part 2; it rotates around the Y axis of the coordinate system of part 3. The end of part 3 is displaced 0.7 m in the Z direction of the coordinate system of part 3.
Now, let's try to write the matrices for this. I'm quite sure I'm doing something wrong with these. The coordinates will be in homogeneous form, so v = [v, 1].
T_Wto1 = [cos(alpha(1)), -sin(alpha(1)), 0, 0;
          sin(alpha(1)),  cos(alpha(1)), 0, 0;
          0,              0,             1, 0.8;
          0,              0,             0, 1];

T_1to2 = [cos(alpha(2)),  0, sin(alpha(2)), 0;
          0,              1, 0,             0;
          -sin(alpha(2)), 0, cos(alpha(2)), 0.6;
          0,              0, 0,             1];

T_2to3 = [cos(alpha(3)),  0, sin(alpha(3)), 0;
          0,              1, 0,             0;
          -sin(alpha(3)), 0, cos(alpha(3)), 0.7;
          0,              0, 0,             1];
For alpha(1) = 0, alpha(2) = alpha(3) = pi/2:
First of all, if I use p1 = T_Wto1*[0,0,0,1]', I get [0,0,0.8,1]', so far so good. Then T_1to2*[0,0,0.8,1]' gives [0.8,0,0.6,1]' (it is now displaced 0.8 in the X direction, which is really 0.8 in the Z direction, because of the rotation). Now say that I want to transform this back to world coordinates. It should give [0.6,0,0.8], but I'm unsure how to do that. If you just take the inverse of the matrix T_Wto2 (the product of T_Wto1 and T_1to2), you just get the origin [0,0,0,1] back. What are you supposed to do to get back into world coordinates again?
Also, are the transformation matrices correct?
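For reference, a minimal sketch of how such a chain composes, assuming (as the names suggest) that T_Wto1 and T_1to2 map child-frame coordinates into their parent frame:
alpha = [0, pi/2, pi/2];
% ... T_Wto1, T_1to2, T_2to3 as defined above ...
T_Wto2 = T_Wto1 * T_1to2;         % frame-2 coordinates -> world coordinates
p_world = T_Wto2 * [0; 0; 0; 1];  % origin of frame 2, expressed in world coordinates
p_frame2 = T_Wto2 \ p_world;      % the inverse goes the other way, back to [0;0;0;1]
This is consistent with the observation above: applying the inverse of T_Wto2 to the world position of frame 2's origin necessarily returns [0,0,0,1], because the inverse maps world coordinates into frame 2, not into the world.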

Matlab gradient equivalent in OpenCV

I am trying to migrate some code from MATLAB to OpenCV and need an exact replica of the gradient function. I have tried the cv::Sobel function, but for some reason the values in the resulting cv::Mat are not the same as the values in the MATLAB version. I need the X and Y gradients in separate matrices for further calculations.
Any workaround that could achieve this would be great.
Sobel can only compute the second derivative of the image pixel, which is not what we want:
(f(i+1,j) + f(i-1,j) - 2f(i,j)) / 2
What we want is
(f(i+1,j) - f(i-1,j)) / 2
So we need to apply
Mat fx, fy;
Mat kernelx = (Mat_<float>(1,3) << -0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3,1) << -0.5, 0, 0.5);
filter2D(src, fx, -1, kernelx);
filter2D(src, fy, -1, kernely);
MATLAB treats border pixels differently from inner pixels, so the code above is wrong at the border values. One can use BORDER_CONSTANT to extend the border with a constant number; unfortunately, the constant OpenCV uses is -1 and cannot be changed to 0 (which is what we want).
As for the border values, I do not have a very neat answer. Just try to compute the first derivative by hand...
You have to call Sobel 2 times, with arguments:
xorder = 1, yorder = 0
and
xorder = 0, yorder = 1
You have to select the appropriate kernel size.
See documentation
It might still be that the MATLAB implementation is different; ideally you should find out which kernel was used there...
Edit:
If you need to specify your own kernel, you can use the more generic filter2D. Your destination depth will be CV_16S (16-bit signed).
MATLAB computes the gradient differently for interior rows and border rows (the same is true for the columns, of course). At the borders, it is a simple forward difference gradY(1) = row(2) - row(1). The gradient for interior rows is computed by the central difference gradY(2) = (row(3) - row(1)) / 2.
I think you cannot achieve the same result with just running a single convolution filter over the whole matrix in OpenCV. Use cv::Sobel() with ksize = 1, then treat the borders (either manually or by applying a [ 1 -1 ] filter).
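A sketch of that suggestion, assuming src holds the input image (the scale factor and border mode are my choices, meant to mimic MATLAB's central difference):
cv::Mat fx;
// ksize = 1 selects the plain [-1 0 1] kernel (no smoothing); scale = 0.5 gives the central difference
cv::Sobel(src, fx, CV_32F, 1, 0, 1, 0.5, 0, cv::BORDER_REPLICATE);
// The first and last columns still need the forward/backward difference applied by hand,
// e.g. by doubling them as in the filter2D-based answer below.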
Pei's answer is partly correct. MATLAB uses these calculations for the borders:
G(:,1) = A(:,2) - A(:,1);
G(:,N) = A(:,N) - A(:,N-1);
so I used the following OpenCV code to complete the gradient:
static cv::Mat kernelx = (cv::Mat_<double>(1, 3) << -0.5, 0, 0.5);
static cv::Mat kernely = (cv::Mat_<double>(3, 1) << -0.5, 0, 0.5);
cv::Mat fx, fy;
cv::filter2D(Image, fx, -1, kernelx, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
cv::filter2D(Image, fy, -1, kernely, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);
fx.col(fx.cols - 1) *= 2;
fx.col(0) *= 2;
fy.row(fy.rows - 1) *= 2;
fy.row(0) *= 2;
Jorrit's answer is partly correct.
In some cases the value of the directional derivative may be negative. MATLAB will retain these negative numbers, but an OpenCV Mat with an unsigned depth (for example, when the destination depth -1 inherits CV_8U from the source) saturates the negative values to 0.
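One way around that, assuming an 8-bit source image, is to request a signed or floating-point destination depth so the negative values survive:
cv::Mat fx;
cv::Mat kernelx = (cv::Mat_<double>(1, 3) << -0.5, 0, 0.5);
// A CV_32F destination keeps signed derivative values instead of saturating them to 0
cv::filter2D(Image, fx, CV_32F, kernelx, cv::Point(-1, -1), 0, cv::BORDER_REPLICATE);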

Draw image with CGAffineTransform and CGContextDrawImage

I want to draw a UIImage with a CGAffineTransform, but it gives the wrong output with CGContextConcatCTM.
I have tried the code below:
CGAffineTransform t = CGAffineTransformMake(1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129); // transformation of uiimageview
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
CGContextRef imageContext = UIGraphicsGetCurrentContext();
CGContextDrawImage(imageContext, dragView.frame, dragView.image.CGImage);
CGContextConcatCTM(imageContext, t);
NSLog(#"\n%#\n%#", NSStringFromCGAffineTransform(t),NSStringFromCGAffineTransform(CGContextGetCTM(imageContext)));
Output :
[1.67822, -1.38952, 1.38952, 1.67822, 278.684, 209.129] // imageview transformation
[1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871] // drawn image transformation
CGAffineTransform CGAffineTransformMake (
    CGFloat a,
    CGFloat b,
    CGFloat c,
    CGFloat d,
    CGFloat tx,
    CGFloat ty
);
Parameters b, d, and ty changed. How do I solve this?
There is no problem to solve. Your log output is correct.
Comparing the two matrices, the difference between them is this:
scale vertically by -1 (which affects two of the first four members)
translate vertically by 349.742 (which affects the last member)
At first glance the 349.742 looks weird, since you set the context's height to 768. But remember that these numbers come from multiplying two matrices, not adding them: the vertical flip negates ty before the translations combine, so the resulting ty is 768 - 209.129 = 558.871. The pre-existing CTM flips the context over its full 768-point height. With that cleared up, what follows is still true:
A vertical scale and translate is how you would flip a context. This context has gone from lower-left origin to upper-left origin, or vice versa.
That happened before you concatenated your (unexplained, hard-coded) matrix in. Assuming you didn't flip the context yourself, it probably came that way (I would guess as a UIKit implementation detail).
Concatenation (as in CGContextConcatCTM) does not replace the old transformation matrix with the new one; it is matrix multiplication. The matrix you have afterward is the product of both the matrix you started with and the one you concatenated onto it. The resulting matrix is both flipped and then… whatever your matrix does.
You can see this for yourself by simply getting the CTM before you concatenate your matrix onto it, and logging that. You should see this:
[1, 0, 0, -1, 0, 768]
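A quick check in Swift, using the values from the log, that multiplying your matrix onto this flip reproduces the logged CTM:
import CoreGraphics

let t = CGAffineTransform(a: 1.67822, b: -1.38952, c: 1.38952, d: 1.67822, tx: 278.684, ty: 209.129)
let flip = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: 768) // UIKit-style flip of a 768-point-tall context
let combined = t.concatenating(flip) // what CGContextConcatCTM leaves in the context
// combined is (1.67822, 1.38952, 1.38952, -1.67822, 278.684, 558.871), matching the log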
See also “The Math Behind the Matrices” in the Quartz 2D Programming Guide.

Laying out images in UIScrollView automatically

I have a list of images retrieved from XML. I want to populate them into a UIScrollView in an order such that it will look like this:
1 2 3
4 5 6
7 8 9
10
If there are only 10 images, it will just stop there.
Right now my current code is this:
for (int i = 3; i < [appDelegate.ZensaiALLitems count] - 1; i++) {
    UIButton *zenbutton2 = [UIButton buttonWithType:UIButtonTypeCustom];
    Items *ZensaiPLUitems = [appDelegate.ZensaiALLitems objectAtIndex:i];
    NSURL *ZensaiimageSmallURL = [NSURL URLWithString:ZensaiPLUitems.ZensaiimageSmallURL];
    NSLog(@"FVGFVEFV: %@", ZensaiPLUitems.ZensaiimageSmallURL);
    NSData *simageData = [NSData dataWithContentsOfURL:ZensaiimageSmallURL];
    UIImage *itemSmallimage = [UIImage imageWithData:simageData];
    [zenbutton2 setImage:itemSmallimage forState:UIControlStateNormal];
    zenbutton2.frame = CGRectMake((i * 110 + i * 110) - 660, 300, 200, 250);
    [zenbutton2 addTarget:self action:@selector(ShowNextZensaiPage) forControlEvents:UIControlEventTouchUpInside];
    [scrollView addSubview:zenbutton2];
}
Notice the CGRectMake: I have to manually assign fixed values to position them.
Is there any way to lay them out without assigning positions manually?
For example, the images would automatically move down to the next row once the current row has 3 images, and so on for the rest.
If I understand what you are saying, you should be able to write a simple block of code that assigns a position based on the image number.
Something like this (where imageNumber is the image number, starting from 0):
- (CGPoint)getImageOrigin:(NSInteger)imageNumber {
    CGFloat leftInset = 30;
    CGFloat xOffsetBetweenOrigins = 80;
    CGFloat topInset = 20;
    CGFloat yOffsetBetweenOrigins = 80;
    int numPerRow = 3;
    CGFloat x = leftInset + (xOffsetBetweenOrigins * (imageNumber % numPerRow));
    CGFloat y = topInset + (yOffsetBetweenOrigins * floorf(imageNumber / numPerRow));
    CGPoint imageOrigin = CGPointMake(x, y);
    return imageOrigin;
}
The origin being calculated here is the upper left corner of each image.
To calculate the x value, I start with the minimum distance from the left side of the screen (leftInset). Then, I add the distance from the left side of one image to the next image, multiplied by the column (imageNumber % numPerRow).
Y is calculated in a similar fashion, but to calculate the row I use imageNumber / numPerRow, rounded down.
Edit:
You asked me to explain further, so I'll see what I can do.
OK, so I want to be able to input the image number (starting at 0) into my function, and I want the origin (upper left corner point) back.
leftInset is the distance between the left edge of the view, and the left edge of the first image.
xOffsetBetweenOrigins is the distance from the left edge of one image to the left edge of the next image on the same row. So, if I set it to 80 and my image is 50px wide, there will be a 30px gap between two images in the same row.
topInset is like left inset. It is the distance from the top edge of the view to the top edge of the images in the top row.
yOffsetBetweenOrigins is the distance from the top edge of an image to the top edge of the image below it. If I set this to 80, and my image is 50px tall, then there will be a 30px vertical gap between rows.
numPerRow is straightforward. It is just the number of images per row.
To calculate the x value of the upper left corner of the image, I always start with the leftInset, because it is constant. If I am on the first image of a row, that will be the entire x value. If I am on the second image of the row, I need to add xOffsetBetweenOrigins once, and if I am on the third, I need to add it twice.
To do this, I use the modulus (%) operator. It gives me the remainder of a division operation, so when I say imageNumber % numPerRow, I am asking for the remainder of imageNumber/numPerRow.
If I am on the first image (imageNumber = 0), then 3 goes into 0 zero times and the remainder is 0, so I add nothing. If I am on the second image (imageNumber = 1), then I have 1/3: 3 goes into 1 zero times, but the remainder is 1, so I get xOffsetBetweenOrigins*1.
For the y value, I do something similar, but instead of taking the modulus, I simply divide imageNumber/numPerRow and round down. Doing this, I will get 0 for 0, 1, and 2. I will get 1 for 3, 4, and 5.
Edit:
It occurred to me that you might actually have been asking how to use this method. In your code, you would say something like
CGRect newFrame = zenbutton2.frame;
newFrame.origin = [self getImageOrigin:i];
zenbutton2.frame = newFrame;
Another Edit:
Maybe you could try this?
CGPoint origin = [self getImageOrigin:i];
zenbutton2.frame = CGRectMake(origin.x, origin.y, width, height);
If that doesn't work, throw in
NSLog("Origin Values: %f,%f", origin.x, origin.y);
to make sure that you are actually getting something back from getImageOrigin.
I think you probably want to wrap your loop in another loop, to get what I'm going to call a 2D loop:
for (int row = 0; row < num_rows; row++) {
    for (int col = 0; col < num_cols; col++) {
        // code before
        zenButton2.frame = CGRectMake(col * width,   // origin x depends on the column
                                      row * height,  // origin y depends on the row
                                      width,
                                      height);
        // code after
    }
}
The x and y passed to CGRectMake() are multiples of your image's width and height, scaled by the column and row respectively. Hope that makes sense.