Why does affine scaling a UIWebView give strange results? - iphone

I have an HTML5 game I wish to enclose inside a UIWebView.
I first rotated the view with an affine transform, but it was then off the mark and badly sized. I then tried setting the frame to the enclosing view's frame, which was not a good solution either, as others have found. So I followed the concatenation suggestion, and ran into a curious problem: after rotating and translating, the game displays fine, but as soon as I add scaling, the game starts misbehaving (I get psychedelic colors...), which is of course not the intended result.
CGAffineTransform rot = CGAffineTransformMakeRotation( M_PI/2.0);
CGAffineTransform tran = CGAffineTransformMakeTranslation(0, [self statusBarFrameViewRect:self.view].size.height );
CGAffineTransform tranAndRot = CGAffineTransformConcat(rot, tran);
CGAffineTransform scale = CGAffineTransformMakeScale(self.view.frame.size.height, self.view.frame.size.width);
webView.transform = CGAffineTransformConcat(tranAndRot, scale);
// [webView setScalesPageToFit:YES]; // this line seems to do strictly nothing useful to me, whether on or off.
// I hoped it would help the view's contents size themselves to the scaled view, if needed.
Does anyone have a pointer to what I'm doing that's stupid?

In the end, I never found the core issue, which is probably linked to the tools used by the game designers (Unity).
Hence my solution was to do the whole UI in landscape mode and drop the rotate/translate part entirely.
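For the record, a minimal sketch of the approach that avoids the issue in the code above: rotate the view and then size it via bounds and center, rather than feeding point dimensions into CGAffineTransformMakeScale (which expects scale factors around 1.0, not sizes in points). This assumes the web view should simply fill a portrait root view rotated into landscape; the status bar offset from the question is left out.
// Rotate 90 degrees and make the web view fill the root view in landscape.
// bounds/center are used instead of frame, because frame is undefined once
// a non-identity transform is set.
webView.transform = CGAffineTransformMakeRotation(M_PI / 2.0);
webView.bounds = CGRectMake(0, 0,
                            self.view.bounds.size.height,   // landscape width
                            self.view.bounds.size.width);   // landscape height
webView.center = CGPointMake(CGRectGetMidX(self.view.bounds),
                             CGRectGetMidY(self.view.bounds));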


AVCaptureSession front camera orientation

I'm using AVCaptureSession to record video, and AVAssetWriterInput to write the video to a file.
My problem is with the video orientation. Using Apple's RosyWriter example, I can transform the AVAssetWriterInput so the video comes out at the right orientation:
- (CGFloat)angleOffsetFromPortraitOrientationToOrientation:(AVCaptureVideoOrientation)orientation
{
    CGFloat angle = 0.0;
    switch (orientation) {
        case AVCaptureVideoOrientationPortrait:
            angle = 0.0;
            break;
        case AVCaptureVideoOrientationPortraitUpsideDown:
            angle = M_PI;
            break;
        case AVCaptureVideoOrientationLandscapeRight:
            angle = -M_PI_2;
            break;
        case AVCaptureVideoOrientationLandscapeLeft:
            angle = M_PI_2;
            break;
        default:
            break;
    }
    return angle;
}
- (CGAffineTransform)transformFromCurrentVideoOrientationToOrientation:(AVCaptureVideoOrientation)orientation
{
    CGAffineTransform transform = CGAffineTransformIdentity;
    // Calculate offsets from an arbitrary reference orientation (portrait)
    CGFloat orientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:orientation];
    CGFloat videoOrientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:self.videoOrientation];
    // Find the difference in angle between the passed-in orientation and the current video orientation
    CGFloat angleOffset = orientationAngleOffset - videoOrientationAngleOffset;
    transform = CGAffineTransformMakeRotation(angleOffset);
    return transform;
}
The problem is with the front camera orientation: this code won't work after the user switches to the front camera.
It seems like the cause of the problem is that when I change to the front camera, the AVCaptureConnection changes and I get a different orientation for the front camera than for the back camera.
So maybe I need to adjust for the difference between the initial orientations of the back and front cameras.
I don't really want to change the connection's orientation every time the user switches cameras, because as Apple says it hurts performance (and indeed it looks bad when I do it).
Instead, Apple suggests using the AVAssetWriterInput transform property to change the output orientation, but I'm not sure I can use the transform, because I want to let the user toggle the camera while recording and I can't change the transform after I start writing (it crashes)...
Any ideas how to solve this?
Since the image from the front camera is mirrored, you will need to mirror the transform as well. This is done by simply scaling one of the dimensions by -1. Try playing around with this:
transform = CGAffineTransformConcat(CGAffineTransformMakeRotation(angle), CGAffineTransformMakeScale(1.0, -1.0));
Note that the possible permutations (besides the angle) are using -1 as the X scale factor instead, and/or swapping the order of the concat.
EDIT: explanation
AVCaptureSession returns image data that does not rotate as your device does; naturally, I believe that is landscape left or right. If you need your video to have the correct orientation depending on how you are holding the device, you need to apply some transform to it, and in most cases it is enough to apply just a rotation. However, the front camera is a special case because it simulates a mirror while you are still receiving non-mirrored frames. As a result, your video comes out upside-down or flipped left-to-right, or any of the other combinations, depending on what rotation you apply; hence, as you said, "I tried a lot of angles but I just can't get it right...". Again, to simulate the mirror effect you should scale one of the axes by -1.
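For illustration, a minimal sketch of picking the transform based on which camera is active; isUsingFrontCamera, angle, and writerInput are placeholder names, not part of the question's code:
// Rotate to the target orientation; for the front camera also mirror
// one axis so the recorded video matches the mirrored preview.
CGAffineTransform transform = CGAffineTransformMakeRotation(angle);
if (isUsingFrontCamera) {
    transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(1.0, -1.0));
}
writerInput.transform = transform;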
EDIT: swapping orientation (from comments)
This is just an idea, but I think it is a good one: do not use the asset transform at all, do your own. Create a view with the size you want your video to be, then add an image view as a subview with all the transforms, clipping, and content modes you need. From the samples you receive, create images, set them on the image view, and take a layer snapshot of the view. Then send the snapshot to the asset writer.
I know what you are thinking here: "overhead". Well, I don't think so. The views and image views do their transforms on the CPU, and so does the asset writer, so all you have done is cancel its internal transform and use your own system. If you try this I would very much like to hear about your results.
To get the layer image:
- (UIImage *)imageFromLayer:(CALayer *)layer {
    UIGraphicsBeginImageContext([layer frame].size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
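A rough usage sketch of this idea; renderView (the container) and renderImageView (the transformed subview) are hypothetical names, and converting the sample buffer to a UIImage and the snapshot back to a pixel buffer is left out:
// For each incoming frame: push the image through the transformed
// image view, then snapshot the container's layer.
renderImageView.image = frameImage;   // UIImage built from the current sample buffer
[renderView layoutIfNeeded];
UIImage *snapshot = [self imageFromLayer:renderView.layer];
// 'snapshot' can then be appended via an AVAssetWriterInputPixelBufferAdaptor.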
Not the easiest way, but I think it will definitely work if you intercept each frame and perform your own transforms. You don't need any UIView for that - AVFoundation gives you direct access to the frames, and you can substitute your own buffer when writing to the file, if needed. The only drawback I see is performance: even just flipping an image might be very slow on old devices and at high resolution.
If this does hurt performance, you can save the video as it is and maintain a separate array of orientations for each frame. After the video is saved, reopen it, perform transformations for each frame, and save it to a new file. I have done something similar and it does work.
On a side note, AVAssetWriterInput's transform property works only on iPhone. If, say, you record a video with AVFoundation, upload it somewhere, and watch it in a browser, it will have the wrong orientation. So if you're after a thorough solution, your own transformation is the way to go.
If you often do video/image processing (e.g. in this case, flipping / rotating), consider the OpenCV library. It takes some time to learn, but it's worth it.

Strange behaviour in UIView Animation

I am facing a strange behavior in UIView animation. I am developing an iPad application which is using some UIView animations.
The duration of all animations is set to 0.5. Initially, on launching the application, all animations work fine. But after some continuous use no animation happens; all UIView changes happen instantly, as if the duration were not set at all.
I am not sure why this is happening. Has anyone else faced this kind of issue?
Following is one of the animations I am using. I use a lot of animations like this, but after some time none of them animates, even though all the code inside the animation blocks still executes fine:
[UIView animateWithDuration:0.5 animations:^{
    [tempLabel setFont:titleFont.font];
    [tempLabel setTransform:CGAffineTransformMakeRotation((-90 * M_PI / 180))];
    tempLabel.frame = CGRectMake(2, 0, 23, 30);
}];
From the UIView class reference, the frame property:
Changes to this property can be animated. However, if the transform property contains a non-identity transform, the value of the frame property is undefined and should not be modified. In that case, you can reposition the view using the center property and adjust the size using the bounds property instead.
Don't animate the frame when you've also set a transform. Use bounds and center instead.
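For instance, a minimal rework of the block from the question using bounds and center instead of frame (the original CGRectMake(2, 0, 23, 30) is reinterpreted as a size plus a center point, so treat the exact values as placeholders):
[UIView animateWithDuration:0.5 animations:^{
    [tempLabel setFont:titleFont.font];
    tempLabel.transform = CGAffineTransformMakeRotation(-90 * M_PI / 180);
    // Size and position via bounds/center; frame is undefined once the
    // transform is non-identity.
    tempLabel.bounds = CGRectMake(0, 0, 23, 30);
    tempLabel.center = CGPointMake(2 + 23 / 2.0, 0 + 30 / 2.0);
}];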
The problem seems to be that you are transforming the control every time. When you apply a transform for the first time there is no existing transform on the control, but the next time there already is one, and in that case the next transform may not work as you want. You have a few options to solve this, as follows (I haven't tried the code; it might work):
tempLabel.transform = CGAffineTransformIdentity;
Put the above line before applying any transform; this will reset any transform already applied, so your new transform is applied fresh. Or, you can do:
tempLabel.transform = CGAffineTransformConcat(tempLabel.transform, YOUR TRANSFORM);
The above will append your new transform to the last one.
Hope it helps.
Thanks.

Changing UIView's transformation/rotation/scale WITHOUT animation

I need to change the size of my images according to their distance from the center of the screen. If an image is close to the middle, it should have a scale of 1, and the further it is from the center, the closer its scale gets to zero, according to some function.
Since the user is panning the screen, I need a way to change the scale of the images (UIViews), but this is not a classic animation where I can define an animation sequence exactly, mostly because of timing issues (due to system performance, I don't know how long the animation will last), so I simply need to change the scale in one step (no timed animations).
That way, every time the function gets called while panning, all images should update immediately.
Is there a way to do that ?
You can apply a CGAffineTransform directly to your UIImageView, i.e.:
CGAffineTransform trans = CGAffineTransformMakeScale(1.0,1.0);
imageView.transform = trans;
Of course you can change the values and/or use other CGAffineTransform functions; this should get you on your way, though.
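As a sketch of the distance-based scaling the question describes (the imageViews array and the linear falloff are assumptions), note that setting the transform outside an animation block applies it immediately, with no implicit animation:
// Called from the pan handler; scales each image view by its distance
// from the horizontal center of the screen.
CGFloat screenCenterX = CGRectGetMidX(self.view.bounds);
CGFloat maxDistance = self.view.bounds.size.width / 2.0;
for (UIImageView *imageView in imageViews) {
    CGFloat distance = fabs(imageView.center.x - screenCenterX);
    CGFloat scale = MAX(0.05, 1.0 - distance / maxDistance);  // clamp above zero so the transform stays invertible
    imageView.transform = CGAffineTransformMakeScale(scale, scale);
}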
Hope it helps !

Loading UIView transform & center from settings gives different position

I'm using pan, pinch, and rotate UIGestureRecognizers to allow the user to put certain UI elements exactly where they want them. Using the code from here http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more (or similar code from here http://mobile.tutsplus.com/tutorials/iphone/uigesturerecognizer/) both give me what I need for the user to place these UI elements as they desire.
When the user exits "UI layout" mode, I save the UIView's transform and center like so:
NSString *transformString = NSStringFromCGAffineTransform(self.transform);
[[NSUserDefaults standardUserDefaults] setObject:transformString forKey:@"UItransform"];
NSString *centerString = NSStringFromCGPoint(self.center);
[[NSUserDefaults standardUserDefaults] setObject:centerString forKey:@"UIcenter"];
When I reload the app, I read the UIView's transform and center like so:
NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIcenter"];
if( centerString != nil )
    self.center = CGPointFromString(centerString);
NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UItransform"];
if( transformString != nil )
    self.transform = CGAffineTransformFromString(transformString);
And the UIView ends up rotated and scaled correctly, but in the wrong place. Further, upon entering "UI layout" mode again, I can't always grab the view with the various gestures (as though the view as displayed is not the view as understood by the gesture recognizer?)
I also have a reset button that sets the UIView's transform to the identity and its center to whatever it is when it loads from the NIB. But after loading the altered UIView center and transform, even the reset doesn't work. The UIView's position is wrong.
My first thought was that since those gesture code examples alter center, that rotations must be happening around different centers (assuming some unpredictable sequence of moves, rotations, and scales). As I don't want to save the entire sequence of edits (though that might be handy if I want to have some undo feature in the layout mode), I altered the UIPanGestureRecognizer handler to use the transform to move it. Once I got that working, I figured just saving the transform would get me the current location and orientation, regardless of in what order things happened. But no such luck. I still get a wacky position this way.
So I'm at a loss. If a UIView has been moved and rotated to a new position, how can I save that location and orientation in a way that I can load it later and get the UIView back to where it should be?
Apologies in advance if I didn't tag this right or didn't lay it out correctly or committed some other stackoverflow sin. It's the first time I've posted here.
EDIT
I'm trying the two suggestions so far. I think they're effectively the same thing (one suggests saving the frame and the other suggests saving the origin, which I think is the frame.origin).
So now the save/load from prefs code includes the following.
Save:
NSString *originString = NSStringFromCGPoint(self.frame.origin);
[[NSUserDefaults standardUserDefaults] setObject:originString forKey:@"UIorigin"];
Load (before loading the transform):
NSString *originString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIorigin"];
if( originString ) {
    CGPoint origin = CGPointFromString(originString);
    self.frame = CGRectMake(origin.x, origin.y, self.frame.size.width, self.frame.size.height);
}
I get the same (or similar - it's hard to tell) result. In fact, I added a button to just reload the prefs, and once the view is rotated, that "reload" button will move the UIView by some offset repeatedly (as though the frame or transform are relative to itself - which I'm sure is a clue, but I'm not sure what it's pointing to).
EDIT #2
This makes me wonder about depending on the view's frame. From Apple http://developer.apple.com/library/ios/#documentation/WindowsViews/Conceptual/ViewPG_iPhoneOS/WindowsandViews/WindowsandViews.html#//apple_ref/doc/uid/TP40009503-CH2-SW6 (emphasis mine):
The value in the center property is always valid, even if scaling or rotation factors have been added to the view’s transform. The same is not true for the value in the frame property, which is considered invalid if the view’s transform is not equal to the identity transform.
EDIT #3
Okay, so when I'm loading the prefs in, everything looks fine. The UI panel's bounds rect is {{0, 0}, {506, 254}}. At the end of my VC's viewDidLoad method, all still seems okay. But by the time things actually are displayed, bounds is something else. For example: {{0, 0}, {488.321, 435.981}} (which looks like how big it is within its superview once rotated and scaled). If I reset bounds to what it's supposed to be, it moves back into place.
It's easy enough to reset the bounds to what they're supposed to be programmatically, but I'm actually not sure when to do it! I would've thought to do it at the end of viewDidLoad, but bounds is still correct at that point.
EDIT #4
I tried capturing self.bounds in initWithCoder (as it's coming from a NIB), and then in layoutSubviews, resetting self.bounds to that captured CGRect. And that works.
But it seems horribly hacky and fraught with peril. This can't really be the right way to do this. (Can it?) skram's answer below seems so straightforward, but doesn't work for me when the app reloads.
You would save the frame property as well. You can use NSStringFromCGRect() and CGRectFromString().
When loading, set the frame then apply your transform. This is how I do it in one of my apps.
Hope this helps.
UPDATE: In my case, I have Draggable UIViews that rotation and resizing can be applied to. I use NSCoding to save and load my objects, example below.
// encoding
....
[coder encodeCGRect:self.frame forKey:@"rect"];
// you can save with NSStringFromCGRect(self.frame);
[coder encodeObject:NSStringFromCGAffineTransform(self.transform) forKey:@"savedTransform"];
// init-coder
CGRect frame = [coder decodeCGRectForKey:@"rect"];
// you can use frame = CGRectFromString(/*load string*/);
[self setFrame:frame];
self.transform = CGAffineTransformFromString([coder decodeObjectForKey:@"savedTransform"]);
What this does is save my frame and transform, and load them when needed. The same method can be applied with NSStringFromCGRect() and CGRectFromString().
UPDATE 2: In your case, you would do something like this:
[self setFrame:CGRectFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"UIFrame"])];
self.transform = CGAffineTransformFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"transform"]);
Assuming you're saving to NSUserDefaults with the UIFrame and transform keys.
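A matching save-side sketch for those keys (an assumption on my part, not from the answer): since later answers point out that frame is undefined while a non-identity transform is set, the transform is dropped before reading the frame and restored afterwards.
CGAffineTransform savedTransform = self.transform;
self.transform = CGAffineTransformIdentity;   // frame is well-defined again
[[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGRect(self.frame) forKey:@"UIFrame"];
[[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGAffineTransform(savedTransform) forKey:@"transform"];
self.transform = savedTransform;              // put the transform back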
I am having trouble reproducing your issue. I have used the following code, which does the following:
Adds a view
Moves it by changing the centre
Scales it with a transform
Rotates it with another transform, concatenated onto the first
Saves the transform and centre to strings
Adds another view and applies the centre and transform from the string
This results in two views in exactly the same place and position:
- (void)viewDidLoad
{
    [super viewDidLoad];

    UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    view1.layer.borderWidth = 5.0;
    view1.layer.borderColor = [UIColor blackColor].CGColor;
    [self.view addSubview:view1];

    view1.center = CGPointMake(150, 150);
    view1.transform = CGAffineTransformMakeScale(1.3, 1.3);
    view1.transform = CGAffineTransformRotate(view1.transform, 0.5);

    NSString *savedCentre = NSStringFromCGPoint(view1.center);
    NSString *savedTransform = NSStringFromCGAffineTransform(view1.transform);

    UIView *view2 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    view2.layer.borderWidth = 2.0;
    view2.layer.borderColor = [UIColor greenColor].CGColor;
    [self.view addSubview:view2];

    view2.center = CGPointFromString(savedCentre);
    view2.transform = CGAffineTransformFromString(savedTransform);
}
This ties up with what I would expect from the documentation, in that all transforms happen around the centre point and so that is never affected. The only way I can imagine that you're not able to restore items to their previous state is if somehow the superview was different, either with its own transform or a different frame, or a different view altogether. But I can't tell that from your question.
In summary, the original code in your question ought to be working, so there is something else going on! Hopefully this answer will help you narrow it down.
You should also save the UIView's location:
CGPoint position = CGPointMake(self.view.frame.origin.x, self.view.frame.origin.y);
NSString *positionString = NSStringFromCGPoint(position);
// Do the saving
I'm not sure of everything that's going on, but here are some ideas that may help.
1- skram's solution seems plausible, but it's the bounds you want to save, not the frame. (Note that, if there's been no rotation, the center and bounds define the frame. So, setting the two is the same as setting the frame.)
From the View Programming Guide for iOS you linked to:
Important: If a view’s transform property is not the identity transform, the value of that view’s frame property is undefined and must be ignored. When applying transforms to a view, you must use the view’s bounds and center properties to get the size and position of the view. The frame rectangles of any subviews are still valid because they are relative to the view’s bounds.
2- Another idea. When you reload the app, you could try the following:
First, set the view's transform to the identity transform.
Then, set the view's bounds and center to the saved values.
Finally, set the view's transform to the saved transform.
Depending on where your app is restarting, it may be starting back up with some of the old geometry. I really don't think this will change anything, but it's easy enough to try.
Update: After some testing, it really does seem like this wouldn't have any effect. Changing the transform does not seem to change the bounds or center (although it does change the frame.)
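For completeness, a minimal sketch of that restore order; the UIcenter/UItransform keys come from the question, while UIbounds is an assumed extra key for the saved bounds:
// Restore in a safe order: identity transform, then bounds and center,
// then the saved transform.
self.transform = CGAffineTransformIdentity;
NSString *boundsString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIbounds"];
if( boundsString != nil )
    self.bounds = CGRectFromString(boundsString);
NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIcenter"];
if( centerString != nil )
    self.center = CGPointFromString(centerString);
NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UItransform"];
if( transformString != nil )
    self.transform = CGAffineTransformFromString(transformString);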
3- Lastly, you may save some trouble by rewriting the pinch gesture recognizer to operate on the bounds rather than the transform. (Again, use bounds, not frame, because an earlier rotation could have rendered the frame invalid.) In this way, the transform is used only for rotations, which, I think, cannot be done any other way without redrawing.
From the same guide, Apple's recommendation is:
You typically modify the transform property of a view when you want to implement animations. For example, you could use this property to create an animation of your view rotating around its center point. You would not use this property to make permanent changes to your view, such as modifying its position or size a view within its superview’s coordinate space. For that type of change, you should modify the frame rectangle of your view instead.
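Following up on point 3, a rough sketch of a pinch handler that scales the bounds instead of the transform (the recognizer wiring is assumed):
- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    // Scale the bounds directly so the transform stays reserved for rotation.
    CGRect bounds = recognizer.view.bounds;
    bounds.size.width *= recognizer.scale;
    bounds.size.height *= recognizer.scale;
    recognizer.view.bounds = bounds;
    recognizer.scale = 1.0;   // reset so each callback applies an incremental scale
}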
Thanks to all who contributed answers! The sum of them all led me to the following:
The trouble seems to have been that the bounds CGRect was being reset after loading the transform from preferences at startup, but not when updating the preferences while modifying in real time.
I think there are two solutions. One would be to first load the preferences from layoutSubviews instead of from viewDidLoad. Nothing seems to happen to bounds after layoutSubviews is called.
For other reasons in my app, however, it's more convenient to load the preferences from the view controller's viewDidLoad. So the solution I'm using is this:
// UserTransformableView.h
@interface UserTransformableView : UIView {
    CGRect defaultBounds;
}
@end

// UserTransformableView.m
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if( self ) {
        defaultBounds = self.bounds;
    }
    return self;
}

- (void)layoutSubviews {
    [super layoutSubviews];
    self.bounds = defaultBounds;
}

CGAffineTransformMakeRotation scales the image

I'm implementing a basic speedometer using an image and rotating it. However, when I set the initial rotation (to something like 240 degrees, converted to radians), it rotates the image and makes it much smaller than it otherwise would be. Some values (like M_PI_4) make the image disappear entirely.
The slider goes from 0 to 360 for testing.
The following code is called in viewDidLoad, and when the slider value is changed.
- (void)updatePointer
{
    double progress = testSlider.value;
    progress += pointerStart;
    CGAffineTransform rotate = CGAffineTransformMakeRotation((progress * M_PI) / 180);
    [pointerImageView setTransform:rotate];
}
EDIT: Probably important to note that once it gets set the first time, the scale remains the same. So, if I were to set pointerStart to 240, it would shrink, but moving the slider wouldn't change the scale (and it would rotate it as you'd suspect) Replacing "progress" with 240 in the transformation does the same thing. (shrinks it.)
I was able to resolve the issue, for anybody who stumbles across this question. Apparently the image is not fully loaded/measured when viewDidLoad is called, so the matrix transform that CGAffineTransform applies actually altered the size of the image. Moving the update code to viewDidAppear fixed the problem.
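In other words, something along these lines (a sketch; updatePointer is the method from the question):
// Apply the initial rotation only after the view (and the image view's
// size) has been fully laid out.
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [self updatePointer];   // sets the rotation from testSlider.value + pointerStart
}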
Take the transform state of the view which you want to rotate and then apply the rotation transform to it.
CGAffineTransform trans = pointerImageView.transform;
pointerImageView.transform = CGAffineTransformRotate(trans, 240 * M_PI / 180);   // the angle must be in radians