I want a completely upside-down interface. I don't mean it should change according to the orientation of the phone; I mean it should be upside down (UIInterfaceOrientationPortraitUpsideDown) the whole time. A button should be able to 'right' it again, and the same button should return everything to upside down.
What's the best way of achieving this?
Set the transform attribute on the root view's layer. Something like:
BOOL upsideDown = ...;
float degrees = upsideDown ? 180.0f : 0.0f;
NSNumber *radians = [NSNumber numberWithFloat:degrees * M_PI / 180.0f];
[rootView.layer setValue:radians forKeyPath:@"transform.rotation.z"];
(Of course, you could easily optimize away the degrees-to-radians conversion, since you only ever use 0.0 and pi. I included it just for clarity.)
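For the button in the question, a minimal toggle sketch (assuming a rootView outlet and an upsideDown ivar; the names are illustrative):

- (IBAction)flipInterface:(id)sender
{
    // Flip the flag, then apply the matching rotation to the root layer.
    upsideDown = !upsideDown;
    float degrees = upsideDown ? 180.0f : 0.0f;
    NSNumber *radians = [NSNumber numberWithFloat:degrees * M_PI / 180.0f];
    [rootView.layer setValue:radians forKeyPath:@"transform.rotation.z"];
}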
If you want an upside-down interface, orientation does not matter. In essence, apply a rotation affine transform to the view you use, such as:
window.transform = CGAffineTransformRotate(window.transform, M_PI);
Some offset will be required if you have a status bar visible. Apply the transform again and you "rotate" it back.
I would look into two options:
Rotating the Canvas 180 degrees
Inverting the accelerometer
I'm not sure exactly how to accomplish either task, but a quick Google search turned up some results:
http://www.tuaw.com/2007/09/10/iphone-coding-using-the-accelerometer/
iPhone dev - Manually rotate view
http://iphonedevelopmentbits.com/how-to-make-your-iphone-application-accelerometer-rotation-aware/
How can I rotate an image in response to the iPhone's accelerometer?
Related
I'm using AVCaptureSession to record video, and AVAssetWriterInput to write the video to a file.
My problem is with the video orientation. Using Apple's RosyWriter example, I can transform the AVAssetWriterInput so the video comes out at the right orientation.
- (CGFloat)angleOffsetFromPortraitOrientationToOrientation:(AVCaptureVideoOrientation)orientation
{
    CGFloat angle = 0.0;
    switch (orientation) {
        case AVCaptureVideoOrientationPortrait:
            angle = 0.0;
            break;
        case AVCaptureVideoOrientationPortraitUpsideDown:
            angle = M_PI;
            break;
        case AVCaptureVideoOrientationLandscapeRight:
            angle = -M_PI_2;
            break;
        case AVCaptureVideoOrientationLandscapeLeft:
            angle = M_PI_2;
            break;
        default:
            break;
    }
    return angle;
}
- (CGAffineTransform)transformFromCurrentVideoOrientationToOrientation:(AVCaptureVideoOrientation)orientation
{
    CGAffineTransform transform = CGAffineTransformIdentity;

    // Calculate offsets from an arbitrary reference orientation (portrait)
    CGFloat orientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:orientation];
    CGFloat videoOrientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:self.videoOrientation];

    // Find the difference in angle between the passed-in orientation and the current video orientation
    CGFloat angleOffset = orientationAngleOffset - videoOrientationAngleOffset;
    transform = CGAffineTransformMakeRotation(angleOffset);

    return transform;
}
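For reference, the resulting transform is applied to the writer input once, before writing starts (a sketch; assetWriterInput is whatever AVAssetWriterInput you append video samples to):

// Set the orientation metadata on the writer input before startWriting.
assetWriterInput.transform =
    [self transformFromCurrentVideoOrientationToOrientation:AVCaptureVideoOrientationPortrait];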
The problem is with the front camera's orientation: this code won't work after the user switches to the front camera.
It seems that what causes the problem is that when I switch to the front camera, the AVCaptureConnection changes, and I get different orientations for the front and back cameras.
So maybe I need to adjust for the difference between the initial orientations of the back and front cameras.
I don't really want to change the connection's orientation every time the user switches cameras, because Apple says it hurts performance (and indeed it looks bad when I do it).
Instead, Apple suggests using the AVAssetWriterInput transform property to change the output orientation. But I'm not sure I can use the transform, because I want to let the user toggle the camera while recording, and I can't change the transform after I start writing (it crashes)...
Any ideas how to solve this?
Since the image from the front camera is mirrored, you will need to mirror the transformation as well. You can do this by simply scaling one of the dimensions by -1. Try playing around with this:
transform = CGAffineTransformConcat(CGAffineTransformMakeRotation(angle), CGAffineTransformMakeScale(1.0, -1.0));
Note that the possible permutations (besides the angle) are scaling the X axis by -1 instead, and/or swapping the order of the concatenation.
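Put together, a sketch of a helper along those lines (the mirrored parameter is illustrative, and it reuses angleOffsetFromPortraitOrientationToOrientation: from the question):

- (CGAffineTransform)transformForOrientation:(AVCaptureVideoOrientation)orientation
                                    mirrored:(BOOL)mirrored
{
    // Rotation, computed as in the question's code
    CGFloat angle = [self angleOffsetFromPortraitOrientationToOrientation:orientation];
    CGAffineTransform transform = CGAffineTransformMakeRotation(angle);
    if (mirrored) {
        // Undo the front camera's mirroring by flipping one axis.
        // If the result is flipped the wrong way, scale X by -1 instead,
        // or swap the order of the concatenation.
        transform = CGAffineTransformConcat(transform,
                                            CGAffineTransformMakeScale(1.0, -1.0));
    }
    return transform;
}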
EDIT: explanation
AVCaptureSession returns image data that does not rotate as your device does; naturally, I believe that would be landscape left or right. If you need your video to have the correct orientation depending on how you are holding the device, you need to apply some transform to it; in most cases it is enough to apply a rotation. However, the front camera is a special case, because it simulates a mirror while you are still receiving non-mirrored frames. As a result, your video appears upside down, or mirrored left-to-right, or some other combination, depending on which rotation you apply; hence, as you said, "I tried a lot of angles but I just can't get it right...". Again, to simulate the mirror effect you should scale one of the axes by -1.
EDIT: swapping orientation (from comments)
This is just an idea, and I think it is a good one: do not use the asset writer's transformation at all; do your own. Create a view with the size you want your video to be, then add an image view as a subview with all the transforms, clipping, and content modes you need. From the sample buffers you receive, create images, set them on the image view, and take a layer snapshot of the view. Then send the snapshot to the asset writer.
I know what you are thinking: "overhead". Well, I don't think so. The views and image views do their transformations on the CPU, just as the asset writer does, so all you have done is cancel its internal transformation and use your own system. If you try this, I would very much like to hear about your results.
To get the layer image:
- (UIImage *)imageFromLayer:(CALayer *)layer
{
    UIGraphicsBeginImageContext(layer.frame.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
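To feed that pipeline you need a UIImage per captured frame. A sketch of the usual conversion, assuming the video data output is configured for kCVPixelFormatType_32BGRA:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // BGRA pixel data -> CGImage -> UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}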
Not the easiest way, but I think it will definitely work if you intercept the frames and perform your own transforms. You don't need any UIView for that: AVFoundation gives you direct access to its frames, and you can manipulate them (or substitute your own buffer when writing to the file, if needed). The only drawback I see is performance. Even just flipping an image might be very slow on old devices at high resolutions.
If this does hurt performance, you can save the video as-is and maintain a separate array of orientations, one per frame. After the video is saved, reopen it, perform the transformation for each frame, and save the result to a new file. I have done something similar, and it does work.
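A sketch of the bookkeeping half of that approach (frameOrientations and currentOrientation are hypothetical properties you would maintain yourself):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Remember which orientation was active for this frame's timestamp,
    // so the second pass knows how to rotate it.
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [self.frameOrientations addObject:
        [NSArray arrayWithObjects:[NSValue valueWithCMTime:timestamp],
                                  [NSNumber numberWithInteger:self.currentOrientation], nil]];
    // ... append sampleBuffer to the asset writer input as usual ...
}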
On a side note, AVAssetWriterInput's transform property works only on iPhone. If, say, you record a video with AVFoundation, upload it somewhere, and watch it in a browser, it will have the wrong orientation. So if you're after a thorough solution, doing your own transformation is the way to go.
If you often do video/image processing (e.g. in this case, flipping / rotating), consider the OpenCV library. It takes some time to learn, but it's worth it.
I have a UIImageView that is set to move up and down the screen with the value of the accelerometer, using the following code:
ship.center = CGPointMake(ship.center.x, ship.center.y+shipPosition.y);
Where shipPosition is a CGPoint set in the accelerometerDidAccelerate method using:
shipPosition.y = acceleration.x*60;
Obviously this works fine; it is very simple. I run into trouble when I try to do something equally simple: varying the rotation of the image depending on its acceleration. I do this using:
ship.transform = CGAffineTransformMakeRotation(shipPosition.y);
For some reason this causes a very strange thing to happen: the image snaps back to its origin every time the main method is called. I can see frames where the image moves to where it should be, but then it instantly snaps back.
This problem only happens when the rotation line is in; with it commented out, everything works fine. I have no idea what is going on here; I have done this many times in different apps and I never had such a problem. In fact, I copied this code from a different app I created, where it works fine.
EDIT:
What really confuses me is that when I change the angle of the rotation from the acceleration to the position of the ship, using:
ship.transform = CGAffineTransformMakeRotation(ship.center.y/10);
When I do this, the ship actually rotates based on the accelerometer but does not move, which is crazy, because a changing ship.center.y means the position of the ship is changing, but it's not!
You should set the transform of your view back to CGAffineTransformIdentity before you set its center coordinates or frame, and only after that apply the new transformation.
The frame property returns the transformed coordinates of a view if it has a transform applied, not the original coordinates.
Quote from the docs:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
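In practice that means resetting the transform before touching geometry, roughly like this (a sketch based on the question's code):

// Undo any rotation before doing geometry math, then re-apply it.
ship.transform = CGAffineTransformIdentity;
ship.center = CGPointMake(ship.center.x, ship.center.y + shipPosition.y);
ship.transform = CGAffineTransformMakeRotation(shipPosition.y);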
Update/Actual Answer:
Well the actual problem is
shipPosition.y = acceleration.x*60;
since you set the y position in accelerometerDidAccelerate:.
The acceleration won't remember its old value: if you move your device the reading peaks, and as you slow down it decays again.
Your ship will sit at ±60 at the highest acceleration, but once you stop moving your device, shipPosition.y falls back to 0.
CGAffineTransformMakeRotation expects the angle in radians, not degrees.
radians = degrees * M_PI / 180.0 (so 1 degree is M_PI / 180.0 radians)
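For the code in the question, that means something like (a sketch):

// Treat the accelerometer-derived value as degrees and convert.
CGFloat radians = shipPosition.y * M_PI / 180.0;
ship.transform = CGAffineTransformMakeRotation(radians);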
I'm implementing a basic speedometer using an image and rotating it. However, when I set the initial rotation (to something like 240 degrees, converted to radians), it rotates the image and makes it much smaller than it otherwise would be. Some values (like M_PI_4) make the image disappear entirely.
The slider goes from 0 to 360 for testing.
The following code is called in viewDidLoad and whenever the slider value changes:
- (void)updatePointer
{
    double progress = testSlider.value;
    progress += pointerStart;
    CGAffineTransform rotate = CGAffineTransformMakeRotation((progress * M_PI) / 180.0);
    [pointerImageView setTransform:rotate];
}
EDIT: It's probably important to note that once the transform is set the first time, the scale remains the same. So if I set pointerStart to 240, the image shrinks, but moving the slider doesn't change the scale (and it rotates as you'd expect). Replacing "progress" with 240 in the transformation does the same thing (shrinks it).
For anybody who stumbles across this question: I was able to resolve the issue. Apparently the image is not fully loaded/measured when viewDidLoad is called, so the CGAffineTransform matrix operations actually altered the size of the image. Moving the update code to viewDidAppear fixed the problem.
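In other words (a sketch):

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    // The view's geometry is final here, so the rotation no longer
    // distorts the image's size.
    [self updatePointer];
}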
Take the current transform of the view you want to rotate and apply the rotation transform on top of it. Note that CGAffineTransformRotate takes radians, so convert from degrees first:
CGAffineTransform trans = pointerImageView.transform;
pointerImageView.transform = CGAffineTransformRotate(trans, 240.0 * M_PI / 180.0);
I have a UIView that I want to always be facing up. Say you have a UIImageView with an arrow: no matter which way the device is held, the arrow should point up. Obviously I need the accelerometer, but telling it to rotate the image based on the coordinates will only work the first time, I think, since rotations are relative. Is there an easier way of doing this? I feel like there should be a simple example app somewhere that does something like this.
Apple has sample code which does exactly what you want; have a look at this: http://developer.apple.com/library/ios/#samplecode/WhichWayIsUp/Introduction/Intro.html
To receive specific motion data, use the shared instance of UIAccelerometer and its delegate. http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIAccelerometerDelegate_Protocol/UIAccelerometerDelegate/UIAccelerometerDelegate.html#//apple_ref/occ/intf/UIAccelerometerDelegate
As far as the rotations go, I'm not sure. Have a look at the "BubbleLevel" sample code here: http://developer.apple.com/library/ios/samplecode/BubbleLevel/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007331
The accelerometer delegate documentation references that and other relevant sample code.
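Setting up the shared accelerometer is only a few lines (a sketch using the API from those docs):

UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
accelerometer.updateInterval = 1.0 / 30.0;   // 30 Hz is plenty for UI rotation
accelerometer.delegate = self;               // self implements UIAccelerometerDelegate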
WhichWayIsUp is the sample application you're looking for - in iOS Reference Library (great docs!)
Transforms applied to a view aren’t cumulative. Rather, the view starts out being aligned to the coordinate system of its parent, and the transform is applied relative to that.
As other answers have alluded to, there are enough pieces in Apple’s BubbleLevel sample code to accomplish what you’re looking for. -[LevelViewController accelerometer:didAccelerate:] already contains an example of how to extract the smoothed screen-relative rotation component of the gravity vector:
// Use a basic low-pass filter to only keep the gravity in the accelerometer values for the X and Y axes
accelerationX = acceleration.x * kFilteringFactor + accelerationX * (1.0 - kFilteringFactor);
accelerationY = acceleration.y * kFilteringFactor + accelerationY * (1.0 - kFilteringFactor);
// keep the raw reading, to use during calibrations
currentRawReading = atan2(accelerationY, accelerationX);
Given the coordinate systems of UIAcceleration and atan2, there are a few factors you’ll need to keep in mind to calculate your view’s desired rotation:
When the device is upright in portrait, atan2 will return -π/2 (-90°), since the input gravity vector is {0.0, -1.0, 0.0}.
CGAffineTransformMakeRotation creates a transform that will rotate clockwise (on iOS) by the specified number of radians, but the raw angle from atan2 specifies a counterclockwise rotation.
Accordingly, you’ll want to rotate your view by -(currentRawReading + π/2), assuming its parent view is oriented with the screen in portrait:
view.transform = CGAffineTransformMakeRotation(-(currentRawReading + M_PI/2));
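Putting the filter and the rotation together inside the delegate callback (a sketch; arrowView and kFilteringFactor are assumed, as in BubbleLevel):

- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration
{
    // Low-pass filter, as in BubbleLevel, to keep only the gravity component
    accelerationX = acceleration.x * kFilteringFactor + accelerationX * (1.0 - kFilteringFactor);
    accelerationY = acceleration.y * kFilteringFactor + accelerationY * (1.0 - kFilteringFactor);
    CGFloat reading = atan2(accelerationY, accelerationX);

    // -π/2 offset: atan2 reads -90° when the device is upright in portrait
    arrowView.transform = CGAffineTransformMakeRotation(-(reading + M_PI / 2));
}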
I have a deadline soon, a very annoying bug in front of me, and no ideas how to fix it. The problem is that sometimes the device doesn't know its orientation before it has been rotated, and so it messes up the frames in conditional statements:
if (orient == UIInterfaceOrientationPortrait || orient == 0) {
    r = CGRectMake(0.0, 0.0, 320.0, 480.0);
    // draw stuff
} else {
    r = CGRectMake(0.0, 0.0, 480.0, 320.0);
    // draw stuff
}
This bug can easily be reproduced if I keep the device on the table or flat in my hands (horizontal position) and launch my application: it draws landscape-like rectangles on a portrait screen.
My questions are: can I somehow get the right orientation in this kind of situation? Too bad it cannot be reproduced in the simulator, so I'm not sure the code I pasted is responsible for this bug, but that is the place where I fix the view frames according to orientation. Maybe there's another (proper) way to do that? And maybe I can force the application to stay in portrait mode until a rotation event is fired?
I don't think it is a bug. When the iPhone is sitting on the table, all of the gravity is measured along the Z axis. The X and Y axes are used to determine device orientation, but in your scenario no acceleration is measured along them.
EDIT: Further explanations:
This image shows how the axes are oriented on the iPhone:
(Image: the device axes, from http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIAcceleration_Class/Art/device_axes.jpg)
So when the iPhone is lying on its back, an acceleration of -1 g is applied along the z axis. To receive the measured accelerations, simply register a delegate implementing UIAccelerometerDelegate with the UIAccelerometer shared instance. See the AccelerometerGraph sample for how this is done.
To determine if the iphone sits on the table do something like this:
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // low-pass filter first, then choose an orientation
    if (fabs([acceleration z]) >= 0.9) {
        // sitting on the table: choose a new mode or lock the current one
    }
    if (fabs([acceleration y]) >= 0.9) {
        // probably portrait mode
    }
    if (fabs([acceleration x]) >= 0.9) {
        // probably landscape mode
    }
}
The accelerometer's precision is 18 mG = 0.018 G, so take that into account.
You also need to isolate yourself from the effects of sudden movement; a simple low-pass filter fits the bill perfectly. The AccelerometerGraph sample shows how to implement such a filter.
Also, you don't need a very high update rate for this; 30 Hz should be okay for your purposes.
The sensor which determines the device's orientation is neither infallible nor intelligent. If you set the phone down on the table and then walk around it, how is it supposed to know that it's upside down?