I'm using AVCaptureSession to record video and AVAssetWriterInput to write the video to a file.
My problem is with the video orientation. Using Apple's RosyWriter example, I can transform the AVAssetWriterInput so the video comes out at the right orientation:
    - (CGFloat)angleOffsetFromPortraitOrientationToOrientation:(AVCaptureVideoOrientation)orientation
    {
        CGFloat angle = 0.0;
        switch (orientation) {
            case AVCaptureVideoOrientationPortrait:
                angle = 0.0;
                break;
            case AVCaptureVideoOrientationPortraitUpsideDown:
                angle = M_PI;
                break;
            case AVCaptureVideoOrientationLandscapeRight:
                angle = -M_PI_2;
                break;
            case AVCaptureVideoOrientationLandscapeLeft:
                angle = M_PI_2;
                break;
            default:
                break;
        }
        return angle;
    }

    - (CGAffineTransform)transformFromCurrentVideoOrientationToOrientation:(AVCaptureVideoOrientation)orientation
    {
        // Calculate offsets from an arbitrary reference orientation (portrait)
        CGFloat orientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:orientation];
        CGFloat videoOrientationAngleOffset = [self angleOffsetFromPortraitOrientationToOrientation:self.videoOrientation];

        // Find the difference in angle between the passed-in orientation and the current video orientation
        CGFloat angleOffset = orientationAngleOffset - videoOrientationAngleOffset;
        CGAffineTransform transform = CGAffineTransformMakeRotation(angleOffset);
        return transform;
    }
The problem is with the front camera orientation: this code won't work after the user changes to the front camera.
It seems that what causes the problem is that when I switch to the front camera, the AVCaptureConnection changes and I get a different orientation for the front camera than for the back camera.
So maybe I need to adjust for the difference between the initial orientations of the back and front cameras.
I don't really want to change the connection's orientation every time the user switches cameras, because, as Apple says, it hurts performance (and indeed it looks bad when I do it).
Instead, Apple suggests using the AVAssetWriterInput transform property to change the output orientation, but I'm not sure I can use the transform, because I want to let the user toggle the camera while recording, and I can't change the transform after I start writing (it crashes)...
Any ideas how to solve this?
Since the image from the front camera is mirrored, you will need to mirror the transformation as well. You can do this simply by scaling one of the dimensions by -1. Try playing around with this:

    transform = CGAffineTransformConcat(CGAffineTransformMakeRotation(angle), CGAffineTransformMakeScale(1.0, -1.0));

Note that the possible permutations (besides the angle) are using -1 as the X scale factor instead, and/or swapping the order of the concat.
EDIT: explanation
AVCaptureSession returns image data that does not rotate as your device does; naturally, I believe that would be landscape left or right. If you need your video to have the correct orientation depending on how you are holding the device, you need to apply some transform to it; in most cases it is enough to apply a rotation. However, the front camera is a special case, because it simulates a mirror while you are still receiving non-mirrored frames. As a result, your video ends up upside-down, or left-right flipped, or any other combination, depending on the rotation you apply; hence, as you said, "I tried a lot of angles but I just can't get it right...". Again, to simulate the mirror effect you should scale one of the axes by -1.
EDIT: swapping orientation (from comments)
This is just an idea, but I think it is a good one: do not use the asset transformation at all; do your own. Create a view with the size you want your video to be, then add an image view as a subview with all the transforms, clipping, and content modes you need. From the sample buffers you receive, create images, set them on the image view, and take a layer snapshot of the view. Then send the snapshot to the asset writer.
I know what you are thinking: "overhead". Well, I don't think so. The views and image views do their transformations on the CPU, and so does the asset writer, so all you have done is cancel its internal transformation and use your own system. If you try this, I would very much like to hear about your results.
To get the layer image:
    - (UIImage *)imageFromLayer:(CALayer *)layer
    {
        UIGraphicsBeginImageContext([layer frame].size);
        [layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return outputImage;
    }
Not the easiest way, but I think it will definitely work if you intercept the frames and perform your own transforms. You don't need any UIView for that: AVFoundation gives you direct access to each frame, and you can substitute your own buffer when writing to the file if needed. The only drawback I see is performance; even just flipping an image might be very slow on old devices at high resolutions.
If this does hurt performance, you can save the video as it is and maintain a separate array of orientations, one per frame. After the video is saved, reopen it, apply the transformation to each frame, and save the result to a new file. I have done something similar, and it does work.
On a side note, AVAssetWriterInput's transform property is honored only on iPhone. If, say, you record a video with AVFoundation, upload it somewhere, and watch it in a browser, it will have the wrong orientation. So if you're after a thorough solution, doing your own transformation is the way to go.
If you often do video/image processing (such as the flipping and rotating here), consider the OpenCV library. It takes some time to learn, but it's worth it.
Related
I need to change the size of my images according to their distance from the center of the screen. If an image is close to the middle it should have a scale of 1, and the further it is from the center, the closer its scale should be to zero, by some function.
Since the user is panning the screen, I need a way to change the scale of the images (UIViews). This is not a classic animation where I can define a timed sequence in advance, mostly because of timing issues (due to system performance, I don't know how long the animation will last), so I need to simply change the scale in one step (no timed animations).
That way, every time the function is called during panning, all the images should update easily.
Is there a way to do that?
You can directly apply a CGAffineTransform to your UIImageView, i.e.:

    CGAffineTransform trans = CGAffineTransformMakeScale(1.0, 1.0);
    imageView.transform = trans;

Of course you can change the values, and/or use other CGAffineTransforms; this should get you on your way, though.
Hope it helps!
I'm implementing a basic speedometer by rotating an image. However, when I set the initial rotation (to something like 240 degrees, converted to radians), it rotates the image and makes it much smaller than it otherwise would be. Some values (like M_PI_4) make the image disappear entirely.
The slider goes from 0 to 360 for testing.
The following code is called in viewDidLoad and whenever the slider value changes.
    - (void)updatePointer
    {
        double progress = testSlider.value;
        progress += pointerStart;
        CGAffineTransform rotate = CGAffineTransformMakeRotation((progress * M_PI) / 180.0);
        [pointerImageView setTransform:rotate];
    }
EDIT: It's probably important to note that once the transform is set the first time, the scale remains the same. So if I set pointerStart to 240, the image shrinks, but moving the slider doesn't change the scale (and it rotates as you'd expect). Replacing "progress" with 240 in the transformation does the same thing (shrinks it).
I was able to resolve the issue, for anybody who stumbles across this question. Apparently the image is not fully loaded/measured when viewDidLoad is called, so the matrix transforms that CGAffineTransform does actually altered the size of the image. Moving the update code to viewDidAppear fixed the problem.
Take the current transform of the view you want to rotate and apply the rotation transform to it. Note that CGAffineTransformRotate expects the angle in radians, not degrees:

    CGAffineTransform trans = pointerImageView.transform;
    pointerImageView.transform = CGAffineTransformRotate(trans, 240.0 * M_PI / 180.0);
I have a game in HTML5 that I wish to enclose inside a UIWebView.
I first rotated the view with an affine transform, but it was then off the mark and badly sized. I decided to set the frame to the enclosing view's frame, which was not a good solution either, as others have found. So I followed the concatenation suggestion, and I ran into the curious problem that after rotating and translating, the game displays fine, but as soon as I try to scale, the game starts misbehaving (I get psychedelic colors...), which is of course not the intended result.

    CGAffineTransform rot = CGAffineTransformMakeRotation(M_PI / 2.0);
    CGAffineTransform tran = CGAffineTransformMakeTranslation(0, [self statusBarFrameViewRect:self.view].size.height);
    CGAffineTransform tranAndRot = CGAffineTransformConcat(rot, tran);
    CGAffineTransform scale = CGAffineTransformMakeScale(self.view.frame.size.height, self.view.frame.size.width);
    webView.transform = CGAffineTransformConcat(tranAndRot, scale);
    // [webView setScalesPageToFit:YES]; // this line seems to do nothing useful for me, whether on or off.
    // I hoped it would make the contents of the view size themselves to the scaled view, if needed.

Does anyone have a pointer to what I'm doing that's stupid?
In the end, I never found the root issue, which is probably linked to the tools used by the game designers (Unity).
So my solution was to do the whole UI in landscape mode and drop the rotate/translate part entirely.
I have a deadline soon and a very annoying bug in front of me, with no ideas for how to fix it. The problem is that sometimes the device doesn't know what its orientation is before it has been rotated, and so it messes up the frames in conditional statements like this:
    if (orient == UIInterfaceOrientationPortrait || orient == 0) {
        r = CGRectMake(0.0, 0.0, 320.0, 480.0);
        // draw stuff
    } else {
        r = CGRectMake(0.0, 0.0, 480.0, 320.0);
        // draw stuff
    }
This bug can easily be reproduced if I keep the device flat on the table or in my hands (horizontal position) and launch my application: it draws landscape-style rectangles on a portrait screen.
My questions are: can I somehow get the right orientation in this kind of situation? Unfortunately it can't be reproduced in the simulator, so I'm not sure whether the code I pasted is responsible for this bug, but that is the place where I set the view frames according to orientation. Maybe there's another (proper) way to do that? And maybe I can force the application to stay in portrait mode until a rotation event is fired?
I don't think it is a bug. When the iPhone is sitting on the table, all the gravity is measured along the Z axis. The X and Y axes are used to determine device orientation, but in your scenario no acceleration is measured along them.
EDIT: Further explanations:
This image shows how the axes are oriented on the iPhone:
http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIAcceleration_Class/Art/device_axes.jpg
So when the iPhone is lying on its back, an acceleration of -1g is measured along the Z axis. To receive the measured accelerations, simply register a delegate implementing UIAccelerometerDelegate with the shared UIAccelerometer instance. See the AccelerometerGraph sample for how this is done.
To determine whether the iPhone is sitting on the table, do something like this:

    - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
    {
        // low-pass filter first
        // then choose the orientation
        if (fabs(acceleration.z) >= 0.9) {
            // sitting on the table: choose a new mode or lock the current one
        } else if (fabs(acceleration.y) >= 0.9) {
            // probably portrait mode
        } else if (fabs(acceleration.x) >= 0.9) {
            // probably landscape mode
        }
    }
The accelerometer's precision is 18 mG = 0.018 G, so take that into account.
You also need to isolate yourself from the effects of sudden movements; a simple low-pass filter fits this bill perfectly. The AccelerometerGraph sample shows how to implement such a filter.
Also, pick an update rate suited to your needs; 30 Hz should be okay for your purposes.
The sensor which determines the device's orientation is neither infallible nor intelligent. If you set the phone down on the table and then walk around it, how is it supposed to know that it's upside down?
I want to have a completely upside down interface. I don't mean it should change according to the orientation of the phone. I mean it should be upside down ( UIInterfaceOrientationPortraitUpsideDown ) the whole time. A button should be able to 'right' it again. The same button should return everything to upside down.
What's the best way of achieving this?
Set the transform attribute on the root view's layer. Something like:

    bool upsideDown = ...
    float degrees = upsideDown ? 180.0f : 0.0f;
    NSNumber *radians = [NSNumber numberWithFloat:degrees * M_PI / 180.0f];
    [rootView.layer setValue:radians forKeyPath:@"transform.rotation.z"];

(Of course, you could easily optimize away the little degrees-to-radians formula, since you only ever use 0.0 and pi. It is included just for clarity.)
If you want an upside-down interface, orientation does not matter. In essence, apply a rotation affine transform to the view you use, such as:

    window.transform = CGAffineTransformRotate(window.transform, M_PI);

Some offset will be required if you have a status bar up there. Apply the transform again and you "rotate" it back.
I would look into two options:
Rotating the Canvas 180 degrees
Inverting the accelerometer
Not sure how to accomplish either task, but a quick Google search turned up some results:
http://www.tuaw.com/2007/09/10/iphone-coding-using-the-accelerometer/
iPhone dev - Manually rotate view
http://iphonedevelopmentbits.com/how-to-make-your-iphone-application-accelerometer-rotation-aware/
How can I rotate an image in response to the iPhone's accelerometer?