I'm shooting video on an iPhone 4 with the front camera and combining the video with some other media assets. I'd like this video to be in portrait orientation - the default orientation for all video is landscape, and in some circumstances you have to manage this manually.
I'm using AVFoundation, specifically AVAssetExportSession with an AVMutableVideoComposition. Based on the WWDC videos, it's clear that I have to handle 'fixing' the orientation myself when I'm combining videos into a new composition.
So, I've created an AVMutableVideoCompositionLayerInstruction attached to my AVMutableVideoCompositionInstruction and I'm using the setTransform:atTime: method to set a transform designed to rotate the video:
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
CGAffineTransform portraitRotationTransform = CGAffineTransformMakeRotation(degreesToRadians(90.0));
[passThroughLayer setTransform:portraitRotationTransform atTime:kCMTimeZero];
The problem is that when I view the video that is exported, none of the actual contents are on the screen. If I reduce the rotation angle to something like 45 degrees, I can see part of the video on the screen - it's almost as if it's not rotating at the center point. I'm including images below to make it more clear what I'm talking about.
The natural size of the video is coming back as 480x360. I've tried changing that to 360x480 and it doesn't impact the core issue.
0 Degree Rotation:
45 Degree Rotation:
A 90 degree rotation is just all green.
Anyway, I'm hoping someone who has done this before can point me in the right direction. I can't find any docs on some of the more advanced topics in AVFoundation compositions and exports.
Building on what has been answered so far, I found a very good way of debugging and finding out what went wrong with your transforms. Using the ramp methods available, you can animate the transforms, making it easier to see what your transform is doing.
Most of the time I found myself with transforms that appeared to do nothing, until I realised that using the preferredTransform property of a video track on its own may move the video feed out of the render area.
AVMutableVideoCompositionLayerInstruction *videoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[videoLayerInstruction setTransformRampFromStartTransform:CGAffineTransformIdentity
                                            toEndTransform:videoTrack.preferredTransform
                                                 timeRange:CMTimeRangeMake(projectClipStart, projectClipDuration)];
Eventually, I found that in some cases I needed to apply a translation to bring the rotated video back into the render area.
CGAffineTransformConcat(videoTrack.preferredTransform, CGAffineTransformMakeTranslation(0, renderSize.height))
Note: Your translation values may be different.
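To make that concrete, here is a minimal sketch of feeding the corrected transform to the layer instruction above (assuming renderSize is your video composition's render size and that the transform should apply from time zero):
// Sketch: concatenate the track's preferredTransform with the corrective
// translation and hand the result to the layer instruction.
CGAffineTransform correctedTransform =
    CGAffineTransformConcat(videoTrack.preferredTransform,
                            CGAffineTransformMakeTranslation(0, renderSize.height));
[videoLayerInstruction setTransform:correctedTransform atTime:kCMTimeZero];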
Try this:
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
CGAffineTransform rotateTransform = CGAffineTransformMakeRotation(degreesToRadians(90.0));
CGAffineTransform rotateTranslate = CGAffineTransformTranslate(rotateTransform, 320, 0);
[passThroughLayer setTransform:rotateTranslate atTime:kCMTimeZero];
Essentially the idea is to create a rotation and translation matrix. You rotate it to the proper orientation and then translate it into the view. I did not see any way to specify a center point while I was glancing through the API.
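If you want something that behaves like rotation about a center point, you can emulate it by concatenating translate, rotate, translate. A sketch only, assuming the 480x360 track from the question being rotated 90 degrees into a 360x480 render area:
// Sketch: rotate the frame about its center, then land it in the swapped-size render rect.
CGSize natural = videoTrack.naturalSize; // 480 x 360 in the question
CGAffineTransform centered = CGAffineTransformMakeTranslation(-natural.width / 2.f,
                                                              -natural.height / 2.f);
CGAffineTransform rotated = CGAffineTransformConcat(centered,
                                                    CGAffineTransformMakeRotation(M_PI_2));
CGAffineTransform fullTransform =
    CGAffineTransformConcat(rotated,
                            CGAffineTransformMakeTranslation(natural.height / 2.f,
                                                             natural.width / 2.f));
[passThroughLayer setTransform:fullTransform atTime:kCMTimeZero];
The net effect is the same as rotate-then-translate, but it makes the "rotate around the center" intent explicit.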
// Preserve the source track's orientation metadata by copying its
// preferredTransform onto the composition track:
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] lastObject];
AVMutableCompositionTrack *videoCompositionTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:&error];
videoCompositionTrack.preferredTransform = videoAssetTrack.preferredTransform;
This may also be relevant: there is a bug in the preferredTransform that gets set when using the front-facing camera. See here for an example, where the SDAVAssetExportSession project has coded a workaround:
https://github.com/rs/SDAVAssetExportSession/pull/70
I found a solution in the Flutter plugin project.
- (CGAffineTransform)fixTransform:(AVAssetTrack *)videoTrack {
    CGAffineTransform transform = videoTrack.preferredTransform;
    if (transform.tx == 0 && transform.ty == 0) {
        // radiansToDegrees() is a simple helper: (radians * 180.0 / M_PI)
        NSInteger rotationDegrees = (NSInteger)round(radiansToDegrees(atan2(transform.b, transform.a)));
        NSLog(@"TX and TY are 0. Rotation: %ld. Natural width,height: %f, %f", (long)rotationDegrees,
              videoTrack.naturalSize.width, videoTrack.naturalSize.height);
        if (rotationDegrees == 90) {
            NSLog(@"Setting transform tx");
            transform.tx = videoTrack.naturalSize.height;
            transform.ty = 0;
        } else if (rotationDegrees == 270) {
            NSLog(@"Setting transform ty");
            transform.tx = 0;
            transform.ty = videoTrack.naturalSize.width;
        }
    }
    return transform;
}
// set layerInstruction
[firstVideoLayerInstruction setTransform:[self fixTransform:firstVideoAssetTrack] atTime:kCMTimeZero];
Flutter VideoPlayerPlugin
Related
I am merging multiple videos and I want to detect which ones are in portrait mode and rotate them to landscape so that all the movies end up in landscape mode. I have done everything and it works perfectly, except for the actual rotation; I guess it's something to do with the center of the rotation or a composed rotation.
AVMutableVideoCompositionLayerInstruction *videoTrackLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTrack];
if ([[AppDelegate sharedInstance] orientationForTrack:avAsset] == UIDeviceOrientationPortrait)
{
    CGAffineTransform rotation = CGAffineTransformMakeRotation(M_PI/2);
    //CGAffineTransform translateToCenter = CGAffineTransformMakeTranslation(640, 480);
    //CGAffineTransform mixedTransform = CGAffineTransformConcat(rotation, translateToCenter);
    [videoTrackLayerInstruction setTransform:rotation atTime:nextClipStartTime];
}
else if ([[AppDelegate sharedInstance] orientationForTrack:avAsset] == UIDeviceOrientationPortraitUpsideDown)
{
    CGAffineTransform rotation = CGAffineTransformMakeRotation(-M_PI/2);
    [videoTrackLayerInstruction setTransform:rotation atTime:nextClipStartTime];
}
else if ([[AppDelegate sharedInstance] orientationForTrack:avAsset] == UIDeviceOrientationLandscapeLeft)
{
    CGAffineTransform rotation = CGAffineTransformMakeRotation(M_PI);
    [videoTrackLayerInstruction setTransform:rotation atTime:nextClipStartTime];
}
else if ([[AppDelegate sharedInstance] orientationForTrack:avAsset] == UIDeviceOrientationLandscapeRight)
{
    [videoTrackLayerInstruction setTransform:CGAffineTransformIdentity atTime:nextClipStartTime];
}
How can I rotate them properly? I have tried multiple sources, but nothing rotates them as it should. I am not interested in the scale-to-fit-320 solution; I want the video to keep as much resolution as possible before exporting with AVAssetExportSession.
A solution like this:
CGAffineTransform rotateTransform = CGAffineTransformMakeRotation(degreesToRadians(90.0));
CGAffineTransform rotateTranslate = CGAffineTransformTranslate(rotateTransform, 320, 0);
won't suit my needs, as I have already tried it. Do you have any ideas? Any help would be much appreciated.
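For what it's worth, here is a hedged sketch of the composed transform the commented-out lines above were reaching for: rotate 90 degrees, then translate the frame back inside the render rectangle using the track's natural height rather than hard-coded values. This is an untested assumption based on the question, not a confirmed fix.
// Sketch: after a +90 degree rotation the frame spans x in [-naturalHeight, 0],
// so translate it right by the natural height to bring it back on screen.
AVAssetTrack *videoAssetTrack = [[avAsset tracksWithMediaType:AVMediaTypeVideo] lastObject];
CGSize natural = videoAssetTrack.naturalSize;
CGAffineTransform rotation = CGAffineTransformMakeRotation(M_PI_2);
CGAffineTransform mixedTransform =
    CGAffineTransformConcat(rotation, CGAffineTransformMakeTranslation(natural.height, 0.f));
[videoTrackLayerInstruction setTransform:mixedTransform atTime:nextClipStartTime];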
My goal is to use the Google Street View API to display a full-fledged, scrollable panoramic street view image to the user. Basically, the API provides me with many images where I can vary the direction, height, zoom, location, etc. I can retrieve all of these and hope to stitch them together and view the result. The first question is: do you know of any resources that demo a full Google Street View implementation, where a user can swipe around to move the street view, just like that old iOS 5 Maps Street View thing that I am sure we all miss...
If not, I will basically be downloading hundreds of photos that differ in vertical and horizontal direction. Is there a library, API, or method I can use to stitch all these photos together into a big panorama, so the user can swipe to view it on the tiny iPhone screen?
Thanks to everyone!
I threw together a quick implementation to do a lot of this as a demo for you. There are some excellent open source libraries out there that make an amateur version of StreetView very simple. You can check out my demo on GitHub: https://github.com/ocrickard/StreetViewDemo
You can use the heading and pitch parameters from the Google StreetView API to generate tiles. These tiles could be arranged in a UIScrollView as both Bilal and Geraud.ch suggest. However, I really like the JCTiledScrollView because it contains a pretty nice annotation system for adding pins on top of the images like Google does, and its datasource/delegate structure makes for some very straightforward image handling.
The meaty parts of my implementation follow:
- (UIImage *)tiledScrollView:(JCTiledScrollView *)scrollView imageForRow:(NSInteger)row column:(NSInteger)column scale:(NSInteger)scale
{
    float fov = 45.f / scale;
    float heading = fmodf(column * fov, 360.f);
    float pitch = (scale - row) * fov;

    if (lastRequestDate) {
        while (fabsf([lastRequestDate timeIntervalSinceNow]) < 0.1f) {
            // spin until at least 0.1 seconds have passed since the last request (rate limiting)
        }
    }
    lastRequestDate = [NSDate date];

    int resolution = (scale > 1.f) ? 640 : 200;
    NSString *path = [NSString stringWithFormat:@"http://maps.googleapis.com/maps/api/streetview?size=%dx%d&location=40.720032,-73.988354&fov=%f&heading=%f&pitch=%f&sensor=false", resolution, resolution, fov, heading, pitch];

    NSError *error = nil;
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:path] options:0 error:&error];
    if (error) {
        NSLog(@"Error downloading image: %@", error);
    }
    UIImage *image = [UIImage imageWithData:data];

    // Distort image using GPUImage
    {
        // This is where you should try to transform the image. I messed around
        // with the math for awhile, and couldn't get it. Therefore, this is left
        // as an exercise for the reader... :)
        /*
        GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
        GPUImageTransformFilter *stillImageFilter = [[GPUImageTransformFilter alloc] init];
        [stillImageFilter forceProcessingAtSize:image.size];

        // This is actually based on some math, but doesn't work...
        //float xOffset = 200.f;
        //CATransform3D transform = [ViewController rectToQuad:CGRectMake(0, 0, image.size.width, image.size.height) quadTLX:-xOffset quadTLY:0 quadTRX:(image.size.width+xOffset) quadTRY:0.f quadBLX:0.f quadBLY:image.size.height quadBRX:image.size.width quadBRY:image.size.height];
        //[(GPUImageTransformFilter *)stillImageFilter setTransform3D:transform];

        // This is me playing guess and check...
        CATransform3D transform = CATransform3DIdentity;
        transform.m34 = fabsf(pitch) / 60.f * 0.3f;
        transform = CATransform3DRotate(transform, pitch*M_PI/180.f, 1.f, 0.f, 0.f);
        transform = CATransform3DScale(transform, 1.f/cosf(pitch*M_PI/180.f), sinf(pitch*M_PI/180.f) + 1.f, 1.f);
        transform = CATransform3DTranslate(transform, 0.f, 0.1f * sinf(pitch*M_PI/180.f), 0.f);

        [stillImageFilter setTransform3D:transform];
        [stillImageSource addTarget:stillImageFilter];
        [stillImageFilter prepareForImageCapture];
        [stillImageSource processImage];
        image = [stillImageFilter imageFromCurrentlyProcessedOutput];
        */
    }

    return image;
}
Now, in order to get the full 360 degree, infinite scrolling effect Google has, you would have to do some trickery in the observeValueForKeyPath method where you observe the contentOffset of the UIScrollView. I've started implementing this, but did not finish it. The idea is that when the user reaches either the left or right side of the view, the contentOffset property of the scrollView is pushed to the opposite side of the scrollView. If you can get the content to align properly, and you set up the contentSize just right, this should work.
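A rough sketch of that trickery, under the assumption that the scroll view's content is laid out so the second half duplicates the first (the lapWidth name and the duplication layout are mine, not from the demo):
// Sketch: observe contentOffset (registered elsewhere with
// addObserver:forKeyPath:@"contentOffset" ...) and wrap it around when the
// user reaches either horizontal edge, faking an infinite 360-degree scroll.
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if (![keyPath isEqualToString:@"contentOffset"]) {
        return;
    }
    UIScrollView *scrollView = (UIScrollView *)object;
    // Assumes the content is duplicated once, so one "lap" is half the contentSize.
    CGFloat lapWidth = scrollView.contentSize.width / 2.f;
    CGPoint offset = scrollView.contentOffset;
    if (offset.x <= 0.f) {
        offset.x += lapWidth;
        scrollView.contentOffset = offset;
    } else if (offset.x >= lapWidth) {
        offset.x -= lapWidth;
        scrollView.contentOffset = offset;
    }
}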
Finally, I should note that the Google StreetView system has a limit of 10 images/second, so you have to throttle your requests or the IP address of the device will be blacklisted for a certain amount of time (my home internet is now blocked from StreetView requests for the next few hours because I didn't understand this at first).
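If you would rather not spin in a loop like the demo code above, here is a gentler throttling sketch. It assumes this code is not running on the main thread; lastRequestDate is the same ivar used earlier.
// Sketch: sleep just long enough to stay under ~10 requests per second.
static const NSTimeInterval kMinRequestInterval = 0.1;
NSTimeInterval elapsed = lastRequestDate ? -[lastRequestDate timeIntervalSinceNow] : kMinRequestInterval;
if (elapsed < kMinRequestInterval) {
    [NSThread sleepForTimeInterval:kMinRequestInterval - elapsed];
}
lastRequestDate = [NSDate date];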
You need to use a UIScrollView: set its clipsToBounds property to YES, add all the images to it, and set its contentOffset according to the images.
You should use the method described in this post: http://mobiledevelopertips.com/user-interface/creating-circular-and-infinite-uiscrollviews.html
You will have to build some kind of factory to transform the images to the right size for your viewport (iPhone/iPad), and then add some buttons you can tap to go to the next place.
Unfortunately, if you want to go to a globe version (instead of a tube one), I think you'll need to go full OpenGL to display the images on a 3D surface.
I import a video from the iPhone library and apply some effects to that video. The problem is that the video has the wrong orientation (landscape). How can I rotate the video to portrait mode?
Have you tried a transform on the AVPlayerLayer?
self.videoLayer = [AVPlayerLayer playerLayerWithPlayer:self.videoPlayer];
self.videoLayer.transform = CATransform3DMakeRotation(M_PI / 2.0f, 0, 0, 1);
I know that Core Image on iOS 5.0 supports facial detection (another example of this), which gives the overall location of a face, as well as the location of eyes and a mouth within that face.
However, I'd like to refine this location to detect the position of a mouth and teeth within it. My goal is to place a mouth guard over a user's mouth and teeth.
Is there a way to accomplish this on iOS?
I pointed out on my blog that the tutorial has something wrong.
Part 5) Adjust For The Coordinate System: it says you need to change the window's and images' coordinates, but that is exactly what you shouldn't do. You shouldn't change your views/windows (in UIKit coordinates) to match Core Image coordinates as the tutorial does; you should do it the other way around.
This is the part of code relevant to do that:
(You can get the whole sample code from my blog post or directly from here. It contains this and other examples using CIFilters too :D)
// Create the image and detector
CIImage *image = [CIImage imageWithCGImage:imageView.image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:...
                                          options:...];

// CoreImage coordinate system origin is at the bottom left corner and UIKit's
// is at the top left corner. So we need to translate features positions before
// drawing them to screen. In order to do so we make an affine transform
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform,
                                       0, -imageView.bounds.size.height);

// Get features from the image
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *faceFeature in features) {

    // Get the face rect: Convert CoreImage to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(
        faceFeature.bounds, transform);

    // create a UIView using the bounds of the face
    UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

    ...

    if (faceFeature.hasMouthPosition) {
        // Get the mouth position translated to imageView UIKit coordinates
        const CGPoint mouthPos = CGPointApplyAffineTransform(
            faceFeature.mouthPosition, transform);
        ...
    }
}
Once you get the mouth position (mouthPos), you simply place your mouth guard on it or at a certain distance from it.
That distance could be calculated experimentally and should be relative to the triangle formed by the eyes and the mouth. I would use a lot of faces to calibrate this distance if possible (Twitter avatars?).
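To make the "relative to the eye-mouth triangle" idea concrete, here is a rough sketch; mouthGuardView and the 0.8/0.5 proportions are illustrative guesses of mine, not values from the answer.
// Sketch: scale a hypothetical mouthGuardView from the eye-to-eye distance and
// center it on the converted mouth position.
if (faceFeature.hasLeftEyePosition && faceFeature.hasRightEyePosition && faceFeature.hasMouthPosition) {
    CGPoint leftEye  = CGPointApplyAffineTransform(faceFeature.leftEyePosition,  transform);
    CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
    CGFloat eyeDistance = hypot(rightEye.x - leftEye.x, rightEye.y - leftEye.y);

    CGFloat guardWidth  = eyeDistance * 0.8f;  // assumed proportion of eye distance
    CGFloat guardHeight = guardWidth * 0.5f;   // assumed aspect ratio
    mouthGuardView.frame = CGRectMake(mouthPos.x - guardWidth / 2.f,
                                      mouthPos.y - guardHeight / 2.f,
                                      guardWidth, guardHeight);
}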
Hope it helps :)
I am having a problem: when the smokeMoveBy action starts, a small smoke bubble appears on screen in a place other than along the smoke's moving path.
This only happens when I am using scaleX and scaleY.
The smokeLoop method is called every 1 second by the scheduler.
Here, self is a layer.
Any solutions?
My code follows:
CGPoint dummyPosition=ccp(600, 600);
ParticleSystem *smoke = [ParticleSmoke node];
ccColor4F startColor;
startColor.r = 1.f;
startColor.g = 1.f;
startColor.b = 1.f;
startColor.a = 1.f;
[smoke setStartColor:startColor];
ccColor4F endColor;
endColor.r = 0.8f;
endColor.g = 0.8f;
endColor.b = 0.8f;
endColor.a = 1.0f;
[smoke setEndColor:endColor];
[smoke setLife:0.1f];
[smoke setScaleX:0.1f];
[smoke setScaleY:0.2f];
[smoke setStartSize:30.f];
[self addChild:smoke z:2];
[smoke setPosition:dummyPosition];
- (void)smokeLoop {
    id smokeMoveBy = [MoveBy actionWithDuration:durTime position:ccp(0.f, -1.f * 480)];
    id smokeSeq = [Sequence actions:[Place actionWithPosition:smokeInitPosition], smokeMoveBy, nil];
    [smoke runAction:smokeSeq];
}
Not sure if this is your issue, but I had a problem with Cocos2D, scaling, and moving, which I solved by moving the anchorPoint.
What I wanted to do was zoom (scale) and move a layer. Zooming worked fine if the position was {0,0} and the anchor point was {0.5,0.5}. But then if I moved the layer, it would still transform around {0.5,0.5}, which might by then be off screen, so it would scale really strangely.
The solution was to move the anchor point to the middle of the screen every time I moved the layer's position. This was not an easy formula for me to work out, because when I moved the anchor point, the scale operation would get a new center point.
The formula I ended up using was the following:
layer = self.foreground;
layer.anchorPoint = ccpAdd(
    ccpDivide(
        ccpNeg(layer.position),
        (CGPoint){layer.contentSize.width, layer.contentSize.height}),
    (CGPoint){0.5f, 0.5f}
);
Basically: divide the negation of the layer's position (meaning {300,200} becomes {-300,-200}) by the size of the layer {480,320}, and then add {0.5,0.5} (as I want my anchor to always be center + offset).
Anyway, you may need to work out a completely different formula, but this worked for me. I had to apply this to my anchor point every time I moved the layer.
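As a small sketch of "apply this every time I moved the layer" (the method name and the CCLayer type are my assumptions; the ccp helpers are the same ones used above):
// Sketch: move the layer and immediately recompute its anchorPoint with the
// formula above so scaling keeps pivoting around the visible center.
- (void)moveForegroundToPosition:(CGPoint)newPosition
{
    CCLayer *layer = self.foreground;
    layer.position = newPosition;
    layer.anchorPoint = ccpAdd(
        ccpDivide(ccpNeg(layer.position),
                  (CGPoint){layer.contentSize.width, layer.contentSize.height}),
        (CGPoint){0.5f, 0.5f});
}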
Good luck, hope this helps!