iOS application similar to iOS SpringBoard behavior - iPhone

I need to create an application which looks similar to the iOS SpringBoard.
I need to display different profile pictures arranged in rows and columns, similar to the image below. Remember, I will display pictures and not applications.
I have already created a UIScrollView which displays images like a SpringBoard.
First, I need to make them clickable (so they would probably be buttons, or images with interactions).
My main problem is that I need to implement a behavior where I can hold/touch an image/icon for some amount of time, move it to another location, and swap it with the image where I dragged it (just like when you're arranging your icons on the SpringBoard).
I will need to implement this; any advice? I mean, does Apple have native classes for this, or do I need to code everything myself? I already tried searching but I'm having a hard time.

iOS SpringBoard example
// SpringBoard-style wiggle for a cell. deleteButton, contentView and the
// randomInterval(_:variance:) helper are properties/utilities of the cell class.
func startWiggling() {
    deleteButton.isHidden = false

    guard contentView.layer.animation(forKey: "wiggle") == nil else { return }
    guard contentView.layer.animation(forKey: "bounce") == nil else { return }

    let angle = 0.04

    let wiggle = CAKeyframeAnimation(keyPath: "transform.rotation.z")
    wiggle.values = [-angle, angle]
    wiggle.autoreverses = true
    wiggle.duration = randomInterval(0.1, variance: 0.025)
    wiggle.repeatCount = Float.infinity
    contentView.layer.add(wiggle, forKey: "wiggle")

    let bounce = CAKeyframeAnimation(keyPath: "transform.translation.y")
    bounce.values = [4.0, 0.0]
    bounce.autoreverses = true
    bounce.duration = randomInterval(0.12, variance: 0.025)
    bounce.repeatCount = Float.infinity
    contentView.layer.add(bounce, forKey: "bounce")
}

func stopWiggling() {
    deleteButton.isHidden = true
    contentView.layer.removeAllAnimations()
}
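For the drag-to-rearrange part, Apple does provide native support: since iOS 9, UICollectionView offers interactive reordering driven by a long-press gesture, so a collection view is usually a better fit than a bare UIScrollView. Here is a minimal sketch; ProfileGridViewController, the "ProfileCell" identifier and the profileImages array are placeholder names, not anything from the question.

import UIKit

class ProfileGridViewController: UIViewController, UICollectionViewDataSource {
    @IBOutlet weak var collectionView: UICollectionView!
    var profileImages: [UIImage] = []   // placeholder data source

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.dataSource = self
        // Long press starts the SpringBoard-style interactive move.
        let longPress = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(_:)))
        collectionView.addGestureRecognizer(longPress)
    }

    @objc func handleLongPress(_ gesture: UILongPressGestureRecognizer) {
        switch gesture.state {
        case .began:
            guard let indexPath = collectionView.indexPathForItem(at: gesture.location(in: collectionView)) else { return }
            _ = collectionView.beginInteractiveMovementForItem(at: indexPath)
        case .changed:
            collectionView.updateInteractiveMovementTargetPosition(gesture.location(in: collectionView))
        case .ended:
            collectionView.endInteractiveMovement()
        default:
            collectionView.cancelInteractiveMovement()
        }
    }

    // MARK: UICollectionViewDataSource
    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return profileImages.count
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        // "ProfileCell" is a hypothetical cell whose content view shows one profile image.
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "ProfileCell", for: indexPath)
        return cell
    }

    func collectionView(_ collectionView: UICollectionView, canMoveItemAt indexPath: IndexPath) -> Bool {
        return true
    }

    // Called when the interactive move finishes; update the model to match.
    func collectionView(_ collectionView: UICollectionView, moveItemAt sourceIndexPath: IndexPath, to destinationIndexPath: IndexPath) {
        let moved = profileImages.remove(at: sourceIndexPath.item)
        profileImages.insert(moved, at: destinationIndexPath.item)
    }
}

Taps can then be handled with the delegate's collectionView(_:didSelectItemAt:), so the images don't need to be wrapped in buttons, and the wiggle animation above can be started on each visible cell while an "editing" mode is active.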

Related

Rotate PDF page with usePageViewController

I am working with PDFKit and trying to rotate a page. The rotation works fine without usePageViewController, but with usePageViewController enabled the frame of the page is not updated.
// When configuring the PDFView object
pdfView.displayMode = .singlePageContinuous
pdfView.usePageViewController(true, withViewOptions: nil)

// To rotate the current page
pdfView.currentPage?.rotation += 90
Without usePageViewController, pagination does not work. How can I achieve both at the same time?
Screenshots: https://i.stack.imgur.com/xifjV.jpg and https://i.stack.imgur.com/HUogU.jpg
You can use a trick that works for me :). Before changing the rotation, call usePageViewController(false), and after the rotation call usePageViewController(true) again:
guard let currentPage = self.pdfView.currentPage else {return}
self.pdfView.usePageViewController(false)
self.angle = self.angle + 90
currentPage.rotation = self.angle
self.pdfView.usePageViewController(true)
self.pdfView.autoScales = true
self.pdfView.go(to: currentPage)
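For context, here is a minimal sketch of how that workaround might be wired into a view controller; the angle property and the rotateTapped button action are assumptions for illustration, not part of the original answer.

import UIKit
import PDFKit

class PDFReaderViewController: UIViewController {
    @IBOutlet weak var pdfView: PDFView!
    // Running rotation in degrees; PDFPage.rotation expects a multiple of 90.
    private var angle = 0

    @IBAction func rotateTapped(_ sender: Any) {
        guard let currentPage = pdfView.currentPage else { return }
        // Temporarily disable the page view controller so the page frame gets recalculated.
        pdfView.usePageViewController(false)
        angle = (angle + 90) % 360
        currentPage.rotation = angle
        pdfView.usePageViewController(true)
        pdfView.autoScales = true
        pdfView.go(to: currentPage)
    }
}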

How to Take a Snapshot of a SceneView with the SceneView's Auto Lighting

For anybody who has worked with snapshotting SceneKit views, you will know what I mean when I say the photo output appears much darker than the screen you are capturing. How can I capture photo output of the scene view that matches the scene view's on-screen brightness? I'm not sure how to ask this question better, but essentially this is how I am capturing the scene view:
// draw, sceneView, imageTaken, imageTakenView and image are properties of the view controller.
@IBAction func ARSnapTapped(_ sender: Any) {
    if !draw {
        let newImg: UIImage = self.sceneView.snapshot()
        DispatchQueue.main.async {
            self.imageTaken.image = newImg
            self.imageTakenView.isHidden = false
        }
        self.image = newImg
    }
}
This is the solution for anybody looking to enhance the lighting of the snapshot output when taking snapshots of a scene view in Swift:
if let camera = sceneView.pointOfView?.camera {
    camera.wantsHDR = true
    camera.wantsExposureAdaptation = true
    camera.whitePoint = 1.0
    camera.exposureOffset = 1
    camera.minimumExposure = 1
    camera.maximumExposure = 1
}
Adjusting the values of exposureOffset and minimumExposure/maximumExposure will brighten or darken the snapshot output.
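For completeness, a minimal sketch of where those settings might live so they are in effect before any snapshot is taken, assuming sceneView is the SCNView/ARSCNView outlet from the question:

override func viewDidLoad() {
    super.viewDidLoad()
    // Configure the point-of-view camera once, before the user takes snapshots.
    if let camera = sceneView.pointOfView?.camera {
        camera.wantsHDR = true
        camera.wantsExposureAdaptation = true
        camera.whitePoint = 1.0
        camera.exposureOffset = 1
        camera.minimumExposure = 1
        camera.maximumExposure = 1
    }
}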

Programmatically creating an SKTileDefinition

I've been beating my head against a wall for hours now. I am trying to modify a texture inside my app using a CIFilter and then use that new texture as part of a new SKTileDefinition to recolor tiles on my map.
The function below finds tiles that players "own" and attempts to recolor them by swapping their SKTileDefinition for the coloredDefinition.
func updateMapTileColoration(for players: Array<Player>) {
    for player in players {
        for row in 0..<mainBoardMap!.numberOfRows {
            for col in 0..<mainBoardMap!.numberOfColumns {
                let rowReal = mainBoardMap!.numberOfRows - 1 - row
                if player.crownLocations!.contains(CGPoint(x: row, y: col)) {
                    if let tile = mainBoardMap!.tileDefinition(atColumn: col, row: rowReal) {
                        let coloredDefinition = colorizeTile(tile: tile, into: player.color!)
                        print(coloredDefinition.name)
                        mainBoardMap!.tileSet.tileGroups[4].rules[0].tileDefinitions.append(coloredDefinition)
                        mainBoardMap!.setTileGroup(crownGroup!, andTileDefinition: crownGroup!.rules[0].tileDefinitions[1], forColumn: col, row: rowReal)
                    }
                }
            }
        }
    }
}
And here is the function that actually applies the CIFilter, colorizeTile:
func colorizeTile(tile: SKTileDefinition, into color: UIColor) -> SKTileDefinition {
    let texture = tile.textures[0]
    let colorationFilter = CIFilter(name: "CIColorMonochrome")
    colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
    colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
    colorationFilter!.setValue(0.25, forKey: "inputIntensity")
    let coloredTexture = texture.applying(colorationFilter!)
    let newDefinition = SKTileDefinition(texture: texture)
    newDefinition.textures[0] = coloredTexture
    newDefinition.name = "meow"
    return newDefinition
}
I would love any help in figuring out why I cannot change the tile definition the way I am trying to. It seems intuitively correct to define a new tile definition, add it to the tile group, and then set the tile group with that specific tile definition. However, this leads to blank tiles...
Any pointers?
After trying a bunch of things I finally figured out what was wrong. The tile definition wasn't being created correctly because I never actually drew a new texture. As I learned, a CIImage is not the drawn texture, it's just a recipe; we need a CIContext to actually draw it. After this change, the SKTileDefinition is created properly. The problem wasn't where I thought it was, so I am sort of second-hand answering my own question: my method for creating an SKTileDefinition was correct.
func colorizeTile(tile: SKTileDefinition, into color: UIColor) -> SKTileDefinition {
    // drawContext is a CIContext property of the class, reused between calls.
    if drawContext == nil {
        drawContext = CIContext()
    }
    let texture = tile.textures[0]
    let colorationFilter = CIFilter(name: "CIColorMonochrome")
    colorationFilter!.setValue(CIImage(cgImage: texture.cgImage()), forKey: kCIInputImageKey)
    colorationFilter!.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
    colorationFilter!.setValue(0.75, forKey: "inputIntensity")
    // Actually render the CIImage into a CGImage; without this step the texture stays blank.
    let result = colorationFilter!.outputImage!
    let output = drawContext!.createCGImage(result, from: result.extent)
    let coloredTexture = SKTexture(cgImage: output!)
    let newDefinition = SKTileDefinition(texture: texture)
    newDefinition.textures[0] = coloredTexture
    newDefinition.name = "meow"
    return newDefinition
}
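As a possible refactor (my own suggestion, not part of the original answer), the render step can be wrapped in an SKTexture extension that reuses a single shared CIContext, so any tile texture can be recolored in one call:

import SpriteKit
import CoreImage

extension SKTexture {
    private static let colorizeContext = CIContext()

    // Returns a recolored copy of the texture, or the original if the filter fails.
    func colorized(with color: UIColor, intensity: CGFloat = 0.75) -> SKTexture {
        guard let filter = CIFilter(name: "CIColorMonochrome") else { return self }
        filter.setValue(CIImage(cgImage: cgImage()), forKey: kCIInputImageKey)
        filter.setValue(CIColor(cgColor: color.cgColor), forKey: "inputColor")
        filter.setValue(intensity, forKey: "inputIntensity")
        guard let output = filter.outputImage,
              let rendered = SKTexture.colorizeContext.createCGImage(output, from: output.extent) else {
            return self
        }
        return SKTexture(cgImage: rendered)
    }
}

With that in place, colorizeTile reduces to something like SKTileDefinition(texture: tile.textures[0].colorized(with: player.color!)).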

iOS 11 doesn't grab screen from MKMapView

I have an app that displays the locations the user has walked on an MKMapView. When the user leaves the map view, the app grabs the screen and saves the image to disk. Up until iOS 10.3 this method was always successful. With iOS 11.0 the screen grab is a blank image. I get no notification from Xcode that something has changed and that I need to adjust the code.
Interestingly, screen grabs of text pages are still captured and saved successfully.
Did anyone encounter the same problem and find a solution?
The code that had always been successful up until now is:
override func viewWillDisappear(_ animated: Bool) {
    //Set the full file name under which the track will be saved.
    let fileBaseName = self.imageName.appending(String(describing: (self.display?.trackDate)!))
    let fileFullName = fileBaseName.appending(".png")
    //Check if the image has already been saved
    if !AuxiliaryObjects.shared.getImageFileName(with: fileFullName) {
        //Create the sizes of the capture
        let screenRect = self.trackMapView.frame
        let screenSize = screenRect.size
        let screenScale = UIScreen.main.scale
        var grabRect = self.trackMapView.convert(self.mapRegion, toRectTo: self.view)
        var heightAdjustment : CGFloat = 0.0
        //Grab the image from the screen
        UIGraphicsBeginImageContextWithOptions(screenSize, false, screenScale)
        self.trackMapView.drawHierarchy(in: screenRect, afterScreenUpdates: true)
        let myImage = UIGraphicsGetImageFromCurrentImageContext()
        grabRect.origin.x *= (myImage?.scale)!
        grabRect.origin.y *= (myImage?.scale)!
        grabRect.size.width *= (myImage?.scale)!
        grabRect.size.height *= (myImage?.scale)!
        let grabImage = (myImage?.cgImage)!.cropping(to: grabRect)
        let mapImage = UIImage(cgImage: grabImage!)
        UIGraphicsEndImageContext()
        AuxiliaryObjects.shared.save(image: mapImage, with: fileFullName, and: self.imageName)
        self.display?.displayImage = AuxiliaryObjects.shared.getImage(with: fileFullName, with: self.tableImageRect)!
    } else {
        self.display?.displayImage = AuxiliaryObjects.shared.getImage(with: fileFullName, with: self.tableImageRect)!
    }
}
I submitted a code-level support request to Apple to get the answer to this question. Apple does not support the use of drawHierarchy for grabbing a MapKit screen. The way to go is to use the MKMapSnapshotter utility to create an MKMapSnapshot and then draw the lines and annotations on top by converting all the map coordinates to view coordinates.
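For reference, a minimal sketch of that recommended MKMapSnapshotter approach (iOS 11-era API names; trackCoordinates is a placeholder for whatever coordinates make up the walked track):

let options = MKMapSnapshotOptions()
options.region = trackMapView.region
options.size = trackMapView.frame.size
options.scale = UIScreen.main.scale

let snapshotter = MKMapSnapshotter(options: options)
snapshotter.start { snapshot, _ in
    guard let snapshot = snapshot else { return }
    UIGraphicsBeginImageContextWithOptions(options.size, true, options.scale)
    snapshot.image.draw(at: .zero)
    // Draw the walked track by converting each coordinate to a point in the snapshot.
    let path = UIBezierPath()
    for (index, coordinate) in trackCoordinates.enumerated() {
        let point = snapshot.point(for: coordinate)
        if index == 0 {
            path.move(to: point)
        } else {
            path.addLine(to: point)
        }
    }
    UIColor.red.setStroke()
    path.lineWidth = 3
    path.stroke()
    let composedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // composedImage can then be saved the same way as before.
    _ = composedImage
}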
Since this gave me some problems with getting a mirrored image and translating the coordinates properly, I decided to use the layer method render(in: CGContext) instead; this gave me a well-functioning, very efficient screen grab.
func creatSnapshot(with fileName: String) {
    UIGraphicsBeginImageContextWithOptions(self.trackMapView.frame.size, false, UIScreen.main.scale)
    let currentContext = UIGraphicsGetCurrentContext()
    self.trackMapView.layer.render(in: currentContext!)
    let contextImage = (UIGraphicsGetImageFromCurrentImageContext())!
    UIGraphicsEndImageContext()
    let region = self.trackMapView.region
    var cropRect = self.trackMapView.convert(region, toRectTo: self.trackMapView.superview)
    cropRect.origin.x *= contextImage.scale
    cropRect.origin.y *= contextImage.scale
    cropRect.size.height *= contextImage.scale
    cropRect.size.width *= contextImage.scale
    let cgMapImage = contextImage.cgImage?.cropping(to: cropRect)
    let mapImage = UIImage(cgImage: cgMapImage!)
    AuxiliaryObjects.shared.save(image: mapImage, with: fileName, and: self.imageName)
    self.displayTrack?.displayImage = AuxiliaryObjects.shared.getImage(with: fileName, with: self.tableImageRect)!
    NotificationCenter.default.post(name: self.imageSavedNotification, object: self)
}

Only First Track Playing of AVMutableComposition()

New Edit Below
I have already referenced AVMutableComposition - Only Playing First Track (Swift), but it does not provide the answer to what I am looking for.
I have an AVMutableComposition(). I am trying to apply MULTIPLE AVCompositionTracks, of a single type AVMediaTypeVideo, to this single composition. This is because I am using 2 different AVMediaTypeVideo sources, with different CGSizes and preferredTransforms of the AVAssets they come from.
So, the only way to apply their specified preferredTransforms is to provide them in 2 different tracks. But, for whatever reason, only the first track will actually provide any video, almost as if the second track is never there.
So, I have tried
1) Using AVMutableVideoCompositionLayerInstructions and applying an AVVideoComposition along with an AVAssetExportSession, which works okay; I am still working on the transforms, but it is doable. The problem is that the processing times of the videos are WELL OVER 1 minute, which is just inapplicable in my situation.
2) Using multiple tracks without an AVAssetExportSession, where the 2nd track of the same type never appears. Now, I could put it all on 1 track, but all the videos would then have the same size and preferredTransform as the first video, which I absolutely do not want, as it stretches them on all sides.
So my question is: is it possible to
1) Apply instructions to just a track WITHOUT using an AVAssetExportSession? //Preferred way BY FAR.
2) Decrease the time of the export? (I have tried using PresetPassthrough, but you cannot use that if you have an exporter.videoComposition, which is where my instructions are. This is the only place I know I can put instructions; I am not sure if I can place them somewhere else.)
Here is some of my code (without the exporter, as I don't need to export anything anywhere, just do stuff after the AVMutableComposition combines the items).
func merge() {
    if let firstAsset = controller.firstAsset, secondAsset = self.asset {
        let mixComposition = AVMutableComposition()
        let firstTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            //Don't need now, according to not being able to edit the first 14 seconds.
            if(CMTimeGetSeconds(startTime) == 0) {
                self.startTime = CMTime(seconds: 1/600, preferredTimescale: Int32(600))
            }
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //This secondTrack never appears; it doesn't matter what is inside here, it is like blank space in the video from startTime to endTime (the time range of secondTrack).
        let secondTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                      preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        // secondTrack.preferredTransform = self.asset.preferredTransform
        do {
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration),
                                            ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                            atTime: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600))
        } catch _ {
            print("Failed to load second track")
        }

        //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
        do {
            try firstTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600), firstAsset.duration-endTime),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600))
        } catch _ {
            print("failed")
        }

        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
                                               ofTrack: loadedAudioAsset.tracksWithMediaType(AVMediaTypeAudio)[0],
                                               atTime: kCMTimeZero)
            } catch _ {
                print("Failed to load Audio track")
            }
        }
    }
}
Edit
Apple states that "Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction's end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated)."
As I understand it, this just states that the entire composition must be covered by instructions if you decide to use ANY instructions. Why is this? How would I apply instructions to, say, track 2 in this example without changing track 1 or 3 at all:
Track 1 from 0 - 10 sec, Track 2 from 10 - 20 sec, Track 3 from 20 - 30 sec.
Any explanation of that would probably answer my question (if it is doable).
OK, so for my exact problem, I had to apply specific CGAffineTransforms in Swift to get the specific result we wanted. The version I am posting works with any picture taken/obtained as well as video.
//This method gets the orientation of the current transform. This method is used below to determine the orientation
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
    var assetOrientation = UIImageOrientation.up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
    }
    //Returns the orientation as a variable
    return (assetOrientation, isPortrait)
}
//Method that lays out the instructions for each track I am editing and does the transformation on each individual track to get it lined up properly
func videoCompositionInstructionForTrack(_ track: AVCompositionTrack, _ asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    //This method returns a set of instructions for the given track
    //Create the initial instruction
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    //This is whatever asset you are about to apply instructions to.
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    //Get the original transform of the asset
    var transform = assetTrack.preferredTransform
    //Get the orientation of the asset and determine if it is in portrait or landscape - whether you take a picture or get it from the camera roll, it is ALWAYS determined as landscape at first; this method accounts for it.
    let assetInfo = orientationFromTransform(transform)
    //You need a little background to understand this part.
    /* MyAsset is my original video. I need to combine a lot of other segments, according to the user, into this original video. So I have to make all the other videos fit this size.
       These are the width and height ratios from the original video divided by the new asset
    */
    let width = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width/assetTrack.naturalSize.width
    var height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height
    //If it is in portrait
    if assetInfo.isPortrait {
        //We actually change the height variable to divide by the width of the old asset instead of the height. This is because of the flip, since we determined it is portrait and not landscape.
        height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.width
        //We apply the transform and scale the image appropriately.
        transform = transform.scaledBy(x: height, y: height)
        //We also have to move the image or video appropriately. Since we scaled it, it could be way off to the side, outside the bounds of the viewing area.
        let movement = ((1/height)*assetTrack.naturalSize.height)-assetTrack.naturalSize.height
        //This lines it up dead center on the left side of the screen perfectly. Now we want to center it.
        transform = transform.translatedBy(x: 0, y: movement)
        //This calculates how much black there is. Cut it in half and there you go!
        let totalBlackDistance = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-transform.tx
        transform = transform.translatedBy(x: 0, y: -(totalBlackDistance/2)*(1/height))
    } else {
        //Landscape! We don't need to change the variables, it is all defaulted that way (iOS prefers landscape items), so we scale it appropriately.
        transform = transform.scaledBy(x: width, y: height)
        //This is a little complicated haha. Because it is in landscape, the asset fits the height correctly (for me anyway); it was just extra long. Think of this as a ratio. I forgot exactly how I thought this through, but the end product looked like: Answer = ((Original height/current asset height)*(current asset width))/(Original width)
        let scale:CGFloat = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width))/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width
        transform = transform.scaledBy(x: scale, y: 1)
        //The asset can be way off the screen again, so we have to move it back. This time we can have it dead center in the middle, because it wasn't flipped since it was landscape. Again, another long complicated algorithm I derived.
        let movement = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width)))/2)*(1/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)
        transform = transform.translatedBy(x: movement, y: 0)
    }
    //This creates the instruction and returns it so we can apply it to each individual track.
    instruction.setTransform(transform, at: kCMTimeZero)
    return instruction
}
Now that we have those methods, we can apply the correct transformations to our assets and get everything fitting nice and clean.
func merge() {
    if let firstAsset = MyAsset, let newAsset = newAsset {
        //This creates our overall composition, our new video framework
        let mixComposition = AVMutableComposition()

        //One by one you create tracks (could use a loop, but I just had 3 cases)
        let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        //You have to use a try, so you need a do
        do {
            //Inserting a time range into a track. I already calculated my time, I call it startTime. This is where you would put your time. The preferredTimescale doesn't have to be 600000 haha, I was playing with those numbers; it just allows precision. "at" is not where it begins within this individual track, but where it starts as a whole. As you notice below, my "at" times are different. You also need to give it which track.
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)),
                                           of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //Create the 2nd track
        let secondTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                         preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            //Apply the 2nd timeRange you have. Also apply the correct track you want
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.endTime-self.startTime),
                                            of: newAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                            at: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000))
            secondTrack.preferredTransform = newAsset.preferredTransform
        } catch _ {
            print("Failed to load second track")
        }

        //We are not sure we are going to use the third track in my case, because they can edit to the end of the original video, causing us not to use a third track. But if we do, it is the same as the others!
        var thirdTrack:AVMutableCompositionTrack!
        if(self.endTime != controller.realDuration) {
            thirdTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
            //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
            do {
                try thirdTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000), self.controller.realDuration-endTime),
                                               of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                               at: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000))
            } catch _ {
                print("failed")
            }
        }

        //Same thing with audio!
        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.controller.realDuration),
                                               of: loadedAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0],
                                               at: kCMTimeZero)
            } catch _ {
                print("Failed to load Audio track")
            }
        }

        //So, now that we have all of these tracks, we need to apply those instructions! If we don't, then they could be different sizes. Say my newAsset is 720x1080 and MyAsset is 1440x900 (these are just examples haha), then it would look a tad funky and possibly not show our new asset at all.
        let mainInstruction = AVMutableVideoCompositionInstruction()

        //Make sure the overall time range matches that of the individual tracks; if not, it could cause errors.
        mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.controller.realDuration)

        //For each track we made, we need an instruction. Could use a loop or do them individually, as here.
        let firstInstruction = videoCompositionInstructionForTrack(firstTrack, firstAsset)
        //You know, I'm not 100% sure why this is here. This is one thing I did not look into well enough or understand enough to describe to you.
        firstInstruction.setOpacity(0.0, at: startTime)

        //Next instruction
        let secondInstruction = videoCompositionInstructionForTrack(secondTrack, self.asset)

        //Again, not sure we need the 3rd one, but if we do:
        var thirdInstruction:AVMutableVideoCompositionLayerInstruction!
        if(self.endTime != self.controller.realDuration) {
            secondInstruction.setOpacity(0.0, at: endTime)
            thirdInstruction = videoCompositionInstructionForTrack(thirdTrack, firstAsset)
        }

        //Okay, now that we have all these instructions, we tie them into the main instruction we created above.
        mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
        if(self.endTime != self.controller.realDuration) {
            mainInstruction.layerInstructions += [thirdInstruction]
        }

        //We create a video framework now, slightly different than the one above.
        let mainComposition = AVMutableVideoComposition()
        //We apply these instructions to the framework
        mainComposition.instructions = [mainInstruction]
        //How long our frames are; you can change this as necessary
        mainComposition.frameDuration = CMTimeMake(1, 30)
        //This is your render size of the video. 720p, 1080p etc. You set it!
        mainComposition.renderSize = firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize

        //We create an export session (you can't use PresetPassthrough because we are manipulating the transforms of the videos and the quality, so I just set it to highest)
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
        //Provide the type of file, and provide the URL location you want it exported to (I don't have mine posted in this example).
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.outputURL = url
        //Then we tell the exporter to export the video according to our video framework, and it does the work!
        exporter.videoComposition = mainComposition
        //Asynchronous methods FTW!
        exporter.exportAsynchronously(completionHandler: {
            //Do whatever when it finishes!
        })
    }
}
There is a lot going on here, but it has to be done, for my example anyway! Sorry it took so long to post, and let me know if you have questions.
Yes, you can totally apply an individual transform to each layer of an AVMutableComposition.
Here's an overview of the process - I've done this personally in Objective-C, so I can't give you the exact Swift code, but I know these same functions work just the same in Swift.
Create an AVMutableComposition.
Create an AVMutableVideoComposition.
Set the render size and frame duration of the Video Composition.
Now, for each AVAsset:
Get the video AVAssetTrack and the audio AVAssetTrack from the asset.
Create an AVMutableCompositionTrack for each of those (one for video, one for audio) by adding them to the mutableComposition.
Here it gets more complicated... (sorry, AVFoundation is not easy!)
Create an AVMutableVideoCompositionLayerInstruction from the composition track that refers to each video. For each AVMutableVideoCompositionLayerInstruction, you can set the transform on it. You can also do things like set a crop rectangle.
Add each AVMutableVideoCompositionLayerInstruction to an array of layer instructions. When they have all been created, the array gets set on an AVMutableVideoCompositionInstruction, which in turn goes into the AVMutableVideoComposition's instructions.
And finally, you will have an AVPlayerItem that you will use to play this back (on an AVPlayer). You create the AVPlayerItem using the AVMutableComposition, and then you set the AVMutableVideoComposition on the AVPlayerItem itself (its videoComposition property); a rough sketch of that pipeline follows below.
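Since this answer describes the steps without code, here is a hedged Swift sketch of that pipeline under the same assumptions; the asset list, render size and per-clip transform are placeholders, not anything from the original answers.

import UIKit
import AVFoundation

func makePlayerItem(from assets: [AVAsset]) -> AVPlayerItem {
    let composition = AVMutableComposition()
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1280, height: 720)   // placeholder render size
    videoComposition.frameDuration = CMTimeMake(1, 30)

    let mainInstruction = AVMutableVideoCompositionInstruction()
    var layerInstructions: [AVVideoCompositionLayerInstruction] = []
    var cursor = kCMTimeZero

    for asset in assets {
        guard let assetVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else { continue }
        let range = CMTimeRangeMake(kCMTimeZero, asset.duration)

        // One composition track per clip, so each clip can keep its own transform.
        let videoTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        try? videoTrack.insertTimeRange(range, of: assetVideoTrack, at: cursor)

        // Audio, if the asset has it.
        if let assetAudioTrack = asset.tracks(withMediaType: AVMediaTypeAudio).first {
            let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio,
                                                         preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
            try? audioTrack.insertTimeRange(range, of: assetAudioTrack, at: cursor)
        }

        // One layer instruction per composition track; this is where each clip gets its own transform.
        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
        layerInstruction.setTransform(assetVideoTrack.preferredTransform, at: kCMTimeZero)   // placeholder transform
        layerInstruction.setOpacity(0.0, at: CMTimeAdd(cursor, asset.duration))              // hide the clip once its segment ends
        layerInstructions.append(layerInstruction)

        cursor = CMTimeAdd(cursor, asset.duration)
    }

    // The single instruction must cover the whole timeline, per the docs quoted above.
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, cursor)
    mainInstruction.layerInstructions = layerInstructions
    videoComposition.instructions = [mainInstruction]

    // No AVAssetExportSession is needed for playback: hand the video composition to the player item.
    let item = AVPlayerItem(asset: composition)
    item.videoComposition = videoComposition
    return item
}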
Easy eh?
It took me some weeks to get this stuff working well. It's totally unforgiving and, as you have mentioned, if you do something wrong it doesn't tell you what you did wrong - it just doesn't appear.
But when you crack it, it works quickly and well.
Finally, all the stuff I have outlined is available in the AVFoundation docs. It's a lengthy tome, but you need to know it to achieve what you are trying to do.
Best of luck!