Cast NSValue to CMTimeRange type in Swift 5.3 - swift5

Swift 5.0 could normally cast an NSValue to CMTimeRange; is this no longer supported in Swift 5.3?
For example, this worked correctly in Xcode 11.7 (Swift 5.0):
let nsValue = NSValue(timeRange: CMTimeRange(start: CMTime(seconds: 12, preferredTimescale: 1), duration: CMTime(seconds: 22, preferredTimescale: 1)))
let value = nsValue as? CMTimeRange
But it fails in Xcode 12 (Swift 5.3) and shows the warning:
Cast from 'NSValue' to unrelated type 'CMTimeRange' always fails

NSValue has a timeRangeValue property, which returns the CMTimeRange without an explicit cast.

dharmon is correct:
iOS13: let value = nsValue as? CMTimeRange
iOS14: let value = nsValue.timeRangeValue
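For reference, a minimal sketch of both forms, using the values from the question:

import AVFoundation

let range = CMTimeRange(start: CMTime(seconds: 12, preferredTimescale: 1),
                        duration: CMTime(seconds: 22, preferredTimescale: 1))
let nsValue = NSValue(timeRange: range)

// Xcode 11.7 / Swift 5.0: the conditional cast compiled and succeeded
// let value = nsValue as? CMTimeRange

// Xcode 12 / Swift 5.3: use the dedicated accessor instead
let value = nsValue.timeRangeValue
print(value.duration.seconds) // 22.0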

Related

VideoComposition not honoring instructions

Since updating to iOS 13, the video composition I use to fade a video in and out is broken. This is my code, which worked correctly up until installing iOS 13.
Now when I export the video there is sound but just a black screen.
import AVFoundation

// inputURL, outputURL and handler are provided by the enclosing export function.
let urlAsset = AVURLAsset(url: inputURL, options: nil)
guard let exportSession = AVAssetExportSession(asset: urlAsset,
                                               presetName: AVAssetExportPresetHighestQuality) else {
    handler(nil)
    return
}
exportSession.outputURL = outputURL
exportSession.outputFileType = AVFileType.m4v
exportSession.shouldOptimizeForNetworkUse = true

let composition = AVMutableVideoComposition(propertiesOf: urlAsset)
let clipVideoTrack = urlAsset.tracks(withMediaType: AVMediaType.video)[0]
let timeDuration = CMTimeMake(value: 1, timescale: 1)
let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)

// MARK: Fade in effect
layer.setOpacityRamp(fromStartOpacity: 0.0,
                     toEndOpacity: 1.0,
                     timeRange: CMTimeRange(start: CMTime.zero, duration: timeDuration))

// MARK: Fade out effect
let startTime = CMTimeSubtract(urlAsset.duration, CMTimeMake(value: 1, timescale: 1))
layer.setOpacityRamp(fromStartOpacity: 1.0,
                     toEndOpacity: 0.0,
                     timeRange: CMTimeRangeMake(start: startTime, duration: timeDuration))

let instruction = AVMutableVideoCompositionInstruction()
instruction.layerInstructions = [layer]
instruction.timeRange = CMTimeRange(start: CMTime.zero, duration: urlAsset.duration)
composition.instructions = [instruction]
exportSession.videoComposition = composition
exportSession.exportAsynchronously { () -> Void in
    handler(exportSession)
    print("composition has completed")
}
Apple said there was a bug affecting some instructions for compositions. This was fixed in iOS 13.1. I updated, ran the function, and the fade in and out worked as it did before the iOS 13 update.

How can I set the offset of a video in Swift 4? seek(to:) doesn't seem to work

I am having an issue setting the offset of an AVQueuePlayer video. I have tried the seek(to:) function to set the offset, but it doesn't seem to work; the video always starts from 0 seconds. The other requirements, playing it with controls and looping back, are working fine.
I am stuck at playing the video from any point other than 0 seconds.
func getVideoView() -> UIView {
    let videoViewContainer = UIView(frame: CGRect(x: 0, y: 0, width: 375, height: 375))
    let videoUrl = URL(string: "https://myvideourl.mp4")
    let item = AVPlayerItem(url: videoUrl!)
    player = AVQueuePlayer()
    playerLooper = AVPlayerLooper(player: player!, templateItem: item)
    let time = CMTime(seconds: 17.000000, preferredTimescale: CMTimeScale(1))
    player?.seek(to: time, completionHandler: { (handler) in
    })
    item.forwardPlaybackEndTime = CMTimeMake(value: 20, timescale: 1) // For playing it for 20 seconds
    let layer: AVPlayerLayer = AVPlayerLayer(player: player)
    layer.frame = videoViewContainer.bounds
    layer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    videoViewContainer.layer.addSublayer(layer)
    player?.play()
    return videoViewContainer
}
I got the answer: seek(to:) doesn't seem to work when used with AVPlayerLooper.
AVPlayerLooper itself has a timeRange parameter.
An example is given below:
playerLooper = AVPlayerLooper(player: self.player!,
                              templateItem: item!,
                              timeRange: CMTimeRange(start: CMTime(seconds: Double(start), preferredTimescale: 1),
                                                     duration: CMTime(seconds: Double(duration), preferredTimescale: 1)))
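For context, a minimal sketch of how this fits the original getVideoView() setup, using the question's values (start at second 17, stop at second 20, so a 3-second loop window); player and playerLooper are assumed to be properties as in the question:

let item = AVPlayerItem(url: URL(string: "https://myvideourl.mp4")!)
player = AVQueuePlayer()
playerLooper = AVPlayerLooper(player: player!,
                              templateItem: item,
                              timeRange: CMTimeRange(start: CMTime(seconds: 17, preferredTimescale: 1),
                                                     duration: CMTime(seconds: 3, preferredTimescale: 1)))
player?.play() // no seek(to:) needed; the looper constrains playback to 17...20 s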

Type 'Range<CGFloat>' does not conform to protocol 'Sequence' (Swift 3) [duplicate]

This question already has an answer here:
Swift 3: replace C-style for-loop with float increment (1 answer)
Closed 6 years ago.
I am trying to write a for loop using CGFloat, but I am getting an error saying:
Type 'Range<CGFloat>' does not conform to protocol 'Sequence'
The code I am trying to run is below; the error occurs at the for loop at the end.
func setupBackgroundSea() {
    // putting the background
    let texture = SKTexture(imageNamed: "background")
    texture.filteringMode = .nearest
    // calculating the number of background images needed
    let needNumber = 2.0 + (self.frame.size.width / texture.size().width)
    // creating animation
    let moveAnim = SKAction.moveBy(x: -(texture.size().width), y: 0.0, duration: TimeInterval(texture.size().width / 10.0))
    let resetAnim = SKAction.moveBy(x: texture.size().width, y: 0.0, duration: 0.0)
    let repeatForeverAnim = SKAction.repeatForever(SKAction.sequence([moveAnim, resetAnim]))
    // setting the position of the image and the animation
    for var i: CGFloat in CGFloat(0)..<needNumber { // error: Range<CGFloat> is not a Sequence
        let sprite = SKSpriteNode(texture: texture)
        sprite.zPosition = -100.0
        sprite.position = CGPoint(x: i * sprite.size.width, y: self.frame.size.height / 2.0)
    }
}
I am quite new to Swift, so this might be a pretty noob question, but I would appreciate it if anyone could help :)
Range<T>, unlike CountableRange<T>, isn't a sequence, because it's unclear how to iterate it. Add 1 every time? 0.1? 0.001?
You can use stride instead:
for i in stride(from: 0 as CGFloat, to: needNumber, by: +1 as CGFloat) { //...
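A minimal sketch of the question's loop rewritten with stride; the last two lines are assumptions, since the original snippet created the sprites but never ran the animation or added them to the scene:

for i in stride(from: 0 as CGFloat, to: needNumber, by: 1) {
    let sprite = SKSpriteNode(texture: texture)
    sprite.zPosition = -100.0
    sprite.position = CGPoint(x: i * sprite.size.width, y: self.frame.size.height / 2.0)
    sprite.run(repeatForeverAnim) // assumption: the scrolling animation runs on each sprite
    self.addChild(sprite)         // assumption: sprites are added to the scene
}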

iOS AVMutableComposition Add text overlay

Can someone please advise?
I am trying to add a text overlay (title) to a video I am composing using AVFoundation. I found a few online resources (see http://stackoverflow.com/questions/21684549/add-a-text-overlay-with-avmutablevideocomposition-to-a-specific-timerange), but all of them are in Objective-C. My project is in Swift and I cannot find any related resources in Swift.
I am not able to get the text to overlay properly; it seems distorted, as if the frame in which it gets rendered is skewed...
See picture: Distorted text in AVPlayer
I have attempted to convert the Objective-C code I found to Swift, but obviously I am missing something.
Below is the code I am using. (I used some code for the player and the video file from www.raywenderlich.com/90488/calayer-in-ios-with-swift-10-examples.)
func MergeUnWeldedVideoByUserPref(showInBounds: CGRect) -> (AVMutableComposition, AVMutableVideoComposition) {
    let fps: Int32 = 30
    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    let mixComposition = AVMutableComposition()
    // 2 - Create a video track for each of the video assets. Add your media data to the appropriate tracks.
    let url = NSBundle.mainBundle().URLForResource("colorfulStreak", withExtension: "m4v")!
    let avAsset = AVAsset(URL: url)
    let track = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    let segmentInMovie = CMTimeRangeMake(kCMTimeZero, avAsset.duration)
    let videoTrack = avAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    do {
        try track.insertTimeRange(segmentInMovie, ofTrack: videoTrack, atTime: kCMTimeZero)
    } catch {
        print("Failed to load track")
    }
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, avAsset.duration)
    let instruction = videoCompositionInstructionForTrack(showInBounds, track: track, asset: avAsset)
    mainInstruction.layerInstructions.append(instruction)
    let mainComposition = AVMutableVideoComposition()
    mainComposition.instructions = [mainInstruction]
    mainComposition.frameDuration = CMTimeMake(1, fps)
    mainComposition.renderSize = CGSize(width: showInBounds.width, height: showInBounds.height)
    let textLayer = CATextLayer()
    textLayer.backgroundColor = UIColor.clearColor().CGColor
    textLayer.foregroundColor = UIColor.whiteColor().CGColor
    textLayer.string = "T E S T"
    textLayer.font = UIFont(name: "Arial", size: 18)
    textLayer.shadowOpacity = 0.5
    textLayer.alignmentMode = kCAAlignmentCenter
    textLayer.frame = CGRectMake(5, 5, 100, 50)
    textLayer.shouldRasterize = true
    textLayer.rasterizationScale = showInBounds.width / videoTrack.naturalSize.width
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, showInBounds.width, showInBounds.height)
    videoLayer.frame = CGRectMake(0, 0, showInBounds.width, showInBounds.height)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(textLayer)
    mainComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)
    return (mixComposition, mainComposition)
}
There is nothing wrong with your Swift translation; it is rather an issue with the rendering engine of the simulator. I tried your code in the simulator and it indeed looked skewed and distorted, but when compiled to the device it worked beautifully.

Cast from AnyObject to UIEdgeInsets [duplicate]

As my question title says, how can I convert a CGPoint into an NSValue so that I can store it in an array in Swift?
In Objective-C we can do it like this:
// CGPoint converted to NSValue
CGPoint point = CGPointMake(9, 9);
NSValue *pointObj = [NSValue valueWithCGPoint:point];
But can anybody tell me how to do this in Swift?
Thanks in advance.
Try something like this:
let point = CGPointMake(9, 9)
let pointObj = NSValue(CGPoint: point)
Exactly as you'd imagine:
let point = CGPoint(x: 9.0, y: 9.0)
let pointObj = NSValue(CGPoint: point)
let newPoint = pointObj.CGPointValue()
If you aren't planning on using the array in Objective-C and can keep it as a Swift Array, then you don't need to turn the point into an NSValue; you can just add the point to the array directly:
let point1 = CGPoint(x: 9.0, y: 9.0)
let point2 = CGPoint(x: 10.0, y: 10.0)
let pointArray = [point1, point2] // pointArray is inferred to be of type [CGPoint]
let valuesArray = pointArray as [NSValue]
Swift 3:
let pointObj = NSValue(cgPoint: CGPoint(x: 9, y: 9))
https://developer.apple.com/reference/foundation/nsvalue/1624531-init
Swift 4:
let point = NSValue(cgPoint: mask.position)
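To round things off, a minimal modern-Swift sketch of the full round trip; the array-wrapping line is an assumption about how one would bridge explicitly today, not taken from the answers above:

import UIKit

let point = CGPoint(x: 9, y: 9)
let pointObj = NSValue(cgPoint: point)   // wrap
let unwrapped = pointObj.cgPointValue    // unwrap

// Wrapping a whole array explicitly (assumed approach):
let points = [CGPoint(x: 9, y: 9), CGPoint(x: 10, y: 10)]
let values = points.map { NSValue(cgPoint: $0) }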