How to get video natural resolution swift 3? - swift

I have a problem that I can't solve; it's related to getting the original/natural video size with AVFoundation. The only thing I receive back in the width and height properties are strange CGFloat values. Here are the function caller and the function that is called.
let ZZZZZ = resolutionForLocalVideo(url: filePathClicked)
private func resolutionForLocalVideo(url: URL) -> CGSize? {
    guard let track = AVURLAsset(url: url).tracks(withMediaType: AVMediaTypeVideo).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return size
}
When I set a breakpoint and inspect the ZZZZZ variable, it gives me back:
ZZZZZ CGSize? (width = NaN, height = 6.9531301382845243E-310)
Sometimes it gives different width and height values, even for the same video.
The filePathClicked variable in the breakpoint shows:
filePathClicked URL "file:///Users/ramix/Downloads/SampleVideo_1280x720_10mb%20copy%2010.mp4"
Nothing else is breaking; it is just these strange values I am receiving, and I don't know what I can do with them. I just want to get the resolution.
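As a side note, a minimal check sketch (assumption: a value like 6.95E-310 looks like uninitialized memory, which can appear when the debugger inspects the variable before the assignment line has actually executed, so printing the unwrapped result after the call may rule that out):

if let resolution = resolutionForLocalVideo(url: filePathClicked) {
    // Printing after the call avoids reading the value before it has been assigned.
    print("video resolution:", resolution.width, resolution.height)
} else {
    print("no video track found")
}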
Thank you!

Related

Bounding box realignment from CoreML object detection

I am currently trying to render bounding boxes inside a UIView; however, I'm facing the issue that there is a misalignment on the X axis when rendering the box, as can be seen in the screenshot below.
When the object is on the left of the view, the misalignment is to the right, as seen in the image. However, when the object is on the right, the misalignment is to the left. The misalignment increases the closer the object gets to the edge of the screen.
Currently I use ARKit to capture the current frame as a pixel buffer.
let pixelBuffer = sceneView.session.currentFrame?.capturedImage
// Capture current device orientation
let orientation = CGImagePropertyOrientation(rawValue: UIDevice.current.exifOrientation)
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: orientation)
Additionally, my CoreML vision request looks as follows:
findObjectRequest = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
findObjectRequest?.imageCropAndScaleOption = .scaleFit
I then try to rescale the normalised bounding box to image space like this:
public func scaleImageForCameraOutput(predictionRect finderrItem: FinderrItem, viewRect: CGRect) -> FinderrItem {
    let scale = CGAffineTransform.identity.scaledBy(x: viewRect.width, y: viewRect.height)
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bgRect = finderrItem.box.applying(transform).applying(scale)
    finderrItem.box = bgRect
    return finderrItem
}
I also tried to follow the Apple developer documentation and use the API to rescale the bounding boxes, as follows:
let newBox = VNImageRectForNormalizedRect(
    boundingBox,
    Int(self.sceneView.bounds.width),
    Int(self.sceneView.bounds.height))
However, this still has the same misalignment, with the additional issue that the y-axis is now inverted.
Does anyone know why I'm having this problem? I've been stuck on it for quite a while now and can't seem to figure it out.
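For reference, a minimal sketch of a conversion that at least addresses the inverted y-axis (assumptions: Vision's normalized rects use a bottom-left origin while UIKit uses a top-left origin, and the view bounds are used directly; this does not compensate for the letterboxing that .scaleFit introduces, which may account for the remaining x-offset):

// Hypothetical helper: convert a normalized Vision rect to UIKit view coordinates.
func viewRect(for boundingBox: CGRect, in viewBounds: CGRect) -> CGRect {
    let scaled = VNImageRectForNormalizedRect(boundingBox,
                                              Int(viewBounds.width),
                                              Int(viewBounds.height))
    // Flip the y-axis: Vision measures from the bottom edge, UIKit from the top.
    return CGRect(x: scaled.origin.x,
                  y: viewBounds.height - scaled.origin.y - scaled.height,
                  width: scaled.width,
                  height: scaled.height)
}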

iOS Swift Detect Squares

I am using Swift to detect squares in an image and I can't seem to get it to detect them. It sometimes detects rectangles; I'm not sure if this is the correct approach. I am new to Swift and image detection, so if there is something else I could be doing to detect squares, I would greatly appreciate being pointed in the right direction.
From what I have found while searching, there are known issues around detecting squares/rectangles and perspective. I'm not sure whether that is the problem or just my lack of knowledge of Swift and image detection.
Test image
lazy var rectangleDetectionRequest: VNDetectRectanglesRequest = {
    let rectDetectRequest = VNDetectRectanglesRequest(completionHandler: self.handleDetectedRectangles)
    // Customize & configure the request to detect only certain rectangles.
    rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
    rectDetectRequest.minimumConfidence = 0.6 // Be confident.
    rectDetectRequest.minimumAspectRatio = 0.3 // height / width
    return rectDetectRequest
}()
fileprivate func handleDetectedRectangles(request: VNRequest?, error: Error?) {
    if let nsError = error as NSError? {
        self.presentAlert("Rectangle Detection Error", error: nsError)
        return
    }
    // Since handlers are executing on a background thread, explicitly send draw calls to the main thread.
    DispatchQueue.main.async {
        guard let drawLayer = self.pathLayer,
            let results = request?.results as? [VNRectangleObservation] else {
                return
        }
        self.draw(rectangles: results, onImageWithBounds: drawLayer.bounds)
        drawLayer.setNeedsDisplay()
    }
}
I have also changed minimumAspectRatio to 1.0, which from the information I have found should correspond to a square, and it still did not give the expected results.
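A minimal sketch of a stricter configuration (assumptions: VNDetectRectanglesRequest has no squares-only switch, so constraining both aspect-ratio bounds close to 1.0 is what limits detections to near-squares; the 0.9 lower bound is an arbitrary tolerance):

let squareRequest = VNDetectRectanglesRequest(completionHandler: self.handleDetectedRectangles)
squareRequest.minimumAspectRatio = 0.9   // close to square
squareRequest.maximumAspectRatio = 1.0   // at most a perfect square
squareRequest.minimumConfidence = 0.6
squareRequest.maximumObservations = 16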

Setting view width constraint from custom UIView class not updating frames

I've got a UIView in which I want a simple loader view, using a percentage to determine its width.
I've got the percentage code completed and I know it's working.
However, I'm having trouble setting the view's width constraint. I find the width by getting the frame width and multiplying it by the percentage, and I know it's computing the right width, but I can't seem to set the constraint from this function.
My code goes like this in my UIView subclass:
func initSubviews() {
    // In here I do some other stuff and have an asynchronous call to an API,
    // so then I've got this code calling the next function:
    DispatchQueue.main.async {
        self.setCompletionWidth(nextTimer: nextTimer!, oldDate: oldDate!)
    }
}

func setCompletionWidth(nextTimer: Date, oldDate: Date) {
    let date = Date()
    // calculatePercent returns something like 0.49
    let percent = calculatePercent(middleDate: date, endDate: nextTimer, originalDate: oldDate)
    // this is returning a correct value
    let width = (self.frame.width) * percent
    // completionView is the view I'm trying to change the width of
    self.completionView.widthAnchor.constraint(equalToConstant: width)
    self.layoutSubviews()
}
What's happening is that completionView isn't getting the right width.
In the setCompletionWidth function I've also tried:
self.completionView.widthAnchor.constraint(equalToConstant: width)
containerView.layoutSubviews()
and
UIView.animate(withDuration: 0.2) {
    self.completionView.widthAnchor.constraint(equalToConstant: width)
    self.containerView.layoutSubviews()
    // also tried self.layoutSubviews() here
}
and
self.layoutIfNeeded()
self.completionView.widthAnchor.constraint(equalToConstant: width)
self.layoutIfNeeded()
I'm expecting the width of the completionView to be something like 150, but instead it's 350, which is the original width it had.
I think what's happening is that the view isn't updating after I set the constraint to a different value. However, I can't for the life of me get it to update. I'd love some help here.
You need .isActive = true:
self.completionView.widthAnchor.constraint(equalToConstant: width).isActive = true
Also, to change the width later, you need to keep a reference to the width constraint, like:
var widthCon: NSLayoutConstraint!

widthCon = self.completionView.widthAnchor.constraint(equalToConstant: width)
widthCon.isActive = true
Then change the constant with
widthCon.constant = /// some value
self.superview!.layoutIfNeeded()
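Putting it together, a minimal sketch of this pattern applied to setCompletionWidth (assumption: completionView is laid out with Auto Layout, i.e. translatesAutoresizingMaskIntoConstraints is false):

var widthCon: NSLayoutConstraint!

func setCompletionWidth(nextTimer: Date, oldDate: Date) {
    let percent = calculatePercent(middleDate: Date(), endDate: nextTimer, originalDate: oldDate)
    let width = self.frame.width * percent

    if widthCon == nil {
        // Create and activate the constraint exactly once.
        widthCon = completionView.widthAnchor.constraint(equalToConstant: width)
        widthCon.isActive = true
    } else {
        // On later calls, only update the constant.
        widthCon.constant = width
    }
    self.layoutIfNeeded()
}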

Scaling an image OSX Swift

I'm currently trying to scale an image using Swift. This shouldn't be a difficult task, since I implemented a scaling solution in C# in 30 minutes; however, I've been stuck for 2 days now.
I've tried googling/crawling through Stack Overflow posts, but to no avail. The two main solutions I have seen people use are:
A function written in Swift to resize an NSImage proportionately
and
resizeNSImage.swift
An Obj C Implementation of the above link
So I would prefer to use the most efficient/least CPU-intensive solution, which according to my research is option 2. Because option 2 uses NSImage.lockFocus() and NSImage.unlockFocus(), the image scales fine on non-Retina Macs but comes out at double the size on Retina Macs. I know this is due to the pixel density of Retina Macs and is to be expected, but I need a scaling solution that ignores HiDPI specifications and just performs a normal scale operation.
This led me to do more research into option 1. It seems like a sound function; however, it literally doesn't scale the input image, and it then doubles the file size when I save the returned image (presumably due to pixel density). I found another Stack Overflow post with someone having the exact same problem as me, using the exact same implementation (found here). Of the two suggested answers, the first one doesn't work, and the second is the other implementation I've been trying to use.
If people could post Swift-ified answers, as opposed to Obj-C, I'd appreciate it very much!
EDIT:
Here's a copy of my implementation of the first solution - I've divided it into 2 functions:
func getSizeProportions(oWidth: CGFloat, oHeight: CGFloat) -> NSSize {
    var ratio: Float = 0.0
    let imageWidth = Float(oWidth)
    let imageHeight = Float(oHeight)
    var maxWidth = Float(0)
    var maxHeight = Float(600)

    if maxWidth == 0 {
        maxWidth = imageWidth
    }
    if maxHeight == 0 {
        maxHeight = imageHeight
    }

    // Get ratio (landscape or portrait)
    if imageWidth > imageHeight {
        // Landscape
        ratio = maxWidth / imageWidth
    } else {
        // Portrait
        ratio = maxHeight / imageHeight
    }

    // Calculate new size based on the ratio
    let newWidth = imageWidth * ratio
    let newHeight = imageHeight * ratio
    return NSMakeSize(CGFloat(newWidth), CGFloat(newHeight))
}
func resizeImage(image: NSImage) -> NSImage {
    print("original: ", image.size.width, image.size.height)

    // Cast the NSImage to a CGImage
    var imageRect: CGRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)

    // Create a new NSSize object with the newly calculated size
    let newSize = NSSize(width: CGFloat(450), height: CGFloat(600))
    //let newSize = getSizeProportions(oWidth: CGFloat(image.size.width), oHeight: CGFloat(image.size.height))

    // Create NSImage from the CGImage using the new size
    let imageWithNewSize = NSImage(cgImage: imageRef!, size: newSize)
    print("scaled: ", imageWithNewSize.size.width, imageWithNewSize.size.height)

    return NSImage(data: imageWithNewSize.tiffRepresentation!)!
}
EDIT 2:
As pointed out by Zneak, I need to save the returned image to disk. Using both implementations, my save function writes the file to disk successfully. Although I don't think my save function could be interfering with my current resizing implementation, I've attached it anyway, just in case:
func saveAction(image: NSImage, url: URL) {
    if let tiffdata = image.tiffRepresentation,
        let bitmaprep = NSBitmapImageRep(data: tiffdata) {

        let props = [NSImageCompressionFactor: Appearance.imageCompressionFactor]
        if let bitmapData = NSBitmapImageRep.representationOfImageReps(in: [bitmaprep], using: .JPEG, properties: props) {
            let path: NSString = "~/Desktop/out.jpg"
            let resolvedPath = path.expandingTildeInPath
            try! bitmapData.write(to: URL(fileURLWithPath: resolvedPath), options: [])
            print("Your image has been saved to \(resolvedPath)")
        }
    }
}
To anyone else experiencing this problem: I ended up spending countless hours trying to find a way to do this, and in the end I just get the scaling factor of the screen (1 for normal Macs, 2 for Retina Macs). The code looks like this:
func getScaleFactor() -> CGFloat {
    return NSScreen.main()!.backingScaleFactor
}
Then, once you have the scale factor, you either scale normally or halve the dimensions for Retina:
if scaleFactor == 2 {
    // Halve the size proportions when saving on Retina Macs
    return NSMakeSize(CGFloat(oWidth * ratio) / 2, CGFloat(oHeight * ratio) / 2)
} else {
    return NSMakeSize(CGFloat(oWidth * ratio), CGFloat(oHeight * ratio))
}
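For completeness, an alternative sketch that avoids lockFocus() entirely (assumptions: the goal is an exact pixel size regardless of the screen's backing scale, and Swift 3-era AppKit names are used to match the code above; this is not the asker's solution, just another common approach of drawing into an NSBitmapImageRep with explicit pixel dimensions):

func resizeImagePixelExact(_ image: NSImage, to newSize: NSSize) -> NSImage? {
    // A bitmap rep with an explicit pixel size, so backingScaleFactor never enters the picture.
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: Int(newSize.width),
                                     pixelsHigh: Int(newSize.height),
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: NSDeviceRGBColorSpace,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = newSize

    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.setCurrent(NSGraphicsContext(bitmapImageRep: rep))
    image.draw(in: NSRect(origin: .zero, size: newSize),
               from: .zero,
               operation: .copy,
               fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()

    let resized = NSImage(size: newSize)
    resized.addRepresentation(rep)
    return resized
}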

Changing JUST .scale in UIImage?

Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage
{
    let wrapperA: UIView = say, a picture
    let wrapperB: UIView = say, some text to go on top
    let mainSize = basicImage.bounds.size

    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)
    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)

    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed: UIImage = UIImage(cgImage: result!.cgImage!,
                                       scale: 1.0,
                                       orientation: result!.imageOrientation)
    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")

    // return result
    return resultFixed

    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to have a .scale of 1, but .scale is a read-only property.
The only thing I know how to do is make a whole new image copy ... but set the scale to 1 as it's being created.
Is there a better way?
Handy tip -
This was motivated by the following: say you're saving a large image to the user's album, and also allowing a UIActivityViewController so as to post to (for example) Instagram. As a general rule, it seems best to make the scale 1 before sending to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image in your Instagram post. In terms of saving it to the iOS photo album, it does seem to be harmless (perhaps better in some ways) to set the scale to 1. (I only say "better" because, if the image is ultimately, say, emailed to a friend on a PC, it causes less confusion if the scale is 1.) Interestingly though, if you just use the iOS Photos album, take a scale-2 or scale-3 image, and share it to Instagram, it does in fact appear properly on Instagram! (Perhaps Apple's Photos indeed knows it is best to make it scale 1 before sending it somewhere like Instagram.)
As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
That being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage from the context directly, through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size

    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    defer {
        UIGraphicsEndImageContext()
    }

    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // -- do drawing here --

    // get the CGImage from the context by calling makeImage() – then wrap in a UIImage
    // through using Optional's map(_:) (as makeImage() can return nil)
    // by default, the scale of the UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)