AVCaptureVideoPreviewLayer and preview from camera position - swift

I'm developing an app that lets the user take photos.
I've started from the AVCam sample Apple provides, but I have a problem:
I simply cannot position the camera layer where I want; it is automatically centered in the view.
On the left side you can see what I actually have, on the right side what I'd like to have.
The view that contains the preview coming from the camera is a UIView subclass, and this is the code:
class AVPreviewView : UIView {
    override class func layerClass() -> AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    func session() -> AVCaptureSession {
        return (self.layer as AVCaptureVideoPreviewLayer).session
    }

    func setSession(session: AVCaptureSession) -> Void {
        (self.layer as AVCaptureVideoPreviewLayer).session = session
        (self.layer as AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravityResizeAspect
    }
}
Any help is appreciated

First, get your screen size so you can calculate the aspect ratio:
let screenWidth = UIScreen.mainScreen().bounds.size.width
let screenHeight = UIScreen.mainScreen().bounds.size.height

var aspectRatio: CGFloat = 1.0
var viewFinderHeight: CGFloat = 0.0
var viewFinderWidth: CGFloat = 0.0
var viewFinderMarginLeft: CGFloat = 0.0
var viewFinderMarginTop: CGFloat = 0.0
Now calculate the size of the preview layer.
func setSession(session: AVCaptureSession) -> Void {
    if screenWidth > screenHeight {
        aspectRatio = screenHeight / screenWidth * aspectRatio
        viewFinderWidth = self.bounds.width
        viewFinderHeight = self.bounds.height * aspectRatio
        viewFinderMarginTop *= aspectRatio
    } else {
        aspectRatio = screenWidth / screenHeight
        viewFinderWidth = self.bounds.width * aspectRatio
        viewFinderHeight = self.bounds.height
        viewFinderMarginLeft *= aspectRatio
    }
    (self.layer as AVCaptureVideoPreviewLayer).session = session
Set the layer's videoGravity to AVLayerVideoGravityResizeAspectFill so that the layer stretches to fill your custom view.
    (self.layer as AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravityResizeAspectFill
Finally, set the frame of your preview layer to the values calculated above, with any offset that you like.
    (self.layer as AVCaptureVideoPreviewLayer).frame = CGRectMake(viewFinderMarginLeft, viewFinderMarginTop, viewFinderWidth, viewFinderHeight)
}
This may take some tweaking since I haven't tested it live, but you should be able to create a more flexible VideoPreviewArea delimited by the bounds of your AVPreviewView.

What you're seeing isn't really the layer being positioned; it's that the aspect ratio of the view/layer doesn't match the aspect ratio of the camera, so the videoGravity property aspect-fills the content (which always implies centering).
When you create the layer, size it so that the aspect ratio is correct, then position it at will. Or, in this case, resize the view to the correct aspect ratio, then the view can be positioned at will.
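For illustration, here is a minimal sketch of that second approach. It assumes a 4:3 sensor aspect ratio and a view outlet named previewView inside a view controller (both are assumptions for the example, not from the question):
// Assumed sensor aspect ratio (height / width in portrait); use your device's actual format.
let cameraAspect: CGFloat = 4.0 / 3.0
let previewWidth = view.bounds.width
// Size the preview view to match the camera's aspect ratio, so videoGravity has nothing to letterbox.
let previewHeight = previewWidth * cameraAspect
// Now the view (and its preview layer) can be positioned wherever you like, e.g. pinned to the top.
previewView.frame = CGRect(x: 0, y: 0, width: previewWidth, height: previewHeight)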

I had a similar problem and I fixed it by doing this:
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
previewLayer.anchorPoint = videoLayer.bounds.origin
previewLayer.frame = CGRect(x: videoLayer.bounds.origin.x, y: videoLayer.bounds.origin.y, width: videoLayer.frame.size.width, height: videoLayer.frame.size.height)
videoLayer.layer.addSublayer(previewLayer)
captureSession.startRunning()
Hope this can help :)

I ran into this problem, and the code provided did not fix the situation even though the code was working. I have been building my app using Interface Builder.
In the Attributes Inspector, under Extend Edges, there are settings which allow the view to be extended under top bars and under bottom bars. My settings had these checked, which was throwing off the calculations for the positioning of the view.
Extend edges settings:
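If you prefer to do the same thing in code rather than in the storyboard, a minimal sketch of the programmatic counterpart of unchecking those boxes (in the relevant view controller):
override func viewDidLoad() {
    super.viewDidLoad()
    // Don't lay the view out under top or bottom bars,
    // so frame calculations match what is actually visible on screen.
    edgesForExtendedLayout = []
}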

Related

Resizing NSWindow doesn't autoresize its contentView correctly

I have an NSWindow with its contentView. In the awakeFromNib() of the NSWindow I have the following code:
override func awakeFromNib() {
    super.awakeFromNib()

    // Customize the window through XIBs
    self.title = "Main Window"
    let screenFrame = NSScreen.main?.frame
    let windowPercentage: CGFloat = 0.9
    let offset: CGFloat = (1.0 - windowPercentage) / 2.0
    let windowFrame: NSRect = NSRect(x: (screenFrame?.width)! * offset,
                                     y: (screenFrame?.height)! * offset,
                                     width: (screenFrame?.width)! * windowPercentage,
                                     height: (screenFrame?.height)! * windowPercentage)
    self.setFrame(windowFrame, display: true, animate: true)
    self.backgroundColor = NSColor.lightGray
    self.isRestorable = true

    // Customize the contentView
    let viewPercentage: CGFloat = 0.6
    self.contentView?.setFrameSize(NSSize(width: self.frame.size.width * viewPercentage,
                                          height: self.frame.size.height))
    self.contentView?.setFrameOrigin(NSMakePoint(((self.frame.width) - (self.contentView?.frame.width)!) / 2,
                                                 ((self.frame.height) - (self.contentView?.frame.height)!) / 2))
    self.contentView?.autoresizingMask = [.width, .height, .minXMargin, .maxXMargin, .maxYMargin, .minYMargin]
}
I am trying to set up the contentView centered and sized as a percentage of its NSWindow frame, but it fails when I resize the window. As soon as I start to resize the window, the contentView doesn't resize correctly, as you can see from the following image (the second one):
Image one
Image two
Should I override the resize(withOldSuperviewSize:) method to achieve this? (The autoresizing settings from Interface Builder don't resolve the issue either.)
You shouldn't attempt to change the size of the content view like that. I don't believe it's supported. The window controls the content view's size.
If you want a view of your own to occupy only a portion of the window's content area, you should add your view as a subview of the content view.
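For example, a minimal sketch of that approach inside the window's awakeFromNib (myView is a hypothetical view name, not from the question):
let myView = NSView()
if let contentView = self.contentView {
    // Give the subview 60% of the content view's width, centered horizontally.
    let horizontalInset = contentView.bounds.width * 0.2
    myView.frame = contentView.bounds.insetBy(dx: horizontalInset, dy: 0)
    // Let it track the content view as the window resizes.
    myView.autoresizingMask = [.width, .height]
    contentView.addSubview(myView)
}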

Rotate UIImageView inside UIScrollView in Swift

I'm working on a basic photo editor which is supposed to zoom, rotate and flip a photo. I'm using an image view (aspect fill) inside a scroll view, which allows me to zoom easily. But when I try to rotate or flip, the result is not what I would expect: the image view keeps its original frame and seems to rotate only the image. The scroll view's zoom scale changes. Any suggestions on how to do this?
It would also be great to have suggestions about setting the image view's anchor point to match the scroll view's anchor point before transforming, because I don't want to display a different portion of the image after transforming, just the same portion of the image, but rotated.
View stack before transform:
View stack after applying rotation:
My code so far:
override func viewDidLoad() {
    super.viewDidLoad()
    scrollView.delegate = self
    setZoomScale()
    scrollView.zoomScale = scrollView.minimumZoomScale
}

@IBAction func rotateAnticlockwise(_ sender: UIButton) {
    rotationAngle -= 0.5
    transformImage()
}

func transformImage() {
    var transform = CGAffineTransform.identity
    transform = transform.rotated(by: .pi * rotationAngle)
    imageView.transform = transform
}

func setZoomScale() {
    let imageSize = imageView.image!.size
    let smallestDimension = min(imageSize.width, imageSize.height)
    scrollView.minimumZoomScale = scrollView.bounds.width / smallestDimension
    scrollView.maximumZoomScale = smallestDimension / scrollView.bounds.width
}
I think you are looking for, e.g.:
imageView.transform = CGAffineTransform(rotationAngle: 0.5)
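If you want the rotation to play nicely with the scroll view, here is a minimal sketch along those lines, using the names from the question and assuming 90° steps (a sketch, not a tested solution):
@IBAction func rotateAnticlockwise(_ sender: UIButton) {
    rotationAngle -= 0.5
    // Rotating with a transform keeps the view's bounds but changes its frame
    // (the bounding box), so refresh the scroll view's content size afterwards.
    imageView.transform = CGAffineTransform(rotationAngle: .pi * rotationAngle)
    scrollView.contentSize = imageView.frame.size
}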

Images being flipped when adding to NSAttributedString

I have a strange problem when resizing an image that's in an NSAttributedString. The resizing extension is working fine, but when the image is added to the NSAttributedString, it gets flipped vertically for some reason.
This is the resizing extension:
extension NSImage {
    func resize(containerWidth: CGFloat) -> NSImage {
        var scale: CGFloat = 1.0
        let currentWidth = self.size.width
        let currentHeight = self.size.height
        if currentWidth > containerWidth {
            scale = (containerWidth * 0.9) / currentWidth
        }
        let newWidth = currentWidth * scale
        let newHeight = currentHeight * scale
        self.size = NSSize(width: newWidth, height: newHeight)
        return self
    }
}
And here is the enumeration over the images in the attributed string:
newAttributedString.enumerateAttribute(NSAttributedStringKey.attachment, in: NSMakeRange(0, newAttributedString.length), options: []) { value, range, stop in
    if let attachement = value as? NSTextAttachment {
        let image = attachement.image(forBounds: attachement.bounds, textContainer: NSTextContainer(), characterIndex: range.location)!
        let newImage = image.resize(containerWidth: markdown.bounds.width)
        let newAttribute = NSTextAttachment()
        newAttribute.image = newImage
        newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
    }
}
I've set breakpoints and inspected the images, and they are all in the correct rotation, except when it reaches this line:
newAttributedString.addAttribute(NSAttributedStringKey.attachment, value: newAttribute, range: range)
where the image gets flipped vertically.
I have no clue what could be causing this vertical flip. Is there a way to fix this?
If you look at the developer docs for NSTextAttachment:
https://developer.apple.com/documentation/uikit/nstextattachment
The bounds parameter is defined as follows:
“Defines the layout bounds of the receiver's graphical representation in the text coordinate system.”
I know that when using CoreText to lay out text you need to flip the coordinates, so I should imagine you need to transform your bounds parameter with a vertical reflection too.
Hope that helps.
I figured it out and it was so much simpler than I was making it.
Because the image was in an NSAttributedString being appended to an NSTextView, I didn't need to resize each image in the NSAttributedString; rather, I just had to set the attachment scaling inside the NSTextView with
markdown.layoutManager?.defaultAttachmentScaling = NSImageScaling.scaleProportionallyDown
One line is all it took

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online. It allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale to fill" regardless of what the content mode of the image view is. I suspect the reason for this is that I draw the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using this extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fit within a bounds with a size governed by the passed size. Also keeps the aspect ratio.
    /// Switch MIN to MAX for aspect fill instead of fit.
    ///
    /// - parameter newSize: newSize the size of the bounds the image must fit within.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)

        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0

        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return scaledImage!
    }
}
This is my current function I'm using for drawing the image on screen so that I can save it to the camera roll (this function combines two images, a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass in and out variables like this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")
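For completeness, here is a minimal sketch of how that call might slot into drawImagesAndText, using the names from the question (untested, just to illustrate where the extension is invoked):
func drawImagesAndText() {
    let targetSize = CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    img = renderer.image { ctx in
        // Scale the photo through the extension so it keeps its aspect ratio
        // instead of being stretched to the image view's bounds.
        let scaledBackground = currentImage?.scaleImageToSize(newSize: targetSize)
        scaledBackground?.draw(in: CGRect(origin: .zero, size: targetSize))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}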

Image Cropping grabbing the wrong portion of UIImage during crop

I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason, whenever the image is cropped, it grabs the wrong region. I've looked at just about every other post on cropping to deal with this.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL UR DL DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }

    // determine the orientation of the image and apply a transformation to the crop rectangle to shift it to the correct position
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }

    // adjust the transformation scale based on the image scale
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)

    // apply the transformation to the rect to create a new, shifted rect
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)

    // use the rect to crop the image
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)

    // create a new UIImage and set the scale and orientation appropriately
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions to set and translate the mask view:
func setTopMask() {
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}

func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    // print(sender.translationInView(self.view))
    sender.setTranslation(CGPointZero, inView: self.view)
    // print("panned mask")
    if sender.state == .Ended {
        printFrames()
    }
}

func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view that holds the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
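Along the same lines, here is a minimal sketch of mapping the on-screen crop frame into the image's own coordinate space before cropping. The names are illustrative, and it assumes the image view displays the image without letterboxing (i.e. the view's bounds and the image share the same aspect ratio, as in the answer above):
func imageSpaceRect(for viewRect: CGRect, in imageView: UIImageView, image: UIImage) -> CGRect {
    // Ratio between the image's size in points and the size it is displayed at.
    let scaleX = image.size.width / imageView.bounds.width
    let scaleY = image.size.height / imageView.bounds.height
    return CGRect(x: viewRect.origin.x * scaleX,
                  y: viewRect.origin.y * scaleY,
                  width: viewRect.width * scaleX,
                  height: viewRect.height * scaleY)
}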