I was looking at this tutorial (https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture), and the problem is that when you rotate the device, the camera view doesn't change its frame size, so you end up with a black space.
Any ideas to solve this problem?
In addition, I made my own version of the app in SwiftUI (the problem persists there too), so solutions using SwiftUI are also appreciated!
Thanks in advance
I'm not sure how to do it in SwiftUI, but here's how in UIKit:
Try setting the videoGravity:
cameraView.videoPreviewLayer.videoGravity = .resizeAspectFill
Then, this should take care of the orientation. This is based on this answer.
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    let cameraPreviewTransform = self.cameraView.transform
    coordinator.animate { (context) in
        let deltaTransform = coordinator.targetTransform
        let deltaAngle: CGFloat = atan2(deltaTransform.b, deltaTransform.a)
        var currentRotation = atan2(cameraPreviewTransform.b, cameraPreviewTransform.a)
        // Adding a small value to the rotation angle forces the animation to occur in the desired direction,
        // preventing an issue where the view would appear to rotate 2π radians when going from landscape right to landscape left.
        currentRotation += -1 * deltaAngle + 0.0001
        self.cameraView.layer.setValue(currentRotation, forKeyPath: "transform.rotation.z")
        self.cameraView.layer.frame = self.view.bounds
    } completion: { (context) in
        let currentTransform: CGAffineTransform = self.cameraView.transform
        self.cameraView.transform = currentTransform
    }
}
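Since the question also asks about SwiftUI: a minimal sketch of one common approach (not from Apple's sample; the names CameraPreview and PreviewView, and the already-configured AVCaptureSession, are assumptions here) is to wrap a preview view in UIViewRepresentable so the preview layer is the view's backing layer and therefore always matches its bounds on rotation:
import SwiftUI
import AVFoundation

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    // A UIView whose backing layer is the preview layer, so layout and rotation
    // are handled automatically without extra transforms.
    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}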
I'm working on a basic photo editor that is supposed to zoom, rotate, and flip a photo. I'm using an image view (aspect fill) inside a scroll view, which lets me zoom easily. But when I try to rotate or flip, the result is not what I would expect: the image view keeps its original frame while the image appears to rotate, and the scroll view's zoom scale changes. Any suggestions on how to do this?
It would also be great to have suggestions on setting the image view's anchor point to match the scroll view's anchor point before transforming, because I don't want to display a different portion of the image after transforming, just the same portion of the image, but rotated.
View stack before transform:
View stack after applying rotation:
My code so far:
override func viewDidLoad() {
    super.viewDidLoad()
    scrollView.delegate = self
    setZoomScale()
    scrollView.zoomScale = scrollView.minimumZoomScale
}

@IBAction func rotateAnticlockwise(_ sender: UIButton) {
    rotationAngle -= 0.5
    transformImage()
}

func transformImage() {
    var transform = CGAffineTransform.identity
    transform = transform.rotated(by: .pi * rotationAngle)
    imageView.transform = transform
}

func setZoomScale() {
    let imageSize = imageView.image!.size
    let smallestDimension = min(imageSize.width, imageSize.height)
    scrollView.minimumZoomScale = scrollView.bounds.width / smallestDimension
    scrollView.maximumZoomScale = smallestDimension / scrollView.bounds.width
}
I think you are looking for something like:
imageView.transform = CGAffineTransform(rotationAngle: 0.5)
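If you also need the flip, one way is to build both into the same transform. A minimal sketch, reusing your rotationAngle and assuming a hypothetical isFlipped flag on the view controller:
func transformImage() {
    // Start from the rotation, then mirror horizontally if the photo was flipped.
    var transform = CGAffineTransform(rotationAngle: .pi * rotationAngle)
    if isFlipped {
        transform = transform.scaledBy(x: -1, y: 1) // horizontal flip
    }
    imageView.transform = transform
}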
I'm trying to set things up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, it sets the filter's center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)

if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0

let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using. Ignore the "frame" in there; it's just an image view in front of the first one that lets me save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue): UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape-looking) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled-down image view coordinates, and as you'd expect, multiplying back up gives you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind: images in AspectFit can also be scaled up!
One last note on CIImage: extent. Many times it's the UIImage's size. But many masks and generated output may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would be a CIVector(800,100).
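Putting the two edits together, here is a minimal sketch of that conversion (the helper name ciCenter is mine, and it assumes a .scaleAspectFit image view): scale the touch point into image coordinates, then flip Y for Core Image's bottom-left origin.
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size

    // Uniform scale factor used by aspect-fit.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)

    // Letterbox offsets: the displayed image is centered within the view.
    let xOffset = (viewSize.width - displayedSize.width) / 2
    let yOffset = (viewSize.height - displayedSize.height) / 2

    // Convert from view coordinates to image coordinates, bailing out if the
    // touch landed in the letterbox area outside the image.
    let xInImage = (touchPoint.x - xOffset) / scale
    let yInImage = (touchPoint.y - yOffset) / scale
    guard (0...imageSize.width).contains(xInImage), (0...imageSize.height).contains(yInImage) else { return nil }

    // Flip Y: UIKit's origin is top left, Core Image's is bottom left.
    return CIVector(x: xInImage, y: imageSize.height - yInImage)
}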
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            // Aspect-fit the image inside the drawable area.
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a custom UIColor extension (not shown) returning the color's components.
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // Set the blend mode to "source over" so that Core Image will use that.
            glEnable(0x0BE2)        // GL_BLEND
            glBlendFunc(1, 0x0303)  // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the ability to change the "backgroundColor" of the GLKView, which is why I subclassed it and called the property clearColor.
Between the two resources I linked to, you should have what you need for a well-performing, near real time use of Core Image using the GPU. One reason my aforementioned code that scales after getting the output of a filter was never updated? It didn't need to be.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.
I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason, whenever the image is cropped, it crops the wrong region. I've looked at just about every other post that deals with cropping.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL UR DL DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
    func rad(deg: CGFloat) -> CGFloat {
        return deg / 180.0 * CGFloat(M_PI)
    }

    // determine the orientation of the image and apply a transformation to the crop rectangle to shift it to the correct position
    var rectTransform: CGAffineTransform
    switch image.imageOrientation {
    case .Left:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
    case .Right:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
    case .Down:
        rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
    default:
        rectTransform = CGAffineTransformIdentity
    }

    // adjust the transformation scale based on the image scale
    rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)

    // apply the transformation to the rect to create a new, shifted rect
    let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)

    // use the rect to crop the image
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)

    // create a new UIImage and set the scale and orientation appropriately
    let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
    return result
}
Here are the functions to set and translate the mask view
func setTopMask() {
    let path = CGPathCreateWithRect(cropViewMask.frame, nil)
    topMaskLayer.path = path
    topImageView.layer.mask = topMaskLayer
}

func translateMask(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(self.view)
    sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
    // print(sender.translationInView(self.view))
    sender.setTranslation(CGPointZero, inView: self.view)
    // print("panned mask")
    if sender.state == .Ended {
        printFrames()
    }
}

func setCropMaskFrame() {
    let x = ulCorner.center.x
    let y = ulCorner.center.y
    let width = urCorner.center.x - ulCorner.center.x
    let height = blCorner.center.y - ulCorner.center.y
    cropViewMask.frame = CGRectMake(x, y, width, height)
    setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view which holds the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a complete solution, just sharing my experience; hope it helps to turn on some lightbulbs :)
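In the same spirit, here is a minimal sketch of what that alignment amounts to, assuming the image fills the image view exactly (the helper name imageSpaceRect is mine): scale the crop rect from the image view's coordinate space into the image's point space before cropping.
func imageSpaceRect(for viewRect: CGRect, in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    // How many image points correspond to one view point along each axis.
    let scaleX = image.size.width / imageView.bounds.width
    let scaleY = image.size.height / imageView.bounds.height
    return CGRect(x: viewRect.origin.x * scaleX,
                  y: viewRect.origin.y * scaleY,
                  width: viewRect.width * scaleX,
                  height: viewRect.height * scaleY)
}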
I was wondering how to set the radius/blur factor of iOS's new UIBlurEffectStyle.Light. I could not find anything in the documentation, but I want it to look similar to the classic UIImage+ImageEffects.h blur effect.
required init(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    let blur = UIBlurEffect(style: UIBlurEffectStyle.Light)
    let effectView = UIVisualEffectView(effect: blur)
    effectView.frame = frame
    addSubview(effectView)
}
Changing the alpha is not a perfect solution; it does not affect blur intensity. You can set up an animation from nil to the target blur effect and manually set the time offset to get the desired blur intensity. Unfortunately, iOS will reset the animation offset when the app returns from the background.
Thankfully there is a simple solution that works on iOS >= 10: you can use UIViewPropertyAnimator. I didn't notice any issues with using it; it keeps the custom blur intensity when the app returns from the background. Here is how you can implement it:
class CustomIntensityVisualEffectView: UIVisualEffectView {

    /// Create a visual effect view with the given effect and its intensity
    ///
    /// - Parameters:
    ///   - effect: visual effect, e.g. UIBlurEffect(style: .dark)
    ///   - intensity: custom intensity from 0.0 (no effect) to 1.0 (full effect) using a linear scale
    init(effect: UIVisualEffect, intensity: CGFloat) {
        super.init(effect: nil)
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [unowned self] in self.effect = effect }
        animator.fractionComplete = intensity
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError()
    }

    // MARK: Private

    private var animator: UIViewPropertyAnimator!
}
I also created a gist: https://gist.github.com/darrarski/29a2a4515508e385c90b3ffe6f975df7
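A quick usage sketch (containerView is just a placeholder for whatever view you want blurred):
let blurView = CustomIntensityVisualEffectView(effect: UIBlurEffect(style: .dark), intensity: 0.2)
blurView.frame = containerView.bounds
blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
containerView.addSubview(blurView)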
You can change the alpha of the UIVisualEffectView that you add your blur effect to.
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.Light)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.alpha = 0.5
blurEffectView.frame = self.view.bounds
self.view.addSubview(blurEffectView)
This is not a true solution, as it doesn't actually change the radius of the blur, but I have found that it gets the job done with very little work.
Although it is a hack and probably won't be accepted in the App Store, it is still possible. You have to subclass UIBlurEffect like this:
#import <objc/runtime.h>

@interface UIBlurEffect (Protected)
@property (nonatomic, readonly) id effectSettings;
@end

@interface MyBlurEffect : UIBlurEffect
@end

@implementation MyBlurEffect

+ (instancetype)effectWithStyle:(UIBlurEffectStyle)style
{
    id result = [super effectWithStyle:style];
    object_setClass(result, self);
    return result;
}

- (id)effectSettings
{
    id settings = [super effectSettings];
    [settings setValue:@50 forKey:@"blurRadius"];
    return settings;
}

- (id)copyWithZone:(NSZone*)zone
{
    id result = [super copyWithZone:zone];
    object_setClass(result, [self class]);
    return result;
}

@end
Here the blur radius is set to 50; you can change 50 to any value you need.
Then just use the MyBlurEffect class instead of UIBlurEffect when creating your effect for the UIVisualEffectView.
I recently developed the Bluuur library to dynamically change the blur radius of a UIVisualEffectView without using any private APIs: https://github.com/ML-Works/Bluuur
It uses a paused animation of setting the effect to achieve a changeable blur radius. The solution is based on this gist: https://gist.github.com/n00neimp0rtant/27829d87118d984232a4
And the main idea is:
// Freeze animation
blurView.layer.speed = 0;
blurView.effect = nil;

[UIView animateWithDuration:1.0 animations:^{
    blurView.effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
}];

// Set animation progress from 0 to 1
blurView.layer.timeOffset = 0.5;
UPDATE:
Apple introduced the UIViewPropertyAnimator class in iOS 10. That's exactly what we need to animate the .effect property of UIVisualEffectView. Hopefully the community will be able to back-port this functionality to previous iOS versions.
This is totally doable. Use CIFilter from the Core Image module to customize the blur radius. In fact, you can even achieve a blur effect with a continuously varying (aka gradient) blur radius (https://stackoverflow.com/a/51603339/3808183).
import CoreImage
let ciContext = CIContext(options: nil)
guard let inputImage = CIImage(image: yourUIImage),
let mask = CIFilter(name: "CIGaussianBlur") else { return }
mask.setValue(inputImage, forKey: kCIInputImageKey)
mask.setValue(10, forKey: kCIInputRadiusKey) // Set your blur radius here
guard let output = mask.outputImage,
let cgImage = ciContext.createCGImage(output, from: inputImage.extent) else { return }
outUIImage = UIImage(cgImage: cgImage)
I'm afraid there's no such API currently. In keeping with Apple's way of doing things, new functionality is often introduced with restrictions, and capabilities are rolled out gradually. Maybe that will be possible on iOS 9, or maybe 10...
I have an ultimate solution for this question:
fileprivate final class UIVisualEffectViewInterface {

    private var animator: UIViewPropertyAnimator!

    func setIntensity(effectView: UIVisualEffectView, intensity: CGFloat) {
        let effect = effectView.effect
        effectView.effect = nil
        animator = UIViewPropertyAnimator(duration: 1, curve: .linear) { [weak effectView] in effectView?.effect = effect }
        animator.fractionComplete = intensity
    }
}

extension UIVisualEffectView {

    private var key: UnsafeRawPointer? { UnsafeRawPointer(bitPattern: 16) }

    private var interface: UIVisualEffectViewInterface {
        if let key = key, let visualEffectViewInterface = objc_getAssociatedObject(self, key) as? UIVisualEffectViewInterface {
            return visualEffectViewInterface
        }
        let visualEffectViewInterface = UIVisualEffectViewInterface()
        if let key = key {
            objc_setAssociatedObject(self, key, visualEffectViewInterface, objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN)
        }
        return visualEffectViewInterface
    }

    func intensity(_ value: CGFloat) {
        interface.setIntensity(effectView: self, intensity: value)
    }
}
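Usage, as a quick sketch (0.3 is just an illustrative value):
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .light))
blurView.frame = view.bounds
view.addSubview(blurView)
blurView.intensity(0.3) // roughly 30% of the full blur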
This idea hit me after I tried the above solutions. It's a little hacky, but I got it working. Since we cannot modify the default radius, which is set at 50, we can just enlarge the view and scale it back down.
previewView.snp.makeConstraints { (make) in
    make.centerX.centerY.equalTo(self.view)
    make.width.height.equalTo(self.view).multipliedBy(4)
}

previewBlur.snp.makeConstraints { (make) in
    make.edges.equalTo(previewView)
}
And then,
previewView.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
previewBlur.transform = CGAffineTransform(scaleX: 0.25, y: 0.25)
I got a 12.5 blur radius. Hope this will help :-)
Currently I haven't found any real solution.
By the way, you can add a little hack to make the blur mask less "blurry", in this way:
let blurView = .. // here create the blur view as usual
if let blurSubviews = self.blurView?.subviews {
    for subview in blurSubviews {
        if let filterView = NSClassFromString("_UIVisualEffectFilterView") {
            if subview.isKindOfClass(filterView) {
                subview.hidden = true
            }
        }
    }
}
For iOS 11.*, in viewDidLoad():
let blurEffect = UIBlurEffect(style: .dark)
let blurEffectView = UIVisualEffectView()
view.addSubview(blurEffectView)

// always fill the view
blurEffectView.frame = self.view.bounds
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

UIView.animate(withDuration: 1) {
    blurEffectView.effect = blurEffect
}

blurEffectView.pauseAnimation(delay: 0.5)
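Note that pauseAnimation(delay:) is not a UIKit API; it's presumably a small helper along these lines (a sketch, not the original author's code), which freezes the layer's animations part-way so the blur stays partially applied:
extension UIVisualEffectView {
    func pauseAnimation(delay: Double) {
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            // Freeze the layer's animation clock at the current point in time.
            let pausedTime = self.layer.convertTime(CACurrentMediaTime(), from: nil)
            self.layer.speed = 0
            self.layer.timeOffset = pausedTime
        }
    }
}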
There is an undocumented way to do this. Not necessarily recommended, as it may get your app rejected by Apple. But it does work.
if let blurEffectType = NSClassFromString("_UICustomBlurEffect") as? UIBlurEffect.Type {
    let blurEffectInstance = blurEffectType.init()
    // set any value you want here. 40 is quite blurred
    blurEffectInstance.setValue(40, forKey: "blurRadius")
    let effectView: UIVisualEffectView = UIVisualEffectView(effect: blurEffectInstance)
    // Now you have your blurred visual effect view
}
This works for me. I put the UIVisualEffectView in a UIView before adding it to my view.
I made this function to make it easier to use; you can use it to blur any area in your view.
func addBlurArea(area: CGRect) {
    let effect = UIBlurEffect(style: UIBlurEffectStyle.Dark)
    let blurView = UIVisualEffectView(effect: effect)
    blurView.frame = CGRect(x: 0, y: 0, width: area.width, height: area.height)

    let container = UIView(frame: area)
    container.alpha = 0.8
    container.addSubview(blurView)
    self.view.insertSubview(container, atIndex: 1)
}
For example, you can blur your entire view by calling:
addBlurArea(self.view.frame)
You can change Dark to your desired blur style and 0.8 to your desired alpha value.
If you want to accomplish the same behaviour as the iOS Spotlight search, you just need to change the alpha value of the UIVisualEffectView (tested on the iOS 9 simulator).
The position of a UIView can obviously be determined by view.center or view.frame, etc., but this only returns the position of the UIView in relation to its immediate superview.
I need to determine the position of the UIView in the entire 320x480 coordinate system. For example, if the UIView is in a UITableViewCell, its position within the window could change dramatically regardless of the superview.
Any ideas if and how this is possible?
That's an easy one:
[aView convertPoint:localPosition toView:nil];
... converts a point in local coordinate space to window coordinates. You can use this method to calculate a view's origin in window space like this:
[aView.superview convertPoint:aView.frame.origin toView:nil];
2014 Edit: Looking at the popularity of Matt__C's comment, it seems reasonable to point out that the coordinates...
- don't change when rotating the device.
- always have their origin in the top left corner of the unrotated screen.
- are window coordinates: the coordinate system is defined by the bounds of the window. The screen's and device's coordinate systems are different and should not be mixed up with window coordinates.
Swift 5+:
let globalPoint = aView.superview?.convert(aView.frame.origin, to: nil)
Swift 3, with extension:
extension UIView {
    var globalPoint: CGPoint? {
        return self.superview?.convert(self.frame.origin, to: nil)
    }

    var globalFrame: CGRect? {
        return self.superview?.convert(self.frame, to: nil)
    }
}
In Swift:
let globalPoint = aView.superview?.convertPoint(aView.frame.origin, toView: nil)
Here is a combination of the answer by @Mohsenasm and a comment from @Ghigo, adapted to Swift:
extension UIView {
    var globalFrame: CGRect? {
        let rootView = UIApplication.shared.keyWindow?.rootViewController?.view
        return self.superview?.convert(self.frame, to: rootView)
    }
}
For me this code worked best:
private func getCoordinate(_ view: UIView) -> CGPoint {
    var x = view.frame.origin.x
    var y = view.frame.origin.y
    var oldView = view

    while let superView = oldView.superview {
        x += superView.frame.origin.x
        y += superView.frame.origin.y
        if superView.next is UIViewController {
            break // superView is the root view of a UIViewController
        }
        oldView = superView
    }

    return CGPoint(x: x, y: y)
}
Works well for me :)
extension UIView {
    var globalFrame: CGRect {
        return convert(bounds, to: window)
    }
}
This worked for me:
view.layoutIfNeeded() // this might be necessary depending on when you need to get the frame
guard let keyWindow = UIApplication.shared.windows.first(where: { $0.isKeyWindow }) else { return }
let frame = yourView.convert(yourView.bounds, to: keyWindow)
print("frame: ", frame)