I am working on UI tests for several system alerts in a row (e.g. a video app requesting permission for the Camera, Microphone, and Photos). With a sample project, it seems the new method addUIInterruptionMonitorWithDescription does not work in Landscape mode.
I came across the post Swift UI Test - User Notifications System Alert, but my case is different.
My code looks like this:
let desc = "\u{201c}Alert\u{201d} Would Like to Access the Camera"
let app = XCUIApplication()
addUIInterruptionMonitorWithDescription(desc) { (alert) -> Bool in
    let okButton = alert.buttons["OK"]
    print(okButton.frame)
    okButton.tap()
    return true
}
app.buttons["Alert"].tap()
It works in Portrait but not in Landscape. The issue can be reproduced in both the Simulator and on a device.
Moreover, the okButton.frame I get in Portrait is
CGRect
▿ origin : CGPoint
- x : 207.0
- y : 387.666666666667
▿ size : CGSize
- width : 135.0
- height : 44.0
but in Landscape the frame comes back with the width and height swapped:
CGRect
▿ origin : CGPoint
- x : 143.333333333333
- y : 368.0
▿ size : CGSize
- width : 44.0
- height : 135.0
The test failure error I get is this:
test failure: -[AlertUITests testExample()] failed: UI Testing Failure - Failed to scroll to visible (by AX action) Button 0x14df73840: traits: 8589934593, {{277.0, 345.0}, {46.0, 30.0}}, label: 'Button', error: Error -25204 performing AXAction 2003
Any ideas?
EDIT 1
Submitted to Radar rdar://23931990
It's a framework bug.
Try using something like this:
Helpers:
var device: XCUIDevice { return XCUIDevice.shared() }

extension XCUIDevice {
    // Crude device-class check: a compact size class in either axis means iPhone.
    func type() -> String {
        let window = XCUIApplication().windows.element(boundBy: 0)
        if window.horizontalSizeClass == .compact || window.verticalSizeClass == .compact {
            return "iPhone"
        } else {
            return "iPad"
        }
    }
}
Method:
func handleOrientation(_ orientationAltered: Bool) -> Bool {
    // The workaround is only needed on iPad.
    guard device.type() == "iPad" else { return false }
    if !orientationAltered && device.orientation == .landscapeLeft {
        // Force portrait so the interruption monitor can tap the alert.
        device.orientation = .portrait
        return true
    } else if orientationAltered && (device.orientation == .landscapeLeft || device.orientation == .landscapeRight) {
        device.orientation = .landscapeLeft
        return false
    } else {
        return false
    }
}
Related
I have the zoom feature working (1x onwards) for a custom camera implemented using AVFoundation. This works fine up to the iPhone X models, but I also want 0.5x zoom on the iPhone 11 and iPhone 11 Pro.
The code I wrote does not manage to reach 0.5x zoom. I have tried all the possible combinations of [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera]; even the capture device with device type .builtInUltraWideCamera does not report 0.5 for minAvailableVideoZoomFactor.
While testing on an iPhone 11, I also removed [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera] from the deviceTypes.
I would appreciate any help solving this. Below is the code, which works for 1x zoom onwards.
/// Called from -handlePinchGesture
private func zoom(_ scale: CGFloat) {
    let captureDevice = cameraDevice(.back)
    do {
        try captureDevice?.lockForConfiguration()
        var minZoomFactor: CGFloat = captureDevice?.minAvailableVideoZoomFactor ?? 1.0
        let maxZoomFactor: CGFloat = captureDevice?.maxAvailableVideoZoomFactor ?? 1.0
        if #available(iOS 13.0, *) {
            if captureDevice?.deviceType == .builtInDualWideCamera || captureDevice?.deviceType == .builtInTripleCamera || captureDevice?.deviceType == .builtInUltraWideCamera {
                minZoomFactor = 0.5
            }
        }
        zoomScale = max(minZoomFactor, min(beginZoomScale * scale, maxZoomFactor))
        captureDevice?.videoZoomFactor = zoomScale
        captureDevice?.unlockForConfiguration()
    } catch {
        print("ERROR: locking configuration")
    }
}
@objc private func handlePinchGesture(_ recognizer: UIPinchGestureRecognizer) {
    var allTouchesOnPreviewLayer = true
    let numTouch = recognizer.numberOfTouches
    for i in 0 ..< numTouch {
        let location = recognizer.location(ofTouch: i, in: view)
        let convertedTouch = previewLayer.convert(location, from: previewLayer.superlayer)
        if !previewLayer.contains(convertedTouch) {
            allTouchesOnPreviewLayer = false
            break
        }
    }
    if allTouchesOnPreviewLayer {
        zoom(recognizer.scale)
    }
}
func cameraDevice(_ position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    var deviceTypes = [AVCaptureDevice.DeviceType]()
    deviceTypes.append(contentsOf: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera])
    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInTripleCamera, .builtInDualWideCamera, .builtInUltraWideCamera])
    }
    let availableCameraDevices = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: position).devices
    guard availableCameraDevices.isEmpty == false else {
        debugPrint("ERROR: No camera devices found!!!")
        return nil
    }
    for device in availableCameraDevices {
        if device.position == position {
            return device
        }
    }
    guard let defaultDevice = AVCaptureDevice.default(for: AVMediaType.video) else {
        debugPrint("ERROR: Can't initialize default back camera!!!")
        return nil
    }
    return defaultDevice
}
Update for people who are looking to set the optical zoom level to 0.5x
courtesy: https://github.com/NextLevel/NextLevel/issues/187
public class func primaryVideoDevice(forPosition position: AVCaptureDevice.Position) -> AVCaptureDevice? {
    // -- Changes begin
    if #available(iOS 13.0, *) {
        let hasUltraWideCamera: Bool = true // Set this variable to true if your device is one of the following - iPhone 11, iPhone 11 Pro, & iPhone 11 Pro Max
        if hasUltraWideCamera {
            // Your iPhone has an ultra-wide camera.
            let deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInUltraWideCamera]
            let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
            return discoverySession.devices.first
        }
    }
    // -- Changes end
    var deviceTypes: [AVCaptureDevice.DeviceType] = [AVCaptureDevice.DeviceType.builtInWideAngleCamera]
    if #available(iOS 11.0, *) {
        deviceTypes.append(.builtInDualCamera)
    } else {
        deviceTypes.append(.builtInDuoCamera)
    }
    // Prioritize dual-camera systems before wide angle.
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: AVMediaType.video, position: position)
    for device in discoverySession.devices {
        if #available(iOS 11.0, *) {
            if device.deviceType == AVCaptureDevice.DeviceType.builtInDualCamera {
                return device
            }
        } else {
            if device.deviceType == AVCaptureDevice.DeviceType.builtInDuoCamera {
                return device
            }
        }
    }
    return discoverySession.devices.first
}
The minimum videoZoomFactor of an AVCaptureDevice can't be less than 1.0 according to the Apple docs. It's a little confusing because, depending on which camera you've selected, a zoom factor of 1 corresponds to a different field of view or optical angle of view. The default iPhone camera app shows a label reading "0.5", but that's just a label for the ultra-wide lens in relation to the standard camera's zoom factor.
You're already reading minAvailableVideoZoomFactor from the device (which will probably be 1), so you should use the device's min and max to bound the factor you assign to captureDevice.videoZoomFactor. Then, once you've selected the ultra-wide lens, setting the zoom factor to 1 is as wide as you can go (a factor of 0.5 in relation to the standard lens's field of view).
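As a minimal sketch of that point (assuming a device with an ultra-wide camera, i.e. iPhone 11 or later): selecting the ultra-wide device and setting its zoom factor to 1.0 yields the widest field of view, which is the stock app's "0.5x".

if #available(iOS 13.0, *),
   let ultraWide = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) {
    do {
        try ultraWide.lockForConfiguration()
        // On the ultra-wide camera, a zoom factor of 1.0 is already the widest view.
        ultraWide.videoZoomFactor = max(ultraWide.minAvailableVideoZoomFactor, 1.0)
        ultraWide.unlockForConfiguration()
    } catch {
        print("ERROR: locking configuration: \(error)")
    }
}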
The problem is that when you try to get a device of some type from discoverySession.devices, it can return a default device that does not support the ultra-wide lens you need.
That was the case for me on an iPhone 12 Pro Max: it returned only one device for the back position, reporting the type builtInWideAngleCamera, but that was misleading. It was the middle camera, neither ultra-wide nor telephoto. I don't know why Apple made it behave like that; it looks like outdated legacy architecture.
The solution was not obvious: use AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) to get the real device capable of zooming from 1 (your logical 0.5).
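A minimal sketch of that idea with a fallback chain (the triple- and dual-wide virtual devices are iOS 13+; older hardware falls back to the plain wide-angle camera):

import AVFoundation

func bestBackCamera() -> AVCaptureDevice? {
    if #available(iOS 13.0, *) {
        // Prefer the virtual multi-camera devices, which include the ultra-wide lens.
        if let triple = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            return triple
        }
        if let dualWide = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
            return dualWide
        }
    }
    return AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
}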
We cannot set the zoom factor to less than 1.0.
I resolved this issue by using .builtInDualWideCamera.
In this case the ultra-wide camera is used, and a zoom factor of 2.0 (which becomes the default value) matches the normal "1x" of the wide-angle camera, while the minimum value of 1.0 corresponds to "0.5x".
If the iPhone doesn't support .builtInDualWideCamera, we fall back to .builtInWideAngleCamera as usual, and the zoom factor is 1.0 (the minimum value).
func getCameraDevices() -> [AVCaptureDevice] {
    var deviceTypes = [AVCaptureDevice.DeviceType]()
    if #available(iOS 13.0, *) {
        deviceTypes.append(contentsOf: [.builtInDualWideCamera])
        self.isUltraWideCamera = true
        self.defaultZoomFactor = 2.0
    }
    if deviceTypes.isEmpty {
        deviceTypes.append(contentsOf: [.builtInWideAngleCamera])
        self.isUltraWideCamera = false
        self.defaultZoomFactor = 1.0
    }
    return AVCaptureDevice.DiscoverySession(deviceTypes: deviceTypes, mediaType: .video, position: .unspecified).devices
}
I have struggled with odd differences in behavior between iPhone and iPad (in the simulators as well as on real devices), and despite trying different ways to diagnose this, with many visits to Stack Overflow, I am still struggling to find the root cause. Specifically, I have a simple Test view controller that performs as expected on iPad, but the same code behaves differently, and not as expected, on iPhone. I have one UIImageView centered on each device in portrait mode with 10px margins left, right, and top. When I rotate the device, the objective is to resize the image in landscape so these margins remain 10px, i.e. the image gets scaled to fit the new geometry and always appears in its original orientation. The iPad does this perfectly without a lot of code. The iPhone performs the scaling correctly, but the image does not stay in its original orientation: it rotates with the device. How can the same code produce two different results?
I can work around this by detecting iPhone and writing code to rotate the image and determine the new origin for placement; in fact, I have this working. However, it doesn't seem right to have different logic for iPhone versus iPad.
Some details: I am using Swift 5, Xcode 12, macOS 10.15.6, Simulator 11.5, an iPhone 11 with iOS 14.0.1, and an iPad 7th Gen with iOS 14.0.1.
I am using Interface Builder to build the layout initially and linking to code with an IBOutlet; however, I set translatesAutoresizingMaskIntoConstraints = false and use anchor constraints programmatically to place the UIImageView. I use Notification Center to add and remove an observer for rotation events, together with begin/endGeneratingDeviceOrientationNotifications(). I override shouldAutorotate as true, supportedInterfaceOrientations as .all, and preferredInterfaceOrientationForPresentation as portrait in my Test VC, as well as creating extensions for UINavigationController and UITabBarController in SceneDelegate to propagate these values, given that Test VC is embedded in a navigation controller and uses a tab bar. The Info.plist lists all four modes for supported interface orientations, and in the project's General tab, iPhone and iPad are selected as deployable with all four Device Orientation modes unselected.
I can add more code here if helpful, as well as screenshots. If anyone has had a similar experience or any ideas about this, I would be grateful! Here is the code for the Test VC:
import UIKit

class Test: UIViewController {

    @IBOutlet weak var testImage: UIImageView!

    let debug = true
    let program = "TestViewController"
    var deviceSize = CGRect.zero
    var deviceWidth: CGFloat = 0
    var deviceHeight: CGFloat = 0
    let imageAsset = UIImage(named: "Cera.jpg")
    var aspectRatio: CGFloat = 0.0

    override var shouldAutorotate: Bool { return true }
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask { return UIInterfaceOrientationMask.all }
    override var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation { return UIInterfaceOrientation.portrait }

    // This routine is triggered the first time this view controller is loaded
    override func viewDidLoad() {
        super.viewDidLoad()
        let rtn = "viewDidLoad"
        top = self.view.topAnchor
        lead = self.view.leadingAnchor
        deviceSize = UIScreen.main.bounds
        deviceWidth = deviceSize.width
        deviceHeight = deviceSize.height
        UIDevice.current.beginGeneratingDeviceOrientationNotifications()
        NotificationCenter.default.addObserver(self, selector: #selector(deviceRotated), name: UIDevice.orientationDidChangeNotification, object: nil)
        if debug { print(">>> \(program): \(rtn): device width[\(deviceWidth)] device height[\(deviceHeight)]") }
        determineOrientation()
        if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
        testImage.image = imageAsset
        let imageWidth = testImage.image!.size.width
        let imageHeight = testImage.image!.size.height
        aspectRatio = imageHeight / imageWidth
        calculateContraints()
    }

    // This routine is triggered every time this view controller is presented
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let rtn = "viewWillAppear"
        if debug { print(">>> \(program): \(rtn): device width[\(deviceWidth)] device height[\(deviceHeight)]") }
        determineOrientation()
        if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
    }

    // This routine removes the observer for rotation events
    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        NotificationCenter.default.removeObserver(self, name: UIDevice.orientationDidChangeNotification, object: nil)
        UIDevice.current.endGeneratingDeviceOrientationNotifications()
    }

    var orientation = "Portrait"
    var rotatedDeviceWidth: CGFloat = 0
    var rotatedDeviceHeight: CGFloat = 0

    // This routine determines the "orientation" value and the rotated dimensions
    func determineOrientation() {
        let rtn = "determineOrientation"
        if debug { print(">>> \(program): \(rtn)") }
        if UIDevice.current.orientation == UIDeviceOrientation.portrait { orientation = "Portrait" }
        if UIDevice.current.orientation == UIDeviceOrientation.landscapeLeft { orientation = "LandscapeLeft" }
        if UIDevice.current.orientation == UIDeviceOrientation.landscapeRight { orientation = "LandscapeRight" }
        if UIDevice.current.orientation == UIDeviceOrientation.portraitUpsideDown { orientation = "PortraitUpsideDown" }
        if orientation == "Portrait" || orientation == "PortraitUpsideDown" {
            rotatedDeviceWidth = deviceWidth
            rotatedDeviceHeight = deviceHeight
        } else {
            rotatedDeviceWidth = deviceHeight
            rotatedDeviceHeight = deviceWidth
        }
    }

    var imageWidth: CGFloat = 0
    var imageHeight: CGFloat = 0
    var imageXpos: CGFloat = 0
    var imageYpos: CGFloat = 0
    var v: CGFloat = 0
    var h: CGFloat = 0
    var w: CGFloat = 0
    var ht: CGFloat = 0

    // This routine determines the position of the display object "testImage"
    func calculateContraints() {
        let rtn = "calculateContraints"
        if debug { print(">>> \(program): \(rtn): orientation[\(orientation)]") }
        if orientation == "Portrait" {
            imageWidth = deviceWidth / 2 - 20
            imageHeight = imageWidth * CGFloat(aspectRatio)
            imageXpos = 10
            imageYpos = 10
            if debug { print(">>> \(imageWidth): \(imageHeight)") }
        }
        if orientation == "LandscapeLeft" {
            imageWidth = rotatedDeviceWidth / 2 - 20
            imageHeight = imageWidth * CGFloat(aspectRatio)
            imageXpos = 10
            imageYpos = 10
            if debug { print(">>> \(imageWidth): \(imageHeight)") }
        }
        if orientation == "LandscapeRight" {
            imageWidth = rotatedDeviceWidth / 2 - 20
            imageHeight = imageWidth * CGFloat(aspectRatio)
            imageXpos = 10
            imageYpos = 10
            if debug { print(">>> \(imageWidth): \(imageHeight)") }
        }
        if orientation == "PortraitUpsideDown" {
            imageWidth = deviceWidth / 2 - 20
            imageHeight = imageWidth * CGFloat(aspectRatio)
            imageXpos = 10
            imageYpos = 10
            if debug { print(">>> \(imageWidth): \(imageHeight)") }
        }
        layoutConstraints(v: imageXpos, h: imageYpos, w: imageWidth, ht: imageHeight)
    }

    var testImageTopConstraint: NSLayoutConstraint!
    var testImageLeftConstraint: NSLayoutConstraint!
    var testImageWidthConstraint: NSLayoutConstraint!
    var testImageHeightConstraint: NSLayoutConstraint!
    var top: NSLayoutYAxisAnchor!
    var lead: NSLayoutXAxisAnchor!
    var trail: NSLayoutXAxisAnchor!
    var bot: NSLayoutYAxisAnchor!

    // This routine lays out the display object "testImage"
    func layoutConstraints(v: CGFloat, h: CGFloat, w: CGFloat, ht: CGFloat) {
        let rtn = "layoutConstraints"
        if debug { print(">>> \(program): \(rtn)") }
        testImage.translatesAutoresizingMaskIntoConstraints = false
        if testImageTopConstraint != nil { testImageTopConstraint.isActive = false }
        if testImageLeftConstraint != nil { testImageLeftConstraint.isActive = false }
        if testImageWidthConstraint != nil { testImageWidthConstraint.isActive = false }
        if testImageHeightConstraint != nil { testImageHeightConstraint.isActive = false }
        testImageTopConstraint = testImage.topAnchor.constraint(equalTo: top, constant: v)
        testImageLeftConstraint = testImage.leadingAnchor.constraint(equalTo: lead, constant: h)
        testImageWidthConstraint = testImage.widthAnchor.constraint(equalToConstant: w)
        testImageHeightConstraint = testImage.heightAnchor.constraint(equalToConstant: ht)
        testImageTopConstraint.isActive = true
        testImageLeftConstraint.isActive = true
        testImageWidthConstraint.isActive = true
        testImageHeightConstraint.isActive = true
    }
}
@objc extension Test {
    func deviceRotated(_ notification: NSNotification) {
        let device = notification.object as! UIDevice
        let deviceOrientation = device.orientation
        switch deviceOrientation {
        case .landscapeLeft: print("<<<Landscape Left>>>")
        case .landscapeRight: print("<<<Landscape Right>>>")
        case .portrait: print("<<<Portrait>>>")
        case .portraitUpsideDown: print("<<<Portrait Upside Down>>>")
        case .faceDown: print("<<<Face Down>>>")
        case .faceUp: print("<<<Face Up>>>")
        case .unknown: print("<<<Unknown>>>")
        @unknown default: print("<<<Default>>>")
        }
        let rtn = "deviceRotated2"
        determineOrientation()
        if debug { print(">>> \(program): \(rtn): Device rotated to: \(orientation)") }
        if debug { print(">>> \(program): \(rtn): rotated device width[\(rotatedDeviceWidth)] rotated device height[\(rotatedDeviceHeight)]") }
        calculateContraints()
    }
}
Here is the code in SceneDelegate.swift
extension UINavigationController {
    override open var shouldAutorotate: Bool {
        if let visibleVC = visibleViewController { return visibleVC.shouldAutorotate }
        return super.shouldAutorotate
    }
    override open var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
        if let visibleVC = visibleViewController { return visibleVC.preferredInterfaceOrientationForPresentation }
        return super.preferredInterfaceOrientationForPresentation
    }
    override open var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        if let visibleVC = visibleViewController { return visibleVC.supportedInterfaceOrientations }
        return super.supportedInterfaceOrientations
    }
}

// ===================================================================================
// UITabBarController Extension - used to manage tab bar style
//
extension UITabBarController {
    open override var childForStatusBarStyle: UIViewController? {
        return selectedViewController?.childForStatusBarStyle ?? selectedViewController
    }
}

// ===================================================================================
// UITabBarController Extension - used to manage rotation
//
extension UITabBarController {
    override open var shouldAutorotate: Bool {
        if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.shouldAutorotate }
        return super.shouldAutorotate
    }
    override open var preferredInterfaceOrientationForPresentation: UIInterfaceOrientation {
        if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.preferredInterfaceOrientationForPresentation }
        return super.preferredInterfaceOrientationForPresentation
    }
    override open var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        if let viewController = self.viewControllers?[self.selectedIndex] { return viewController.supportedInterfaceOrientations }
        return super.supportedInterfaceOrientations
    }
}
Here are the rotation results for the iPhone in the simulator:
Cera rotations for iPhone
... and iPad:
Cera rotations for iPad
I am working on an iPhone application and I have multiple UITextFields for input.
The problem occurs on small devices only, like the iPhone 5 and 6: when the keyboard appears, the bottom text fields are hidden.
It works fine on big iPhone screens like the XR and XS Max.
How can I add a condition that checks whether the bottom text fields are hidden or not?
guard let keyboardReact = (notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue else {
    return
}
let screen = view.frame.size.height - keyboardReact.height
let safeAreHeight = self.view.frame.height - self.topLayoutGuide.length - self.bottomLayoutGuide.length
if safeAreHeight + keyboardReact.height > view.frame.size.height {
    if currentTappedTextField == phoneTextField || currentTappedTextField == employeeEmailTextField || currentTappedTextField == relationTextField {
        if notification.name == UIResponder.keyboardWillShowNotification || notification.name == UIResponder.keyboardWillChangeFrameNotification {
            view.frame.origin.y = -(keyboardReact.height)
        } else {
            view.frame.origin.y = 0
        }
    }
}
This works on all screen sizes, but I want the view to move only when the keyboard would actually hide the text fields.
Now you can calculate your keyboard height and move your view accordingly:
func liftViewUp(notification: NSNotification) {
    if let keyboardSize = (notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue {
        // manage your view accordingly here
        if currentTappedTextField == phoneTextField || currentTappedTextField == employeeEmailTextField || currentTappedTextField == relationTextField {
            if notification.name == UIResponder.keyboardWillShowNotification || notification.name == UIResponder.keyboardWillChangeFrameNotification {
                let textFieldPosition = currentTappedTextField.frame.origin.y + currentTappedTextField.frame.size.height
                // check whether the text field would be hidden behind the keyboard
                if textFieldPosition > (view.frame.size.height - keyboardSize.height) {
                    view.frame.origin.y = -(keyboardSize.height)
                } else {
                    view.frame.origin.y = 0
                }
            } else {
                view.frame.origin.y = 0
            }
        }
    }
}
Or you can try the third-party library IQKeyboardManager.
You may find an answer here.
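For reference, a minimal sketch of enabling it (assuming the IQKeyboardManagerSwift pod; the exact API may differ between versions). One line in the AppDelegate handles keyboard avoidance app-wide:

import IQKeyboardManagerSwift

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    IQKeyboardManager.shared.enable = true
    return true
}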
For the app I'm working on, I need to provide functionality that lets users apply filters to their videos (not in real time; the filters are applied to a saved video whose filePath is provided).
addFilterToVideo is called when the user taps on a filter; a video composition is passed as a parameter to the initPlayer function, and if the "none" filter is tapped, nil is passed.
Whenever a new filter is tapped, I just change the video composition of the playerItem; the file is loaded only the first time.
func addFilterToVideo(filterName: String) {
    if filterName != "" {
        let filter = CIFilter(name: filterName)
        if #available(iOS 9.0, *) {
            let composition = AVVideoComposition(asset: (self.playerItem?.asset)!) { (request) in
                let source = request.sourceImage.clampingToExtent()
                filter?.setValue(source, forKey: kCIInputImageKey)
                let output = filter?.outputImage!.cropping(to: request.sourceImage.extent)
                request.finish(with: output!, context: FilterView.context)
            }
            self.selectedComposition = composition
            self.initPlayer(composition: composition)
        } else {
            // Fallback on earlier versions
        }
    } else {
        self.selectedComposition = nil
        self.initPlayer(composition: nil)
    }
}

func playerSetup() {
    self.playerItem = AVPlayerItem(url: URL(fileURLWithPath: self.filePath!))
    self.player = AVPlayer(playerItem: playerItem)
    self.playerLayer.player = self.player
    self.playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.playerLayer.contentsGravity = kCAGravityResizeAspectFill
    self.layer.addSublayer(self.playerLayer)
    self.player?.play()
    flag = true
}

func initPlayer(composition: AVVideoComposition?) {
    if composition != nil {
        if !flag {
            self.playerSetup()
        }
        playerItem?.videoComposition = composition
    } else {
        self.playerSetup()
    }
}
So the issue is that the video gets rotated, like this:
check the screen recording here
But when I tried using a sample video that I added to the project, it worked fine:
check the screen recording here
So I checked the preferredTransform of both by importing them as AVAssets.
For the video file recorded by the device:
(lldb) po videoTrack?.preferredTransform
- a : 0.0
- b : 1.0
- c : -1.0
- d : 0.0
- tx : 1080.0
- ty : 0.0
(lldb) po videoTrack?.naturalSize
- width : 1920.0
- height : 1080.0
For the video that I added to the project
(lldb) po videoTrack?.preferredTransform
- a : 1.0
- b : 0.0
- c : 0.0
- d : 1.0
- tx : 0.0
- ty : 0.0
(lldb) po videoTrack?.naturalSize
- width : 1080.0
- height : 1920.0
So the issue is with the video at the filePath: it has a preferredTransform that rotates the video by 90 degrees, and even the width and height are swapped. I'm not sure how to handle that. I tried applying a CGAffineTransform to the playerLayer; it did rotate the video, but the aspect ratio was still off. And what does it mean when an asset has a preferredTransform that is not the identity?
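(For context: a non-identity preferredTransform means the frames were stored in the sensor's orientation, and players are expected to apply the transform at display time; the visual size of a track is its naturalSize with the transform applied, which is why the 1920x1080 track above displays as 1080x1920. A minimal sketch of computing that, assuming videoTrack is the AVAssetTrack from the lldb session:)

import AVFoundation

// Sketch: the on-screen size of a track is its naturalSize run through
// preferredTransform; a 90-degree transform swaps width and height.
func displaySize(of track: AVAssetTrack) -> CGSize {
    let transformed = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(transformed.width), height: abs(transformed.height))
}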
Is it possible to determine whether my UIView is visible to the user or not?
My view is added as a subview several times into a Tab Bar Controller.
Each instance of this view has an NSTimer that updates the view.
However, I don't want to update a view that is not visible to the user.
Is this possible?
Thanks
For anyone else that ends up here:
To determine if a UIView is onscreen somewhere, rather than checking superview != nil, it is better to check if window != nil. In the former case, it is possible that the view has a superview but that the superview is not on screen:
if view.window != nil {
    // do stuff
}
Of course you should also check if it is hidden or if it has an alpha > 0.
Regarding not wanting your NSTimer running while the view is not visible, you should hide these views manually if possible and have the timer stop when the view is hidden. However, I'm not at all sure of what you're doing.
You can check if:
it is hidden, by checking view.hidden
it is in the view hierarchy, by checking view.superview != nil
it is on screen, by checking the view's bounds against the screen's
The only other thing I can think of is if your view is buried behind others and can't be seen for that reason. You may have to go through all the views that come after it to see whether they obscure it, as sketched below.
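A rough sketch of that last idea (an assumption-laden heuristic: only sibling views layered above are checked, and any non-hidden, non-transparent overlap counts as obscuring):

func isObscuredBySiblings(_ view: UIView) -> Bool {
    guard let superview = view.superview,
          let index = superview.subviews.firstIndex(of: view) else { return false }
    // Siblings later in subviews are drawn on top of the view.
    for sibling in superview.subviews[(index + 1)...] {
        if !sibling.isHidden, sibling.alpha > 0.01, sibling.frame.intersects(view.frame) {
            return true
        }
    }
    return false
}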
This will determine if a view's frame is within the bounds of all of its superviews (up to the root view). One practical use case is determining if a child view is (at least partially) visible within a scrollview.
Swift 5.x:
func isVisible(view: UIView) -> Bool {
    func isVisible(view: UIView, inView: UIView?) -> Bool {
        guard let inView = inView else { return true }
        let viewFrame = inView.convert(view.bounds, from: view)
        if viewFrame.intersects(inView.bounds) {
            return isVisible(view: view, inView: inView.superview)
        }
        return false
    }
    return isVisible(view: view, inView: view.superview)
}
Older Swift versions
func isVisible(view: UIView) -> Bool {
    func isVisible(view: UIView, inView: UIView?) -> Bool {
        guard let inView = inView else { return true }
        let viewFrame = inView.convertRect(view.bounds, fromView: view)
        if CGRectIntersectsRect(viewFrame, inView.bounds) {
            return isVisible(view, inView: inView.superview)
        }
        return false
    }
    return isVisible(view, inView: view.superview)
}
Potential improvements:
Respect alpha and isHidden.
Respect clipsToBounds, as a view may exceed the bounds of its superview when it is false.
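A sketch of those improvements layered onto the idea above (assumptions: near-zero alpha counts as invisible, and only ancestors that actually clip can cut the view off):

func isVisibleRespectingAttributes(_ view: UIView) -> Bool {
    guard !view.isHidden, view.alpha > 0.01 else { return false }
    var ancestor = view.superview
    while let current = ancestor {
        if current.isHidden || current.alpha <= 0.01 { return false }
        // Only a clipping ancestor can actually cut the view off.
        if current.clipsToBounds {
            let frameInCurrent = current.convert(view.bounds, from: view)
            if !frameInCurrent.intersects(current.bounds) { return false }
        }
        ancestor = current.superview
    }
    return true
}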
The solution that worked for me was to first check whether the view has a window, then iterate over its superviews and check that:
the view is not hidden, and
the view is within its superview's bounds.
It seems to work well so far.
Swift 3.0
public func isVisible(view: UIView) -> Bool {
    if view.window == nil {
        return false
    }
    var currentView: UIView = view
    while let superview = currentView.superview {
        if (superview.bounds).intersects(currentView.frame) == false {
            return false
        }
        if currentView.isHidden {
            return false
        }
        currentView = superview
    }
    return true
}
I benchmarked both @Audrey M.'s and @John Gibb's solutions, and @Audrey M.'s performed better (about 10x).
So I used that one to make it observable.
I made an RxSwift Observable, to get notified when the UIView becomes visible.
This could be useful if you want to trigger a banner 'view' event.
import Foundation
import UIKit
import RxSwift
extension UIView {
    var isVisibleToUser: Bool {
        if isHidden || alpha == 0 || superview == nil {
            return false
        }
        guard let rootViewController = UIApplication.shared.keyWindow?.rootViewController else {
            return false
        }
        let viewFrame = convert(bounds, to: rootViewController.view)
        let topSafeArea: CGFloat
        let bottomSafeArea: CGFloat
        if #available(iOS 11.0, *) {
            topSafeArea = rootViewController.view.safeAreaInsets.top
            bottomSafeArea = rootViewController.view.safeAreaInsets.bottom
        } else {
            topSafeArea = rootViewController.topLayoutGuide.length
            bottomSafeArea = rootViewController.bottomLayoutGuide.length
        }
        return viewFrame.minX >= 0 &&
            viewFrame.maxX <= rootViewController.view.bounds.width &&
            viewFrame.minY >= topSafeArea &&
            viewFrame.maxY <= rootViewController.view.bounds.height - bottomSafeArea
    }
}

extension Reactive where Base: UIView {
    var isVisibleToUser: Observable<Bool> {
        // Every second this will check `isVisibleToUser`
        return Observable<Int>.interval(.milliseconds(1000), scheduler: MainScheduler.instance)
            .map { [base] _ in
                return base.isVisibleToUser
            }
            .distinctUntilChanged()
    }
}
Use it like this:
import RxSwift
import UIKit
import Foundation
private let disposeBag = DisposeBag()

private func _checkBannerVisibility() {
    bannerView.rx.isVisibleToUser
        .filter { $0 }
        .take(1) // Only trigger it once
        .subscribe(onNext: { [weak self] _ in
            // ... Do something
        })
        .disposed(by: disposeBag)
}
Tested solution.
func isVisible(_ view: UIView) -> Bool {
    if view.isHidden || view.superview == nil {
        return false
    }
    if let rootViewController = UIApplication.shared.keyWindow?.rootViewController,
       let rootView = rootViewController.view {
        let viewFrame = view.convert(view.bounds, to: rootView)
        let topSafeArea: CGFloat
        let bottomSafeArea: CGFloat
        if #available(iOS 11.0, *) {
            topSafeArea = rootView.safeAreaInsets.top
            bottomSafeArea = rootView.safeAreaInsets.bottom
        } else {
            topSafeArea = rootViewController.topLayoutGuide.length
            bottomSafeArea = rootViewController.bottomLayoutGuide.length
        }
        return viewFrame.minX >= 0 &&
            viewFrame.maxX <= rootView.bounds.width &&
            viewFrame.minY >= topSafeArea &&
            viewFrame.maxY <= rootView.bounds.height - bottomSafeArea
    }
    return false
}
If you truly want to know whether a view is visible to the user, you have to take the following into account:
Is the view's window non-nil and equal to the top-most window?
Are the view and all of its superviews non-hidden, with alpha >= 0.01 (the threshold value UIKit also uses to determine whether it should handle touches)?
Is the z-index (stacking value) of the view higher than that of other views in the same hierarchy?
Even if its z-index is lower, it can still be visible if the views on top have a transparent background color, alpha 0, or are hidden.
The transparent background color of views in front poses a particular problem for programmatic checks. The only way to be truly sure is to take a programmatic snapshot of the view and diff it, within its frame, against a snapshot of the entire screen. This won't work, however, for views that are not distinctive enough (e.g. fully white).
For inspiration see the method isViewVisible in the iOS Calabash-server project
The simplest Swift 5 solution I could come up with that worked in my situation (I was looking for a button embedded in my table view's footer):
John Gibb's solution also worked, but in my case I did not need all the recursion.
func scrollViewDidScroll(_ scrollView: UIScrollView) {
    let viewFrame = scrollView.convert(targetView.bounds, from: targetView)
    if viewFrame.intersects(scrollView.bounds) {
        // targetView is visible
    } else {
        // targetView is not visible
    }
}
In viewWillAppear, set a flag "isVisible" to true; in viewWillDisappear, set it to false. This is the best way to know for subviews of a UITabBarController, and it also works for navigation controllers.
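A minimal sketch of that pattern (the timer wiring is an assumption for illustration; the timer would consult isVisible before each update):

class UpdatingViewController: UIViewController {
    private var isVisible = false   // checked by the timer before each update

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        isVisible = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        isVisible = false
    }
}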
Another useful method is didMoveToWindow().
Example: when you push a view controller, the views of the previous view controller call this method. Checking self.window != nil inside didMoveToWindow() tells you whether your view is appearing on or disappearing from the screen.
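A minimal sketch; startUpdating()/stopUpdating() are hypothetical hooks for a timer you only want running while the view is on screen:

class TimerBackedView: UIView {
    override func didMoveToWindow() {
        super.didMoveToWindow()
        if window != nil {
            startUpdating()  // the view just joined a window (appearing)
        } else {
            stopUpdating()   // the view left its window (disappearing)
        }
    }

    private func startUpdating() { /* schedule the timer */ }
    private func stopUpdating() { /* invalidate the timer */ }
}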
This can help you figure out whether your UIView is the top-most view. Can be helpful:

let visibleBool = view.superview?.subviews.last?.isEqual(view)
// It's an optional, so check for nil first, then the true/false value.
if let visibleBool = visibleBool, visibleBool {
    // can be seen, on top
} else {
    // may still be visible, but is not the top-most view
}
try this:
func isDisplayedInScreen() -> Bool {
    let screenRect = UIScreen.main.bounds
    // Convert the frame to window/screen coordinates.
    let rect = self.convert(self.frame, from: nil)
    if rect.isEmpty || rect.isNull {
        return false
    }
    // If the view is hidden
    if self.isHidden {
        return false
    }
    // If the view is not in a view hierarchy
    if self.superview == nil {
        return false
    }
    // If the view has zero size
    if rect.size.equalTo(CGSize.zero) {
        return false
    }
    // If the view does not intersect the screen
    let intersectionRect = rect.intersection(screenRect)
    if intersectionRect.isEmpty || intersectionRect.isNull {
        return false
    }
    return true
}
In case you are using the hidden property of the view: view.hidden (Objective-C) and view.isHidden (Swift) are read/write properties, so you can easily read or write them.
For Swift 3.0:

if view.isHidden {
    print("Hidden")
} else {
    print("visible")
}