Mapbox v10 iOS: updating the camera while dragging a ViewAnnotation outside screen bounds causes rapid camera movement

With the Mapbox iOS SDK v10, a lot of APIs changed, including the drag and camera options. With v6, everything worked perfectly fine when dragging an annotation view (a subclass of MGLAnnotationView) outside the map bounds: I simply called mapView.setCenter and passed in the converted screen coordinate (please check the code snippet).
As of v10 there is no MGLAnnotationView anymore, so I use ViewAnnotations (https://docs.mapbox.com/ios/maps/guides/annotations/view-annotations/) to display my custom annotations. Additionally, I now need to create a CameraOptions instance from the screen coordinate and use that to set the camera.
The problem is that with v10, whenever I drag the annotation view outside the map/screen bounds, the camera moves rapidly. Has anyone encountered this with v10, and what fix did you use?
Appreciate any help.
Using Mapbox iOS SDK v6
func handleDragging(_ annotationView: AnnotationView) { // AnnotationView is a subclass of MGLAnnotationView
    guard let gesture = annotationView.gestureRecognizers?.first as? UIPanGestureRecognizer else { return }
    let gesturePoint = gesture.location(in: view)
    let screenCoordinate = mapView.convert(gesturePoint, toCoordinateFrom: nil)
    let mapBounds = CGRect(x: UIScreen.main.bounds.origin.x + 30,
                           y: UIScreen.main.bounds.origin.y + 30,
                           width: UIScreen.main.bounds.size.width - 60,
                           height: UIScreen.main.bounds.size.height - 60)
    if !mapBounds.contains(gesturePoint) {
        mapView.setCenter(screenCoordinate, zoomLevel: 15, animated: true)
    }
}
Using Mapbox iOS SDK v10.4.3
func handleDragging(_ annotationView: AnnotationView) { // AnnotationView is a subclass of UIView only
    guard let gesture = annotationView.gestureRecognizers?.first as? UIPanGestureRecognizer else { return }
    let gesturePoint = gesture.location(in: view)
    let screenCoordinate = self.mapView.mapboxMap.coordinate(for: gesturePoint)
    let mapBounds = CGRect(x: UIScreen.main.bounds.origin.x + 30,
                           y: UIScreen.main.bounds.origin.y + 30,
                           width: UIScreen.main.bounds.size.width - 60,
                           height: UIScreen.main.bounds.size.height - 60)
    if !mapBounds.contains(gesturePoint) {
        let cameraOptions = CameraOptions(center: screenCoordinate,
                                          zoom: self.mapView.cameraState.zoom,
                                          bearing: self.mapView.cameraState.bearing,
                                          pitch: self.mapView.cameraState.pitch)
        self.mapView.mapboxMap.setCamera(to: cameraOptions)
    }
}

This issue was resolved by updating to 'MapboxMaps', '~> 10.7.0'. Additionally, set self.mapView.viewport.options.transitionsToIdleUponUserInteraction = false.
This was a MapboxMaps version issue.
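For reference, a minimal sketch of where that flag might be set, assuming the MapView is created in viewDidLoad (MapboxMaps ~> 10.7; the surrounding class is illustrative, not taken from the question):
import MapboxMaps

class AnnotationDragViewController: UIViewController {
    var mapView: MapView!

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView = MapView(frame: view.bounds)
        mapView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(mapView)

        // Stop the viewport from transitioning to idle as soon as the user interacts,
        // which is the setting referred to in the answer above.
        mapView.viewport.options.transitionsToIdleUponUserInteraction = false
    }
}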

Related

NSToolbar Flexible space not working in Swift 5.x

I developed this app using Xcode 10.1 (Swift 4.2, I believe) and the toolbar looked good:
But after I ported it over to Xcode 11.6 (Swift 5.2, I believe), the right-most item on the toolbar is no longer on the far right:
Before the port I added that plus button; the checkbox (Show Hidden) was one of the first items and has never been changed.
The buttons also function as expected; their actions fire OK.
Note: this app uses no Storyboards or IB.
Since my code is split over several files (with lots of unrelated code), I will give an overview.
class ToolbarController: NSObject, NSToolbarDelegate {
    let toolbar = NSToolbar(identifier: NSToolbar.Identifier("toolbar"))

    lazy var toolbarItem1: NSToolbarItem = {
        let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier(rawValue: "Btn1"))
        let button = NSButton(frame: NSRect(x: 0, y: 0, width: 40, height: 1))
        button.target = self
        ...
        toolbarItem.view = button
        return toolbarItem
    }() // defined as "lazy" to allow "self"

    var toolbarItemSpace: NSToolbarItem = {
        let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier.space)
        return toolbarItem
    }()

    ...

    var toolbarItemFlexSpace: NSToolbarItem = {
        let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier.flexibleSpace)
        return toolbarItem
    }()

    lazy var toolbarItemShowHidden: NSToolbarItem = {
        let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier(rawValue: "Checkbox"))
        let box = NSBox(frame: NSRect(x: 0, y: 0, width: 120, height: 40))
        let button = NSButton() // put in an NSBox to use an isolated action
        ...
        button.target = self
        box.contentView = button
        box.sizeToFit() // tightly fit the contents (button)
        toolbarItem.view = box
        return toolbarItem
    }()

    lazy var tbItems: [NSToolbarItem] = [toolbarItem1, ...]
That class defines the toolbar, adds the items, implements all of the NSToolbarDelegate methods, and makes itself the delegate. My NSWindowController then sets it as the window's toolbar.
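Roughly, the delegate methods are wired up like this (a sketch of my setup, not the exact code; the real implementation lists every identifier explicitly):
    func toolbarDefaultItemIdentifiers(_ toolbar: NSToolbar) -> [NSToolbarItem.Identifier] {
        return tbItems.map { $0.itemIdentifier } // .flexibleSpace sits just before the last (right-most) item
    }

    func toolbarAllowedItemIdentifiers(_ toolbar: NSToolbar) -> [NSToolbarItem.Identifier] {
        return tbItems.map { $0.itemIdentifier }
    }

    func toolbar(_ toolbar: NSToolbar,
                 itemForItemIdentifier itemIdentifier: NSToolbarItem.Identifier,
                 willBeInsertedIntoToolbar flag: Bool) -> NSToolbarItem? {
        return tbItems.first { $0.itemIdentifier == itemIdentifier }
    }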
Can anyone see an issue with my code that causes the above?
Otherwise, is this a bug?
I had the same problem with Swift 5.0. Even though the documentation says minSize and maxSize are deprecated, I solved it like this:
let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier.flexibleSpace)
toolbarItem.minSize = NSSize(width: 1, height: 1)
toolbarItem.maxSize = NSSize(width: 1000, height: 1) // just some large value
return toolbarItem
Maybe this works for Swift 5.2 as well.
I found a way that avoids both living with the warning and using the deprecated properties: create a transparent subview for the NSToolbarItem with min/max width constraints up front, so the system has a way to calculate the min and max size itself.
let toolbarItem = NSToolbarItem(itemIdentifier: NSToolbarItem.Identifier.flexibleSpace)

// view to be hosted in the flexible-space toolbar item
let view = NSView(frame: CGRect(origin: .zero, size: CGSize(width: MIN_TOOLBAR_ITEM_W, height: MIN_TOOLBAR_H)))
view.widthAnchor.constraint(lessThanOrEqualToConstant: MAX_TOOLBAR_ITEM_W).isActive = true
view.widthAnchor.constraint(greaterThanOrEqualToConstant: MIN_TOOLBAR_ITEM_W).isActive = true

// set the view and return
toolbarItem.view = view
return toolbarItem

Identifying Objects in Firebase PreBuilt UI in Swift

FirebaseUI has a nice pre-built UI for Swift. I'm trying to position an image view above the login buttons at the bottom. In the example below, the imageView is the "Hackathon" logo. Any logo should be able to show here as long as it's named "logo", since the image is shown with aspect-fit scaling.
According to the Firebase docs page:
https://firebase.google.com/docs/auth/ios/firebaseui
you can customize the sign-in screen with this function:
func authPickerViewController(forAuthUI authUI: FUIAuth) -> FUIAuthPickerViewController {
    return FUICustomAuthPickerViewController(nibName: "FUICustomAuthPickerViewController",
                                             bundle: Bundle.main,
                                             authUI: authUI)
}
Using this code and poking around with the subviews in the debugger, I've been able to identify and color-code the views in the image below. Unfortunately, I don't think the "true" size of these subview frames is set until the view controller is presented, so accessing the frame sizes inside these functions won't give me dimensions I can use to create a new imageView to hold a logo. Also, accessing the views with hard-coded index values, as I've done below, seems like a pretty bad idea, especially given that Google has already changed the pre-built UI once, adding a scroll view and breaking the code of anyone who had set the pre-built UI's background color.
func authPickerViewController(forAuthUI authUI: FUIAuth) -> FUIAuthPickerViewController {
    // Create an instance of the FirebaseAuth login view controller
    let loginViewController = FUIAuthPickerViewController(authUI: authUI)
    // Set background color to white
    loginViewController.view.backgroundColor = UIColor.white
    loginViewController.view.subviews[0].backgroundColor = UIColor.blue
    loginViewController.view.subviews[0].subviews[0].backgroundColor = UIColor.red
    loginViewController.view.subviews[0].subviews[0].tag = 999
    return loginViewController
}
I did get this to work by adding a tag (999); then, in the completion handler when presenting the loginViewController, I hunt down tag 999 and call a function that adds an imageView with a logo:
present(loginViewController, animated: true) {
    if let foundView = loginViewController.view.viewWithTag(999) {
        let height = foundView.frame.height
        print("FOUND HEIGHT: \(height)")
        self.addLogo(loginViewController: loginViewController, height: height)
    }
}

func addLogo(loginViewController: UINavigationController, height: CGFloat) {
    let logoFrame = CGRect(x: 0 + logoInsets,
                           y: self.view.safeAreaInsets.top + logoInsets,
                           width: loginViewController.view.frame.width - (logoInsets * 2),
                           height: self.view.frame.height - height - (logoInsets * 2))
    // Create the UIImageView using the frame created above & add the "logo" image
    let logoImageView = UIImageView(frame: logoFrame)
    logoImageView.image = UIImage(named: "logo")
    logoImageView.contentMode = .scaleAspectFit // Set imageView to Aspect Fit
    // Add the ImageView to the login controller's main view
    loginViewController.view.addSubview(logoImageView)
}
But again, this doesn't seem safe. Is there a "safe" way to deconstruct this UI and identify the size of the button box at the bottom of the view controller (this size will vary if multiple login methods are supported, such as Facebook, Apple, and e-mail)? If I can do that without the hard-coded approach above, then I think I can reliably use the dimensions of the button box to determine how much space is left in the rest of the view controller when adding an appropriately sized imageView. Thanks!
John
This should address the issue: it allows a logo to be reliably placed above the pre-built UI's login buttons while avoiding hard-coded index values or subview locations. It also allows the background color to be set properly (which also became complicated when Firebase added the scroll view and the login-button subview).
To use it, create a subclass of FUIAuthPickerViewController to serve as a custom view controller for the pre-built Firebase UI.
The code shows the logo full screen behind the buttons if there is no scroll view or if the class's private fullScreenLogo property is set to true.
Otherwise (a scroll view is found and fullScreenLogo is false), the logo is shown inset, taking the class's private logoInsets constant and the safeAreaInsets into account. The scroll view and its button subview are set to clear so that a background color can also be set, via the private backgroundColor constant.
Call it in whatever signIn function you have, after setting authUI.providers. The call would look something like this:
let loginViewController = CustomLoginScreen(authUI: authUI!)
let loginNavigationController = UINavigationController(rootViewController: loginViewController)
loginNavigationController.modalPresentationStyle = .fullScreen
present(loginNavigationController, animated: true, completion: nil)
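Alternatively, if you'd rather keep the FUIAuthDelegate flow from the Firebase docs (the authPickerViewController(forAuthUI:) method shown earlier in the question), returning the subclass from there should work too; a minimal sketch:
func authPickerViewController(forAuthUI authUI: FUIAuth) -> FUIAuthPickerViewController {
    // Return the custom subclass instead of the stock picker
    return CustomLoginScreen(authUI: authUI)
}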
And here's one version of the subclass:
class CustomLoginScreen: FUIAuthPickerViewController {
    private var fullScreenLogo = false // false if you want the logo just above the login buttons
    private var viewContainsButton = false
    private var buttonViewHeight: CGFloat = 0.0
    private let logoInsets: CGFloat = 16
    private let backgroundColor = UIColor.white
    private var scrollView: UIScrollView?
    private var viewContainingButton: UIView?

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Set the colors of the scrollView and of the button view inside it to clear in
        // viewWillAppear to avoid a "color flash" when the pre-built login UI first appears.
        self.view.backgroundColor = UIColor.white
        guard let foundScrollView = returnScrollView() else {
            print("😡 Couldn't get a scrollView.")
            return
        }
        scrollView = foundScrollView
        scrollView!.backgroundColor = UIColor.clear
        guard let foundViewContainingButton = returnButtonView() else {
            print("😡 No views in the scrollView contain buttons.")
            return
        }
        viewContainingButton = foundViewContainingButton
        viewContainingButton!.backgroundColor = UIColor.clear
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Create the UIImageView at full screen, accounting for logoInsets + safeAreaInsets
        let x = logoInsets
        let y = view.safeAreaInsets.top + logoInsets
        let width = view.frame.width - (logoInsets * 2)
        let height = view.frame.height - (view.safeAreaInsets.top + view.safeAreaInsets.bottom + (logoInsets * 2))
        var frame = CGRect(x: x, y: y, width: width, height: height)
        let logoImageView = UIImageView(frame: frame)
        logoImageView.image = UIImage(named: "logo")
        logoImageView.contentMode = .scaleAspectFit // Set imageView to Aspect Fit
        logoImageView.alpha = 0.0
        // Only customize the pre-built UI further if a scrollView was found and a full-screen logo isn't wanted.
        guard scrollView != nil && !fullScreenLogo else {
            print("No scrollView found.")
            UIView.animate(withDuration: 0.25, animations: { logoImageView.alpha = 1.0 })
            self.view.addSubview(logoImageView)
            self.view.sendSubviewToBack(logoImageView) // otherwise the logo is on top of the buttons
            return
        }
        // Reduce the logoImageView's frame height by the height of the subview containing
        // the buttons, so the buttons won't sit on top of the logoImageView.
        frame = CGRect(x: x, y: y, width: width, height: height - (viewContainingButton?.frame.height ?? 0.0))
        logoImageView.frame = frame
        self.view.addSubview(logoImageView)
        UIView.animate(withDuration: 0.25, animations: { logoImageView.alpha = 1.0 })
    }

    private func returnScrollView() -> UIScrollView? {
        var scrollViewToReturn: UIScrollView?
        for subview in self.view.subviews where subview is UIScrollView {
            scrollViewToReturn = subview as? UIScrollView
        }
        return scrollViewToReturn
    }

    private func returnButtonView() -> UIView? {
        var viewContainingButton: UIView?
        for view in scrollView!.subviews {
            viewHasButton(view)
            if viewContainsButton {
                viewContainingButton = view
                break
            }
        }
        return viewContainingButton
    }

    private func viewHasButton(_ view: UIView) {
        if view is UIButton {
            viewContainsButton = true
        } else if view.subviews.count > 0 {
            view.subviews.forEach { viewHasButton($0) }
        }
    }
}
Hope this helps anyone who has been frustrated trying to configure the Firebase pre-built UI in Swift.

How to set/update the zoom level of a GMSMapView in Swift?

The following code is how I get the new marker position and update the map view:
if self.state.dropOff != nil {
    let loc = response
    let position = CLLocationCoordinate2D(latitude: loc.latitude!, longitude: loc.longitude!)
    self.getPolylineRoute(from: self.state.pickUp!.coordinate, to: self.state.dropOff!.coordinate)

    CATransaction.begin()
    CATransaction.setAnimationDuration(1.0)
    if self.acceptedCabMarker == nil {
        self.acceptedCabMarker = GMSMarker(position: position)
    }
    self.acceptedCabMarker!.position = position
    self.acceptedCabMarker!.isFlat = true
    self.acceptedCabMarker!.icon = UIImage(named: markerIcon)
    self.acceptedCabMarker!.setIconSize(scaledToSize: .init(width: 40, height: 40))
    self.acceptedCabMarker!.appearAnimation = .pop
    self.acceptedCabMarker!.rotation = CLLocationDegrees(loc.bearing ?? 0)
    CATransaction.commit()

    DispatchQueue.main.async {
        self.acceptedCabMarker!.map = self.mapView
    }
}
The problem is that every time this code is executed, the map view's zoom level resets to its original state, which means the user can't keep their zoom for long.
I tried to save the zoom using this delegate method:
extension SomeHomeViewController: GMSMapViewDelegate {
    func mapView(_ mapView: GMSMapView, idleAt position: GMSCameraPosition) {
        print("Camera Zoom: \(position.zoom)")
        currentPosition = position
    }
}
But I can't reuse currentPosition, because
self.mapView?.camera.zoom = currentPosition?.zoom
is not allowed (GMSCameraPosition is immutable, so its zoom can't be assigned directly).
You can simply set the zoom on GMSMapView in this way
let camera = GMSCameraPosition.camera(withLatitude: loc.latitude, longitude: loc.longitude, zoom: 10.0)
self.mapView.camera = camera
You need to use the
- (void)animateToZoom:(float)zoom;
method, which is defined in the GMSMapView (Animation) category. For more info you can refer to this link.
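In Swift, that Objective-C method surfaces as animate(toZoom:). A minimal sketch that reuses the zoom saved in the idleAt callback (currentPosition and loc come from the question; 15 is just a fallback value):
let zoom = currentPosition?.zoom ?? 15
let camera = GMSCameraPosition.camera(withLatitude: loc.latitude!,
                                      longitude: loc.longitude!,
                                      zoom: zoom)
mapView.animate(to: camera)   // move the camera while keeping the user's last zoom

// or, to change only the zoom:
mapView.animate(toZoom: zoom)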

SwiftyGif Remote Gif is running but will not display

For the life of me I can't get the GIF to display using the SwiftyGif library. Is there something I'm missing here?
var outgoingMessageView: UIImageView!
outgoingMessageView = UIImageView(frame:
    CGRect(x: llamaView.frame.maxX - 50,
           y: llamaView.frame.minY + 75,
           width: bubbleImageSize.width,
           height: bubbleImageSize.height))
outgoingMessageView.delegate = self

if textIsValidURL == true {
    print("URL is valid")
    outgoingMessageView.image = bubbleImage
    let maskView = UIImageView(image: bubbleImage)
    maskView.frame = outgoingMessageView.bounds
    outgoingMessageView.mask = maskView
    outgoingMessageView.frame.origin.y = llamaView.frame.minY - 25
    let url = URL(string: text)
    outgoingMessageView.setGifFromURL(url, manager: .defaultManager, loopCount: -1, showLoader: true)
} else {
    outgoingMessageView.image = bubbleImage
}

// Set the animations
label.animation = "zoomIn"
//outgoingMessageView.animation = "zoomIn"

// Add the subviews
view.addSubview(outgoingMessageView)
print("outgoingMessageView added")
The delegate lets me know it runs successfully via:
gifDidStart
gifURLDidFinish
Checking outgoingMessageView.isAnimatingGif() tells me it's still running.
Checking outgoingMessageView.isDisplayedInScreen(outgoingMessageView) tells me it's not being displayed.
It "finishes" almost immediately, but the same thing happens in the example project, and there the gif still loops and displays. I've changed loop counts, swapped image views, skipped the mask I originally intended and used a plain UIImageView instead, and changed the GIF URLs, all to no avail. Is this problem related to my view structure?
I am calling this function based on actions in a collectionView. Image Example Here
Using the latest SwiftyGif version.
I just made a sample about this with the following code:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    testSwiftyGif()
}

public func testSwiftyGif() {
    let imgPath = "https://github.com/kirualex/SwiftyGif/blob/master/SwiftyGifExample/1.gif?raw=true"
    let imgUrl = URL(string: imgPath)!
    var outgoingMessageView: UIImageView!
    outgoingMessageView = UIImageView(frame:
        CGRect(x: llamaView.frame.maxX - 50,
               y: llamaView.frame.minY + 75,
               width: 200,
               height: 200))
    outgoingMessageView.setGifFromURL(imgUrl, manager: .defaultManager, loopCount: -1, showLoader: true)
    self.view.addSubview(outgoingMessageView)
    print("outgoingMessageView added")
}
And it adds the gif as intended:
Apparently the issue is your view structure. The image is being added to the view, but the view is not visible due to its mask, frame, or superview position.
Try checking the view hierarchy using the Xcode View Hierarchy Debugger.
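A few quick runtime checks, using the names from the question, can help narrow down which of those it is:
// Illustrative checks right after adding the view:
print(outgoingMessageView.frame)              // is the frame non-zero and on screen?
print(outgoingMessageView.superview ?? "nil") // was it added to the view you expect?
print(outgoingMessageView.isHidden, outgoingMessageView.alpha)

// Temporarily drop the mask to rule it out
outgoingMessageView.mask = nil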

Google Maps with clustering: check whether a marker has already been rendered

I use Google Maps iOS Utils clustering and have set up a custom iconView for markers and clusters like this:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    // Check whether this is an individual marker or a cluster
    if let userData = marker.userData as? PlaceMarker {
        marker.iconView = MarkerView(caption: userData.caption)
        marker.groundAnchor = CGPoint(x: 0.5, y: 1)
        marker.isFlat = true
        marker.appearAnimation = kGMSMarkerAnimationPop
    } else if let cluster = marker.userData as? GMUCluster {
        // Apply the custom view for the cluster (caption is the cluster's item count)
        marker.iconView = ClusterViewIcon(caption: "\(cluster.count)")
        // Show clusters above markers
        marker.zIndex = 1000
        marker.groundAnchor = CGPoint(x: 0.5, y: 1)
        marker.isFlat = true
        marker.appearAnimation = kGMSMarkerAnimationPop
    }
}
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) { }
gets called on every zoom level change, even if no clustering/declustering happened, and marker.iconView is always nil there even if it was set up before.
How can one implement a guard so that the iconView and the other marker properties are set up only when the marker is rendered for the first time? Otherwise it is just a waste of resources (and the appear animation also runs on every zoom level change).
EDIT: One way I can think of is to store the ids of already-rendered markers in an array and check against them, but that feels like a dirty way (see the sketch after the reference link).
Reference: https://github.com/googlemaps/google-maps-ios-utils/issues/96
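For what it's worth, a minimal sketch of that idea, here used only to suppress the repeated pop animation (PlaceMarker.id is a hypothetical stable identifier on the model; any unique key would do):
private var animatedItemIDs = Set<String>()

func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    guard let place = marker.userData as? PlaceMarker else { return } // clusters omitted for brevity
    // The GMSMarker object may be brand new after re-clustering, so the iconView
    // still has to be set each time, but the pop animation only runs the first
    // time this particular item is rendered.
    marker.iconView = MarkerView(caption: place.caption)
    marker.groundAnchor = CGPoint(x: 0.5, y: 1)
    marker.isFlat = true
    if !animatedItemIDs.contains(place.id) {
        animatedItemIDs.insert(place.id)
        marker.appearAnimation = kGMSMarkerAnimationPop
    }
}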