Can't hide share button in USDZ + QLPreviewController - swift

I have a project that involves a few USDZ files for the augmented reality features embedded in the app. While this works great, and we're really happy with how it performs, we'd like to remove the built-in share button of the QLPreviewController. Subclassing the controller has no effect, and hiding the rightBarButtonItem on the controller returned in the delegate method still shows the button when a file is selected. The implementation of USDZ + QLPreviewController we're using is pretty basic. Is there a way around this issue?
func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
    return 1
}

func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
    let url = Bundle.main.url(forResource: models[selectedObject], withExtension: "usdz")!
    controller.navigationItem.rightBarButtonItems = nil // <- no effect
    return url as QLPreviewItem
}

@IBAction func userDidSelectARExperience(_ sender: Any) {
    let previewController = QLPreviewController()
    previewController.dataSource = self
    previewController.delegate = self
    present(previewController, animated: true)
}

This is the official answer from Apple.
Use ARQuickLookPreviewItem instead of QLPreviewItem, and set its canonicalWebPageURL to a URL (it can be any URL).
func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
    guard let path = Bundle.main.path(forResource: "Experience", ofType: "usdz") else {
        fatalError("Couldn't find the supported input file.")
    }
    let url = URL(fileURLWithPath: path)
    if #available(iOS 13.0, *) {
        let item = ARQuickLookPreviewItem(fileAt: url)
        item.canonicalWebPageURL = URL(string: "http://www.google.com")
        return item
    }
    return url as QLPreviewItem
}
The availability check is optional; you only need it if your deployment target is below iOS 13, where ARQuickLookPreviewItem is unavailable.

My approach is to add the QLPreviewController's view as a subview. Here, container is a UIView in the storyboard.
let preview = QLPreviewController()
preview.dataSource = self
addChild(preview) // without this, didMove(toParent:) has no effect
preview.view.frame = CGRect(origin: CGPoint(x: 0, y: -45),
                            size: CGSize(width: container.frame.size.width,
                                         height: container.frame.size.height + 45))
container.addSubview(preview.view)
preview.didMove(toParent: self)
The y offset of the frame's origin and size may vary. This ensures the AR QuickLook view is the same size as the UIView and hides the buttons (unfortunately, all of them) at the same time.

Instead of returning a QLPreviewItem, use ARQuickLookPreviewItem, which conforms to that protocol.
https://developer.apple.com/documentation/arkit/arquicklookpreviewitem
Then assign the URL you want to share (the one that will appear in the share sheet) to its canonicalWebPageURL property. By default, the file URL (in this case, the USDZ file URL) is what gets shared; setting this property avoids exposing your file URL(s).
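In short, the relevant lines are (a condensed sketch; usdzFileURL and the web page URL are placeholders):

let item = ARQuickLookPreviewItem(fileAt: usdzFileURL)
item.canonicalWebPageURL = URL(string: "https://example.com/my-model") // shared instead of the file URL
return item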

TLDR: I don't think you can.
I haven't seen any of the WWDC sessions even mention this, and I can't find any supporting developer documentation. I'm pretty sure the point of the ARKit QLPreviewController is that you don't have to do any actual coding on the AR side. I can see the appeal of this and of customisation in general; however, I'd suggest looking at some of the other ARKit projects Apple has released and attempting to re-create those from the ground up, as opposed to stripping this one apart.
Please advise if this changes, as I'd like to do something similar, especially within Safari.

I couldn't get to the share button at all to hide or disable it, and spent days trying. In the end I settled on a rather unprofessional workaround: add the QLPreviewController as a subview of a view controller, then place an image view showing my company logo on top of the share button. It stays there the whole time, even when the top bar hides in full-screen AR mode. Not a clean solution, but it works.
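A rough sketch of that workaround (the frame values and image name are assumptions; you would tune them so the cover sits exactly over the share button in your layout):

let preview = QLPreviewController()
preview.dataSource = self
addChild(preview)
preview.view.frame = view.bounds
view.addSubview(preview.view)
preview.didMove(toParent: self)

// Hypothetical cover: an image view placed over the share button's position.
let logoCover = UIImageView(image: UIImage(named: "companyLogo"))
logoCover.frame = CGRect(x: view.bounds.width - 64, y: 44, width: 44, height: 44)
logoCover.contentMode = .scaleAspectFit
view.addSubview(logoCover)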

Related

WatchOS complication getPlaceholderTemplate is being ignored

I have a complication working perfectly well on my Apple Watch (simulator and hardware). However, I cannot seem to get the representation to display correctly when you choose which app to assign to a complication area, in my case graphic corner. It's showing the display name of the app with "----" under it. In my getPlaceholderTemplate protocol method, I am using CLKComplicationTemplateGraphicCornerTextImage, which is being ignored.
Here is my protocol method.
func getPlaceholderTemplate(for complication: CLKComplication,
                            withHandler handler: @escaping (CLKComplicationTemplate?) -> Void) {
    let ft = CLKSimpleTextProvider(text: "Aware")
    let img = UIImage(systemName: "headphones")
    let tintedImageProvider = CLKImageProvider(onePieceImage: img!)
    let finalImage = CLKFullColorImageProvider(fullColorImage: img!, tintedImageProvider: tintedImageProvider)
    if complication.family == .graphicCorner {
        let thisTemplate = CLKComplicationTemplateGraphicCornerTextImage(textProvider: ft, imageProvider: finalImage)
        handler(thisTemplate)
        return
    } else {
        print("Complication not supported.")
    }
    handler(nil)
}
So this seemingly isn't being used. For the simulator I have done "Device > Erase All Content and Settings" just to make sure nothing old is cached. Any idea why it's defaulting to a UI I would prefer not to have? Again, this is in the complication picker only; everywhere else it's working and looking great.
Example screenshot of how it's being represented: [screenshot]

How to implement SwiftUI’s .onDrag modifier with NSImage (macOS)

I am building a sandboxed app for macOS with SwiftUI. I have an NSImage that is displayed as Image(nsImage: myNSImage). I want that view to support drag and drop, meaning it can be dragged to any location that can receive image files.
Here’s my approach:
Image(nsImage: myNSImage)
    .onDrag {
        let itemProvider = NSItemProvider()
        itemProvider.suggestedName = "image.png"
        itemProvider.registerDataRepresentation(for: UTType.png) { loadHandler in
            loadHandler(myNSImage.pngRepresentation, nil)
            print("loadHandler completed") // Never prints !!
            return nil
        }
        return itemProvider
    }
This way I can drag the view, but it seems like it isn't able to provide the image. The "drop", e.g. on the Desktop, is simply not working. I would expect it to save an "image.png" at the destination URL, but it doesn't. How can I implement .onDrag so that it provides an image file based on the NSImage?
Edit: I have already tried different UTTypes, for example .tiff in combination with nsImage.tiffRepresentation, without luck.
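For reference, pngRepresentation is not a built-in NSImage property; it comes from a helper along these lines (a sketch using NSBitmapImageRep):

import AppKit

extension NSImage {
    // Assumed helper: encode the image as PNG by way of its TIFF representation.
    var pngRepresentation: Data? {
        guard let tiff = tiffRepresentation,
              let bitmap = NSBitmapImageRep(data: tiff) else { return nil }
        return bitmap.representation(using: .png, properties: [:])
    }
}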

SwiftUI - Saving Image to Share Sheet causes image to save blurry/low res

I have a bit of code in my app that generates a QR code and scales it up (I used the code from this link from Hacking with Swift). Now I'm using the share sheet to allow the user to save the QR code to their camera roll and it is working, but it saves the image low-res and blurry (and I assume if it is shared via other methods it will also be blurry).
Here is the code of my share sheet function:
struct ShareSheet: UIViewControllerRepresentable {
    let activityItems: [Any]
    let applicationActivities: [UIActivity]?

    func makeUIViewController(context: UIViewControllerRepresentableContext<ShareSheet>) -> UIActivityViewController {
        return UIActivityViewController(activityItems: activityItems, applicationActivities: applicationActivities)
    }

    func updateUIViewController(_ uiViewController: UIActivityViewController, context: UIViewControllerRepresentableContext<ShareSheet>) {
    }
}
and here's the code in my view struct:
.sheet(isPresented: $showShareSheet) {
    ShareSheet(activityItems: [self.qrCodeImage])
}
Is there a trick to remove the interpolation on the image when it is saved via the share sheet, like .interpolation(.none) does on the image view itself?
Your problem is that the QR code image is actually tiny! Like really tiny:
Printing description of image:
<UIImage:0x60000202cc60 anonymous {23, 23}>
When you share this image, the way it will be displayed is dependent on the program or app that displays it, and is out of your app's control as far as I know.
However, there is a way that you could potentially make it "pretty" in other apps: increase the resolution so that when it's rendered it'll appear to have "sharp" pixels.
How would this be accomplished? I think I have an example buried somewhere in old code, I'll dig into it and see if I can find you an example ;)
Edit
I found the code:
extension UIImage {
    func resized(toWidth width: CGFloat) -> UIImage? {
        let canvasSize = CGSize(width: round(width), height: ceil(width / size.width * size.height))
        UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
        defer { UIGraphicsEndImageContext() }
        let context = UIGraphicsGetCurrentContext()
        // Set the quality level to use when rescaling
        context?.interpolationQuality = .none
        draw(in: CGRect(origin: .zero, size: canvasSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
The trick is to provide a way to scale the image, but the real magic is this line:
context?.interpolationQuality = .none
If you exclude this line, you'll get blurry images, which is what the OS does by default because you don't generally want to see the pixel edges in images.
You could use this extension like so:
.sheet(isPresented: $showShareSheet) {
    ShareSheet(activityItems: [self.qrCodeImage.resized(toWidth: 512) ?? UIImage()])
}
However, this may resize the image far more often than necessary; optimally, you'd resize it in the same function that generates it.
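For example, here is a sketch of generating the QR code pre-scaled (the usual CIQRCodeGenerator approach; the 10x scale factor is an arbitrary assumption):

import CoreImage.CIFilterBuiltins
import UIKit

func generateQRCode(from string: String) -> UIImage {
    let context = CIContext()
    let filter = CIFilter.qrCodeGenerator()
    filter.message = Data(string.utf8)

    if let output = filter.outputImage?
        .transformed(by: CGAffineTransform(scaleX: 10, y: 10)), // integer scaling keeps the pixels sharp
       let cgImage = context.createCGImage(output, from: output.extent) {
        return UIImage(cgImage: cgImage)
    }
    return UIImage()
}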

How to get size and position of a scene window under Mac Catalyst

I have been able to obtain the size of a sceneWindow when I resize it using
func windowScene(_ windowScene: UIWindowScene, didUpdate previousCoordinateSpace: UICoordinateSpace, interfaceOrientation previousInterfaceOrientation: UIInterfaceOrientation, traitCollection previousTraitCollection: UITraitCollection) {
    print("movement trapped \(windowScene.coordinateSpace.bounds)")
}
within the scene delegate. But the x, y coordinates are always 0,0 regardless of where I drag the window. I'm looking to determine where the new scene window is located on the Mac's screen relative to the "default" scene window.
You can try converting the window's frame from the windowScene coordinate space to the UIScreen coordinate space:
windowFrame = [window convertRect:window.frame toCoordinateSpace:window.windowScene.screen.coordinateSpace];
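In Swift, the equivalent would be along these lines (a sketch, assuming window and its windowScene are non-nil):

let windowFrame = window.convert(window.frame, to: window.windowScene!.screen.coordinateSpace)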
Well, I didn't find a way to get this done with UIWindow, but remember: Catalyst does support AppKit, and you can call into it at any time using the Objective-C runtime.
So here comes the idea:
1. Get the NSWindows from NSApplication
2. Get the UIWindow from view.window
3. Compare some magic and look up our target NSWindow
4. Get the frame of that NSWindow
var targetNSWindow: AnyObject? = nil
let nsWindows = (NSClassFromString("NSApplication")?.value(forKeyPath: "sharedApplication.windows") as? [AnyObject])!
for nsWindow in nsWindows {
    let uiWindows = nsWindow.value(forKeyPath: "uiWindows") as? [UIWindow] ?? []
    if uiWindows.contains(view.window!) {
        targetNSWindow = nsWindow
    }
}
if let found = targetNSWindow {
    print(found.value(forKeyPath: "_frame")!)
}
And here is a sample output.
NSRect: {{818, 296}, {964, 614}}
A little bit more: you can get your own window information from the scene delegate and compare the magic with it in a similar way. But be careful, sometimes you don't have any window yet when the app has just loaded; do the job in a DispatchQueue.main.async block if that happens.

Access to Photos on iOS (Swift): have to try twice to get the picture library to show up. It doesn't show up the first time, but shows the second time

I call this function:
func tabBarController(_ tabBarController: UITabBarController, shouldSelect viewController: UIViewController) -> Bool {
    let index = viewControllers?.index(of: viewController)
    if index == 2 {
        let layout = UICollectionViewFlowLayout()
        let photoSelectorController = PhotoSelectorController(collectionViewLayout: layout)
        let navController = UINavigationController(rootViewController: photoSelectorController)
        present(navController, animated: true, completion: nil)
        return false
    }
    return true
}
Photos not showing the first time. I have all of the right things asking for permission and everything. I then fetch the images with the functions below. It works, but only the second time I hit the button, after cancelling a post; I'm not sure how to get the images from the library on the first call. After that it works like a charm, but users have been telling me this isn't a good experience if they have to try twice, and I'm trying to reduce friction in the app. It should show the pictures right after the user allows the app access to them, but I'm not sure what I'm doing wrong.
var selectedImage: UIImage?
var images = [UIImage]()
var assets = [PHAsset]()

fileprivate func assetsFetchOptions() -> PHFetchOptions {
    let fetchOptions = PHFetchOptions()
    fetchOptions.fetchLimit = 100
    let sortDescriptor = NSSortDescriptor(key: "creationDate", ascending: false)
    fetchOptions.sortDescriptors = [sortDescriptor]
    return fetchOptions
}

fileprivate func fetchPhotos() {
    let allPhotos = PHAsset.fetchAssets(with: .image, options: assetsFetchOptions())
    DispatchQueue.global(qos: .background).async {
        allPhotos.enumerateObjects { (asset, count, stop) in
            print(asset)
            let imageManager = PHImageManager.default()
            let targetSize = CGSize(width: 200, height: 200)
            let options = PHImageRequestOptions()
            options.isSynchronous = true
            imageManager.requestImage(for: asset, targetSize: targetSize, contentMode: .aspectFit, options: options, resultHandler: { (image, info) in
                if let image = image {
                    self.images.append(image)
                    self.assets.append(asset)
                    if self.selectedImage == nil {
                        self.selectedImage = image
                    }
                }
                if count == allPhotos.count - 1 {
                    DispatchQueue.main.async {
                        self.collectionView?.reloadData()
                    }
                }
            })
        }
    }
}
If you fetchAssets before the user grants privacy access to your app, you'll get a PHFetchResult that's empty.
However, if before making that fetch you register as a photo library observer, you'll get a photoLibraryDidChange callback as soon as the user approves privacy access for the app... from that callback you can access an updated version of your original fetch result (see changeDetails(for:)) that has all of the assets your fetch should have found. Then you can tell your UI to update and display those assets. (This is how Apple's canonical PhotoKit example code works.)
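A minimal sketch of that observer flow (class and property names are illustrative, following the question's code):

import Photos
import UIKit

class PhotoSelectorController: UICollectionViewController, PHPhotoLibraryChangeObserver {
    var allPhotos: PHFetchResult<PHAsset>!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Register before fetching, so an access grant triggers the callback.
        PHPhotoLibrary.shared().register(self)
        allPhotos = PHAsset.fetchAssets(with: .image, options: nil)
    }

    func photoLibraryDidChange(_ changeInstance: PHChange) {
        // Fires as soon as the user approves access (and on any later library change).
        guard let details = changeInstance.changeDetails(for: allPhotos) else { return }
        allPhotos = details.fetchResultAfterChanges
        DispatchQueue.main.async {
            self.collectionView?.reloadData()
        }
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }
}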
Also, once you have a populated fetch result, please don't request thumbnails for the whole thing the way you're doing.
Users commonly have photo libraries with tens of thousands of assets, many of which are in iCloud and not on the local device. If you synchronously get all thumbnails, you'll take forever, use tons of memory and CPU resources, and generate all kinds of network traffic (slowing things down even more) for resources your user may never see.
PhotoKit is designed to allow easy use in conjunction with UI elements like UICollectionView. A collection view only loads cells that are currently (or soon to be) on screen, even if you've told it you have zillions of items in your collection — similarly, you can request thumbnails only for assets that are visible in your collection view. Wherever you have your per-cell UI setup logic is where you should have your PHImageManager request. (Again, this is what the canonical PhotoKit example code does.)
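A sketch of the per-cell request (PhotoCell and its imageView are assumed names; allPhotos is the fetch result from above):

override func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "cellId", for: indexPath) as! PhotoCell
    let asset = allPhotos.object(at: indexPath.item)
    // Asynchronous by default: no isSynchronous, no background queue, no up-front loop.
    PHImageManager.default().requestImage(for: asset,
                                          targetSize: CGSize(width: 200, height: 200),
                                          contentMode: .aspectFit,
                                          options: nil) { image, _ in
        cell.imageView.image = image
    }
    return cell
}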
You can optimize even further by "preheating" the thumbnail fetch/generation process for assets that are soon to be onscreen, and by cancelling that work in progress when further UI updates (e.g. fast scrolling of a large collection) make it unnecessary. PHCachingImageManager does this. (And yet again, it's what the canonical Apple sample does. Actually, that sample is a bit out of date and does more work than it needs to on this front: it does its own calculation of which cells are just outside the scroll rect, but since iOS 10 the UICollectionViewDataSourcePrefetching protocol manages that for you.)
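A sketch of the "preheating" side (again with assumed names; the caching manager must be a stored property on the controller):

// Assumes the controller owns: let cachingManager = PHCachingImageManager()
// and sets collectionView.prefetchingDataSource = self.
extension PhotoSelectorController: UICollectionViewDataSourcePrefetching {
    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        let upcoming = indexPaths.map { allPhotos.object(at: $0.item) }
        cachingManager.startCachingImages(for: upcoming, targetSize: CGSize(width: 200, height: 200), contentMode: .aspectFit, options: nil)
    }

    func collectionView(_ collectionView: UICollectionView, cancelPrefetchingForItemsAt indexPaths: [IndexPath]) {
        let cancelled = indexPaths.map { allPhotos.object(at: $0.item) }
        cachingManager.stopCachingImages(for: cancelled, targetSize: CGSize(width: 200, height: 200), contentMode: .aspectFit, options: nil)
    }
}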