PDFKit's scaleFactorForSizeToFit isn't working to set zoom in SwiftUI (UIViewRepresentable) - swift

I'm working on an app that displays a PDF using PDFKit, and I need to set a minimum zoom level - otherwise the user can just zoom out forever. I've tried setting minScaleFactor and maxScaleFactor, and because these turn off autoScales, I also need to set scaleFactor to pdfView.scaleFactorForSizeToFit. However, this doesn't produce the same initial zoom as autoScales, and changing the scaleFactor value doesn't change the initial zoom at all. This screenshot is with autoScales on:
(screenshot: with autoScales on)
and then what happens when I use the scaleFactorForSizeToFit:
(screenshot: with scaleFactorForSizeToFit)
To quote the Apple documentation, scaleFactorForSizeToFit is the "size to fit" scale factor that autoScales would use for scaling the current document and layout.
I've pasted my code below. Thank you for your help.
import PDFKit
import SwiftUI
import Combine
class DataLoader: ObservableObject {
    @Published var data: Data?
    var cancellable: AnyCancellable?

    func loadUrl(url: URL) {
        cancellable = URLSession.shared.dataTaskPublisher(for: url)
            .map { $0.data }
            .receive(on: RunLoop.main)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .failure(let failureType):
                    print(failureType) // handle potential errors here
                case .finished:
                    break
                }
            }, receiveValue: { data in
                self.data = data
            })
    }
}
struct PDFSwiftUIView: View {
    @StateObject private var dataLoader = DataLoader()
    var StringToBeLoaded: String

    var body: some View {
        VStack {
            if let data = dataLoader.data {
                PDFRepresentedView(data: data)
                    .navigationBarHidden(false)
            } else {
                CustomProgressView()
                //.navigationBarHidden(true)
            }
        }.onAppear {
            dataLoader.loadUrl(url: URL(string: StringToBeLoaded)!)
        }
    }
}
struct PDFRepresentedView: UIViewRepresentable {
    typealias UIViewType = PDFView

    let data: Data
    let singlePage: Bool = false

    func makeUIView(context _: UIViewRepresentableContext<PDFRepresentedView>) -> UIViewType {
        let pdfView = PDFView()
        // pdfView.autoScales = true
        // pdfView.maxScaleFactor = 0.1
        pdfView.minScaleFactor = 1
        pdfView.scaleFactor = pdfView.scaleFactorForSizeToFit
        pdfView.maxScaleFactor = 10
        if singlePage {
            pdfView.displayMode = .singlePage
        }
        return pdfView
    }

    func updateUIView(_ pdfView: UIViewType, context: UIViewRepresentableContext<PDFRepresentedView>) {
        pdfView.document = PDFDocument(data: data)
    }

    func canZoomIn() -> Bool {
        return false
    }
}
struct ContentV_Previews: PreviewProvider {
    static var previews: some View {
        PDFSwiftUIView(StringToBeLoaded: "EXAMPLE_STRING")
            .previewInterfaceOrientation(.portrait)
    }
}

Maybe it has to do with the order of the calls. This sequence seems to work for me:
pdfView.scaleFactor = pdfView.scaleFactorForSizeToFit
pdfView.maxScaleFactor = 10.0
pdfView.minScaleFactor = 1.0
pdfView.autoScales = true

I was eventually able to solve this. The following code is how I managed to solve it:
if let document = PDFDocument(data: data) {
    pdfView.displayDirection = .vertical
    pdfView.autoScales = true
    pdfView.document = document
    pdfView.setNeedsLayout()
    pdfView.layoutIfNeeded()
    pdfView.minScaleFactor = UIScreen.main.bounds.height * 0.00075
    pdfView.maxScaleFactor = 5.0
}
For some reason, pdfView.scaleFactorForSizeToFit doesn't work for me - it always returns 0. This might be an iOS 15 issue; I noticed in another answer that someone else hit the same thing. In the code above, I instead scale the PDF to fit the screen based on the screen height, which more or less lets me "autoscale" on my own. This autoscales the PDF correctly and prevents the user from zooming out too far.

This last solution works, but it effectively hardcodes the scale factor, so it only works if the page height is always the same. And bear in mind that macOS does not have a UIScreen, and even under iPadOS there can be several windows, and with the new Stage Manager a window can have a different height than the screen. This worked for me:
First, wrap your SwiftUI view (in the above example, PDFRepresentedView) in a GeometryReader and pass the view height (proxy.size.height) into PDFRepresentedView.
In makeUIView, set:
    pdfView.maxScaleFactor = some value > 1
    pdfView.autoScales = true
In updateUIView, set:
let pdfView = uiView
if let pageHeight = pdfView.currentPage?.bounds(for: .mediaBox).height {
    let scaleFactor: CGFloat = self.viewHeight / pageHeight
    pdfView.minScaleFactor = scaleFactor
}
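Putting those pieces together, a minimal sketch of this approach might look like the following. The type names `FittedPDFView` and `PDFContainer` are illustrative, not from the original code; the idea is simply that the GeometryReader's height flows into the representable, which derives minScaleFactor from the current page's mediaBox height.

```swift
import PDFKit
import SwiftUI

// Hypothetical wrapper: receives the available height from a GeometryReader
// so minScaleFactor tracks the actual view size, not UIScreen.
struct FittedPDFView: UIViewRepresentable {
    let data: Data
    let viewHeight: CGFloat

    func makeUIView(context: Context) -> PDFView {
        let pdfView = PDFView()
        pdfView.maxScaleFactor = 5.0 // any value > 1
        pdfView.autoScales = true
        return pdfView
    }

    func updateUIView(_ pdfView: PDFView, context: Context) {
        if pdfView.document == nil {
            pdfView.document = PDFDocument(data: data)
        }
        // Scale so one page height fills the view height.
        if let pageHeight = pdfView.currentPage?.bounds(for: .mediaBox).height,
           pageHeight > 0 {
            pdfView.minScaleFactor = viewHeight / pageHeight
        }
    }
}

struct PDFContainer: View {
    let data: Data

    var body: some View {
        GeometryReader { proxy in
            FittedPDFView(data: data, viewHeight: proxy.size.height)
        }
    }
}
```

Because the scale is recomputed in updateUIView, it also adapts when the window is resized and a new height comes through the GeometryReader.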
Since my app also supports macOS, I have written an NSViewRepresentable in the same way.
Happy coding!

Related

ImageAnalysisInteraction in UIViewRepresentable not working correctly

I have a simple UIViewRepresentable wrapper for the live text feature (ImageAnalysisInteraction). It was working without issues until I started updating the UIImage inside the updateUIView(...) function.
I have always seen this error in the console, which originates from this view:
[api] -[CIImage initWithCVPixelBuffer:options:] failed because the buffer is nil.
When I change the image, it's updating correctly, but the selectableItemsHighlighted overlay stays the same and I can still select the text of the old image (even though it's no longer visible).
import UIKit
import SwiftUI
import VisionKit

@MainActor
struct LiveTextInteraction: UIViewRepresentable {
    @Binding var image: UIImage
    let interaction = ImageAnalysisInteraction()
    let imageView = LiveTextImageView()
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.text])

    func makeUIView(context: Context) -> UIImageView {
        interaction.setSupplementaryInterfaceHidden(true, animated: true)
        imageView.image = image
        imageView.addInteraction(interaction)
        imageView.contentMode = .scaleAspectFit
        return imageView
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {
        Task {
            uiView.image = image
            do {
                if let image = uiView.image {
                    let analysis = try await analyzer.analyze(image, configuration: configuration)
                    interaction.analysis = analysis
                    interaction.preferredInteractionTypes = .textSelection
                    interaction.selectableItemsHighlighted = true
                    interaction.setContentsRectNeedsUpdate()
                }
            } catch {
                // handle analysis errors here
            }
        }
    }
}

class LiveTextImageView: UIImageView {
    // Use intrinsicContentSize to change the default image size
    // so that we can change the size in our SwiftUI View
    override var intrinsicContentSize: CGSize {
        .zero
    }
}
What am I doing wrong here?
It looks like a bug. Try using dispatch:
let highlighted = interaction.selectableItemsHighlighted
interaction.analysis = analysis // this resets highlighted to false
if highlighted {
    DispatchQueue.main.async {
        interaction.selectableItemsHighlighted = highlighted
    }
}
You don't need interaction.setContentsRectNeedsUpdate() if the interaction is added to a UIImageView.

Unable to Print Multipage Print Preview with NSTableView Data in Cocoa App

I want to print NSTableView data as a multipage print preview, but the preview shows only the first few records; the rest of the pages are empty. I am using the code below to print the NSTableView data:
let printInfo = NSPrintInfo.shared
printInfo.paperSize = NSSize(width: self.reporttableview.frame.width , height: 800.00)
printInfo.verticalPagination = .automatic
let operation: NSPrintOperation = NSPrintOperation(view: self.reporttableview, printInfo: printInfo)
operation.printPanel.options.insert([.showsPaperSize, .showsOrientation])
operation.run()
The code above works fine with a small number of records, around 30-40 rows, but when there are around 100 or more records it prints only the first few and the rest of the pages are empty. Any help would be really appreciated. I have attached table view and print preview screenshots for better understanding.
TableView Records:
Table view screen shot
Print Preview:
Print preview screen shot
You can see in the screenshot above that only the first few pages have records; the rest are empty. And if I scroll the table view, the whole print preview is empty.
Empty print preview screen shot
I am not able to understand what I am doing wrong.
I had the same problem. I believe it is caused by the way AppKit displays NSTableViews efficiently: it reduces the memory load of large tables by only generating display data for about three "pages" of the view. To get the entire NSTableView to print, set the table view's usesStaticContents instance property to true before printing (and set it back to false once printing is complete). I have tested this successfully on a table spanning 33 A4 landscape pages and it works fine (although it takes ~5 seconds before the print panel is displayed).
Here is the code that worked for me:
struct ReportsDetailView: View {
    @Binding var reportNodes: [ReportNode]
    @State private var viewToPrint = NSView()
    @State private var printing = false

    var body: some View {
        VStack {
            ReportsDetailTableVC(reportNodes: $reportNodes, viewToPrint: $viewToPrint, printing: $printing)
            HStack {
                Spacer()
                Button("Print Report") {
                    printing = true
                    let scale: CGFloat = 800 / viewToPrint.frame.width
                    let printInfo = NSPrintInfo()
                    printInfo.horizontalPagination = .automatic
                    printInfo.verticalPagination = .automatic
                    printInfo.isVerticallyCentered = true
                    printInfo.isHorizontallyCentered = true
                    printInfo.printer = NSPrinter(name: NSPrinter.printerNames[0])!
                    printInfo.paperSize = NSSize(width: 595.28, height: 841.89)
                    printInfo.topMargin = 10
                    printInfo.bottomMargin = 10
                    printInfo.leftMargin = 10
                    printInfo.rightMargin = 10
                    printInfo.orientation = .landscape
                    printInfo.scalingFactor = scale
                    let printOperation = NSPrintOperation(view: viewToPrint, printInfo: printInfo)
                    printOperation.run()
                    printing = false
                }
                .disabled(printing)
            }
            .padding(10)
        }
        .frame(maxWidth: .infinity, maxHeight: .infinity)
    }
}

struct ReportsDetailTableVC: NSViewControllerRepresentable {
    @Environment(\.colorScheme) var colorScheme
    @Binding var reportNodes: [ReportNode]
    @Binding var viewToPrint: NSView
    @Binding var printing: Bool

    func makeNSViewController(context: Context) -> some NSViewController {
        let reportsDetailVC = ReportsDetailTableViewController()
        return reportsDetailVC
    }

    func updateNSViewController(_ nsViewController: NSViewControllerType, context: Context) {
        guard let reportsDetailVC = nsViewController as? ReportsDetailTableViewController else { return }
        reportsDetailVC.setContents(reportNodes: reportNodes)
        if printing {
            reportsDetailVC.tableView.scrollRowToVisible(0)
            reportsDetailVC.tableView.usesStaticContents = true
            reportsDetailVC.tableView.appearance = NSAppearance(named: .aqua)
            reportsDetailVC.tableView.usesAlternatingRowBackgroundColors = false
        } else {
            reportsDetailVC.tableView.usesStaticContents = false
            reportsDetailVC.tableView.appearance = colorScheme == .light ? NSAppearance(named: .aqua) : NSAppearance(named: .darkAqua)
            reportsDetailVC.tableView.usesAlternatingRowBackgroundColors = true
        }
        DispatchQueue.main.async { viewToPrint = reportsDetailVC.tableView }
    }
}

Use MetalView with SwiftUI? How do I put something to display in there?

I'm stuck with SwiftUI and Metal up to the point of being about to give up.
I got this example from https://developer.apple.com/forums/thread/119112?answerId=654964022#654964022 :
import MetalKit
struct MetalView: NSViewRepresentable {
    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    func makeNSView(context: NSViewRepresentableContext<MetalView>) -> MTKView {
        let mtkView = MTKView()
        mtkView.delegate = context.coordinator
        mtkView.preferredFramesPerSecond = 60
        mtkView.enableSetNeedsDisplay = true
        if let metalDevice = MTLCreateSystemDefaultDevice() {
            mtkView.device = metalDevice
        }
        mtkView.framebufferOnly = false
        mtkView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
        mtkView.drawableSize = mtkView.frame.size
        return mtkView
    }

    func updateNSView(_ nsView: MTKView, context: NSViewRepresentableContext<MetalView>) {
    }

    class Coordinator: NSObject, MTKViewDelegate {
        var parent: MetalView
        var metalDevice: MTLDevice!
        var metalCommandQueue: MTLCommandQueue!

        init(_ parent: MetalView) {
            self.parent = parent
            if let metalDevice = MTLCreateSystemDefaultDevice() {
                self.metalDevice = metalDevice
            }
            self.metalCommandQueue = metalDevice.makeCommandQueue()!
            super.init()
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        }

        func draw(in view: MTKView) {
            guard let drawable = view.currentDrawable else {
                return
            }
            let commandBuffer = metalCommandQueue.makeCommandBuffer()
            let rpd = view.currentRenderPassDescriptor
            rpd?.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1)
            rpd?.colorAttachments[0].loadAction = .clear
            rpd?.colorAttachments[0].storeAction = .store
            let re = commandBuffer?.makeRenderCommandEncoder(descriptor: rpd!)
            re?.endEncoding()
            commandBuffer?.present(drawable)
            commandBuffer?.commit()
        }
    }
}
... but I can't get my head around how to use this MetalView(), which does seem to work when I call it from a SwiftUI view, to display data. I want to use it to display a CIImage which will be filtered and manipulated with CIFilters...
Can someone please point me in the right direction on how to tell this view how to display something? I think I need it to display the content of a texture but tried countless hours and ended up starting from scratch for more countless times...
This is how I run my image filters now, but it results in very slow sliders, which is why I decided to try learning about Metal... but it's been really time-consuming and frustrating due to the lack of documentation...
func ciExposure(inputImage: CIImage, inputEV: Double) -> CIImage {
    let filter = CIFilter(name: "CIExposureAdjust")!
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(inputEV, forKey: kCIInputEVKey)
    return filter.outputImage!
}
I think I need to take that filter.outputImage and pass it on to the MetalView somehow?
Any help is really, really appreciated...
Apple's WWDC 2022 contained a tutorial/video entitled "Display EDR Content with Core Image, Metal, and SwiftUI" which describes how to blend Core Image with Metal and SwiftUI. It points to some new sample code entitled "Generating an Animation with a Core Image Render Destination" (here).
This sample project is very CoreImage-centric (which should suit your purposes nicely), but I wish Apple would post more sample-code examples showing Metal integrated with SwiftUI.
I have a small Core Image + SwiftUI sample project on Github that might be a good starting point for you. It doesn't cover a lot yet, but it demonstrates how to display filtered camera frames already.
Especially check out the draw function of the view. It's used to render a CIImage into the MTKView (you can do the same in your delegate's draw function).
Ok so this does the trick for me:
func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable else {
        return
    }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let commandBuffer = metalCommandQueue.makeCommandBuffer()
    let rpd = view.currentRenderPassDescriptor
    rpd?.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1)
    rpd?.colorAttachments[0].loadAction = .clear
    rpd?.colorAttachments[0].storeAction = .store
    let re = commandBuffer?.makeRenderCommandEncoder(descriptor: rpd!)
    re?.endEncoding()
    context.render((AppState.shared.rawImage ?? AppState.shared.rawImageOriginal)!,
                   to: drawable.texture,
                   commandBuffer: commandBuffer,
                   bounds: AppState.shared.rawImageOriginal!.extent,
                   colorSpace: colorSpace)
    commandBuffer?.present(drawable)
    commandBuffer?.commit()
}
AppState.shared.rawImage is the CIImage produced by my filtering function.
The context is created elsewhere, but it should be:
context = CIContext(mtlDevice: metalDevice)
Next up is adding the centering part of the code provided by Frank Schlegel.

Accessing GeometryReader in values set in makeUIView in SwiftUI

So I have rendered a Mapbox map using the UIViewRepresentable protocol and the following function:
func makeUIView(context: Context) -> MGLMapView {
    let map = MGLMapView()
    DispatchQueue.main.async {
        map.styleURL = self.mapStyle
        map.delegate = context.coordinator
        map.showsUserLocation = true
        map.attributionButtonPosition = .topLeft
        map.logoViewPosition = .topLeft
        map.logoViewMargins.y = 15
        map.logoViewMargins.x = 12
        map.attributionButtonMargins.x = 100
        map.attributionButtonMargins.y = 15
        self.configure(map)
    }
    return map
}
The only problem is that because I am hardcoding the values for the y margin, the positioning is not ideal across multiple devices. I'd like to use a GeometryReader to access the safeAreaInsets and then make the y padding a function of that. Does anyone know how to do this?
You can pass GeometryProxy as argument of view representable constructor, like below
struct ContentView: View {
    var body: some View {
        GeometryReader {
            DemoView(proxy: $0)
        }
    }
}

struct DemoView: UIViewRepresentable {
    let proxy: GeometryProxy

    func makeUIView(context: Context) -> MGLMapView {
        // use self.proxy.size here
    }

    // ... other code
}

Displaying PHAssets (Photos) using PHCachingImageManager in SwiftUI app - IF statement in body View not working

I am trying to load a photo using PHCachingImageManager in my SwiftUI app. I am able to get the photo but unable to get my SwiftUI view to display it.
I have a view called PhotoCell as shown below. I pass the Binding<UIImage> to a function which uses PHCachingImageManager to load the asset. This works and returns a 90x120 image to the Binding's set function. Inside the set function you can see that hasLoadedImage is set to true.
The body of the View is composed of an if statement, but the true branch never executes; only the initial state, where hasLoadedImage is false, is ever shown. Just as a test I replaced the Image inside the if with a Text, but even that does not display once hasLoadedImage is set to true. I've used if statements in body getters frequently.
I feel like I've overlooked something obvious.
struct PhotoCell: View {
    @State var photoImage: UIImage = UIImage()
    @State var hasLoadedImage: Bool = false
    var photoAsset: PHAsset

    var photoBinding: Binding<UIImage> {
        Binding<UIImage>(
            get: {
                return self.photoImage
            },
            set: { newValue in
                print("Loaded image \(newValue.size)") // this prints with the correct size
                self.photoImage = newValue
                self.hasLoadedImage = true // seems to have no effect
            })
    }

    init(photoAsset: PHAsset) {
        self.photoAsset = photoAsset
        let options: PHImageRequestOptions = PHImageRequestOptions()
        options.deliveryMode = .fastFormat
        options.isNetworkAccessAllowed = true
        options.resizeMode = .fast
        PhotosManager.shared.loadAsset(asset: photoAsset, size: CGSize(width: 48, height: 48), options: options, image: self.photoBinding)
    }

    var body: some View {
        if self.hasLoadedImage { // shouldn't this execute once the value is changed??
            return AnyView(
                Image(uiImage: self.photoImage)
                    .resizable()
                    .frame(width: 48, height: 48)
            )
        } else {
            return AnyView(
                Image(systemName: "goforward")
            )
        }
    }
}
So out of frustration I came up with a work-around which is actually simpler than what I was trying to do. I constructed a UIViewRepresentable wrapper for UIImageView:
struct MyImage: UIViewRepresentable {
    var photoAsset: PHAsset

    func makeUIView(context: Context) -> UIImageView {
        let uiView = UIImageView(image: UIImage(systemName: "goforward") ?? UIImage())
        return uiView
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {
        let options: PHImageRequestOptions = PHImageRequestOptions()
        options.deliveryMode = .fastFormat
        options.isNetworkAccessAllowed = true
        options.resizeMode = .fast
        PhotosManager.shared.loadAsset(asset: photoAsset, into: uiView, options: options)
    }
}
The loadAsset function is just this (where imageManager is an instance of PHCachingImageManager), if you want to test it. I would still like to understand why my original code is not working; I suspect it has something to do with the closure from requestImage capturing self inappropriately, but I just don't see it.
func loadAsset(asset: PHAsset, into imageView: UIImageView, options: PHImageRequestOptions?) {
    let size = CGSize(width: 48, height: 48)
    imageManager.requestImage(for: asset, targetSize: size, contentMode: .aspectFit, options: options) { result, info in
        if let result = result {
            imageView.image = result
        }
    }
}