How do I use background fetch in iOS 13 to update PDFs? - swift

I watched many videos and read many articles about the new background fetch in iOS 13, but I am still in the dark. I am making a dining app. Among other things I present the menu for "today", which changes every day. So far I use PDFKit to show an example menu of a restaurant and have the menus for "today" and "tomorrow" bundled in my project folder, but I want to use background fetch to update the menu every day. So far I have only understood that I have to check the boxes for "background fetch" and "background processing", where the refresh task is used to update content and the processing task is used to clean up.
I am also wondering whether I need my own website where I upload the PDF so I can retrieve its URL from there. In the end I will have to update an image and some labels as well, but I hope I'll be able to do that on my own once I understand the principles.
Here is my code (and a screenshot of my view controller) to give you a better understanding of my app so far.
let today = "AmericanDinerMenu"    // PDF 1 with menu for today
let tomorrow = "ConniesDinerMenu"  // PDF 2 with menu for tomorrow

func activePDF(PDF: String) {
    if let path = Bundle.main.path(forResource: PDF, ofType: "pdf") {
        let url = URL(fileURLWithPath: path)
        if let pdfDocument = PDFDocument(url: url) {
            pdfView.displayMode = .singlePageContinuous
            pdfView.autoScales = true
            pdfView.document = pdfDocument
        }
    }
}
I hope you can help me. Thank you!
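For anyone landing here: on iOS 13 the "background fetch" checkbox maps to a BGAppRefreshTask scheduled through BGTaskScheduler. Below is a rough sketch of how the daily download could be wired up; the task identifier, the URL the PDF is fetched from, and the destination file name are placeholders and would have to match your own setup (the identifier also needs to be listed under "Permitted background task scheduler identifiers" in Info.plist, and the PDF has to be hosted somewhere the app can reach).

import BackgroundTasks
import UIKit

// Hypothetical task identifier -- must also appear in Info.plist.
let refreshTaskID = "com.example.dining.menurefresh"

// Call this from application(_:didFinishLaunchingWithOptions:).
func registerBackgroundTasks() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID, using: nil) { task in
        handleMenuRefresh(task: task as! BGAppRefreshTask)
    }
}

// Ask the system to run the refresh later, e.g. when the app enters the background.
func scheduleMenuRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 6 * 60 * 60) // no earlier than ~6 hours
    try? BGTaskScheduler.shared.submit(request)
}

func handleMenuRefresh(task: BGAppRefreshTask) {
    scheduleMenuRefresh() // schedule the next refresh before doing the work

    // Hypothetical URL where the daily menu PDF would be hosted.
    let menuURL = URL(string: "https://example.com/menus/today.pdf")!
    let download = URLSession.shared.downloadTask(with: menuURL) { tempURL, _, error in
        if let tempURL = tempURL, error == nil {
            // Move the file into Documents so the view controller can load it later.
            let destination = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                .appendingPathComponent("today.pdf")
            try? FileManager.default.removeItem(at: destination)
            try? FileManager.default.moveItem(at: tempURL, to: destination)
        }
        task.setTaskCompleted(success: error == nil)
    }
    task.expirationHandler = { download.cancel() }
    download.resume()
}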

Related

ARVideoNode in RealityKit

I was experimenting with OpenAI's ChatGPT, and when I asked it for code to play a video in a RealityKit AR scene when a reference image is tracked, it used ARVideoNode instead of the AVPlayer and VideoMaterial solution I expected. It even gave me an answer for why ARVideoNode is better than AVPlayer on a VideoMaterial when I asked, but I have never heard of ARVideoNode in RealityKit.
Am I missing something, or is it just a flaw in the AI?
import RealityKit

// Set up image tracking
let configuration = ARImageTrackingConfiguration()
configuration.detectionImages = ["reference-image-1", "reference-image-2"]
arView.session.run(configuration)

// Create a dictionary to map reference image names to video file names
let videoFileNames = ["reference-image-1": "video-1.mp4",
                      "reference-image-2": "video-2.mp4"]

// Track the reference images and display the corresponding
// videos on top of them
var videoNodes = [String: ARVideoNode]()
arView.scene.subscribe(to: ARImageAnchor.self) { (anchor: ARImageAnchor) in
    // Get the video file name for the tracked image
    guard let videoFileName = videoFileNames[anchor.name] else { return }

    // Load the video file
    let videoURL = URL(fileURLWithPath: "path/to/\(videoFileName)")
    let videoAsset = VideoAsset(url: videoURL)

    // Create an ARVideoNode and add it to the scene
    let videoNode = ARVideoNode(asset: videoAsset)
    arView.scene.anchors.append(videoNode)
    videoNodes[anchor.name] = videoNode

    // Position the video node on top of the image
    videoNode.transform = anchor.transform

    // Play the video
    videoNode.play()
}

// Monitor the tracking status of the reference images
// and pause/resume the videos as needed
arView.scene.subscribe(to: ARImageAnchor.self) { (anchor: ARImageAnchor) in
    // Get the video node for the tracked image
    guard let videoNode = videoNodes[anchor.name] else { return }
    if anchor.isTracked {
        // Resume playing the video if the image is being tracked
        videoNode.play()
    } else {
        // Pause the video if the image is not being tracked
        videoNode.pause()
    }
}
It's not a flaw in the AI so much as a plainly wrong answer from ChatGPT. As far as I know, Kudan's ARVideoNode is a subclass of an ARNode parent class that is used to render video content. It has nothing to do with RealityKit, even in terms of programming language: the KudanAR SDK natively uses Objective-C for iOS and Java for Android.
Here's SO's temporary policy regarding ChatGPT:
Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.
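For what it's worth, the approach the asker expected does exist in RealityKit: drive a VideoMaterial with an AVPlayer and attach it to an entity anchored to the tracked image. Here is a rough sketch only (VideoMaterial requires iOS 14+; the "AR Resources" group, image name, bundled video file, and plane size are assumptions):

import RealityKit
import ARKit
import AVFoundation

func makeVideoAnchor() -> AnchorEntity {
    // Anchor the content to a tracked reference image from the "AR Resources" asset group.
    let anchor = AnchorEntity(.image(group: "AR Resources", name: "reference-image-1"))

    // Build an AVPlayer for the bundled video and wrap it in a VideoMaterial.
    let videoURL = Bundle.main.url(forResource: "video-1", withExtension: "mp4")!
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    // A thin plane placed on the detected image (size in metres is an assumption).
    let plane = ModelEntity(mesh: .generatePlane(width: 0.2, depth: 0.15), materials: [material])
    anchor.addChild(plane)

    player.play()
    return anchor
}

// Usage (assuming the session is already running an image-tracking configuration):
// arView.scene.addAnchor(makeVideoAnchor())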

Image loading taking too much time in Swift

I am working on an application where I load some images from a server with Alamofire and show them in a table view. The problem is that the images take a long time to load. I want to show something like a blurred placeholder image until the real image has finished loading. Please, someone, help me.
cell.profileImageView?.pin_setImage(from: URL(string: modelArray[indexPath.row]))
This is a solution using the Kingfisher pod. You can show an activity indicator while the image is being loaded.
let imageUrl = modelArray[indexPath.row]

// Show an activity indicator until the image is loaded
cell.profileImageView?.kf.indicatorType = .activity

if let url = URL(string: imageUrl) {
    let resource = ImageResource(downloadURL: url)
    cell.profileImageView?.kf.setImage(with: resource)
}
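Since the question specifically asks for a blurred placeholder while the download is in flight, Kingfisher's setImage also accepts a placeholder image and transition options. A small sketch, assuming a pre-blurred "placeholder" image exists in the asset catalog:

if let url = URL(string: modelArray[indexPath.row]) {
    cell.profileImageView?.kf.indicatorType = .activity
    cell.profileImageView?.kf.setImage(
        with: url,
        placeholder: UIImage(named: "placeholder"), // hypothetical blurred placeholder asset
        options: [.transition(.fade(0.3))]          // fade in once the download finishes
    )
}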

How to create an Apple watchOS 5 complication?

I've never worked with watchOS 5 and want to develop a horizontal complication (modular large) for Apple Watch, like "Heart Rate". The idea is that I would display heart-rate data in a different way. Right now I want to deploy the complication on a development watch.
I have created a new project with the "complication" checkbox selected. I see that this added a complications controller with timeline configuration placeholders.
There is also a storyboard with a bunch of empty screens. I'm not sure how much effort I need to put into an Apple Watch app before I can deploy it. I see this Apple doc, but it does not describe how to lay out my complication. Some sections seem to have missing links.
Can I provide one style of complication only (large horizontal - modular large)?
Do I need to provide any iPhone app content beyond managing the complication logic, or can I get away without having a view controller?
Do I control the appearance of my complication by adding something to the assets folder (it has a bunch of graphic slots)?
Sorry for a complete beginner project; I have not seen a project focusing specifically on the horizontal complication for watchOS 5.
You should be able to deploy it immediately, though it won't do anything. Have a look at the WWDC video explaining how to create a complication: video
You can't lay out the complication yourself; you choose from a set of templates that you fill with data. The screens you are seeing are for your watch app, not the complication.
You don't have to support all complication styles.
The complication logic is part of your WatchKit Extension, so technically you don't need anything in the iOS companion app. I'm not sure how much functionality you have to provide to get past app review, though.
Adding your graphics to the asset catalog won't do anything on its own; you have to reference them when configuring the templates.
Here's an example by Apple of how to communicate with the Apple Watch app. You need to painstakingly read the readme about 25 times to get all the app group identifiers changed in that project.
Your main phone app's assets are not visible to the watch app.
Your watch storyboard assets go in the WatchKit target.
Your programmatically accessed assets go in the watch extension target.
Original answers:
Can I provide one style of complication only (large horizontal - modular large)? - YES
Do I need to provide any iPhone app content beyond managing the complication logic, or can I get away without having a view controller? - YES; watch apps have computation limits imposed on them
Do I control the appearance of my complication by adding something to the assets folder (it has a bunch of graphic slots)? - See below; it's both the assets folder and placeholders
Modify the example above to create a placeholder image displayed on the watch (shown when you are selecting a complication while modifying the watch face layout):
func getPlaceholderTemplate(for complication: CLKComplication, withHandler handler: @escaping (CLKComplicationTemplate?) -> Void) {
    if complication.family == .graphicRectangular {
        // Display a placeholder string on the body.
        let template = CLKComplicationTemplateGraphicRectangularLargeImage()
        template.textProvider = CLKSimpleTextProvider(text: "---")
        let image = UIImage(named: "imageFromWatchExtensionAssets") ?? UIImage()
        template.imageProvider = CLKFullColorImageProvider(fullColorImage: image)
        // Pass the template to ClockKit.
        handler(template)
    } else {
        handler(nil)
        return
    }
}
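The placeholder above only covers the complication picker; what actually appears on the watch face comes from the timeline methods. Here is a minimal sketch of the matching current-entry callback, with a hard-coded heart-rate string standing in for real data:

func getCurrentTimelineEntry(for complication: CLKComplication,
                             withHandler handler: @escaping (CLKComplicationTimelineEntry?) -> Void) {
    guard complication.family == .graphicRectangular else {
        handler(nil)
        return
    }
    let template = CLKComplicationTemplateGraphicRectangularLargeImage()
    template.textProvider = CLKSimpleTextProvider(text: "72 BPM") // stand-in value
    let image = UIImage(named: "imageFromWatchExtensionAssets") ?? UIImage()
    template.imageProvider = CLKFullColorImageProvider(fullColorImage: image)
    handler(CLKComplicationTimelineEntry(date: Date(), complicationTemplate: template))
}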
Sending small packets to the watch (this will not send images!):
func updateHeartRate(with sample: HKQuantitySample) {
    let context: [String: Any] = ["title": "String from phone"]
    do {
        try WCSession.default.updateApplicationContext(context)
    } catch {
        print("Failed to transmit app context")
    }
}
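On the watch side, the counterpart is the WCSessionDelegate callback that receives that context. A sketch, assuming the extension delegate (or some other object) is set up as the session delegate and that the complication should be reloaded when new data arrives (needs ClockKit and WatchConnectivity imported in the extension):

func session(_ session: WCSession, didReceiveApplicationContext applicationContext: [String: Any]) {
    if let title = applicationContext["title"] as? String {
        // Store the value and ask ClockKit to refresh the complication.
        UserDefaults.standard.set(title, forKey: "latestTitle")
        let server = CLKComplicationServer.sharedInstance()
        server.activeComplications?.forEach { server.reloadTimeline(for: $0) }
    }
}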
Transferring images and files:
func uploadImage(_ image: UIImage, name: String, title: String = "") {
    let data: Data? = UIImagePNGRepresentation(image)
    do {
        let fileManager = FileManager.default
        let cachesDirectory = try fileManager.url(for: .cachesDirectory,
                                                  in: .userDomainMask,
                                                  appropriateFor: nil,
                                                  create: true)
        let fileURL = cachesDirectory.appendingPathComponent("\(name).png")
        // Overwrite any previous copy of the file before writing the new one.
        if fileManager.fileExists(atPath: fileURL.path) {
            try fileManager.removeItem(at: fileURL)
        }
        try data?.write(to: fileURL, options: Data.WritingOptions.atomic)
        if WCSession.default.activationState != .activated {
            print("session not activated")
        }
        // fileTransfer is assumed to be a stored WCSessionFileTransfer property on the containing class.
        fileTransfer = WCSession.default.transferFile(fileURL, metadata: ["name": name, "title": title])
        print("Completed transfer \(name)")
    } catch {
        print(error)
    }
}
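And the receiving end of the file transfer on the watch: the incoming file lives at a temporary URL that is deleted once the delegate method returns, so copy it somewhere permanent first. A sketch using the same metadata keys as above:

func session(_ session: WCSession, didReceive file: WCSessionFile) {
    // The incoming file is removed when this method returns, so move it into Documents.
    let name = (file.metadata?["name"] as? String) ?? "received"
    let destination = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("\(name).png")
    try? FileManager.default.removeItem(at: destination)
    try? FileManager.default.moveItem(at: file.fileURL, to: destination)
}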

Is it possible to (programmatically) set wallpapers for each separate "space"/desktop in macOS?

I'm making a small app for myself to change the desktop background image periodically.
My program contains this block of code:
let screen = NSScreen.main()!
let newWallpaperURL = URL(/* ... */)
// ...
try! NSWorkspace.shared().setDesktopImageURL(newWallpaperURL, for: screen, options: [:])
This works, but only for the current "space" that has keyboard focus.
E.g. if I'm in a fullscreen app, only the background of the Space occupied by the fullscreen app will be changed (not the background of my normal desktop).
If I have two Spaces/desktops, it only changes the background image of one of them.
Is it possible to individually set wallpapers for each space programmatically?
You can get all the screens and set the wallpaper for each of them:
let screens = NSScreen.screens
let newWallpaperURL = URL(/* ... */)
for screen in screens {
    try! NSWorkspace.shared.setDesktopImageURL(newWallpaperURL, for: screen, options: [:])
}
Use this in Xcode 8.x:
if let screens = NSScreen.screens() {
    let newWallpaperURL = URL(/* ... */)
    for screen in screens {
        try? NSWorkspace.shared().setDesktopImageURL(newWallpaperURL, for: screen, options: [:])
    }
}
Unlike the other solutions posted here, this one will work in the current Xcode 8. NSScreen.screens is a class var in Xcode 9 (currently in beta) but a class func in Xcode 8, which is why you need to write .screens() instead of .screens. Also, screens() returns an optional, so you need to safely unwrap it before passing it to the for loop.

Swift UIPasteboard not copying PNG

My problem is really odd. In the Simulator the .png copies to the clipboard fine and I can paste the image into the Contacts app. But when I put the app on the phone, the PNG is not copied to the clipboard.
let img = UIImage(named: "myimage")
let data = NSData(data: UIImagePNGRepresentation(img) )
UIPasteboard.generalPasteboard().setData(data, forPasteboardType: "public.png")
That's the code I'm using, but like I said, it does not copy to the clipboard. I'm using this code within the context of a keyboard extension, although that shouldn't matter when copying to the clipboard. If anyone has any ideas, please let me know. Thanks in advance! This is my first app in Swift and my first iOS app, so I don't have the seasoned experience to know if this is a Swift issue or something I'm just missing. =\
Make sure the code runs fine in your host app (not the keyboard extension app).
For example, check whether the image read back from the pasteboard has the same resolution:
// The pasteboard is nil if full access is not granted
let pbWrapped: UIPasteboard? = UIPasteboard.generalPasteboard()
if let pb = pbWrapped {
    var type = UIPasteboardTypeListImage[0] as! String
    if (count(type) > 0) && (image != nil) {
        pb.setData(UIImagePNGRepresentation(image), forPasteboardType: type)
        var readDataWrapped: NSData? = pb.dataForPasteboardType(type)
        if let readData = readDataWrapped {
            var readImage = UIImage(data: readData, scale: 2)
            println("\(image) == \(pb.image) == \(readImage)")
        }
    }
}
If the pasteboard object is nil in your keyboard app that means you haven't provided full access to the keyboard: Copying and pasting image into a textbook in simulator
I believe you can use this line to do what you want (not able to test it out right now):
let image = UIImage(named: "myimage.png")
UIPasteboard.generalPasteboard().image = image;
Hopefully that works, I'm a little rusty with UIPasteboard.
There are lots of bugs and issues with the UIPasteboard class, so I'm really not surprised that you're having issues with something that so obviously is supposed to work. The documentation isn't that helpful either, to be honest. But try this; this worked for me on a physical device, and it's different to the above methods that are supposed to work but evidently don't for a bunch of people.
guard let imagePath = NSBundle.mainBundle().pathForResource("OliviaWilde", ofType: "jpg") else { return }
guard let imageData = NSData(contentsOfFile: imagePath) else { return }
let pasteboard = UIPasteboard.generalPasteboard()
pasteboard.setData(imageData, forPasteboardType: "public.jpeg")
You can use either "public.jpeg" or "public.png" even though the source file is a .jpg; it still works. I think it only changes the format of the thing that gets pasted?
Also, did you try adding the file extension in your first line of code where you create the UIImage? That might make it work too.
Evidently the use of this class is temperamental, and not just in this use case. Even though we're doing the same thing, the only difference in this code is that we're creating the NSData from a file path rather than from a UIImage. Lol, let me know if that works for you.
Ensure that RequestsOpenAccess is set to YES under NSExtension > NSExtensionAttributes in the extension's Info.plist.