I am trying to add a "Drag and Drop" gesture / function to my SwiftUI Mac application.
I want to drop files from the system / Desktop into my application. I found that this is possible in regular Swift (AppKit); now I am trying to do the same in SwiftUI.
SwiftUI offers an onDrop() function for Views. However, it looks like this only handles drags that originate inside my application, and I want to drag files in from outside.
In plain AppKit you need to register your NSView for the dragged types:
registerForDraggedTypes([kUTTypeFileURL,kUTTypeImage])
I thought of creating a NSViewRepresentable and wrap that into my SwiftUI view.
This is the code I came up with; however, I cannot call registerForDraggedTypes.
final class DragDropView: NSViewRepresentable {
func makeNSView(context: NSViewRepresentableContext<DragDropView>) -> NSView {
let view = NSView()
view.registerForDraggedTypes([NSPasteboard.PasteboardType.pdf, NSPasteboard.PasteboardType.png])
return view
}
func updateNSView(_ nsView: NSView, context: NSViewRepresentableContext<DragDropView>) {
}
}
Is there a simpler solution for that in SwiftUI? I would love to use that onDrop() function, but this is not working for external files, is it?
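(For completeness, if you do want the AppKit route: registering the dragged types alone is not enough; the NSView also has to implement the NSDraggingDestination methods. A minimal sketch of how the NSViewRepresentable approach could be completed, assuming you only care about file URLs:)

import SwiftUI
import AppKit

// An NSView subclass that actually receives external drags.
final class FileDropNSView: NSView {
    var onFileURL: ((URL) -> Void)?

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        registerForDraggedTypes([.fileURL])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation {
        return .copy
    }

    override func performDragOperation(_ sender: NSDraggingInfo) -> Bool {
        guard let url = NSURL(from: sender.draggingPasteboard) as URL? else { return false }
        onFileURL?(url)
        return true
    }
}

// The SwiftUI wrapper.
struct FileDropView: NSViewRepresentable {
    var onFileURL: (URL) -> Void

    func makeNSView(context: Context) -> FileDropNSView {
        let view = FileDropNSView(frame: .zero)
        view.onFileURL = onFileURL
        return view
    }

    func updateNSView(_ nsView: FileDropNSView, context: Context) {
        nsView.onFileURL = onFileURL
    }
}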
Here is a demo of drag & drop, tested with Xcode 11.4 / macOS 10.15.4.
The initial image is located in the asset catalog. The view accepts drops of a file URL from Finder/Desktop (for simplicity only), supports dragging the image out to TextEdit, and registers the drag as a TIFF representation.
struct TestImageDragDrop: View {
@State var image = NSImage(named: "image")
@State private var dragOver = false
var body: some View {
Image(nsImage: image ?? NSImage())
.onDrop(of: ["public.file-url"], isTargeted: $dragOver) { providers -> Bool in
providers.first?.loadDataRepresentation(forTypeIdentifier: "public.file-url", completionHandler: { (data, error) in
if let data = data, let path = NSString(data: data, encoding: 4), let url = URL(string: path as String) {
let image = NSImage(contentsOf: url)
DispatchQueue.main.async {
self.image = image
}
}
})
return true
}
.onDrag {
let data = self.image?.tiffRepresentation
let provider = NSItemProvider(item: data as NSSecureCoding?, typeIdentifier: kUTTypeTIFF as String)
provider.previewImageHandler = { (handler, _, _) -> Void in
handler?(data as NSSecureCoding?, nil)
}
return provider
}
.border(dragOver ? Color.red : Color.clear)
}
}
As the OP noted, this is for a macOS application only.
If you want to
open from Finder
drag from Finder
AND drag from elsewhere (like a website or iMessage)
then try this (dragging an image out of this view is not included).
Xcode 11+ / SwiftUI 1.0+ / Swift 5:
Required Extension for opening from Finder:
extension NSOpenPanel {
static func openImage(completion: @escaping (_ result: Result<NSImage, Error>) -> ()) {
let panel = NSOpenPanel()
panel.allowsMultipleSelection = false
panel.canChooseFiles = true
panel.canChooseDirectories = false
panel.allowedFileTypes = ["jpg", "jpeg", "png", "heic"]
panel.begin { (result) in
if result == .OK,
let url = panel.urls.first,
let image = NSImage(contentsOf: url) {
completion(.success(image))
} else {
completion(.failure(
NSError(domain: "", code: 0, userInfo: [NSLocalizedDescriptionKey: "Failed to get file location"])
))
}
}
}
}
The SwiftUI Views
struct InputView: View {
@Binding var image: NSImage?
var body: some View {
VStack(spacing: 16) {
HStack {
Text("Input Image (PNG,JPG,JPEG,HEIC)")
Button(action: selectFile) {
Text("From Finder")
}
}
InputImageView(image: self.$image)
}
}
private func selectFile() {
NSOpenPanel.openImage { (result) in
if case let .success(image) = result {
self.image = image
}
}
}
}
struct InputImageView: View {
@Binding var image: NSImage?
var body: some View {
ZStack {
if self.image != nil {
Image(nsImage: self.image!)
.resizable()
.aspectRatio(contentMode: .fit)
} else {
Text("Drag and drop image file")
.frame(width: 320)
}
}
.frame(height: 320)
.background(Color.black.opacity(0.5))
.cornerRadius(8)
.onDrop(of: ["public.url","public.file-url"], isTargeted: nil) { (items) -> Bool in
if let item = items.first {
if let identifier = item.registeredTypeIdentifiers.first {
print("onDrop with identifier = \(identifier)")
if identifier == "public.url" || identifier == "public.file-url" {
item.loadItem(forTypeIdentifier: identifier, options: nil) { (urlData, error) in
DispatchQueue.main.async {
if let urlData = urlData as? Data {
let url = NSURL(absoluteURLWithDataRepresentation: urlData, relativeTo: nil) as URL
if let img = NSImage(contentsOf: url) {
self.image = img
print("got it")
}
}
}
}
}
}
return true
} else {
print("item not here")
return false
}
}
}
}
Note: I have not needed to use the "public.image" identifier.
Optional Extensions if you need the result as PNG data (I did to upload to Firebase Storage):
extension NSBitmapImageRep {
var png: Data? { representation(using: .png, properties: [.compressionFactor:0.05]) }
}
extension Data {
var bitmap: NSBitmapImageRep? { NSBitmapImageRep(data: self) }
}
extension NSImage {
var png: Data? { tiffRepresentation?.bitmap?.png }
}
// usage
let image = NSImage(...)
if let data = image.png {
// do something further with the data
}
Can’t you use onDrop(of:isTargeted:perform:)? You can pass your array of supported types in the of argument.
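For example, a minimal sketch (on macOS 11 / iOS 14 and later you can pass UTType values instead of raw identifier strings):

import SwiftUI
import UniformTypeIdentifiers

struct DropTargetView: View {
    @State private var isTargeted = false

    var body: some View {
        Rectangle()
            .fill(isTargeted ? Color.green.opacity(0.3) : Color.gray.opacity(0.3))
            .frame(width: 240, height: 160)
            // Accept file URLs and images dragged in from Finder or other apps.
            .onDrop(of: [.fileURL, .image], isTargeted: $isTargeted) { providers in
                // Inspect the NSItemProviders here (load a URL, image data, etc.).
                return !providers.isEmpty
            }
    }
}

The older overload that takes [String] identifiers, as used in the answers above, works the same way on macOS 10.15.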
Related
My Xcode version is 14.2.
The target is iOS 16.0.
I want to upload an image from the photo library using PHPickerConfiguration().
The selected image doesn't appear in the UserForm view.
struct UserForm: View {
@State var profileImage: Image?
@Binding var selectedImage: UIImage?
var body: some View {
if profileImage == nil{
Button(action:{
isShowAction.toggle()
}) {
Image(systemName: "person.crop.rectangle.badge.plus.fill")
.resizable()
.scaledToFill()
.frame(width: 100, height: 100)
.foregroundColor(.gray)
.padding(.vertical)
}
}else if let image = profileImage{
image
.resizable()
.scaledToFill()
.frame(width: 100,height: 100)
.clipShape(Circle())
.clipped()
.foregroundColor(.gray)
}
.sheet(isPresented: $imagePickerPresented, onDismiss: loadImage, content:{
AlbumPicker(image: $selectedImage)
}
)
}
}
extension UserForm{
func loadImage(){
guard let selectedImage = selectedImage else{ return }
profileImage = Image(uiImage: selectedImage)
}
}
import SwiftUI
import PhotosUI
struct AlbumPicker: UIViewControllerRepresentable {
@Binding var image: UIImage?
@Environment(\.presentationMode) var mode
func makeUIViewController(context: UIViewControllerRepresentableContext<AlbumPicker>) -> PHPickerViewController {
var configuration = PHPickerConfiguration()
configuration.filter = .images
configuration.selectionLimit = 1
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = context.coordinator
return picker
}
func makeCoordinator() -> Coordinator {
Coordinator(self)
}
func updateUIViewController(_ uiViewController: PHPickerViewController, context: UIViewControllerRepresentableContext<AlbumPicker>) {}
class Coordinator: NSObject, UINavigationControllerDelegate, PHPickerViewControllerDelegate {
var parent: AlbumPicker
init(_ parent: AlbumPicker) {
self.parent = parent
}
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
guard let itemProvider = results.first?.itemProvider else {
return
}
let typeChecked = itemProvider.registeredTypeIdentifiers.map { itemProvider.hasItemConformingToTypeIdentifier($0) }
guard !typeChecked.contains(false) else {
return
}
itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { (url, error) in
guard let url = url else {
return
}
guard let imageData = try? Data(contentsOf: url) else {
return
}
self.parent.image = UIImage(data: imageData)
self.parent.mode.wrappedValue.dismiss()
}
}
}
}
When I push the button, I immediately get findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic' on an actual device.
And the view doesn't hold profileImage after I choose an image from the photo library.
How can this be fixed?
Thank you
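(Not a verified fix for the HEIC warning, but a commonly used alternative to loadFileRepresentation(forTypeIdentifier:) is to load the picked asset directly as a UIImage object. A sketch of the delegate method with only that part changed, the rest of the Coordinator staying as in the question:)

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    guard let itemProvider = results.first?.itemProvider,
          itemProvider.canLoadObject(ofClass: UIImage.self) else { return }

    itemProvider.loadObject(ofClass: UIImage.self) { object, error in
        guard let uiImage = object as? UIImage else { return }
        DispatchQueue.main.async {
            // Hand the decoded image back to SwiftUI on the main thread.
            self.parent.image = uiImage
            self.parent.mode.wrappedValue.dismiss()
        }
    }
}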
I'm new to SwiftUI and was looking at how to download images from a URL. I've found out that in iOS 15 you can use AsyncImage to handle all the phases of an image. The code looks like this.
AsyncImage(url: URL(string: urlString)) { phase in
switch phase {
case .success(let image):
image
.someModifiers
case .empty:
Image(systemName: "Placeholder Image")
.someModifiers
case .failure(_):
Image(systemName: "Error Image")
.someModifiers
@unknown default:
Image(systemName: "Placeholder Image")
.someModifiers
}
}
I would launch my app, and every time I scrolled up and down on my List it would download the images again. So how would I be able to add a cache? I was trying to add a cache the way I did before in UIKit. Something like this.
struct DummyStruct {
var imageCache = NSCache<NSString, UIImage>()
func downloadImageFromURLString(_ urlString: String) {
guard let url = URL(string: urlString) else { return }
URLSession.shared.dataTask(with: url) { data, response, error in
if let _ = error {
fatalError()
}
guard let data = data, let image = UIImage(data: data) else { return }
imageCache.setObject(image, forKey: NSString(string: urlString))
}
.resume()
}
}
But it didn't go too well. So I was wondering, is there a way to add caching to AsyncImage? Thanks, I would appreciate any help.
I had the same problem as you. I solved it by writing a CachedAsyncImage that kept the same API as AsyncImage, so that they could be interchanged easily, also in view of future native cache support in AsyncImage.
I made a Swift Package to share it.
CachedAsyncImage has the exact same API and behavior as AsyncImage, so you just have to change this:
AsyncImage(url: logoURL)
to this:
CachedAsyncImage(url: logoURL)
In addition to the AsyncImage initializers, you have the possibility to specify the cache you want to use (by default URLCache.shared is used):
CachedAsyncImage(url: logoURL, urlCache: .imageCache)
// URLCache+imageCache.swift
extension URLCache {
static let imageCache = URLCache(memoryCapacity: 512*1000*1000, diskCapacity: 10*1000*1000*1000)
}
Remember that when setting the cache, the response (in this case our image) must be no larger than about 5% of the disk cache capacity (see this discussion).
Here is the repo.
Hope this can help others.
I found this great video which talks about using the code below to build an async image cache for your own use.
import SwiftUI
struct CacheAsyncImage<Content>: View where Content: View{
private let url: URL
private let scale: CGFloat
private let transaction: Transaction
private let content: (AsyncImagePhase) -> Content
init(
url: URL,
scale: CGFloat = 1.0,
transaction: Transaction = Transaction(),
@ViewBuilder content: @escaping (AsyncImagePhase) -> Content
){
self.url = url
self.scale = scale
self.transaction = transaction
self.content = content
}
var body: some View{
if let cached = ImageCache[url]{
let _ = print("cached: \(url.absoluteString)")
content(.success(cached))
}else{
let _ = print("request: \(url.absoluteString)")
AsyncImage(
url: url,
scale: scale,
transaction: transaction
){phase in
cacheAndRender(phase: phase)
}
}
}
func cacheAndRender(phase: AsyncImagePhase) -> some View{
if case .success (let image) = phase {
ImageCache[url] = image
}
return content(phase)
}
}
fileprivate class ImageCache{
static private var cache: [URL: Image] = [:]
static subscript(url: URL) -> Image?{
get{
ImageCache.cache[url]
}
set{
ImageCache.cache[url] = newValue
}
}
}
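Usage mirrors AsyncImage's phase-based initializer, for example (a sketch with a placeholder URL):

CacheAsyncImage(url: URL(string: "https://example.com/logo.png")!) { phase in
    if case .success(let image) = phase {
        image
            .resizable()
            .scaledToFit()
    } else {
        ProgressView()
    }
}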
AsyncImage uses the default URLCache under the hood. The simplest way to manage the cache is to change the properties of that shared URLCache:
URLCache.shared.memoryCapacity = 50_000_000 // ~50 MB memory space
URLCache.shared.diskCapacity = 1_000_000_000 // ~1GB disk cache space
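These limits only need to be raised once, early in the app's lifetime and before the first AsyncImage request, for example (a sketch assuming a standard SwiftUI App entry point, with ContentView as a placeholder root view):

import SwiftUI

@main
struct MyApp: App {
    init() {
        // Raise the shared cache limits before any AsyncImage request is made.
        URLCache.shared.memoryCapacity = 50_000_000   // ~50 MB memory space
        URLCache.shared.diskCapacity = 1_000_000_000  // ~1 GB disk cache space
    }

    var body: some Scene {
        WindowGroup {
            ContentView() // placeholder root view
        }
    }
}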
Use it like this:
ImageView(url: URL(string: "https://wallpaperset.com/w/full/d/2/b/115638.jpg"))
.frame(width: 300, height: 300)
.cornerRadius(20)
ImageView(url: URL(string: "https://ba")) {
// Placeholder
Text("⚠️")
.font(.system(size: 120))
}
.frame(width: 300, height: 300)
.cornerRadius(20)
ImageView.swift
import SwiftUI
struct ImageView<Placeholder>: View where Placeholder: View {
// MARK: - Value
// MARK: Private
@State private var image: Image? = nil
@State private var task: Task<(), Never>? = nil
@State private var isProgressing = false
private let url: URL?
private let placeholder: () -> Placeholder?
// MARK: - Initializer
init(url: URL?, @ViewBuilder placeholder: @escaping () -> Placeholder) {
self.url = url
self.placeholder = placeholder
}
init(url: URL?) where Placeholder == Color {
self.init(url: url, placeholder: { Color("neutral9") })
}
// MARK: - View
// MARK: Public
var body: some View {
GeometryReader { proxy in
ZStack {
placeholderView
imageView
progressView
}
.frame(width: proxy.size.width, height: proxy.size.height)
.task {
task?.cancel()
task = Task.detached(priority: .background) {
await MainActor.run { isProgressing = true }
do {
let image = try await ImageManager.shared.download(url: url)
await MainActor.run {
isProgressing = false
self.image = image
}
} catch {
await MainActor.run { isProgressing = false }
}
}
}
.onDisappear {
task?.cancel()
}
}
}
// MARK: Private
@ViewBuilder
private var imageView: some View {
if let image = image {
image
.resizable()
.scaledToFill()
}
}
@ViewBuilder
private var placeholderView: some View {
if !isProgressing, image == nil {
placeholder()
}
}
@ViewBuilder
private var progressView: some View {
if isProgressing {
ProgressView()
.progressViewStyle(.circular)
}
}
}
#if DEBUG
struct ImageView_Previews: PreviewProvider {
static var previews: some View {
let view = VStack {
ImageView(url: URL(string: "https://wallpaperset.com/w/full/d/2/b/115638.jpg"))
.frame(width: 300, height: 300)
.cornerRadius(20)
ImageView(url: URL(string: "https://wallpaperset.com/w/full/d/2/b/115638")) {
Text("⚠️")
.font(.system(size: 120))
}
.frame(width: 300, height: 300)
.cornerRadius(20)
}
view
.previewDevice("iPhone 11 Pro")
.preferredColorScheme(.light)
}
}
#endif
ImageManager.swift
import SwiftUI
import Combine
import Photos
final class ImageManager {
// MARK: - Singleton
static let shared = ImageManager()
// MARK: - Value
// MARK: Private
private lazy var imageCache = NSCache<NSString, UIImage>()
private var loadTasks = [PHAsset: PHImageRequestID]()
private let queue = DispatchQueue(label: "ImageDataManagerQueue")
private lazy var imageManager: PHCachingImageManager = {
let imageManager = PHCachingImageManager()
imageManager.allowsCachingHighQualityImages = true
return imageManager
}()
private lazy var downloadSession: URLSession = {
let configuration = URLSessionConfiguration.default
configuration.httpMaximumConnectionsPerHost = 90
configuration.timeoutIntervalForRequest = 90
configuration.timeoutIntervalForResource = 90
return URLSession(configuration: configuration)
}()
// MARK: - Initializer
private init() {}
// MARK: - Function
// MARK: Public
func download(url: URL?) async throws -> Image {
guard let url = url else { throw URLError(.badURL) }
if let cachedImage = imageCache.object(forKey: url.absoluteString as NSString) {
return Image(uiImage: cachedImage)
}
let data = (try await downloadSession.data(from: url)).0
guard let image = UIImage(data: data) else { throw URLError(.badServerResponse) }
queue.async { self.imageCache.setObject(image, forKey: url.absoluteString as NSString) }
return Image(uiImage: image)
}
}
Maybe late to the party, but I ran into this exact problem regarding the poor performance of AsyncImage when used in conjunction with ScrollView / LazyVStack layouts.
According to this thread, it seems that the problem is in some way due to Apple's current implementation and will be solved sometime in the future.
I think the most future-proof approach we can use is something similar to the response from Ryan Fung, but unfortunately it uses an old syntax and misses the overloaded init (with and without placeholder).
I extended the solution, covering the missing cases, in this GitHub Gist. You can use it like the current AsyncImage implementation, so that when AsyncImage supports caching consistently you can swap it out.
I'm using a photo picker in my SwiftUI class to load photos and videos into an array. Right now I'm displaying those images after they've been selected in the picker. Works fine.
Instead, I'd like to run a function when I click the "Add" button and upload the objects in that array to Cloudinary for processing and storage. I can manually make this happen with a separate button outside the picker, but for the best UX, I think this function should run automatically when that "Add" button is selected.
How do I run a function when that "Add" button is clicked? Do I need to check if the array is not empty and some other condition exists instead?
Here's the picker code:
import SwiftUI
import PhotosUI
struct PhotoPicker: UIViewControllerRepresentable {
typealias UIViewControllerType = PHPickerViewController
@ObservedObject var mediaItems: PickedMediaItems
var didFinishPicking: (_ didSelectItems: Bool) -> Void
func makeUIViewController(context: Context) -> PHPickerViewController {
var config = PHPickerConfiguration()
config.filter = .any(of: [.images, .videos, .livePhotos])
config.selectionLimit = 0
config.preferredAssetRepresentationMode = .current
let controller = PHPickerViewController(configuration: config)
controller.delegate = context.coordinator
return controller
}
func updateUIViewController(_ uiViewController: PHPickerViewController, context: Context) {
}
func makeCoordinator() -> Coordinator {
Coordinator(with: self)
}
class Coordinator: PHPickerViewControllerDelegate {
var photoPicker: PhotoPicker
init(with photoPicker: PhotoPicker) {
self.photoPicker = photoPicker
}
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
photoPicker.didFinishPicking(!results.isEmpty)
guard !results.isEmpty else {
return
}
for result in results {
let itemProvider = result.itemProvider
guard let typeIdentifier = itemProvider.registeredTypeIdentifiers.first,
let utType = UTType(typeIdentifier)
else { continue }
if utType.conforms(to: .image) {
self.getPhoto(from: itemProvider, isLivePhoto: false)
} else if utType.conforms(to: .movie) {
self.getVideo(from: itemProvider, typeIdentifier: typeIdentifier)
} else {
self.getPhoto(from: itemProvider, isLivePhoto: true)
}
}
}
private func getPhoto(from itemProvider: NSItemProvider, isLivePhoto: Bool) {
let objectType: NSItemProviderReading.Type = !isLivePhoto ? UIImage.self : PHLivePhoto.self
if itemProvider.canLoadObject(ofClass: objectType) {
itemProvider.loadObject(ofClass: objectType) { object, error in
if let error = error {
print(error.localizedDescription)
}
if !isLivePhoto {
if let image = object as? UIImage {
DispatchQueue.main.async {
self.photoPicker.mediaItems.append(item: PhotoPickerModel(with: image))
}
}
} else {
if let livePhoto = object as? PHLivePhoto {
DispatchQueue.main.async {
self.photoPicker.mediaItems.append(item: PhotoPickerModel(with: livePhoto))
}
}
}
}
}
}
private func getVideo(from itemProvider: NSItemProvider, typeIdentifier: String) {
itemProvider.loadFileRepresentation(forTypeIdentifier: typeIdentifier) { url, error in
if let error = error {
print(error.localizedDescription)
}
guard let url = url else { return }
let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
guard let targetURL = documentsDirectory?.appendingPathComponent(url.lastPathComponent) else { return }
do {
if FileManager.default.fileExists(atPath: targetURL.path) {
try FileManager.default.removeItem(at: targetURL)
}
try FileManager.default.copyItem(at: url, to: targetURL)
DispatchQueue.main.async {
self.photoPicker.mediaItems.append(item: PhotoPickerModel(with: targetURL))
}
} catch {
print(error.localizedDescription)
}
}
}
}
}
I added this class to the View:
class PickerStatus: ObservableObject {
@Published var status: Bool = false // published so observing views are notified when it changes
}
Then added this line to the PhotoPicker:
@ObservedObject var finishedSelection: PickerStatus
Then in the Coordinator, I added this:
for result in results {
self.photoPicker.finishedSelection.status = true
...
}
Now in my View I can create the instance of the observable object, pass it into my child views (including the PhotoPicker), and check the value of that same object:
@ObservedObject var finishedSelection = PickerStatus()
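Putting it together, the parent view can react to that flag and kick off the upload. This is only a sketch: the view name, the upload function, the sheet presentation, and the parameter order of PhotoPicker's memberwise initializer are assumptions, and it relies on status being @Published so the change propagates:

struct AddMediaView: View {
    @StateObject private var mediaItems = PickedMediaItems()   // from the question, assumed to have a default initializer
    @StateObject private var finishedSelection = PickerStatus()
    @State private var showPicker = false

    var body: some View {
        Button("Add") { showPicker = true }
            .sheet(isPresented: $showPicker) {
                PhotoPicker(mediaItems: mediaItems,
                            didFinishPicking: { _ in showPicker = false },
                            finishedSelection: finishedSelection)
            }
            // Runs when the coordinator flips the flag after picking finishes.
            .onChange(of: finishedSelection.status) { finished in
                guard finished else { return }
                uploadToCloudinary(mediaItems) // hypothetical upload function
            }
    }

    private func uploadToCloudinary(_ items: PickedMediaItems) {
        // Send the picked items to Cloudinary here.
    }
}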
Hey 👋 I am downloading images from AWS S3 and showing them in my app using a SwiftUI LazyVGrid.
My code to download is the following:
class S3CacheFetcher: Fetcher {
typealias KeyType = MediaItemCacheInfo
typealias OutputType = NSData
func get(_ key: KeyType) -> AnyPublisher<OutputType, Error> {
return download(mediaItem: key).eraseToAnyPublisher()
}
private func download(mediaItem: KeyType) -> AnyPublisher<OutputType, Error>{
let BUCKET = "someBucket"
return Deferred {
Future { promise in
guard let key:String = S3CacheFetcher.getItemKey(mediaItem: mediaItem) else { fatalError("UserPoolID Error") }
print("Downloading image with key: \(key)")
AWSS3TransferUtility.default().downloadData(fromBucket: BUCKET,
key: key,
expression: nil) { (task, url, data, error) in
if let error = error{
print(error)
promise(.failure(error))
}else if let data = data{
// EDIT--------
let encrypt = S3CacheFetcher.encrypt(data: data)
let decrypt = S3CacheFetcher.decrypt(data: encrypt)
// EDIT--------
promise(.success(decrypt as NSData))
}
}
}
}
.eraseToAnyPublisher()
}
....
// EDIT---------
// In my code I have a static function that decrypts the images using CryptoKit.AES.GCM
// To test my problem I added these two functions that should stand for my decryption.
static var symmetricKey = SymmetricKey(size: .bits256)
static func encrypt(data: Data) -> Data{
return try! AES.GCM.seal(data, using: S3CacheFetcher.symmetricKey).combined!
}
static func decrypt(data: Data) -> Data{
return try! AES.GCM.open(AES.GCM.SealedBox(combined: data), using: S3CacheFetcher.symmetricKey)
}
}
My GridView:
struct AllPhotos: View {
@StateObject var mediaManager = MediaManager()
// `columns` ([GridItem] layout) is used below but not shown in the question
var body: some View {
ScrollView{
LazyVGrid(columns: columns, spacing: 3){
ForEach(mediaManager.mediaItems) { item in
VStack{
ImageView(downloader: ImageLoader(mediaItem: item, size: .large, parentAlbum: nil))
}
}
}
}
}
}
My ImageView I am using inside my GridView:
struct ImageView: View{
@StateObject var downloader: ImageLoader
var body: some View {
Image(uiImage: downloader.image ?? UIImage(systemName: "photo")!)
.resizable()
.aspectRatio(contentMode: .fill)
.onAppear(perform: {
downloader.load()
})
.onDisappear {
downloader.cancel()
}
}
}
And last but not least the ImageDownloader which is triggered when the image view is shown:
class ImageLoader: ObservableObject {
@Published var image: UIImage?
private(set) var isLoading = false
private var cancellable: AnyCancellable?
private(set) var mediaItem:MediaItem
private(set) var size: ThumbnailSizes
private(set) var parentAlbum: GetAlbum?
init(mediaItem: MediaItem, size: ThumbnailSizes, parentAlbum: GetAlbum?) {
self.mediaItem = mediaItem
self.size = size
self.parentAlbum = parentAlbum
}
deinit {
cancel()
self.image = nil
}
func load() {
guard !isLoading else { return }
// I use the Carlos cache library but for the sake of debugging I just use my Fetcher like below
cancellable = S3CacheFetcher().get(.init(parentAlbum: self.parentAlbum, size: self.size, cipher: self.mediaItem.cipher, ivNonce: self.mediaItem.ivNonce, mid: self.mediaItem.mid))
.map{ UIImage(data: $0 as Data)}
.replaceError(with: nil)
.handleEvents(receiveSubscription: { [weak self] _ in self?.onStart() },
receiveCompletion: { [weak self] _ in self?.onFinish() },
receiveCancel: { [weak self] in self?.onFinish() })
.receive(on: DispatchQueue.main)
.sink { [weak self] in self?.image = $0 }
}
func cancel() {
cancellable?.cancel()
self.image = nil
}
private func onStart() {
isLoading = true
}
private func onFinish() {
isLoading = false
}
}
First of all, before I describe my problem: yes, I know I have to cache those images for a smoother experience. I did that, but for the sake of debugging my memory issue I am not caching those images for now.
Expected behavior: Downloads images and displays them if the view is shown. Purges the images out of memory if the image view is not shown.
Actual behavior: Downloads the images and displays them, but does not purge them from memory once the image view has disappeared. If I scroll up and down for some period of time, the memory usage goes up into the GB range and the app crashes.
If I use my persistent cache, which grabs the images from disk with more or less the same logic for fetching and displaying them, then everything works as expected and the memory usage is not higher than 50 MB.
I am fairly new to Combine as well as SwiftUI so any help is much appreciated.
According to WWDC20 and other articles, it seems it's quite easy to fetch the image URL for a given URL. Below is my starter code, which just lists a few URLs and is supposed to fetch the image URL for each URL's rich link preview.
One simply fetches the metadata using LPMetadataProvider, but I can't get it to show the image URL. Does someone know how it's done in SwiftUI?
import SwiftUI
import LinkPresentation
struct ExampleHTTPLinks: View {
var links = [ "https://www.google.com", "https://www.hotmail.com"]
let metadataProvider = LPMetadataProvider()
var body: some View {
List(links, id:\.self) { item in
HStack {
Text(item)
Image(systemName: "heart.fill")
metadataProvider.startFetchingMetadata(for: URL(string: item)!) { metadata, error in
if error != nil {
// The fetch failed; handle the error.
// Examples: server doesn't respond, is too slow, user doesn't have network.
return
}
let linkView = LPLinkView(metadata: metadata)
Image(linkView.image)
// Make use of the fetched metadata.
}
}
}
}
}
struct ExampleHTTPLinks_Previews: PreviewProvider {
static var previews: some View {
ExampleHTTPLinks()
}
}
Here's a working version:
class LinkViewModel : ObservableObject {
let metadataProvider = LPMetadataProvider()
@Published var metadata: LPLinkMetadata?
@Published var image: UIImage?
init(link : String) {
guard let url = URL(string: link) else {
return
}
metadataProvider.startFetchingMetadata(for: url) { (metadata, error) in
guard error == nil else {
assertionFailure("Error")
return
}
DispatchQueue.main.async {
self.metadata = metadata
}
guard let imageProvider = metadata?.imageProvider else { return }
imageProvider.loadObject(ofClass: UIImage.self) { (image, error) in
guard error == nil else {
// handle error
return
}
if let image = image as? UIImage {
// do something with image
DispatchQueue.main.async {
self.image = image
}
} else {
print("no image available")
}
}
}
}
}
struct MetadataView : View {
@StateObject var vm: LinkViewModel
var body: some View {
VStack {
if let metadata = vm.metadata {
Text(metadata.title ?? "")
}
if let uiImage = vm.image {
Image(uiImage: uiImage)
.resizable()
.frame(width: 100, height: 100)
}
}
}
}
struct ContentView: View {
var links = [ "https://www.google.com", "https://www.hotmail.com"]
let metadataProvider = LPMetadataProvider()
var body: some View {
List(links, id:\.self) { item in
HStack {
Text(item)
Image(systemName: "heart.fill")
MetadataView(vm: LinkViewModel(link: item))
}
}
}
}
LPMetadataProvider complains if you try to use it for multiple calls, so I've moved it to a view model.
The image is vended by an NSItemProvider (the metadata's imageProvider); you can see the loadObject call is what gets the UIImage out of it.
Note that you could use LPLinkView if you wanted to use the out-of-the-box presentation that Apple gives you. Because it's a UIView, to use it in SwiftUI, you'd have to wrap it with UIViewRepresentable:
struct LPLinkViewRepresented: UIViewRepresentable {
var metadata: LPLinkMetadata
func makeUIView(context: Context) -> LPLinkView {
return LPLinkView(metadata: metadata)
}
func updateUIView(_ uiView: LPLinkView, context: Context) {
}
}
struct MetadataView : View {
@StateObject var vm: LinkViewModel
var body: some View {
if let metadata = vm.metadata {
LPLinkViewRepresented(metadata: metadata)
} else {
EmptyView()
}
}
}
class LinkViewModel : ObservableObject {
let metadataProvider = LPMetadataProvider()
@Published var metadata: LPLinkMetadata?
init(link : String) {
guard let url = URL(string: link) else {
return
}
metadataProvider.startFetchingMetadata(for: url) { (metadata, error) in
guard error == nil else {
assertionFailure("Error")
return
}
DispatchQueue.main.async {
self.metadata = metadata
}
}
}
}