Check if File exists in AWS S3 Storage with Amplify - swift

I'm using AWS S3 Storage with Amplify, and to avoid uploading the same file multiple times I want to check whether the file already exists.
Currently I do this by getting the download URL via Amplify, but it generates a URL even if the file doesn't exist. I was hoping it would return an error:
_ = Amplify.Storage.getURL(key: "myKey") { event in
    switch event {
    case let .success(url):
        print("Completed: \(url)")
    case let .failure(storageError):
        print("Failed: \(storageError.errorDescription). \(storageError.recoverySuggestion)")
    }
}
Are there any other ways to check whether a file exists in Amplify, without downloading it of course? The whole point is to save traffic.

It looks like you might be able to do something similar with Amplify.Storage.list, as shown here: https://docs.amplify.aws/lib/storage/list/q/platform/ios
_ = Amplify.Storage.list { event in
    switch event {
    case .success(let listResult):
        let keys = listResult.items.map { $0.key }
        if !keys.contains("myKey") {
            // upload unique file
        }
    case .failure(let error):
        print("Failed: \(error.errorDescription).")
    }
}
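If the check is needed in more than one place, it can be wrapped in a small helper. Below is a minimal sketch built only on the Amplify.Storage.list call above; the fileExists name and its completion signature are my own, not part of the Amplify API.
import Amplify

// Hypothetical helper: reports whether an object with the given key
// already exists in the bucket, using only Amplify.Storage.list.
func fileExists(key: String, completion: @escaping (Bool) -> Void) {
    _ = Amplify.Storage.list { event in
        switch event {
        case .success(let listResult):
            // True if any listed item carries exactly this key.
            completion(listResult.items.contains { $0.key == key })
        case .failure(let error):
            print("Failed to list storage items: \(error.errorDescription)")
            completion(false)
        }
    }
}

// Usage: only upload when the key is not present yet.
fileExists(key: "myKey") { exists in
    if !exists {
        // upload unique file
    }
}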

Related

Swift AWS S3 Bucket Setup

I am trying to set up my S3 bucket so that I can upload an image and get an AWS URL for that image. Currently I use the Amplify package to turn the image into a data object and then upload it to AWS under a certain key for that image. Here is the function and a link to the package:
https://github.com/aws-amplify/amplify-swift
let profileImage = imageSelected
let profileImageData = profileImage.jpegData(compressionQuality: 1)!
Amplify.Storage.uploadData(key: imageKey, data: profileImageData) { result in
    switch result {
    case .success(let uploadedData):
        print(uploadedData)
    case .failure(let error):
        print(error)
    }
}
My question is: is it possible to make a function that uploads to my bucket and creates an AWS URL for that image, so that I can just save the URL in my other database without having to use pods? If this is possible, could I have help creating that function? Please let me know if I need to explain more.
Thank you!
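One possible approach, sketched below, is to chain the two Amplify calls already shown in this thread: upload the data, then ask Amplify.Storage.getURL for a URL to the same key and save that in the other database. The uploadImage name and its completion signature are illustrative only, and keep in mind that getURL returns a pre-signed URL that can expire depending on the storage configuration.
import Amplify
import UIKit

// Hypothetical helper: uploads JPEG data under `key`, then fetches a URL for it.
func uploadImage(_ image: UIImage, key: String, completion: @escaping (URL?) -> Void) {
    guard let data = image.jpegData(compressionQuality: 1) else {
        completion(nil)
        return
    }
    Amplify.Storage.uploadData(key: key, data: data) { result in
        switch result {
        case .success:
            // Upload finished; now ask Amplify for a URL to the same key.
            _ = Amplify.Storage.getURL(key: key) { event in
                switch event {
                case .success(let url):
                    completion(url) // save this URL in the other database
                case .failure(let error):
                    print("getURL failed: \(error.errorDescription)")
                    completion(nil)
                }
            }
        case .failure(let error):
            print("Upload failed: \(error.errorDescription)")
            completion(nil)
        }
    }
}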

Getting an error when trying to upload image to firebase (Swift)

I have the following code:
func upload() {
    let localFile = usersImg!
    // Create the file metadata
    let metadata = StorageMetadata()
    metadata.contentType = "image/jpeg"
    let storageRef = Storage.storage().reference()
    // Upload file and metadata to the object 'images/mountains.jpg'
    let uploadTask = storageRef.putFile(from: localFile, metadata: metadata)
    // Listen for state changes, errors, and completion of the upload.
    uploadTask.observe(.resume) { snapshot in
        // Upload resumed, also fires when the upload starts
    }
    uploadTask.observe(.pause) { snapshot in
        // Upload paused
    }
    uploadTask.observe(.progress) { snapshot in
        // Upload reported progress
        let percentComplete = 100.0 * Double(snapshot.progress!.completedUnitCount)
            / Double(snapshot.progress!.totalUnitCount)
    }
    uploadTask.observe(.success) { snapshot in
        // Upload completed successfully
    }
    uploadTask.observe(.failure) { snapshot in
        if let error = snapshot.error as NSError? {
            switch (StorageErrorCode(rawValue: error.code)!) {
            case .objectNotFound:
                // File doesn't exist
                break
            case .unauthorized:
                // User doesn't have permission to access file
                break
            case .cancelled:
                // User canceled the upload
                break
            /* ... */
            case .unknown:
                // Unknown error occurred, inspect the server response
                break
            default:
                // A separate error occurred. This is a good place to retry the upload.
                break
            }
        }
    }
}
The usersImg variable is just the image the user selected; if I were to print it out it would look something like this:
Optional(file:///var/mobile/Media/DCIM/100APPLE/IMG_0030.PNG)
However, when I run my code and call the method I get the following error:
Thread 12: "*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[1]"
What does that mean, and what am I doing wrong? (By the way, I have tried a lot of different methods to upload URLs to Firebase, but with every one I get weird errors, so this code is straight from the Firebase website.)
I cannot guarantee anything because I didn't use this exact code, but I had to change the rules for my Firebase Storage and that helped!
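Not a guaranteed fix, but for reference, here is a minimal sketch of the same upload written against a child path instead of the bucket root, assuming the selected image is a valid file URL on disk. The images/ prefix is purely illustrative.
import FirebaseStorage

func upload(fileURL: URL) {
    // Upload to a named child path instead of the bucket root (illustrative path).
    let imageRef = Storage.storage().reference().child("images/\(fileURL.lastPathComponent)")
    let metadata = StorageMetadata()
    metadata.contentType = "image/jpeg"
    let uploadTask = imageRef.putFile(from: fileURL, metadata: metadata) { metadata, error in
        if let error = error {
            print("Upload failed: \(error.localizedDescription)")
        } else {
            print("Upload succeeded")
        }
    }
    uploadTask.observe(.progress) { snapshot in
        // Optional: report progress while the upload runs.
        if let progress = snapshot.progress {
            print("Uploaded \(progress.completedUnitCount) of \(progress.totalUnitCount) bytes")
        }
    }
}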

Reading Data from Realm Database (Swift)

I am new to Realm Database and I need a way to read data from Realm Cloud, but from two different app projects. The way I have tried to implement this is with a query-based synced realm. At the moment I'm using a single Realm user to write the data in one app, and the same Realm user to read the data from the other app. The problem is that making a query from the second app (the one used for reading) doesn't return any Realm objects. (I have also noticed that the user identifier is different from the first one, and that the user permissions are nil.)
I have tried setting permissions directly from Realm Studio, since the documentation is not precise on how to set them from code.
func openRealm() {
    do {
        realm = try Realm(configuration: SyncUser.current!.configuration())
        let queryResults = realm.objects(*className*.self)
        let syncSubscription = queryResults.subscribe()
        let notificationToken = queryResults.observe() { [weak self] (changes) in
            switch (changes) {
            case .initial: print(queryResults)
            case .error(let error): print(error)
            default: print("default")
            }
        }
        for token in queryResults {
            print(token.tokenString)
        }
        syncSubscription.unsubscribe()
        notificationToken.invalidate()
    } catch {
        print(error)
    }
}
This function prints the data in one app project, but used in the other app project, with the same user logged in and the same class file referenced, it does not. (Note that SyncUser.current.identifier is different as well.)
There are a couple of issues.
Some of these calls are asynchronous, and the code in your question goes out of scope before the data is synced (retrieved). The bottom line is that code is faster than the internet: design the flow of the app around async calls and don't try to work with the data until it's available.
For example:
let notificationToken = queryResults.observe() { [weak self] (changes) in
    // here is where results are fully populated
}
// this code may run before results are populated
for token in queryResults {
    print(token.tokenString)
}
Also, notificationToken is a local variable and goes out of scope before the results are populated as well.
These issues are easy to fix. First, keep the notification token alive while waiting for the results to be populated; second, work with the results inside the closure, as that's when they are valid.
var notificationToken: NotificationToken? = nil // a class var

func openRealm() {
    do {
        let config = SyncUser.current?.configuration()
        let realm = try Realm(configuration: config!)
        let queryResults = realm.objects(Project.self)
        let syncSubscription = queryResults.subscribe(named: "my-projects")
        self.notificationToken = queryResults.observe() { changes in
            switch changes {
            case .initial:
                print("notification: initial results are populated")
                queryResults.forEach { print($0) }
            case .update(_, let deletions, let insertions, let modifications):
                print("notification: results inserted, deleted or modified")
                insertions.forEach { print($0) } // or mods or dels
            case .error(let error):
                fatalError("\(error)")
            }
        }
    } catch {
        print(error)
    }
}

deinit {
    self.notificationToken?.invalidate()
}
The other advantage of keeping that token (and its corresponding code) alive is that your app will be notified of further changes. So if another project is added, for example, the code in the .update case will run and display that change.
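To see that in action, a write along these lines from either app (while the subscription and token are alive) should fire the .update case above. This is a sketch only: the Project model with a single name property is a placeholder, since the real model class isn't shown in the question.
import RealmSwift

// Placeholder model standing in for the real class used above.
class Project: Object {
    @objc dynamic var name = ""
}

// Adding an object from either app triggers the .update case of the
// observer above on the other side once it syncs.
func addProject(named name: String, to realm: Realm) {
    do {
        try realm.write {
            realm.add(Project(value: ["name": name]))
        }
    } catch {
        print("Write failed: \(error)")
    }
}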

A background URLSession with identifier already exists

I have an S3Service singleton that manages all the S3-related uploads and downloads.
When I upload the first image it works fine, but if I try to upload an image consecutively I get this warning and the completion block never gets called:
A background URLSession with identifier com.amazonaws.AWSS3TransferUtility.Identifier.TransferManager already exists.
This is how my upload method looks:
if let data = image.jpegData(compressionQuality: 0.5) {
    let transferUtility = AWSS3TransferUtility.s3TransferUtility(forKey: S3Service.TRANSFER_MANAGER_KEY)
    transferUtility.uploadUsingMultiPart(data: data, bucket: EnvironmentUtils.getBucketName(), key: filename, contentType: "image/jpg", expression: nil, completionHandler: { task, error in
        if let error = error {
            print(error.localizedDescription)
        } else {
            print("Image upload success")
        }
    })
}
The call to register the transfer utility, AWSS3TransferUtility.register(with: serviceconfig, forKey: KEY), was causing the above issue. There are two things that should be kept in mind.
AWSS3TransferUtility should be registered only once per application session. After that, use AWSS3TransferUtility.s3TransferUtility(forKey:) to get the instance wherever needed.
If these are for different users within the app (e.g. sign-up) and you want to keep a separate AWSS3TransferUtility for each user, register AWSS3TransferUtility with a different key (preferably the same key for the same user) and look it up using that key.
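A minimal sketch of that pattern, reusing the S3Service.TRANSFER_MANAGER_KEY from the question; the region and identity pool ID are placeholders, not values from this thread.
import AWSCore
import AWSS3

// Register once per application session, e.g. in
// application(_:didFinishLaunchingWithOptions:).
func registerTransferUtility() {
    // Placeholder region and identity pool; substitute your own configuration.
    let credentialsProvider = AWSCognitoCredentialsProvider(regionType: .USEast1,
                                                            identityPoolId: "YOUR-IDENTITY-POOL-ID")
    let configuration = AWSServiceConfiguration(region: .USEast1,
                                                credentialsProvider: credentialsProvider)
    AWSS3TransferUtility.register(with: configuration!,
                                  forKey: S3Service.TRANSFER_MANAGER_KEY)
}

// Everywhere else, look up the already-registered instance instead of registering again:
// let transferUtility = AWSS3TransferUtility.s3TransferUtility(forKey: S3Service.TRANSFER_MANAGER_KEY)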

Accessing some file via Cloud Speech in Google cloud storage throws error 7

Here is the context: my iOS Swift app
records a sound,
creates a Firebase object,
renames the file with the key of the object,
uploads the WAV file to Firebase Cloud Storage.
A Firebase Cloud Function is then triggered that sends the audio file to Google Speech .recognize.
My problem:
When I upload a sound file to Cloud Storage manually, it works fine, but when the file is uploaded automatically by the app, I get the following error message back from the Speech API:
{ Error: The caller does not have permission
    at /user_code/node_modules/@google-cloud/speech/node_modules/grpc/src/node/src/client.js:554:15
  code: 7,
  metadata: Metadata { _internal_repr: {} },
  note: 'Exception occurred in retry method that was not classified as transient' }
Here is the Swift part:
func uploadFile(fileName: String) {
    // File located on disk
    let localFileURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = localFileURL.appendingPathComponent(fileName)
    if FileManager.default.fileExists(atPath: fileURL.path) {
        print("FilePath", fileURL.path)
        // Create a reference to the file you want to upload
        let newMnemoRef = MnemoDatabase.shared.createNew()
        let newMnemoId = newMnemoRef.key
        let filename = newMnemoId + ".wav"
        //let filename = fileName
        let audioStorageRef = storage.reference().child(filename)
        let storagePath = "gs://\(audioStorageRef.bucket)/\(audioStorageRef.fullPath)"
        print(storagePath)
        // Upload the file to the path "audio"
        let uploadTask = audioStorageRef.putFile(from: fileURL, metadata: nil) { metadata, error in
            if let error = error {
                print("Upload error: ", error.localizedDescription)
            } else {
                // Metadata contains file metadata such as size, content-type, and download URL.
                print("OK")
            }
        }
        // Add a progress observer to an upload task
        let observer = uploadTask.observe(.success) { snapshot in
            print("uploaded!")
            newMnemoRef.child("audio").setValue([
                "encoding_converted": "LINEAR16",
                "sampleRate_converted": "44100",
                "path_converted": storagePath])
        }
    } else {
        print("Non existent file", fileURL.path)
    }
}
The Cloud Function calling the Speech API is fine with manually uploaded files.
Here is the extract:
const request = {
    encoding: encoding,
    sampleRateHertz: sampleRateHertz,
    languageCode: language,
    speechContexts: context
};
speech.recognize(uri, request)
The Cloud Storage bucket and the Cloud Function all share the same project credentials.
I removed all authentication from the bucket:
// Anyone can read or write to the bucket, even non-users of your app.
// Because it is shared with Google App Engine, this will also make
// files uploaded via GAE public.
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write;
    }
  }
}
I even tried hard-coding the path into the Cloud Function, but to no avail.
I would appreciate any help.
Well, it turns out I was working on two different projects: the one I was calling functions from didn't have the Speech API activated, and I was passing the credentials of the other project.
I should really stop working so late...
I re-engineered my project to work with a file trigger now; that is how I found the bug.