Bucket upload notification

I use the Buckets feature in Oracle Cloud and I would like to know if there is any way to send an email/SMS notification when a user uploads files to a bucket.
I studied the documentation and found only notifications for bucket creation.

Did you try the Emit Object Events option on the OCI bucket?
You can check out this doc for reference: https://www.jmjcloud.com/blog/oracle-cloud-infrastructure-events-with-apex-and-ords
You can check the object event types here: https://docs.oracle.com/en-us/iaas/Content/Events/Reference/eventsproducers.htm#ObjectStor__object
Create object event: com.oraclecloud.objectstorage.createobject
When a user uploads a file, an object is created in the bucket, so you can use this event to trigger the notification.
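For illustration, an Events rule condition along these lines (the bucket name is a placeholder) would match only object-create events for that bucket; the rule's action can then point at a Notifications topic with an email subscription:

{
  "eventType": ["com.oraclecloud.objectstorage.createobject"],
  "data": {
    "additionalDetails": {
      "bucketName": "my-upload-bucket"
    }
  }
}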

Related

Manually Setting a firebaseStorageDownloadTokens on image Upload to Firebase Storage with Flutter

I am trying to put a file to Firebase Storage with the following:
final metadata = firebase_storage.SettableMetadata(
  customMetadata: {'firebaseStorageDownloadTokens': customToken},
);
_uploadTask = _storage.ref().child(filePath).putFile(widget.file!, metadata);
I get the following error in my console:
E/StorageException( 6906): Caused by: java.io.IOException: { "error": { "code": 400, "message": "Not allowed to set custom metadata for firebaseStorageDownloadTokens" }}
I'm submitting this as both a bug and a feature request on the plugin site, but in the meantime, on the off chance I'm just writing this wrong (documentation on this isn't thorough), I thought I'd ask here to see if anybody has done it successfully from the client.
I'll write a Cloud Function later if I can't do it from the client, but this isn't a high-security thing I'm doing, and I have many reasons to avoid forcing my app to fetch the downloadUrl. My aim is to create a predictable downloadUrl. That's readily done in the cloud, I know; I'm just looking to do it from the client here.
After talking to some folks, this is not allowed from the client. It has to be done with a Cloud Function. It's highly discouraged. Here is why:
Anyone can access images if they have the image ID and the downloadToken. The idea is that both are hard to guess, hence a modicum of security. You can set security rules so that only authenticated users can call getDownloadUrl, hence "restricting" viewability through the obscurity of the image name, downloadToken, and location.
By manually setting a downloadToken, you erode that "security."
It is possible, however: you have to do it with a Cloud Function that triggers on creation of the image and overwrites the auto-generated downloadToken. Send me a DM if you need help with that.
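For anyone landing here, a minimal sketch of that Cloud Function approach might look like the following; the trigger, the token scheme, and all names are assumptions, not something the plugin provides:

// Sketch only: an onFinalize Storage trigger that overwrites the auto-generated
// download token with one of our own. The token scheme below is a placeholder.
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

export const setDownloadToken = functions.storage
  .object()
  .onFinalize(async (object) => {
    if (!object.name) return;

    // Hypothetical "predictable" token derived from the object path.
    const customToken = object.name.replace(/\//g, "_");

    await admin
      .storage()
      .bucket(object.bucket)
      .file(object.name)
      .setMetadata({
        metadata: { firebaseStorageDownloadTokens: customToken },
      });
  });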

Can't create working Shared Access Signature for Azure Files

I need to create a SAS so I can create an Azure SQL Extended Event session. The event session needs a file data storage target via SAS and I can't create one that works. Here's what I've tried:
Identified a storage account that's general-purpose, not blob-only. I'm pretty sure I need general-purpose so I can create files directly.
Created a file share therein.
Using Azure Storage Explorer, right-clicked that file share and selected "Get Shared Access Signature."
Checked Read, Write, and List, then created the SAS.
This gives me the URL https://mystorageacct.file.core.windows.net/xevents?st=2018-12-25T16%3A29%3A51Z&se=2018-12-29T16%3A29%3A00Z&sp=rwl&sv=2018-03-28&sr=s&sig=mysig
If I just try to follow this URL or create a CloudFile object with it in code, I get the oft-seen error, Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. Signature did not match. String to sign used was rwl 2018-12-25T16:29:51Z 2018-12-29T16:29:00Z /file/cs7f0fbc5104d4ax435dx883/$root 2018-03-28
Tried adding in comp=list&restype=container as suggested here. No joy.
Ensured I have no access policy in use.
Went to the Azure portal and created a different SAS at the storage account level (I couldn't see a way to create it on the file share). That gave me this "File service SAS URL": https://mystorageacct.file.core.windows.net/?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2018-12-30T01:25:16Z&st=2018-12-26T17:25:16Z&spr=https&sig=mysig
If I try that URL I get Value for one of the query parameters specified in the request URI is invalid. I don't know which parameter is in question; they look fine to me. Based on this doc, srt is the resource type, but I don't know what the value sco indicates.
Very confused, looking for suggestions.
For any future readers, extended event sessions confusingly (because they write a file) require blob containers, not general/file/queue containers. At least I could only get them to work that way.
You are probably confused by how the SAS URLs are presented. In fact, the SAS URLs you got are just examples of how to use the SAS token; they can't be used directly, hence the errors you saw.
Service-level SAS URL, i.e. the one you got from Storage Explorer.
It's in the format fileEndPoint/fileShareName?SASToken. The SASToken gives us permission to operate on all files inside the specified file share. To leverage the token, we need to add the fileName to the URL, i.e. fileEndPoint/fileShareName/fileName?SASToken.
comp=list&restype=container is to list blobs in Blob Container, not for File Share.
Account-level SAS URL, i.e. the one you got from the Azure portal.
It's in the format fileEndPoint/?SASToken. Likewise, we need to complete the URL to make it valid, i.e. fileEndPoint/fileShareName/fileName?SASToken. Note that this SASToken has all permissions on all Storage resources because every option was checked.
sco means the token grants permission at the service, container, and object levels, i.e. it indicates the scope of the permission; check the doc for details.
I am not familiar with Azure SQL Extended Event sessions, but if you only need to work with files inside one file share, the first option is enough.
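As a concrete illustration of the URL shape described above (the file name is hypothetical and the token values are the placeholders from the question):

// Illustration only: placeholder share, file, and token values.
const fileEndpoint = "https://mystorageacct.file.core.windows.net";
const shareName = "xevents";
const fileName = "session.xel"; // hypothetical file inside the share
const sasToken =
  "st=2018-12-25T16%3A29%3A51Z&se=2018-12-29T16%3A29%3A00Z&sp=rwl&sv=2018-03-28&sr=s&sig=mysig";

// Service-level token from Storage Explorer: append the file path before the token.
const fileUrl = `${fileEndpoint}/${shareName}/${fileName}?${sasToken}`;

// The account-level token from the portal is used the same way: the path to a
// concrete file still has to be filled in before the query string.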

How to atomically delete user and user's data from Firebase Auth and Realtime Database respectively?

Some background
When a user account is created, I do 3 things using callback chaining in the sequence 1 > 2 > 3.
1. A user is created in Firebase Auth (the standard way, using createUser(withEmail ...))
2. I upload the user's profile picture to Firebase Storage and capture the returned downloadUrl for use in step 3
3. I store the user's other information (including the downloadUrl from step 2) in a node in the Realtime Database (keyed by $userid)
Now the problem
I provide a button called 'Delete account' which should enable the user to delete everything. That is, clear all their data in the Realtime Database, clear their profile picture in Firebase Storage, and finally delete their account from Auth. The important thing is that all these operations should succeed, or all be rolled back if even one fails.
I've gone through ~10 pages of S/O questions and answers; there was one unanswered question like this one (it asked about the account creation process; I suppose an answer to that question could easily be adapted here).
What have I tried?
Currently, I use callback chaining like so:
// start by atomically deleting all the user data from the Realtime Database using the fanout system
- get all appropriate locations and save them in a fanout dictionary
- update all these locations to nil  // atomic goodness :)
- callback:
  - on failure:
    - just return  // no worries, nothing has changed yet :/
  - on success:
    // proceed to delete the user's files in Firebase Storage
    - delete path $userid on Firebase Storage
    - callback:
      - on failure:  // this is bad, no idea what to do :(
      - on success:
        // proceed to delete the account from Auth
        - delete the user account from Auth
        - callback:
          - on failure:  // this is terrible; also, it could happen often b/c Firebase does ask for re-authentication sometimes :(
          - on success:  // thank goodness! I have an authListener somewhere ready to show the 'signInViewController' :)
How do you handle such a multi-system (Auth, Storage, Realtime DB) operation atomically? I have looked into transactions but can't see how they can be adapted for this: the docs only show them being used to increment counters for likes/stars etc. in the Realtime DB.
Any help will be very much appreciated.
So the main problem is what to do if the data you're trying to delete ends up only partially deleted.
I think you should think about a method that will help you restore it if something goes wrong.
I'm not at my computer now, but I suggest you first download the data you want to delete, and only delete that local copy once everything has succeeded; otherwise you can use it to restore the data easily.
Edit: if we're talking about a serious amount of storage, you can make the copy in the cloud instead of locally.
So you first copy this data, then delete it, then delete the database data (also copied previously), then delete the user account, and finally delete the copy.
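As a rough sketch of that back-up-then-delete idea, here is what it could look like using the Firebase Admin SDK from a trusted environment rather than the client chaining in the question; every path and name below is an assumption:

// Sketch only: back up the Realtime Database node, then delete DB data,
// Storage files, and the Auth account, restoring the DB copy on failure.
import * as admin from "firebase-admin";

admin.initializeApp();

async function deleteAccount(uid: string): Promise<void> {
  const userRef = admin.database().ref(`users/${uid}`); // hypothetical node layout

  // 1. Keep a copy so the database part can be rolled back.
  const backup = (await userRef.once("value")).val();

  try {
    await userRef.remove();                                            // Realtime Database
    await admin.storage().bucket().deleteFiles({ prefix: `${uid}/` }); // Storage files
    await admin.auth().deleteUser(uid);                                // Auth account
  } catch (err) {
    // Best-effort rollback of the database copy; Storage/Auth may need manual repair.
    if (backup !== null) await userRef.set(backup);
    throw err;
  }
}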

How to get recordsChanged sync status using JS Datastore API?

I'm using the JavaScript SDK flavor of the Dropbox Datastore API with a web app for mobile and desktop. When the recordsChanged event fires while the app is offline, object data about those changes are generated but the changes can't sync to the datastore until the app is online again.
The event data can be checked against the settings table, for instance, like this:
e.affectedRecordsForTable("settings")
But the array data returned has a lot of layers to wade through.
[t_datastore: t_deleted: false_managed_datastore: t_record_cache: t_rid: "startDate"_tid: "settings"__proto__: t]
I would like to capture the "has been synced" or the "not yet synced" status of each change (each array index) so that I can store the data still waiting to sync in case the session is lost (user closes the app/browser or OS kills the app process). But I also want to know if/when the data does eventually sync successfully. Where can I find the property holding this data?
I found my answer. Steve Marx has a post on the Dropbox developer blog that covers the information I needed. There is a datastore.getSyncStatus().uploading property that returns true or false depending on whether the datastore still has local changes waiting to upload.
Source:
https://www.dropbox.com/developers/blog/61/checking-the-datastore-sync-status-in-javascript
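For reference, the check from that post boils down to something like this, where datastore is the opened Datastore object from the (now deprecated) JavaScript SDK:

// true while local changes are still waiting to reach Dropbox
if (datastore.getSyncStatus().uploading) {
  // persist the pending changes locally in case the session is lost
}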

How can I delete a Facebook Test User with 2+ apps using the Graph API

When I try to delete the user like it says in the docs (http://developers.facebook.com/docs/test_users/#deleting), I get the error response:
(#2903) Cannot delete this test account because it is associated with other applications. Use DELETE app_id/accounts/test-users/test_account_id to remove it from other apps first. Use GET test_user_id/ownerapps to get complete list of owner apps.
Then when I try to do what it says (replacing <user_id> and <app_id> with the Facebook numeric IDs):
DELETE <app_id>/accounts/test-users/<user_id> to remove it from other apps first
i get this error :
Unknown path components: /<user_id>
Am I missing something?
As the error says, you cannot delete a test user while it is associated with two or more applications.
The workaround for this is to get all the applications which are using this specific user, with this call:
"https://graph.facebook.com/TEST_ACCOUNT_ID/ownerapps?access_token=YOUR_APP_ACCESS_TOKEN"
This will give you the list of apps the test account is attached to, once you decode the response with a JSON serializer. Then you can remove (not delete) the test account from each app until only one remains, using:
"https://graph.facebook.com/APP_ID/accounts/test-users?uid=TEST_ACCOUNT_ID&access_token=APPLICATION_ACCESS_TOKEN&method=delete"
When only one app remains you can delete the test account using :
"https://graph.facebook.com/TEST_ACCOUNT_ID?method=delete&access_token=TEST_ACCOUNT_ACCESS_TOKEN"
Hope this helps!
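Put together, the flow might look like the sketch below; the IDs and tokens are placeholders, the endpoints are simply the ones quoted above with the method=delete convention from this answer, and the response shape is assumed to be a standard Graph list with a data array:

// Sketch of the remove-then-delete sequence described above.
const GRAPH = "https://graph.facebook.com";

async function deleteTestUser(
  testUserId: string,
  appAccessToken: string,
  testUserAccessToken: string,
): Promise<void> {
  // 1. List every app that owns the test user.
  const res = await fetch(`${GRAPH}/${testUserId}/ownerapps?access_token=${appAccessToken}`);
  const ownerApps: { data: { id: string }[] } = await res.json();

  // 2. Detach the test user from all but one of those apps.
  // The answer uses an application access token here; depending on the app,
  // this may need to be that specific app's own token.
  for (const app of ownerApps.data.slice(1)) {
    await fetch(
      `${GRAPH}/${app.id}/accounts/test-users?uid=${testUserId}` +
        `&access_token=${appAccessToken}&method=delete`,
    );
  }

  // 3. With a single owner app left, delete the test user itself.
  await fetch(`${GRAPH}/${testUserId}?method=delete&access_token=${testUserAccessToken}`);
}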