Supabase error when trying to upload image - postgresql

When I try to upload an image to a public bucket I created in Supabase, I get the following error:
new row violates row-level security policy for table "objects"
I am wondering why that happens for a public bucket.

I found out that there are also policies for storage, which are pretty straightforward. Since I did not read about this functionality in the official docs, I thought I might save some people some time searching around.
Under "Storage" you can find a policies section.

You need to use the secret (service-role) key, not the public (anon) key, to upload files.
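For example, a minimal sketch with the supabase-py client (the project URL, key, bucket, and file names below are all placeholders): uploading with the service-role key bypasses row-level security, while the anon key is checked against the bucket's storage policies.

from supabase import create_client

url = "https://<project-ref>.supabase.co"  # placeholder project URL
service_role_key = "<service-role-key>"    # secret key; keep it server-side only
supabase = create_client(url, service_role_key)

# Upload bypasses RLS because the service-role key is used.
with open("avatar.png", "rb") as f:
    supabase.storage.from_("public-bucket").upload("avatar.png", f.read())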

Related

Can google cloud storage auto delete unused image/file?

I use Google Cloud Storage as my CKEditor image uploader.
But now I face a problem: if I upload an image to GCS and then delete the image from my article, that image still exists in GCS.
Can GCS auto-delete unused images?
It sounds like the condition you want is that there are no img tag references in your HTML files to the image file you want to delete. If that's the case, Lifecycle Management does not support that condition. Or maybe you're thinking of deleting images if they haven't been accessed in the past N days. Unfortunately, that is not a condition supported by Lifecycle Management either.
If the above is a correct interpretation of your case, you will need to implement something to do the needed detection - either by walking your HTML objects and determining which of your image files is no longer referenced, or by enabling object access logging and walking the logs to determine image files that haven't been accessed recently.
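A rough sketch of the first approach with the google-cloud-storage client (the bucket name, the images/ prefix, and the <img> pattern below are all assumptions about your layout):

import re
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-ckeditor-bucket")  # placeholder

# Walk the HTML objects and collect the image file names they reference.
referenced = set()
for blob in bucket.list_blobs():
    if blob.name.endswith(".html"):
        html = blob.download_as_text()
        referenced.update(re.findall(r'<img[^>]*src="[^"]*/([^"/?]+)', html))

# Delete any image object that no HTML file references.
for blob in bucket.list_blobs(prefix="images/"):
    name = blob.name.rsplit("/", 1)[-1]
    if name and name not in referenced:
        blob.delete()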
You can set up a GCS Object Lifecycle rule that would delete images based on conditions you choose.
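For instance, if an age-based rule happens to fit your case, the google-cloud-storage client can attach one (the bucket name and the 30-day threshold are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-images")  # placeholder
bucket.add_lifecycle_delete_rule(age=30)  # delete objects older than 30 days
bucket.patch()  # persist the updated lifecycle configuration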

How do I resolve a "The policy has been modified by another process" error in Google Cloud SQL?

I'm trying to export the contents of a MySQL table from Google Cloud SQL into a Cloud Storage bucket, and I'm running into an error:
The policy has been modified by another process. Please try again.
Yesterday, I happily imported CSV data to my Cloud SQL database left and right, and when I tried to write some of the modified data from a query out to another CSV file, I got tripped up. So I followed the directions here to try to resolve my issue: https://cloud.google.com/sql/docs/mysql/import-export/exporting?_ga=2.11596404.-1979747439.1580744022
I hit a wall at the end of the day and decided to come back to it later. This morning, I created a new table in my DB and inserted the data into it that I need to export via a query. When I went to export it using the export function in the Cloud SQL console, I get the error message above.
I'm pretty sure I messed something up with permissions somewhere when I was poking around yesterday, but I can't figure out what I did. I'm also having problems with import now: I get a little "Permissions updated" popup, and then this error:
Operation failed: You are not authorized to perform this operation. Learn more
I'd appreciate any insight into how to undo whatever I apparently did.
Never mind! I opened up another project/storage instance and made sure things matched. For future reference, the roles the service account needs on the storage bucket are:
Storage Legacy Bucket Reader
Storage Object Admin
Storage Object Viewer
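If you'd rather grant those roles programmatically than in the console, here is a sketch with the google-cloud-storage client (the bucket name and the service-account email are placeholders; the Cloud SQL instance's service account is shown on the instance details page):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-export-bucket")  # placeholder
policy = bucket.get_iam_policy(requested_policy_version=3)
member = "serviceAccount:<cloud-sql-instance-service-account>"  # placeholder
for role in ("roles/storage.legacyBucketReader",
             "roles/storage.objectAdmin",
             "roles/storage.objectViewer"):
    policy.bindings.append({"role": role, "members": {member}})
bucket.set_iam_policy(policy)  # apply the new bindings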

Is there a way to change the google storage signed url to not include the name of the file?

I have a method that gets a signed url for a blob in a google bucket and then returns that to users. Ideally, I could change the name of the file shown as well. Is this possible?
An example is:
https://storage.googleapis.com/<bucket>/<path to file.mp4>?Expires=1580050133&GoogleAccessId=<access-id>&Signature=<signature>
The part that I'd like to set myself is <path to file.mp4>.
The only way I can think of is having something in the middle that is responsible for the name "swap".
For example, Google App Engine with an HTTP trigger, or a Cloud Function with a storage trigger, that fetches the object whenever you need it, renames it, and either serves it to the user directly or stores it under the new name in another bucket.
Keep in mind that anything you want to store temporarily in GAE or Cloud Functions needs to go in the "/tmp" directory.
Then for renaming, if you are using GAE, you could use something like:
import os
os.system(YOUR_SHELL_COMMAND)  # a shell string, e.g. "mv /tmp/original.mp4 /tmp/new-name.mp4"
However, the easiest but more costly approach is to set up a Cloud Function with a storage trigger so that whenever an object is uploaded, it stores a copy of it under the desired new name in a different bucket that you will use for the users.
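A minimal sketch of that function, assuming a background Cloud Function with a google.storage.object.finalize trigger (the destination bucket and naming scheme are placeholders):

from google.cloud import storage

def copy_with_new_name(event, context):
    # Triggered on object upload; "event" carries the uploaded object's
    # bucket and name.
    client = storage.Client()
    src_bucket = client.bucket(event["bucket"])
    src_blob = src_bucket.blob(event["name"])
    dst_bucket = client.bucket("user-facing-bucket")  # placeholder
    new_name = "friendly-name.mp4"  # derive this however suits your app
    src_bucket.copy_blob(src_blob, dst_bucket, new_name)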

AWS CloudFormation: Detect Drift

While updating an S3 bucket name through CloudFormation, the stack automatically goes into UPDATE_ROLLBACK. Is it possible to update an S3 bucket name through CloudFormation, and how does drift detection work?
Updating a bucket name requires a replacement. That means that CloudFormation will delete the bucket and then create a new one with the new name. CloudFormation won't delete buckets unless they're empty, which is probably why it fails in your case. To confirm this, go to the CloudFormation console page, click on the stack, and go to the Events tab. Look at some of the latest events; one of them should be about failing to delete the bucket.
To get around this you need to empty your bucket before updating your stack. You probably want to back up all of its content, update the stack, and upload the content back to the new bucket.
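For example, a quick way to empty the bucket with boto3 before running the update (the bucket name is a placeholder; back up the contents first if you still need them):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-old-bucket-name")  # placeholder
bucket.objects.all().delete()  # remove every object so the delete can succeed
# If versioning is enabled, delete the versions instead:
# bucket.object_versions.all().delete()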

Can I pass FilePicker a custom source using their API?

Essentially I'm trying to see if I can use FilePicker to manage user assets.
With their Accelerate bundle you can specify a custom S3 source, but only on their dashboard.
I want users to pick and store files in their own folders (which appears to be possible), but then also be able to re-use the files they have already picked and stored using FilePicker.
Is this possible? From reading through the docs it appears not.
Such a feature is not available so far.
I think the only solution here would be to create a database of each user's FilePicker file links.
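A minimal sketch of such a database, here with sqlite3 (the schema and all names are made up for illustration): store each user's FilePicker file links at upload time so they can be listed and re-used later.

import sqlite3

conn = sqlite3.connect("user_assets.db")
conn.execute("""CREATE TABLE IF NOT EXISTS user_files (
                    user_id  TEXT NOT NULL,
                    file_url TEXT NOT NULL)""")

def save_link(user_id, file_url):
    # Record a link returned by the picker for this user.
    conn.execute("INSERT INTO user_files (user_id, file_url) VALUES (?, ?)",
                 (user_id, file_url))
    conn.commit()

def links_for(user_id):
    # Fetch the user's previously stored links for re-use.
    return [url for (url,) in conn.execute(
        "SELECT file_url FROM user_files WHERE user_id = ?", (user_id,))]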