I need to delete images in Google Container Registry that are older than 7 days. Is there a lifecycle policy similar to the one for objects in a GCS bucket?
Any reference would help a lot.
I do not think there is a direct way to do that, but you could use a script like this one.
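A script along those lines can be sketched in Python. This is a minimal sketch, not a tested tool: it assumes you list tags with `gcloud container images list-tags IMAGE --format=json` and delete stale digests with `gcloud container images delete`. The `digest` and `timestamp` field names are assumptions to verify against the JSON your own gcloud version emits.

```python
import subprocess
from datetime import datetime, timedelta, timezone

def select_stale_digests(tags_json, max_age_days=7, now=None):
    """Return digests whose upload time is older than max_age_days.

    tags_json is the parsed output of
    `gcloud container images list-tags IMAGE --format=json`:
    assumed to be a list of dicts with 'digest' and an ISO-8601 'timestamp'.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for entry in tags_json:
        uploaded = datetime.fromisoformat(entry["timestamp"])
        if uploaded < cutoff:
            stale.append(entry["digest"])
    return stale

def delete_digest(image, digest):
    # Requires gcloud to be installed and authenticated.
    subprocess.run(
        ["gcloud", "container", "images", "delete",
         f"{image}@{digest}", "--force-delete-tags", "--quiet"],
        check=True,
    )
```

It's worth printing the output of `select_stale_digests` and eyeballing it before wiring up the delete call.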
Related
I use Google Cloud Storage as my CKEditor image uploader.
But now I face a problem: if I upload an image to GCS and then delete that image from my article, the image still exists in my GCS bucket.
Can GCS auto-delete unused images?
It sounds like the condition you want is "no img tag in any of my HTML files references this image file." Lifecycle Management has no support for that condition. Or maybe you're thinking of deleting images that haven't been accessed in the past N days; unfortunately, that isn't a condition supported by Lifecycle Management either.
If the above is a correct interpretation of your case, you will need to implement the detection yourself: either walk your HTML objects and determine which image files are no longer referenced, or enable object access logging and walk the logs to find image files that haven't been accessed recently.
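The first approach (walking your HTML objects) can be sketched with nothing but the Python standard library. Fetching the HTML pages and the object listing from GCS is deliberately left out (you'd use the google-cloud-storage client or the JSON API for that); this shows only the detection step:

```python
from html.parser import HTMLParser

class ImgSrcCollector(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.srcs = set()

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.add(value)

def unreferenced_images(html_pages, image_urls):
    """Return the image URLs that no HTML page references."""
    referenced = set()
    for html in html_pages:
        parser = ImgSrcCollector()
        parser.feed(html)
        referenced |= parser.srcs
    return set(image_urls) - referenced
```

The returned set is what you would feed into a deletion step, after whatever sanity checks your setup needs.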
You can set up a GCS Object Lifecycle rule that would delete images based on conditions you choose.
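For conditions Lifecycle Management does support, such as object age, the rule is a small JSON document in the standard GCS lifecycle format, applied with e.g. `gsutil lifecycle set lifecycle.json gs://my-bucket` (bucket name made up here). Note that an age rule applies to every object in the bucket, not just unreferenced ones:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 7}
      }
    ]
  }
}
```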
Salesforce has two different UIs, and accordingly it can store attached files in two different ways.
Two files were uploaded via the classic UI and they are marked as 'attachments'. Other files were uploaded through the new UI and they are marked as 'files'.
I want to upload all of these files using the REST API. I cannot find the proper documentation. Can somebody help me with this?
That's not 100% true. In the SF Classic UI you were able to upload Files too. It's "just" a matter of knowing the right API name of the table, and you'll find lots of examples online.
The Attachment and Document objects have exactly those API names; you can view their definitions in the SOAP API definition or in the REST API Explorer (there was something you could still see in a screenshot here, but it seems to be down now; maybe they're moving it to another area of the documentation...).
Files (incl. "Chatter Files") are stored in the ContentDocument and ContentVersion objects. The name is unexpected because, a long time ago, SF purchased another company's product called "Salesforce Content". In the beginning it was a bit of a mess; now it's better integrated into the whole platform, but some quirks still lurk: file folders are sometimes called Libraries in the documentation, but the actual API name is ContentWorkspace. The entity relationship diagram can help a bit: https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_erd_content.htm
ContentDocument is a header to which many places in SF link (imagine a file taking up space on disk only once but being cross-linked from multiple records). It always has at least one version, and if you need to update the document, you upload a new version; all links in the org stay unchanged, still pointing to the header.
So, how to use it?
REST API guide: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_insert_update_blob.htm
or maybe the Chatter API guide (you tagged the question with chatter, so chances are you already use it): https://developer.salesforce.com/docs/atlas.en-us.chatterapi.meta/chatterapi/connect_resources_files.htm
Some of my answers here might help (shameless plug). They cover uploading and reading data too, and one is even about Data Loader... but you might want to experiment with exporting files first, to get familiar with the structure before you load anything:
https://stackoverflow.com/a/48668673/313628
https://stackoverflow.com/a/56268939/313628
https://stackoverflow.com/a/60284736/313628
While updating an S3 bucket name through CloudFormation, the stack goes into UPDATE_ROLLBACK automatically. Is it possible to update an S3 bucket name through CloudFormation, and how does drift detection work?
Updating a bucket name requires a replacement. That means CloudFormation will delete the bucket and then create a new one with the new name. CloudFormation won't delete buckets unless they're empty, which is probably why it fails in your case. To confirm this, go to the CloudFormation console, click on the stack, and open the Events tab. One of the latest events should be about failing to delete the bucket.
To get around this you need to empty your bucket before updating your stack. You probably want to back up all of its content, update the stack, and then upload the content to the new bucket.
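To illustrate (the resource and bucket names below are made up): in a template like this, changing BucketName is what triggers the replacement, and emptying the old bucket first (for example with `aws s3 rm s3://my-old-bucket-name --recursive`) is what lets the deletion half of the replacement succeed.

```yaml
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Changing BucketName forces a replacement: CloudFormation creates a
      # bucket with the new name, then deletes the old one - which only
      # succeeds if the old bucket is empty.
      BucketName: my-new-bucket-name
```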
Is it possible to enable Directory listing in Google Cloud Storage?
I was thinking of having a "domain bucket" and using it to list all the contents, similar to Nginx's autoindex on or Apache's Options +Indexes.
If I make the bucket public, all contents will be listed as XML, but not as a directory listing.
No, it is not currently possible.
You could perhaps implement such a thing by creating "index" pages in each directory that used JavaScript to query the bucket and render a list of objects, but there's no built-in support for this.
You might want to take a look at s3-bucket-listing. It's for Amazon S3 but works with GCS as well.
I am uploading objects to Amazon S3 using the AWS iOS SDK on iPhone. Sometimes an error occurs and only some of the objects are uploaded. I created a bucket, and inside the bucket I created a folder in which I store my objects. I want to delete the folder and all of its objects. Can anyone help me?
First of all, there is no such thing as "folders" in S3. Most S3 clients (including the AWS web console) show them as folders only for convenience (grouping stuff), but in fact, what you see as a "folder name" is merely a prefix.
That being said, my suggestion is to use the listObjectsInBucket API call, passing your "folder name" as the prefix in the S3ListObjectsRequest parameter.
When you have obtained all the keys (object names including the prefix) matching that prefix, use the deleteObjects API call, passing in those keys.
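The two steps above are language-agnostic, so here is the core logic as a small Python sketch (the actual listObjectsInBucket/deleteObjects calls from the iOS SDK are omitted). The 1000-key batch size matches the limit of S3's multi-object delete API:

```python
def keys_under_prefix(all_keys, prefix):
    """Select the keys that fall under a given 'folder' prefix."""
    return [k for k in all_keys if k.startswith(prefix)]

def delete_batches(keys, batch_size=1000):
    """Yield batches of keys sized for one deleteObjects call
    (S3's multi-object delete accepts at most 1000 keys per request)."""
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]
```

Each yielded batch would become one deleteObjects request; deleting every key under the prefix makes the "folder" disappear, since it was never a real object to begin with.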
For more details on folder/prefix and deleting stuff, please see these related links:
Delete files, directories and buckets in amazon s3 java
Thread on AWS forum regarding this subject