I've set a 3-day TTL on a bucket and it has been working for about a month, but in the last 10 days nothing has been removed from the bucket. I've checked that the lifecycle rule still exists on the bucket with gsutil lifecycle get gs://bucketname, and it is still there:
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 3}}]}
Is the rule correct? If yes, what could be the problem?
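The rule itself is valid: it deletes objects once they are more than 3 days old. As a debugging sketch (my addition, not part of the original question), it is worth confirming that the stuck objects really exceed the age condition, since lifecycle actions run asynchronously and deletion is not instantaneous:
# List objects with their size and creation time; anything created more
# than 3 days ago should be removed on a subsequent lifecycle sweep.
gsutil ls -l gs://bucketname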
Related
I want to set a policy on a new GCS bucket so that files expire after 14 days (TTL, time to live).
I use
gsutil mb \
  -p ${GCP_PROJECT_ID} \
  --retention 14d \
  gs://$GCS_BUCKET_NAME
It doesn't work. Why is that?
GCS bucket TTL and retention policy
I had misunderstood the intention of --retention.
A retention policy governs how long objects in the bucket must be retained; it does not set an expiration or time to live.
https://cloud.google.com/storage/docs/bucket-lock
--retention 14d means the objects are not allowed to be deleted within 14 days. It does not mean the objects have a 14-day lifecycle and will expire and be deleted after 14 days.
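To see what --retention actually did, you can inspect (and, while it is unlocked, remove) the policy. This is a sketch I'm adding for illustration, not part of the original answer:
# Show the retention policy created by --retention (printed in seconds).
gsutil retention get gs://$GCS_BUCKET_NAME
# Remove the policy; this only works while it has not been locked.
gsutil retention clear gs://$GCS_BUCKET_NAME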
To set a TTL correctly on a GCS bucket, do the following instead:
# set GCS bucket object TTL
echo '
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 14}
    }
  ]
}
' > gcs_lifecycle.tmp
gsutil lifecycle set gcs_lifecycle.tmp gs://$GCS_BUCKET_NAME
rm gcs_lifecycle.tmp
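As a follow-up check (my addition), you can confirm the rule was applied:
# Print the lifecycle configuration now attached to the bucket.
gsutil lifecycle get gs://$GCS_BUCKET_NAME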
I have a spinnaker pipeline that deploys a db-job to k8s.
I would like to be able to delete the job before deploying another one, i.e. to add a Spinnaker stage or to somehow configure the job so it deletes itself.
I know that a CronJob would be great for this, but it is in beta and not stable enough to be used for DB operations.
I have tried to add a stage to Spinnaker like this:
{
  "account": "k8s-devops-v2",
  "cloudProvider": "kubernetes",
  "location": "loc",
  "manifestArtifactAccount": "loc-open-source",
  "manifestName": "job my-job-name",
  "mode": "static",
  "name": "Delete Db job",
  "options": {
    "cascading": true,
    "gracePeriodSeconds": null
  },
  "type": "deleteManifest"
}
but it doesn't work.
I also don't want to use TTL because I want the latest job to remain present until the new one is created.
Are there any other options? What is the best practice for this?
Depending on what version of Kubernetes you're running, you can do this in the k8s job itself: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits. The caveat is that, once the job is deleted, you won't be able to see what it did unless you collect its logs with some kind of scraper or get to the cluster in time.
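If the deleteManifest stage keeps failing, another option is to delete the old job explicitly right before deploying the new one, for example from a script stage. A sketch, assuming the job name from the question and a hypothetical manifest file db-job.yaml:
# Delete the previous job if it exists; --ignore-not-found makes this a
# no-op on the very first deployment, when no job is present yet.
kubectl delete job my-job-name --ignore-not-found=true
# Deploy the new job manifest.
kubectl apply -f db-job.yaml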
I would like to keep a maximum of only 2 versions of each object in my Google Cloud Storage bucket. I have enabled Object Versioning and have added a lifecycle rule to delete any objects with more than 2 versions. I then add objects to the bucket multiple times and run
gsutil ls -R -a gs://bucketname
I end up seeing 3 or 4 different generations of each object, and even after several minutes of waiting they are not deleted. E.g.:
gs://bucketname/b331108b.csv.gz#1562856078193350
gs://bucketname/b331108b.csv.gz#1564856078195342
gs://bucketname/b331108b.csv.gz#1565856078143350
gs://bucketname/b331108b.csv.gz#1567856078193551
Is this the expected behaviour?
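Two things are worth noting here (my addition; the question does not show the exact rule used). First, the usual way to cap an object at 2 retained versions is the numNewerVersions condition, along these lines, reusing the temp-file pattern from earlier:
echo '
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 2}
    }
  ]
}
' > gcs_versions.tmp
gsutil lifecycle set gcs_versions.tmp gs://bucketname
rm gcs_versions.tmp
Second, lifecycle actions are applied asynchronously, so extra generations can linger for a while (often up to about a day) before they are removed.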
I'm new to IBM Cloud and I don't know how to stop an IBM Cloud engine (Analytics Engine). I received a mail telling me it would be suspended (with little time remaining). I did not see anywhere to stop it, so I deleted the instance. When I tried to create another engine, I got a message telling me I have to wait about 30 days.
I'm using a Lite account with credit from cognitiveclass (245 days duration).
My question is: is it possible to retrieve my instance by contacting support?
Once a service instance is deleted, the underlying cluster is also deleted. All data and metadata, including all logs, on the cluster will be lost after the cluster is deleted.
Also, here are a few restrictions of the Lite plan:
Maximum of one tile per IBM Cloud account every 30 days.
Maximum of one cluster with up to 3 compute nodes.
Free usage limit is 50 node hours. After 50 node hours, the cluster will be disabled. This means, for example, that a cluster with 4 nodes (3 compute nodes and 1 management node) will be disabled after 12.5 hours. While the cluster is disabled, it cannot be scaled up or customized.
A grace period of 24 hours is given to upgrade your user account to a paid account, and to upgrade the service instance to the Standard-Hourly plan.
If the service instance is not upgraded, then it will expire and be deleted.
Note: You are entitled to one service instance per month. If you delete the service instance or it expires after the free 50 node hours, you will not be able to create a new one until after the month has passed.
Check this link for the other supported plans.
I have this lifecycle rule set on my Google Cloud Storage bucket:
"action": {"type": "Delete"},
"condition": {"age": 7, "isLive": false}
If I remove a file, will the lifecycle delete event occur 7 days later, or will it apply immediately if the file is already over 7 days old?
When I use gsutil ls -a, it seems like the generation doesn't change when I remove a file, which makes me think the lifecycle rule will treat it as already over 7 days old.
If that is the case how can I have my files deleted 7 days after they are removed?
If you remove a file, it will be deleted immediately. Nothing will happen 7 days later.
If you have an existing object in a bucket with that lifecycle policy, it will be deleted by GCS at some point after it reaches 7 days old (age is measured from the object's creation time). There is no guarantee that it will be deleted immediately, but it usually happens in less than a day.
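For the last part of the question (deleting versions 7 days after they are removed rather than 7 days after creation), GCS lifecycle also has a daysSinceNoncurrentTime condition, which counts from the moment a version becomes noncurrent. A sketch I'm adding, not part of the original answer, reusing the temp-file pattern from earlier:
echo '
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"daysSinceNoncurrentTime": 7}
    }
  ]
}
' > gcs_lifecycle.tmp
gsutil lifecycle set gcs_lifecycle.tmp gs://bucketname
rm gcs_lifecycle.tmp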