Google Cloud Storage lifecycle rule doesn't work - google-cloud-storage

I tried to apply a lifecycle rule to my bucket using gsutil, following:
https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
I was even able to confirm the new rule was applied to my bucket:
gsutil lifecycle get gs://testbucket24/
which returned:
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 7}}]}
The issue is that I have waited 72 hours, and all the objects (which were created 3 months ago) are still there.
More info:
I think I have the necessary permissions, because I was able to both set and get the rule.
The bucket is Multi-Regional storage class.
I enabled the access log for this bucket and it still shows nothing; no delete action is started.
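For reference, the age condition is evaluated against each object's creation time, so 3-month-old objects should clearly match an age: 7 Delete rule; note that lifecycle enforcement is asynchronous and can lag behind the rule taking effect. A minimal sketch of how such a condition matches, as a local simulation only (this is not a Cloud Storage API):

```python
import json
from datetime import datetime, timedelta, timezone

# The rule document as returned by `gsutil lifecycle get` above.
policy = json.loads('{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 7}}]}')

def matches_delete_rule(created: datetime, policy: dict, now: datetime) -> bool:
    """Return True if any Delete rule's age condition matches the object."""
    for rule in policy["rule"]:
        if rule["action"]["type"] != "Delete":
            continue
        age_days = rule["condition"].get("age")
        if age_days is not None and now - created >= timedelta(days=age_days):
            return True
    return False

now = datetime(2020, 4, 1, tzinfo=timezone.utc)
three_months_old = now - timedelta(days=90)   # like the objects in the question
fresh = now - timedelta(days=2)               # a 2-day-old object

print(matches_delete_rule(three_months_old, policy, now))  # True: eligible for deletion
print(matches_delete_rule(fresh, policy, now))             # False: too young
```

If a 90-day-old object matches and still isn't deleted after several days, the problem is on the enforcement side (permissions, policy propagation), not in the rule itself.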


Google cloud storage: Cannot reuse bucket name after deleting bucket

I deleted an existing bucket on google cloud storage using:
gsutil rm -r gs://www.<mydomain>.com
I then verified the bucket was deleted using:
gcloud storage ls gs://www.<mydomain>.com
And got the expected response:
ERROR: (gcloud.storage.ls) gs://www.<mydomain>.com not found: 404.
I then double-checked by listing my buckets:
gsutil ls
And got the expected empty response.
I then tried to recreate a bucket with the same name using:
gsutil mb -p <projectid> -c STANDARD -l US-EAST1 -b on gs://www.<mydomain>.com
I got the unexpected error below, indicating the bucket still exists:
www.<mydomain>.com
Creating gs://www.<mydomain>.com/...
ServiceException: 409 A Cloud Storage bucket named 'www.<mydomain>.com' already exists. Try another name. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
How can I reuse the bucket name for the bucket that I deleted?
I found the answer to my question here:
https://stackoverflow.com/a/44763841
Basically, I had deleted the project the bucket was in either before or after (I'm not sure) deleting the bucket. For some reason this causes the bucket to still appear to exist even though it does not. The behavior does not seem quite right to me, but I believe that waiting for the billing period to complete and the project to be fully deleted will remove the phantom bucket. Unfortunately this means I have to wait 2 weeks; I will confirm this then.
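If you'd rather script the wait than check by hand, a sketch like the following polls until the name frees up. Note that `bucket_exists` is a hypothetical checker injected as a parameter (in practice it might wrap `google.cloud.storage.Client.lookup_bucket`); it is injected so the loop can be exercised without GCP credentials:

```python
import time
from typing import Callable

def wait_until_name_free(bucket_exists: Callable[[str], bool], name: str,
                         interval_s: float = 3600.0, max_checks: int = 400,
                         sleep=time.sleep) -> bool:
    """Poll until the bucket name no longer resolves, then return True.

    Returns False if the name is still taken after max_checks polls.
    """
    for _ in range(max_checks):
        if not bucket_exists(name):
            return True
        sleep(interval_s)
    return False

# Simulated checker: the phantom bucket disappears after 3 polls.
state = {"checks": 0}
def fake_exists(name: str) -> bool:
    state["checks"] += 1
    return state["checks"] <= 3

print(wait_until_name_free(fake_exists, "www.example.com", sleep=lambda s: None))  # True
```

With an hourly interval and 400 checks this covers roughly the two-week window described above.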

Azure functions swap functionality is not working after enabling private endpoint for function app linked storage

Azure Functions swap functionality is not working after enabling a private endpoint (with the "selected networks" option) for the function app's linked storage account (WebJobStorage).
I created private endpoints for blob, file, and table storage.
Below are the additional app settings I am adding:
{
"name": "WEBSITE_CONTENTOVERVNET",
"value": "1",
"slotSetting": false
},
{
"name": "WEBSITE_CONTENTSHARE",
"value": "production",
"slotSetting": false
},
{
"name": "WEBSITE_DNS_SERVER",
"value": "168.63.129.16",
"slotSetting": false
},
{
"name": "WEBSITE_VNET_ROUTE_ALL",
"value": "1",
"slotSetting": false
}
I referred to this article: Secure storage account linked to Function App with private endpoint
From Azure DevOps I deploy the code to the staging slot first, then swap it with the production slot; it is failing at the swap step.
I also tried to swap from the portal, and that failed too.
I am getting the errors below.
From devops swap task :
##[error]Error: Failed to swap App Service 'testmgmt-fa-min-go' slots - 'staging' and 'production'. Error: InternalServerError - There was an unexpected error swapping slots 'staging' and 'production' for site 'testmgmt-fa-min-go(staging)'. Please try to cancel your swap operation. (CODE: 500)
From Portal:
This was caused by an internal platform component, and I'll update this question when the component fix has been fully released. Unfortunately, the ETA for a full rollout is within the next 3 to 4 months.
Thanks to @UBK; your comment helped me resolve the same swapping issue in my private-endpoint Function App.
I tried to reproduce the issue by following the given documentation: Secure storage account linked to Function App with private endpoint - Microsoft Tech Community
I solved the swapping issue by allowing access from all networks in the storage account's Networking settings.
The fix is deployed but we had to introduce a new app setting that you should set on your production slot (or the swap slot if you're swapping between two subslots) called WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS and set it to 0 (zero). I.e.,
WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS=0
This will allow you to swap the slots when the storage account is network restricted. Here is our documentation on app settings. This should not have any impact on your Azure Monitor related diagnostics settings configuration and is related to the legacy Application Log Settings configuration, which was preventing Premium Functions slot swaps from occurring.
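For clarity, the new setting in the same app-settings format as the entries above would look like the sketch below; the name and value come from the answer, while "slotSetting": false is an assumption chosen to match the other entries, so adjust it to your setup:

```json
{
    "name": "WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS",
    "value": "0",
    "slotSetting": false
}
```

Per the answer, this goes on the production slot (or on the swap slot if you're swapping between two subslots).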
Next steps on our side are:
We will add a backlog work item to make this setting the default for Premium Functions, so you won't have to add it yourself; there is currently no ETA for that, so the above is the current solution.
We will add the app setting to our App Settings list documentation

Cloud Run + Firebase Hosting region rewrites issue

I'm trying to use Firebase Hosting as the CDN in front of Cloud Run. Yesterday I was testing something in region europe-west1 and it went well. Today I'm trying to do the same for region europe-west4, and I'm getting an error that this region is not supported.
I switched back to europe-west1 and it worked.
Is this a bug, or is region europe-west4 not supported?
=== Deploying to 'xxxxxxxx'...
i deploying hosting
Error: HTTP Error: 400, Cloud Run region `europe-west4` is not supported.
"rewrites": [
{
"source": "**",
"run": {
"serviceId": "web-client",
"region": "europe-west4"
}
}
],
same for new asia-southeast1 region also
Error: HTTP Error: 400, Cloud Run region `asia-southeast1` is not supported.
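Until a region is supported, one workaround is to deploy the service to a supported region and point the rewrite there. A hedged firebase.json sketch, using the question's web-client service with us-central1 as an example supported region (the "public" directory name is an assumption):

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "**",
        "run": {
          "serviceId": "web-client",
          "region": "us-central1"
        }
      }
    ]
  }
}
```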
From this info, here are the details regarding rewrites:
Firebase Hosting originates in us-central1, so when deploying Cloud Run it's recommended to select us-central1 for a better First Contentful Paint score and quicker loading of your website. But this kills the advantage of having a nearby region available (really unfortunate for Google fanboys).
Example: if your location is India, your nearest available Cloud Run region is asia-southeast1 (Singapore), but we can't select asia-southeast1.
The request path would go like this:
you → India (CDN) → USA (Firebase) → Singapore (Cloud Run + async call to Firestore India) → USA → CDN → you (which is REALLY BAD in terms of latency).
you → India (CDN) → USA (Firebase) → us-central1 (Cloud Run + async call to Firestore India) → USA → CDN → you
(the static page will load FAST, but dynamic Firestore data on the web app will still load with really bad latency; we should select us-central1 for Firestore as well, which makes no use of your local region's GCP products. It is really strange that Firebase Hosting is not available in at least one region each for America, Europe, and Asia-Pacific.)
Conclusion (as of this date):
The Cloud Run region rewrite issue for Firebase Hosting exists for many regions, but for the optimal page-load result we should select us-central1, and that is the real problem compared to the rewrite issue. To avoid Firestore latency for non-US users, set cache-control headers from Cloud Run / Cloud Functions so data gets cached at the CDN edge near the user for fast loading (we can't use the Firebase web SDK, since CDN caching is not possible when using the SDK; we should use Cloud Functions or Cloud Run instead).
Firebase Hosting to Cloud Run rewrite availability ( as of Aug 31, 2020)
Available:
us-central1,
us-east1,
asia-northeast1,
europe-west1
Not available
asia-east1,
europe-north1,
europe-west4,
us-east4,
us-west1,
asia-southeast1
Please file a feature request for Firebase rewrite availability if it's not available for your Cloud Run region; Firebase Hosting rewrites are not available for at least one region each in America, Europe, and Asia-Pacific.
FYI: Cloud Firestore multi-region is also not available for Asia, in case using multi-region Firestore would otherwise be the fix for Firebase Hosting and Cloud Run regions being locked to us-central1.
Cloud Run region availability
(Please comment if you get the rewrite access to any of the above mentioned region)
I actually managed to figure out a way to "fix" this. I changed my regions to europe-west4 instead of my previous europe-west1 and that "fixed" my deployment problem.

Cannot Delete an AWS VPC

I want to delete an AWS VPC which I don't know how it came into existence. When I try to delete it in AWS Console, it says:
We could not delete the following VPC (vpc-0a72ac71) Network interface
'eni-ce2a0d10' is currently in use. (Service: AmazonEC2; Status Code:
400; Error Code: InvalidParameterValue; Request ID:
821d8a6d-3d9b-4c24-b372-314ea9b18b23)
As it mentions "AmazonEC2" in the error message, I suspected there might be some EC2 instances residing in this VPC. So I went into the EC2 dashboard but found no EC2 instances there. However, I found there are two security groups associated with this VPC, so I decided to delete them, hoping that was the cause of the error. But when I tried to do so, I got this message:
As the message says, these security groups are associated with some network interfaces. Therefore, I decided to detach those, but I got this error message:
Error deleting network interfaces eni-ce2a0d10: You do not have
permission to access the specified resource. eni-0b7ff712: You do not
have permission to access the specified resource.
But I'm the root user, so I assume I should be able to do whatever I want unless the resource was created by AWS itself or by another account.
I know this network interface is being used somewhere, but it would be very time-consuming to go through each AWS service and check.
I've already checked AWS RDS service and no instance or rds subnet is made.
I've already checked this question and this with no luck.
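Instead of clicking through every service, one shortcut is to list the VPC's network interfaces and read their Description fields, which usually name the owning service. A sketch that extracts ENI id/description pairs from a describe-network-interfaces response; the response dict here is a hand-made example in the boto3 response shape, while with real credentials you would fetch it via `boto3.client("ec2").describe_network_interfaces(...)`:

```python
def eni_owners(response: dict) -> list:
    """Map each network interface to its Description, which typically
    names the AWS service that created it (WorkDocs, EFS, RDS, ...)."""
    return [
        (eni["NetworkInterfaceId"], eni.get("Description", ""))
        for eni in response.get("NetworkInterfaces", [])
    ]

# Hand-made sample in the shape boto3's EC2 client returns; with credentials:
# boto3.client("ec2").describe_network_interfaces(
#     Filters=[{"Name": "vpc-id", "Values": ["vpc-0a72ac71"]}])
sample = {
    "NetworkInterfaces": [
        {"NetworkInterfaceId": "eni-ce2a0d10",
         "Description": "AWS created network interface for directory d-90672d6b72"},
        {"NetworkInterfaceId": "eni-0b7ff712",
         "Description": "AWS created network interface for directory d-90672d6b72"},
    ]
}

for eni_id, desc in eni_owners(sample):
    print(eni_id, "->", desc)
```

A "directory d-..." description like the one above points straight at AWS Directory Service, which is exactly the trail the accepted answer follows.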
I found the root cause of this issue.
Short Answer:
That VPC was created solely for the WorkDocs service instance, so AWS was preventing me from deleting the VPC and any of its dependent services and pieces.
How I figured it out:
First, I noticed something interesting has been written in the 'Description' column of the 'undeletable' Network Interfaces (you can see them in the last OP's figure):
"AWS created network interface for directory d-90672d6b72."
From "directory", I suspected that this might have something do to with AWS Directory Service. So I went to this service and noticed there is a directory associated with the VPC:
So I tried to remove this directory but I got this error message:
Error - Directory cannot be deleted This directory still has
authorized applications, and cannot be deleted.  To delete this
directory, complete all of the following steps: • Delete the WorkDocs
site attached to this directory.
 
Therefore, I went to AWS WorkDocs Service and found it and deleted it:
So now that the directory was also deleted (circled in red), I went back to delete those network interfaces. However, I realized that they had vanished! (I guess Amazon removed them on its own.) I went to the VPC service to see whether I could now delete the VPC. Guess what? That VPC had vanished too!
Now I understand what was happening. That VPC was created solely for the WorkDocs service instance. I wish Amazon were more transparent about it.
As a more generic answer to the "Error deleting network interface" issue, it happens when a network interface was created automatically for a higher-level AWS resource.
The generic solution is to manage the network interface through the higher-level resource directly, such as WorkDocs or EFS.
In my case it happened when I wanted to delete a security group assigned to network interfaces created by an EFS volume.
So I went in the EFS console and removed the security group from the EFS.

Google Cloud SQL Instance does not start

I stopped my Google Cloud SQL 2nd generation instance on 2 January this year.
Today I'm trying to start it again, but I just receive an error:
"Could not complete the operation"
This is the only info in the logs:
{
protoPayload: {…}
insertId: "54775E151DAA9.A2E1542.960A7970"
resource: {…}
timestamp: "2017-02-01T10:55:00.523Z"
severity: "ERROR"
logName: "projects/hti-info-center/logs/cloudaudit.googleapis.com%2Factivity"
}
All functions, including restoring backups, are disabled while the instance is stopped.
Is there anywhere I can get more information about the instance's current state in order to try and resolve this issue, without having to sign up for a Google Support package?
This was suggested by Google support, and worked for me:
try to start your service via gcloud by running the following command:
gcloud sql instances patch [INSTANCE_NAME] --activation-policy=[ACTIVATION_POLICY]
and set the activation policy to "ALWAYS" or "ON_DEMAND".
A similar situation is happening to me; I posted a comment here: https://code.google.com/p/googlecloudsql/issues/detail?can=2&start=0&num=100&q=&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary%20log&groupby=&sort=&id=216
It seems that they have had a bug with Cloud SQL instances since Jan 25th (although I experienced a similar error on January 13th).
Very, very frustrating...
In your case, if the instance is stopped, have you tried to create a clone or export the data and create a new instance?