How to reconcile the Terraform State with an existing bucket? - google-cloud-storage

Using Terraform 0.11.14.
My Terraform file contains the following resource:
resource "google_storage_bucket" "assets-bucket" {
name = "${local.assets_bucket_name}"
storage_class = "MULTI_REGIONAL"
force_destroy = true
}
This bucket has already been created (it exists in the infrastructure from a previous apply).
However, the state (stored remotely on GCS) is inconsistent and doesn't seem to include this bucket.
As a result, terraform apply fails with the following error:
google_storage_bucket.assets-bucket: googleapi: Error 409: You already own this bucket. Please select another name., conflict
How can I reconcile the state? (terraform refresh doesn't help)
EDIT
Following ydaetskcoR's answer, I ran:
terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
The output:
module.bf-nathan.google_storage_bucket.assets-bucket: Importing from ID "my-bucket"...
module.bf-nathan.google_storage_bucket.assets-bucket: Import complete! Imported google_storage_bucket (ID: next-assets-bf-nathan-botfront-cloud)
module.bf-nathan.google_storage_bucket.assets-bucket: Refreshing state... (ID: next-assets-bf-nathan-botfront-cloud)
Error: module.bf-nathan.provider.kubernetes: 1:11: unknown variable accessed: var.cluster_ip in:
https://${var.cluster_ip}
The refreshing step doesn't work. I ran the command from the project's root where a terraform.tfvars file exists.
I tried adding -var-file=terraform.tfvars but no luck. Any idea?

You need to import it into the existing state file. You can do this with the terraform import command for any resource that supports it.
Thankfully the google_storage_bucket resource does support it:
Storage buckets can be imported using the name or project/name. If the project is not passed to the import command it will be inferred from the provider block or environment variables. If it cannot be inferred it will be queried from the Compute API (this will fail if the API is not enabled).
e.g.
$ terraform import google_storage_bucket.image-store image-store-bucket
$ terraform import google_storage_bucket.image-store tf-test-project/image-store-bucket
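Since the bucket in the question lives inside a module, the resource address needs the module prefix, and any variables that provider blocks interpolate must be resolvable during the refresh that follows the import. A hedged sketch based on the addresses in the question (the IP value is a placeholder, not a real one):
# Import using the full module address, loading variables from the tfvars file
terraform import -var-file=terraform.tfvars module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
# If the tfvars file is not picked up, the variable can also be supplied via the environment
TF_VAR_cluster_ip=10.0.0.1 terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket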

Related

How to enable a preview feature of an Azure resource provider

I would like to enable an Azure preview feature via Terraform. I have configured skip_provider_registration, but when I try to apply I still get a "provider already exists" error, and I have to import the resource manually as a workaround.
QUERY:
Do we have to import manually to avoid the "provider already exists" error when registering a preview feature? I have already set skip_provider_registration, but it doesn't seem to work.
Thanks!
======== configuration ========
# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.99"
    }
  }
  required_version = ">= 1.1.0"
}
provider "azurerm" {
  features {}
  skip_provider_registration = true
}
resource "azurerm_resource_provider_registration" "example" {
  name = "Microsoft.Network"
  feature {
    name       = "AFWEnableNetworkRuleNameLogging"
    registered = true
  }
}
I have configured skip_provider_registration, but when I tried to apply I still get "provider already exists".
======== error log ========
terraform apply main.tf plan
azurerm_resource_provider_registration.example: Creating…
╷
│ Error: A resource with the ID "/subscriptions/xxxx-xxxx/providers/Microsoft.Network" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_provider_registration" for more information.
Is there any solution to the above requirement, i.e. enabling the preview feature on the corresponding resource provider namespace?
If the resource provider registration already exists in Azure but is not yet in the Terraform state file, you have to import it before making any changes; only then will Terraform manage it through the state file.
Step 1:
Add the below code to the provider .tf and main .tf files, as follows.
provider tf file
terraform {
  required_version = ">= 1.1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.99"
    }
  }
}
provider "azurerm" {
  features {}
  skip_provider_registration = true
}
main tf file as follows
resource "azurerm_resource_provider_registration" "example" {
name = "Microsoft.Network"
feature {
name = "AFWEnableNetworkRuleNameLogging"
registered = true
}
}
Step 2:
Run the below command:
terraform plan
Then run:
terraform apply -auto-approve
NOTE:
If you get an error saying the resource "already exists - to be managed via Terraform this resource needs to be imported into the State.", run the below command to import the respective resource into Terraform:
terraform import azurerm_resource_provider_registration.example /subscriptions/************************/providers/Microsoft.Network
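If you need the subscription ID for the import address, it can be looked up with the Azure CLI (a sketch, assuming the CLI is installed and logged in):
# Prints the ID of the currently selected subscription
az account show --query id -o tsv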
Step 3:
Run the below commands again:
terraform plan
terraform apply -auto-approve
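To confirm the result afterwards, one option (a sketch, assuming the Azure CLI is available) is to check the state and the feature registration:
# The imported registration should now appear in the state
terraform state list
# Shows the registration state of the preview feature
az feature show --namespace Microsoft.Network --name AFWEnableNetworkRuleNameLogging --query properties.state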

pulumi import of AWS ElastiCache cluster is failing with error "resource does not exist"

I have a few AWS ElastiCache clusters and I am trying to import them into Pulumi. The issue is that pulumi import returned an error for most of my clusters; only a few of them could be detected by Pulumi.
Here is the example
admin-gateway and infra-storage-thai-test-v2 are both in the same AWS account and the same region (us-west-2). Pulumi was able to generate code for admin-gateway:
thaipham@N9GK65GXW1 quickstart % pulumi import aws:elasticache/cluster:Cluster admin-gateway admin-gateway
Previewing import (dev)
View Live: https://app.pulumi.com/thailowki/quickstart/dev/previews/2189eee2-316c-4e2d-a50f-ab03a68c4194
Type Name Plan
+ pulumi:pulumi:Stack quickstart-dev create
= └─ aws:elasticache:Cluster admin-gateway import
Resources:
+ 1 to create
= 1 to import
2 changes
Do you want to perform this import? details
Then for infra-storage-thai-test-v2
thaipham@N9GK65GXW1 quickstart % pulumi import aws:elasticache/cluster:Cluster infra-storage-thai-test-v2 infra-storage-thai-test-v2
Previewing import (dev)
View Live: https://app.pulumi.com/thailowki/quickstart/dev/previews/a5490dc5-985d-49d7-ad67-928e331dc742
Type Name Plan Info
+ pulumi:pulumi:Stack quickstart-dev create 1 error
= └─ aws:elasticache:Cluster infra-storage-thai-test-v2 import 1 error
Diagnostics:
pulumi:pulumi:Stack (quickstart-dev):
error: preview failed
aws:elasticache:Cluster (infra-storage-thai-test-v2):
error: Preview failed: resource 'infra-storage-thai-test-v2' does not exist
I tried running with --logflow enabled, but it still didn't give any good insights.
Here is some of the log:
I0901 10:21:47.233425 22540 eventsink.go:59] [aws-sdk-go] DEBUG: Validate Response elasticache/DescribeCacheClusters failed, attempt 0/25, error CacheClusterNotFound: CacheCluster not found: infra-storage-thai-test-v2
I0901 10:21:47.233438 22540 eventsink.go:62] eventSink::Debug(<{%reset%}>[aws-sdk-go] DEBUG: Validate Response elasticache/DescribeCacheClusters failed, attempt 0/25, error CacheClusterNotFound: CacheCluster not found: infra-storage-thai-test-v2<{%reset%}>)
I0901 10:21:47.233675 22540 eventsink.go:59] status code: 404, request id:
Any help would be greatly appreciated.
PS: I'm using pulumi v3.39.1
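The failing call in the log is DescribeCacheClusters, so one way to reproduce the lookup outside Pulumi is with the AWS CLI (a sketch; make sure the profile and region match what Pulumi is using). If the name actually refers to a replication group rather than a single cache cluster, the first call returns the same CacheClusterNotFound error and the resource may need to be imported as aws:elasticache/replicationGroup:ReplicationGroup instead:
# Lookup as a single cache cluster (what pulumi import does for aws:elasticache/cluster:Cluster)
aws elasticache describe-cache-clusters --cache-cluster-id infra-storage-thai-test-v2 --region us-west-2
# Lookup as a replication group (cluster-mode / replicated setups)
aws elasticache describe-replication-groups --replication-group-id infra-storage-thai-test-v2 --region us-west-2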

ansible dynamic inventory kubernetes

I am trying to use the Kubernetes plugin in Ansible to build a dynamic inventory based on my k8s cluster. I have followed this doc https://docs.ansible.com/ansible/latest/scenario_guides/kubernetes_scenarios/k8s_inventory.html, however I keep getting a "failed to parse" error.
# ansible-inventory --list -i k8s.yaml
[WARNING]: * Failed to parse /etc/ansible/k8s.yaml with ansible_collections.kubernetes.core.plugins.inventory.k8s plugin: Invalid value "kubernetes.core.k8s" for configuration option "plugin_type: inventory
plugin: ansible_collections.kubernetes.core.plugins.inventory.k8s setting: plugin ", valid values are: ['k8s']
[WARNING]: Unable to parse /etc/ansible/k8s.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
extract from ansible.cfg
# egrep -i "\[inventory\]|kubernetes" ansible.cfg
[inventory]
enable_plugins = kubernetes.core.k8s
k8s.yaml
# cat k8s.yaml
plugin: kubernetes.core.k8s
The error suggests that kubernetes.core.k8s is an invalid value and that the valid values are ['k8s'], yet this is exactly what's in the documentation. I have tried all manner of alterations to the plugin name with no success.
Can anyone steer me on what I am missing here?
So I managed to get it working by editing /usr/lib/python3/dist-packages/ansible_collections/kubernetes/core/plugins/inventory/k8s.py. It seems my version only listed k8s as a plugin name; I replaced it with kubernetes.core.k8s and it worked:
options:
  plugin:
    description: token that ensures this is a source file for the 'k8s' plugin.
    required: True
    choices: ['kubernetes.core.k8s']
I did plan to raise it as a PR on the project, but it seems this was already updated several months back, so I must have just had outdated files:
https://github.com/ansible-collections/kubernetes.core/blob/60933457e81fcfa1000f556b2bc3425bbf080602/plugins/inventory/k8s.py#L27
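Rather than patching the installed file, an alternative (a sketch, assuming Ansible Galaxy access) is to upgrade the collection so that the fixed plugin documentation is picked up:
# Upgrade the kubernetes.core collection to the latest release
ansible-galaxy collection install kubernetes.core --upgrade
# Re-test the dynamic inventory
ansible-inventory --list -i k8s.yaml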

Importing a google_storage_bucket resource in Terraform state fails

I'm trying to import a google_storage_bucket storage bucket into my Terraform state:
terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
However, it fails as follows:
module.bf-nathan.google_storage_bucket.assets-bucket: Importing from ID "my-bucket"...
module.bf-nathan.google_storage_bucket.assets-bucket: Import complete! Imported google_storage_bucket (ID: next-assets-bf-nathan-botfront-cloud)
module.bf-nathan.google_storage_bucket.assets-bucket: Refreshing state... (ID: next-assets-bf-nathan-botfront-cloud)
Error: module.bf-nathan.provider.kubernetes: 1:11: unknown variable accessed: var.cluster_ip in:
https://${var.cluster_ip}
The refreshing step doesn't work. I ran the command from the project's root where a terraform.tfvars file exists.
I tried adding -var-file=terraform.tfvars but no luck.
Note that the variables are correctly interpolated with all other terraform commands.
It's worth noting that the bucket in question is defined in a module and not in the main.tf. Here is how the bucket is declared:
resource "google_storage_bucket" "assets-bucket" {
name = "${local.assets_bucket_name}"
storage_class = "MULTI_REGIONAL"
force_destroy = true
}
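For reference, the error points at a provider block in the module that interpolates var.cluster_ip; it presumably looks something like the following (a reconstruction inferred from the error message, not the actual code):
provider "kubernetes" {
  # var.cluster_ip is interpolated into the API server URL; it is not resolved
  # during the refresh that follows the import, which is what the error reports
  host = "https://${var.cluster_ip}"
}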

SAM Deployment failed Error- Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy the package with SAM, the very first status shown in the CloudFormation console is ROLLBACK_IN_PROGRESS, after which it changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and trying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I set SQS as the event source for the Lambda, but didn't provide permissions like this
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
in the Lambda's policies.
I found this error in the "Events" tab of the CloudFormation service.
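For illustration, such a statement can be attached inline under the function's Policies in the SAM template (a sketch; the function name, handler, and runtime below are hypothetical):
MyQueueConsumerFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: java8                                  # placeholder runtime
    Handler: com.example.Handler::handleRequest     # placeholder handler
    CodeUri: build/
    Policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueAttributes
            Resource: "*"
SAM also provides the SQSPollerPolicy policy template, which grants the SQS polling permissions scoped to a specific queue and may be preferable to Resource: "*".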