Terraform - Restoring route53 records

Long-time lurker, first time posting.
I am new to Terraform and have gone through the documentation, but it isn't clear enough for me to resolve my issue.
Scenario
I am trying to simulate the scenario where an end user accidentally deletes a public hosted zone from Route 53, and then use Terraform to restore the public zone and its record sets.
Issue
The issue I'm running into is that when the public hosted zone is removed, terraform plan complains that the Route 53 record/zone does not exist, but doesn't actually offer to create the new zone even though Terraform currently manages it.
The workaround I have is to run terraform state rm on the imported objects, edit my example.tf file to remove any Route 53 record set resources (leaving only the aws_route53_zone resource intact), and re-run terraform plan; Terraform then recognizes that this is a new public hosted zone and creates it. The problem with that is that it takes two steps, and I would like to achieve this all in one step: ideally, running terraform plan once to create the hosted zone and then create the record sets in the new zone.
Current working solution - example.tf
This method requires the administrator to run terraform plan twice and involves editing the example.tf file: first to remove the existing record sets that Terraform manages, so that terraform plan can create the public hosted zone that was accidentally deleted.
resource "aws_route53_zone" "example" {
name = "terraform-test.example.com"
}
The ideal solution - example.tf
This is the solution I had in mind, but when I run terraform plan it complains that the resource is not found or undeclared.
# This should create the public hosted zone if it doesn't already exist.
resource "aws_route53_zone" "example" {
  name         = "terraform-test.example.com"
  private_zone = false
}
# Then it should create the following records under the hosted zone.
resource "aws_route53_record" "www-terraform-A" {
  name    = "${data.aws_route53_zone.example.name}"
  zone_id = "${data.aws_route53_zone.example.zone_id}"
  type    = "A"
  ttl     = "300"
  records = ["X.X.X.X"]
}
After running terraform plan, I expect Terraform to recognize that the zone doesn't exist and to offer to create it and then the record sets, but it doesn't. This is the error message I get:
Error: Reference to undeclared resource
on terraform-test.servallapps.com.tf line 6, in resource "aws_route53_record" "www-terraform-A":
6: name = "${data.aws_route53_zone.example.name}"
A data resource "aws_route53_zone" "example" has not been declared in the
root module.

The error occurs because aws_route53_zone.example is declared as a resource, not a data source, so it must not be referenced with the data. prefix.
Changing data.aws_route53_zone.example.name to aws_route53_zone.example.name (and likewise for zone_id) should work.
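Putting that together, a corrected version of the ideal example.tf would look roughly like the sketch below (keeping the placeholder X.X.X.X address from the question). Note that private_zone is an argument of the aws_route53_zone data source rather than the resource; a managed zone is public unless a vpc block is attached, so it can simply be omitted here.
# Create the public hosted zone if it does not already exist.
resource "aws_route53_zone" "example" {
  name = "terraform-test.example.com"
}

# Then create the record in the new zone, referencing the managed resource
# directly instead of a data source.
resource "aws_route53_record" "www-terraform-A" {
  name    = aws_route53_zone.example.name
  zone_id = aws_route53_zone.example.zone_id
  type    = "A"
  ttl     = 300
  records = ["X.X.X.X"] # placeholder address from the question
}
With both blocks declared as resources, a single plan/apply after the zone is deleted should propose creating the zone and its records together, since the record's references give it an implicit dependency on the zone.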

Related

Where is a file created via Terraform code stored in Terraform Cloud?

I've been using Terraform for some time, but I'm new to Terraform Cloud. I have a piece of code that, when run locally, creates a .tf file under a folder that I specify, but when I run it with the Terraform CLI on Terraform Cloud this doesn't happen. I'll show it to you so it will be clearer for everyone.
resource "genesyscloud_tf_export" "export" {
directory = "../Folder/"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
So basically, when I run this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, this obviously doesn't happen. Does anyone have a workaround for this kind of problem? How can I manage to store this file, for example, in a GitHub repo when executing GitHub Actions? Thanks in advance.
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
directory = "../Folder"
resource_types = []
include_state_file = false
export_as_hcl = true
log_permission_errors = true
}
data "local_file" "export_config" {
filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
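For instance, a minimal sketch of the output-value option mentioned earlier (the output name is illustrative):
# Expose the generated configuration so it can be read from the Terraform
# Cloud UI or API after the run completes.
output "genesyscloud_export_hcl" {
  value = data.local_file.export_config.content
}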
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.

Terraform : "Error: error deleting S3 Bucket" while trying to destroy EKS Cluster

So I created an EKS cluster using the example given in the Cloudposse eks terraform module.
On top of this, I created an AWS S3 bucket and a DynamoDB table for storing the state file and lock file respectively, and added them to the Terraform backend config.
This is how it looks:
resource "aws_s3_bucket" "terraform_state" {
bucket = "${var.namespace}-${var.name}-terraform-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "${var.namespace}-${var.name}-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "${var.namespace}-${var.name}-terraform-state"
key = "${var.stage}/terraform.tfstate"
region = var.region
# Replace this with your DynamoDB table name!
dynamodb_table = "${var.namespace}-${var.name}-running-locks"
encrypt = true
}
}
Now when I try to delete the EKS cluster using terraform destroy, I get this error:
Error: error deleting S3 Bucket (abc-eks-terraform-state): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
This is the output of terraform plan -destroy after the cluster was partially destroyed because of the S3 error:
Changes to Outputs:
  - dynamodb_table_name             = "abc-eks-running-locks" -> null
  - eks_cluster_security_group_name = "abc-staging-eks-cluster" -> null
  - eks_cluster_version             = "1.19" -> null
  - eks_node_group_role_name        = "abc-staging-eks-workers" -> null
  - private_subnet_cidrs            = [
      - "172.16.0.0/19",
      - "172.16.32.0/19",
    ] -> null
  - public_subnet_cidrs             = [
      - "172.16.96.0/19",
      - "172.16.128.0/19",
    ] -> null
  - s3_bucket_arn                   = "arn:aws:s3:::abc-eks-terraform-state" -> null
  - vpc_cidr                        = "172.16.0.0/16" -> null
I cannot manually delete the tfstate in S3 because that would make Terraform recreate everything. I also tried to remove the S3 resource from the tfstate, but it gives me a lock error (I also tried to forcefully remove the lock and to use -lock=false).
So I wanted to know: is there a way to tell Terraform to delete the S3 bucket at the end, once everything else is deleted? Or is there a way to use the state that lives in S3 locally?
What's the correct approach to deleting an EKS cluster when your Terraform state resides in an S3 backend and you created the S3 bucket and DynamoDB table with the same Terraform configuration?
Generally, it is not recommended to keep your S3 bucket that you use for Terraform's backend state management in the Terraform state itself (for this exact reason). I've seen this explicitly stated in Terraform documentation, but I've been unable to find it in a quick search.
What I would do to solve this issue:
Force unlock the Terraform lock (terraform force-unlock LOCK_ID, where LOCK_ID is shown in the error message it gives you when you try to run a command).
Download the state file from S3 (via the AWS console or CLI).
Create a new S3 bucket (manually, not in Terraform).
Manually upload the state file to the new bucket.
Modify your Terraform backend config to use the new bucket (see the sketch after this list).
Empty the old S3 bucket (via the AWS console or CLI).
Re-run Terraform and allow it to delete the old S3 bucket.
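For step 5, here is a minimal sketch of the updated backend block, assuming the replacement bucket is named abc-eks-terraform-state-new (a placeholder) and the key, region, and DynamoDB table stay the same. Note that backend blocks cannot reference variables, so the values must be literals:
terraform {
  backend "s3" {
    bucket         = "abc-eks-terraform-state-new" # placeholder name for the new bucket
    key            = "staging/terraform.tfstate"   # keep whatever key the old backend used
    region         = "us-east-1"                   # placeholder region
    dynamodb_table = "abc-eks-running-locks"
    encrypt        = true
  }
}
After changing the backend configuration, re-run terraform init (with -reconfigure or -migrate-state as appropriate) so Terraform picks up the new bucket before you retry the destroy.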
Since it's still using the same old state file (just from a different bucket now), it won't re-create everything, and you'll be able to decouple your TF state bucket/file from other resources.
If, for whatever reason, Terraform refuses to force-unlock, you can go into the DynamoDB table via the AWS console and delete the lock manually.

How NOT to create an azurerm_mssql_database_extended_auditing_policy

I'm trying to deploy my infrastructure with Terraform.
I have an MSSQL server and database and am using azurerm 2.32.
While deploying the MSSQL resources I'm getting the following error:
Error: issuing create/update request for SQL Server "itan-mssql-server" Blob Auditing Policies(Resource Group "itan-west-europe-resource-group"): sql.ExtendedServerBlobAuditingPoliciesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="DataSecurityInvalidUserSuppliedParameter" Message="Invalid parameter 'storageEndpoint'. Value should be a blob storage endpoint (e.g. https://MyAccount.blob.core.windows.net)."
I have already tried
defining extended_auditing_policy on database level - failed
defining extended_auditing_policy on server level - failed
defining azurerm_mssql_database_extended_auditing_policy on root level - failed
leaving empty extended_auditing_policy - failed
The root-level definition looks like this (copy-pasted from the Terraform documentation and adjusted to my project):
resource "azurerm_mssql_database_extended_auditing_policy" "db-policy" {
database_id = azurerm_mssql_database.itan-mssql-database.id
storage_endpoint = azurerm_storage_account.itan_storage_account.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.itan_storage_account.primary_access_key
storage_account_access_key_is_secondary = false
retention_in_days = 1
depends_on = [
azurerm_mssql_database.itan-mssql-database,
azurerm_storage_account.itan_storage_account]
}
I'm looking for one of two possible solutions:
totally disabling audits (I don't really need them right now)
fixing the error and enabling auditing
Thanks!
Jarek
This is caused by a breaking change in the SQL Extended Auditing Settings API. Please also check this issue in the Terraform provider.
As a workaround you may try calling an ARM template from Terraform. However, I'm not sure whether under the hood they use the same API or a different one.
The workaround that seems to be working for me is like this:
I followed the tip by ddarwent from GitHub:
https://github.com/terraform-providers/terraform-provider-azurerm/issues/8915#issuecomment-711029508
So basically it's like this:
terraform apply
Go to terraform.tfstate and delete the tainted mssql server
terraform apply
Go to terraform.tfstate and delete the tainted mssql database
terraform apply
Looks like all my stuff is up and working.

Taking over existing Domains (HostedZones) in CloudFormation

I've been setting up CloudFormation templates for some new infrastructure for a project and I've made it to Route 53 Hosted Zones.
Now, ideally I'd like to create a "core-domains" stack with all our hosted zones and base configuration. The thing is, we already created these manually using the AWS console (and they're used for test/live infrastructure). Is there any way to supply the existing "HostedZoneId" as a property to the resource definition and essentially have it introspect what we already have and then apply the diff? (If I've done my job there shouldn't be a diff, so hopefully it would just be a no-op!)
I can't see a "HostedZoneId" property in the docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-hostedzone.html
Any suggestions?
PS. I'm assuming this isn't possible and I'll have to recreate all the HostedZones under CloudFormation but I thought I'd check :)
Found it. It's fine to create new HostedZones; they're issued with new nameservers. Just use the "HostedZoneConfig: Comment" field to note down which hosted zone is which, and then you can switch the nameservers over when you're ready!

Get-AzureRmResourceGroupDeployment lists machines I cannot see in the web interface

I'm tasked with automating the creation of Azure VMs, and naturally I go through a number of more or less broken iterations of trying to deploy a VM image. As part of this, I automatically allocate serial hostnames, but for some strange reason it's not working:
The code in the link above works very well, but the contents of my resource group are not as expected. Every time I deploy (successfully or not), a new entry is created in whatever list is returned by Get-AzureRmResourceGroupDeployment; however, in the Azure web interface I can only see a few of these entries. If, for instance, I omit a parameter for the JSON file, Azure cannot even begin to deploy anything -- but the hostname is somehow reserved anyway.
Where is this list? How can I clean up after broken deployments?
Currently, Get-AzureRmResourceGroupDeployment returns:
azure-w10-tfs13
azure-w10-tfs12
azure-w10-tfs11
azure-w10-tfs10
azure-w10-tfs09
azure-w10-tfs08
azure-w10-tfs07
azure-w10-tfs06
azure-w10-tfs05
azure-w10-tfs02
azure-w7-tfs01
azure-w10-tfs19
azure-w10-tfs1
although the web interface only lists:
azure-w10-tfs12
azure-w10-tfs13
azure-w10-tfs09
azure-w10-tfs05
azure-w10-tfs02
Solved using the code: $siblings = (Get-AzureRmResource).Name | Where-Object { $_ -match "^$hostname\d+$" }
(PS. If you have tips for better tags, please feel free to edit this question!)
If you create a VM in Azure Resource Management mode, it will have a deployment attached to it. In fact if you create any resource at all, it will have a resource deployment attached.
If you delete the resource you will still have the deployment record there, because you still deployed it at some stage. Consider deployments as part of the audit trail of what has happened within the account.
You can delete deployment records with Remove-AzureRmResourceGroupDeployment, but there is very little point, since deployments have no bearing upon the operation of Azure. There is no cost associated with them; they are just historical records.
Querying deployments with Get-AzureRmResourceGroupDeployment will yield the following fields:
DeploymentName
Mode
Outputs
OutputsString
Parameters
ParametersString
ProvisioningState
ResourceGroupName
TemplateLink
TemplateLinkString
Timestamp
So you can know whether the deployment was successful via ProvisioningState, know the templates you used via TemplateLink and TemplateLinkString, check the outputs of the deployment, and so on. This can be useful for figuring out which templates worked and which didn't.
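For example, combining that with Remove-AzureRmResourceGroupDeployment, here is a rough sketch of clearing out the records of failed deployments (the resource group name is a placeholder):
# List the deployment records that never provisioned successfully and remove them.
# "my-resource-group" is a placeholder; substitute your own resource group name.
$failed = Get-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" |
    Where-Object { $_.ProvisioningState -ne "Succeeded" }

foreach ($deployment in $failed) {
    Remove-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" -Name $deployment.DeploymentName
}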
If you want to see actual resources that you are potentially being charged for, you can use Get-AzureRmResource.
If you just want to retrieve a list of the names of VMs that exist within an Azure subscription, you can use
(Get-AzureRmVM).Name