Cannot deploy a new war file to beanstalk via Terraform - deployment

I want to create a new beanstalk environment via Terraform and have it run a specified war file. With my terraform configuration script, I can create the beanstalk environment and I can also upload the war file to an S3 bucket. However, I am unable to deploy this war file to this newly created beanstalk environment.
Here is my TF configuration.
resource "aws_s3_bucket_object" "myjar" {
bucket = "mybucketname"
key = "jars/myapp-1.0.war"
source = "localdir/myapp-1.0.war"
etag = "${md5(file("localdir/myapp-1.0.war"))}"
}
resource "aws_elastic_beanstalk_application_version" "myjarversion" {
application = "MyBeanstalkApplication"
name = "1.0"
bucket = "mybucketname"
key = "jars/myapp-1.0.war"
}
resource "aws_elastic_beanstalk_environment" "tftestenv" {
name = "myapp-tftest"
application = "MyBeanstalkApplication"
solution_stack_name = "64bit Amazon Linux 2017.03 v2.6.2 running Tomcat 8 Java 8"
setting {
namespace = "aws:autoscaling:asg"
name = "MinSize"
value = "1"
}
... # bunch of other beanstalk settings
}
Terraform successfully picks up the local file localdir/myapp-1.0.war, uploads it to S3 under the appropriate bucket and key, and associates the war file as a version of my Beanstalk application (I can see the war listed in the application versions list in the AWS console).
It also creates the myapp-tftest environment for my application, but does not deploy the war file to it.
What am I missing here? Or is it not possible to deploy a version to a beanstalk environment via terraform (which would be disappointing).

I didn't realize there was an option called version_label through which the application version could be specified.
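Roughly, the fix is to point the environment at the application version via version_label; a minimal, untested sketch based on the config above:

resource "aws_elastic_beanstalk_environment" "tftestenv" {
  name                = "myapp-tftest"
  application         = "MyBeanstalkApplication"
  solution_stack_name = "64bit Amazon Linux 2017.03 v2.6.2 running Tomcat 8 Java 8"

  # Deploy the uploaded application version into this environment
  version_label = "${aws_elastic_beanstalk_application_version.myjarversion.name}"

  # ... other settings as before
}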
Closing the issue.

Related

Terraform : "Error: error deleting S3 Bucket" while trying to destroy EKS Cluster

So I created an EKS cluster using the example given in the Cloudposse eks terraform module.
On top of this, I created an AWS S3 bucket and a DynamoDB table for storing the state file and lock file respectively, and added them to the Terraform backend config.
This is how it looks:
resource "aws_s3_bucket" "terraform_state" {
bucket = "${var.namespace}-${var.name}-terraform-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "${var.namespace}-${var.name}-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "${var.namespace}-${var.name}-terraform-state"
key = "${var.stage}/terraform.tfstate"
region = var.region
# Replace this with your DynamoDB table name!
dynamodb_table = "${var.namespace}-${var.name}-running-locks"
encrypt = true
}
}
Now when I try to delete EKS cluster using terraform destroy I get this error:
Error: error deleting S3 Bucket (abc-eks-terraform-state): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
This is the output of terraform plan -destroy after the cluster was partially destroyed because of the S3 error:
Changes to Outputs:
  - dynamodb_table_name             = "abc-eks-running-locks" -> null
  - eks_cluster_security_group_name = "abc-staging-eks-cluster" -> null
  - eks_cluster_version             = "1.19" -> null
  - eks_node_group_role_name        = "abc-staging-eks-workers" -> null
  - private_subnet_cidrs            = [
      - "172.16.0.0/19",
      - "172.16.32.0/19",
    ] -> null
  - public_subnet_cidrs             = [
      - "172.16.96.0/19",
      - "172.16.128.0/19",
    ] -> null
  - s3_bucket_arn                   = "arn:aws:s3:::abc-eks-terraform-state" -> null
  - vpc_cidr                        = "172.16.0.0/16" -> null
I cannot manually delete the tfstate in S3 because that'll make Terraform recreate everything. I also tried to remove the S3 resource from the tfstate, but it gives me a lock error (I also tried to forcefully remove the lock, and to run with -lock=false).
So I wanted to know: is there a way to tell Terraform to delete the S3 bucket at the end, once everything else is deleted? Or is there a way to use the state that lives in S3 locally?
What's the correct approach to deleting an EKS cluster when your TF state resides in an S3 backend and you have created the S3 bucket and DynamoDB table with the same Terraform?
Generally, it is not recommended to keep your S3 bucket that you use for Terraform's backend state management in the Terraform state itself (for this exact reason). I've seen this explicitly stated in Terraform documentation, but I've been unable to find it in a quick search.
What I would do to solve this issue:
1. Force-unlock the Terraform lock (terraform force-unlock LOCK_ID, where LOCK_ID is shown in the error message it gives you when you try to run a command).
2. Download the state file from S3 (via the AWS console or CLI).
3. Create a new S3 bucket (manually, not in Terraform).
4. Manually upload the state file to the new bucket.
5. Modify your Terraform backend config to use the new bucket (see the sketch at the end of this answer).
6. Empty the old S3 bucket (via the AWS console or CLI).
7. Re-run Terraform and allow it to delete the old S3 bucket.
Since it's still using the same old state file (just from a different bucket now), it won't re-create everything, and you'll be able to decouple your TF state bucket/file from other resources.
If, for whatever reason, Terraform refuses to force-unlock, you can go into the DynamoDB table via the AWS console and delete the lock manually.
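For step 5, the updated backend block just points at the new bucket. A minimal sketch, where the bucket name, key, and region are placeholders you would replace with your own values:

terraform {
  backend "s3" {
    # Hypothetical name for the new, manually created bucket
    bucket         = "abc-eks-terraform-state-v2"
    key            = "staging/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "abc-eks-running-locks"
    encrypt        = true
  }
}

After changing it, run terraform init -reconfigure so Terraform picks up the new backend without trying to migrate anything (the state file was already copied by hand), then continue with terraform destroy.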

How can I redeploy a docker-compose stack with terraform?

I use terraform to configure a GCE instance which runs a docker-compose stack. The docker-compose stack references an image with a tag and I would like to be able to rerun docker-compose up when the tag changes, so that a new version of the service can be run.
Currently, I do the following in my terraform files:
provisioner "file" {
source = "training-server/docker-compose.yml"
destination = "/home/curry/docker-compose.yml"
connection {
type = "ssh"
user = "curry"
host = google_compute_address.training-address.address
private_key = file(var.private_key_file)
}
}
provisioner "remote-exec" {
inline = [
"IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d"
]
connection {
type = "ssh"
user = "root"
host = google_compute_address.training-address.address
private_key = file(var.private_key_file)
}
}
but this is wrong for various reasons:
Provisioners are somewhat frowned upon, according to the Terraform documentation
If the image_id changes, this won't be considered a configuration change by Terraform, so it won't re-run the provisioners
What I want is to treat my application stack like a resource, so that when one of its attributes changes, e.g. the image_id, the resource is recreated but the VM instance itself is not.
How can I do that with terraform? Or is there another better approach?
Terraform has a Docker provider, and if you wanted to use Terraform to manage your container stack, that's probably the right tool. But, using it requires essentially translating your Compose file into Terraform syntax.
I'm a little more used to a split where you use Terraform to manage infrastructure – set up EC2 instances and their network setup, for example – but use another tool like Ansible, Chef, or Salt Stack to actually run software on them. Then to update the software (Docker containers) you'd update your configuration management tool's settings to say which version (Docker image tag) you want, and then re-run that.
One trick that may help is to use the null resource which will let you "reprovision the resource" whenever the image ID changes:
resource "null_resource" "docker_compose" {
triggers = {
image_id = "${var.image_id}"
}
provisioner "remote_exec" {
...
}
}
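Putting that together with the provisioners from the question, a rough, untested sketch (reusing the question's connection details) could look like this:

resource "null_resource" "docker_compose" {
  # Changing the image tag changes the trigger, which forces the
  # provisioners below to run again on the next apply
  triggers = {
    image_id = "${var.image_id}"
  }

  provisioner "file" {
    source      = "training-server/docker-compose.yml"
    destination = "/home/curry/docker-compose.yml"

    connection {
      type        = "ssh"
      user        = "curry"
      host        = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }

  provisioner "remote-exec" {
    inline = [
      "IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d"
    ]

    connection {
      type        = "ssh"
      user        = "root"
      host        = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }
}

Note that replacing the null_resource only re-runs the provisioners; the VM instance itself is untouched.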
If you wanted to go down the all-Terraform route, in theory you could write a Terraform configuration like
provider "docker" {
host = "ssh://root#${google_compute_address.training-address.address}"
# (where do its credentials come from?)
}
resource "docker_image" "myapp" {
name = "myapp:${var.image_id}"
}
resource "docker_container" "myapp" {
name = "myapp"
image = "${docker_image.myapp.latest}"
}
but you'd have to translate your entire Docker Compose configuration to this syntax, and set it up so that there's an option for developers to run it locally, and replicate Compose features like the default network, and so on. I don't feel like this is generally done in practice.

Pass output (database password) from Terraform to Kubernetes manifest in CICD pipeline

I am using Terraform to provision resources in Azure, one of which is a Postgres database. My Terraform module includes the following to generate a random password and output it to the console.
resource "random_string" "db_master_pass" {
length = 40
special = true
min_special = 5
override_special = "!-_"
keepers = {
pass_version = 1
}
}
# For postgres
output "db_master_pass" {
value = "${module.postgres.db_master_pass}"
}
I am using Kubernetes deployment manifest to deploy the application to Azure managed Kubernetes service. Is there a way of passing the database password to Kubernetes in the deployment pipeline? I am using CircleCI for CICD. Currently, I'm copying the password, encoding it to base64 and pasting it to the secrets manifest before running the deployment.
One solution is to generate the Kubernetes yaml from a template.
The pattern uses the templatefile function in Terraform 0.12 (or the template provider in earlier versions) to read the template, and the local_file resource to write the result. For example:
data "template_file" "service_template" {
template = "${file("${path.module}/templates/service.tpl")}"
vars {
postgres_password = ""${module.postgres.db_master_pass}"
}
}
resource "local_file" "template" {
content = "${data.template_file.service_template.rendered}"
filename = "postegres_service.yaml"
}
There are many other options, like using the Kubernetes provider, but I think this better matches your question.
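For reference, that alternative could look roughly like this, assuming a kubernetes provider already configured against your AKS cluster (the secret and key names here are made up):

resource "kubernetes_secret" "db_credentials" {
  metadata {
    name = "db-credentials"
  }

  # The provider base64-encodes these values when creating the secret
  data = {
    postgres_password = "${module.postgres.db_master_pass}"
  }
}

With that, the manual base64 step goes away and the deployment manifest just references the secret by name.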

How do I set environment properties in AWS codestar?

I created a Spring project in AWS CodeStar.
I would like to pass environment properties to my application (e.g. DATA_SOURCE_URL). I can do it in Elastic Beanstalk under "Configuration" -> "Software" -> "Modify" by adding the properties there, but whenever a new deployment is triggered this configuration gets reset.
I was wondering what the right way to set environment properties is when using AWS CodeStar.
As it may help other people searching for a solution:
I finally got it to work by using the Saved Configuration feature in Beanstalk and referencing it from the CloudFormation template.yml: EBConfigurationTemplate (from the template.yml auto-generated by CodeStar)
EBConfigurationTemplate:
  [...]
  SourceConfiguration:
    ApplicationName: !Ref 'EBApplication'
    TemplateName: "Saved Configuration Name"
After that, my Django application was able to read os.environ['ENV_VAR_NAME'], and django.config was able to connect to an RDS instance (not managed by Beanstalk) to run the migration as a container_command.

Terraform with Google Container Engine (Kubernetes): Error executing access token command "...\gcloud.cmd"

I'm trying to deploy a module (Docker image) to Google Container Engine. This is what I have in my Terraform config file:
terraform.tf
# Google Cloud provider
provider "google" {
  credentials = "${file("google_credentials.json")}"
  project     = "${var.google_project_id}"
  region      = "${var.google_region}"
}

# Google Container Engine (Kubernetes) cluster resource
resource "google_container_cluster" "secureskye" {
  name               = "secureskye"
  zone               = "${var.google_kubernetes_zone}"
  additional_zones   = "${var.google_kubernetes_additional_zones}"
  initial_node_count = 2
}

# Kubernetes provider
provider "kubernetes" {
  host     = "${google_container_cluster.secureskye.endpoint}"
  username = "${var.google_kubernetes_username}"
  password = "${var.google_kubernetes_password}"

  client_certificate     = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.secureskye.master_auth.0.cluster_ca_certificate)}"
}

# Module UI
module "ui" {
  source = "./modules/ui"
}
My problem is that google_container_cluster is created successfully, but it fails on the ui module creation (which contains two resources, kubernetes_service and kubernetes_pod) with this error:
* kubernetes_pod.ui: Post https://<ip>/api/v1/namespaces/default/pods: error executing access token command "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": err=exec: "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd": file does not exist output=
So, my questions are:
1. Do I need gcloud + kubectl installed? google_container_cluster was created successfully before I had either gcloud or kubectl installed.
2. I want to use credentials, project, and region that are independent of and separate from the ones configured in the gcloud and kubectl CLIs. Am I doing this right?
I have been able to reproduce your scenario by running the Terraform config file you provided (except for the Module UI part) on a Linux machine, so your issue is most likely related to that last part of the code.
Regarding your questions:
1. I am not sure, because I tried from Google Cloud Shell, where both gcloud and kubectl are preinstalled, although I would recommend installing them just to make sure that is not the issue here.
2. For the credentials part, I added two new variables to the variables.tf Terraform configuration file, as in this example (those credentials do not need to be the same as the ones used by gcloud or kubectl). Use whatever credentials you prefer here:
variable "google_kubernetes_username" {
default = "<YOUR_USERNAME>"
}
variable "google_kubernetes_password" {
default = "<YOUR_PASSWORD>"
}
Maybe you could share more information about what is inside your Module UI, in order to understand which file does not exist. I guess you are deploying from a Windows machine, judging by the path notation in your files, but that should not be an important issue.
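One more thing that might be worth trying, if the goal is not to depend on a local gcloud/kubectl at all: the 1.x Kubernetes provider can be told not to read the local kubeconfig, so it only uses the credentials you pass in explicitly instead of shelling out to gcloud.cmd. A hedged sketch, assuming a 1.x provider where this argument exists:

provider "kubernetes" {
  # Assumption: kubernetes provider 1.x, which supports load_config_file
  load_config_file = false

  host     = "${google_container_cluster.secureskye.endpoint}"
  username = "${var.google_kubernetes_username}"
  password = "${var.google_kubernetes_password}"

  client_certificate     = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.secureskye.master_auth.0.cluster_ca_certificate)}"
}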