Retrieve Auto Scaling group instance IPs and provide them to Ansible - MongoDB

I'm currently developing a Terraform script and Ansible roles in order to install MongoDB with replication. I'm using an Auto Scaling group, and I need to pass the EC2 instance private IPs to Ansible as extra vars. Is there any way to do that?
Also, when it comes to rs.initiate(), is there any way to add the EC2 private IPs to the Mongo replica set while Terraform is creating the instances?

I'm not really sure how it's done with ASGs; a combination of user data and EC2 metadata would probably be helpful there. But below is how I do it when there is a fixed number of nodes. Posting this answer as it may be helpful to someone.
Use the EC2 dynamic inventory script.
Ref - https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html
This is basically a Python script, ec2.py, which gets instance details such as the private IP using tags. It comes with a config file named ec2.ini.
Tag your instance in the Terraform script (here a role tag is added) -
resource "aws_instance" "ec2" {
....
tags = "${merge(var.tags, map(
"description","mongodb-node",
"role", "mongodb-node",
"Environment", "${local.env}",))}"
}
output "ip" {
value = ["${aws_instance.ec2.private_ip}"]
}
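For the ASG case the question actually asks about, the same role tag could presumably be propagated at launch so that every instance the group creates is picked up by ec2.py in the same tag_role_* group. This is only a rough sketch; the launch configuration and subnet variable names are assumptions, not part of the original setup:
resource "aws_autoscaling_group" "mongodb" {
  name                 = "mongodb-asg"
  min_size             = 3
  max_size             = 3
  launch_configuration = "${aws_launch_configuration.mongodb.name}"   # assumed launch configuration name
  vpc_zone_identifier  = ["${var.subnet_id}"]                         # assumed subnet variable

  # Propagate the same role tag to every instance the ASG launches,
  # so ec2.py groups them just like the standalone instance above.
  tag {
    key                 = "role"
    value               = "mongodb-node"
    propagate_at_launch = true
  }
}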
Get the instance private IP in the playbook -
- hosts: localhost
  connection: local
  tasks:
    - debug: msg="MongoDB Node IP is - {{ hostvars[groups['tag_role_mongodb-node'][0]].inventory_hostname }}"
Now run the playbook using a Terraform null_resource -
resource "null_resource" "ansible_run" {
  triggers {
    ansible_file = "${sha1(file("${path.module}/${var.ansible_play}"))}"
  }

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./ec2.py --private-key ${var.private_key} ${var.ansible_play}"
  }
}
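If you specifically want the private IP handed to Ansible as an extra var (as asked in the question), the local-exec command inside the same null_resource can also pass it explicitly. A sketch only; the mongodb_node_ip variable name is just an example:
  provisioner "local-exec" {
    # Pass the instance private IP to Ansible as an extra var in addition to the dynamic inventory
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./ec2.py --private-key ${var.private_key} --extra-vars 'mongodb_node_ip=${aws_instance.ec2.private_ip}' ${var.ansible_play}"
  }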
You have to make sure the AWS-related environment variables are present/exported so Ansible can fetch the EC2 metadata. Also make sure ec2.py is executable.
If you want to get the private IP, change the following config in ec2.ini -
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address


Private docker.io registry in microk8s

I have an issue with microk8s hitting the rate limit for the docker.io registry:
ctr: failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/calico/kube-controllers/manifests/sha256:bf58609ff39089533b80ff2a10fffd1302346f153c66e24d0572fb8b198daea1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
I wanted to configure private repository authorization for docker.io. I've followed the instructions for that, but it looks like it's not working with the docker.io registry.
I've modified the configuration file
/var/snap/microk8s/current/args/containerd-template.toml
with the following content:
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."docker.io".auth]
username = ""
password = ""
auth = ""
email = ""
However, it looks like this is not working for the docker.io registry.
I'm aware of this solution; however, if I recall correctly it needs to be applied to every namespace separately. I'm looking for a one-shot solution for the whole Kubernetes cluster.
Is there such a solution, or are Kubernetes secrets the only way to go?

How can I redeploy a docker-compose stack with terraform?

I use terraform to configure a GCE instance which runs a docker-compose stack. The docker-compose stack references an image with a tag and I would like to be able to rerun docker-compose up when the tag changes, so that a new version of the service can be run.
Currently, I do the following in my terraform files:
provisioner "file" {
source = "training-server/docker-compose.yml"
destination = "/home/curry/docker-compose.yml"
connection {
type = "ssh"
user = "curry"
host = google_compute_address.training-address.address
private_key = file(var.private_key_file)
}
}
provisioner "remote-exec" {
inline = [
"IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d"
]
connection {
type = "ssh"
user = "root"
host = google_compute_address.training-address.address
private_key = file(var.private_key_file)
}
}
but this is wrong for various reasons:
Provisioners are somewhat frowned upon according to the Terraform documentation.
If the image_id changes, this won't be considered a change in configuration by Terraform, so it won't re-run the provisioners.
What I want is to treat my application stack like a resource, so that when one of its attributes changes, e.g. the image_id, the resource is recreated but the VM instance itself is not.
How can I do that with Terraform? Or is there another, better approach?
Terraform has a Docker provider, and if you wanted to use Terraform to manage your container stack, that's probably the right tool. But using it requires essentially translating your Compose file into Terraform syntax.
I'm a little more used to a split where you use Terraform to manage infrastructure – set up EC2 instances and their network setup, for example – but use another tool like Ansible, Chef, or Salt Stack to actually run software on them. Then to update the software (Docker containers) you'd update your configuration management tool's settings to say which version (Docker image tag) you want, and then re-run that.
One trick that may help is to use the null_resource, which will let you "reprovision the resource" whenever the image ID changes:
resource "null_resource" "docker_compose" {
triggers = {
image_id = "${var.image_id}"
}
provisioner "remote_exec" {
...
}
}
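For completeness, here is a filled-in version of that sketch, reusing the remote-exec command and connection details from the question (the user, key variable and compose file path are carried over and may need adjusting):
resource "null_resource" "docker_compose" {
  # Re-run the provisioner whenever the image tag changes
  triggers = {
    image_id = "${var.image_id}"
  }

  provisioner "remote-exec" {
    inline = [
      "IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d",
    ]

    connection {
      type        = "ssh"
      user        = "root"
      host        = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }
}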
If you wanted to go down the all-Terraform route, in theory you could write a Terraform configuration like
provider "docker" {
host = "ssh://root#${google_compute_address.training-address.address}"
# (where do its credentials come from?)
}
resource "docker_image" "myapp" {
name = "myapp:${var.image_id}"
}
resource "docker_container" "myapp" {
name = "myapp"
image = "${docker_image.myapp.latest}"
}
but you'd have to translate your entire Docker Compose configuration to this syntax, and set it up so that there's an option for developers to run it locally, and replicate Compose features like the default network, and so on. I don't feel like this is generally done in practice.

Pass output (database password) from Terraform to Kubernetes manifest in CICD pipeline

I am using Terraform to provision resources in Azure, one of which is a Postgres database. My Terraform module includes the following to generate a random password and output it to the console.
resource "random_string" "db_master_pass" {
length = 40
special = true
min_special = 5
override_special = "!-_"
keepers = {
pass_version = 1
}
}
# For postgres
output "db_master_pass" {
value = "${module.postgres.db_master_pass}"
}
I am using a Kubernetes deployment manifest to deploy the application to the Azure managed Kubernetes service. Is there a way of passing the database password to Kubernetes in the deployment pipeline? I am using CircleCI for CI/CD. Currently, I'm copying the password, encoding it to base64 and pasting it into the secrets manifest before running the deployment.
One solution is to generate the Kubernetes YAML from a template.
The pattern uses the templatefile function in Terraform 0.12 (or the template provider in earlier versions) to render the template, and a local_file resource to write it out. For example:
data "template_file" "service_template" {
template = "${file("${path.module}/templates/service.tpl")}"
vars {
postgres_password = ""${module.postgres.db_master_pass}"
}
}
resource "local_file" "template" {
content = "${data.template_file.service_template.rendered}"
filename = "postegres_service.yaml"
}
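With Terraform 0.12 the same thing can be done without the template provider, using the built-in templatefile function. A sketch, keeping the template path and output filename from above; the base64encode call is only an assumption for the case where the template is a Secret manifest that expects base64-encoded values:
resource "local_file" "service_manifest" {
  content = templatefile("${path.module}/templates/service.tpl", {
    # Kubernetes Secret manifests expect base64-encoded values
    postgres_password = base64encode(module.postgres.db_master_pass)
  })

  filename = "postgres_service.yaml"
}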
There are many other options, like using the Kubernetes provider, but I think this better matches your question.
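For reference, the Kubernetes provider route could look roughly like the following; a sketch only, with an arbitrary secret name (the provider base64-encodes the data values for you):
resource "kubernetes_secret" "db_master_pass" {
  metadata {
    name = "db-master-pass"
  }

  # Plain value here; the provider handles the base64 encoding
  data = {
    password = module.postgres.db_master_pass
  }
}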

Terraform with Google Container Engine (Kubernetes): Error executing access token command "...\gcloud.cmd"

I'm trying to deploy a module (Docker image) to Google Container Engine. This is what I have in my Terraform config file:
terraform.tf
# Google Cloud provider
provider "google" {
  credentials = "${file("google_credentials.json")}"
  project     = "${var.google_project_id}"
  region      = "${var.google_region}"
}

# Google Container Engine (Kubernetes) cluster resource
resource "google_container_cluster" "secureskye" {
  name               = "secureskye"
  zone               = "${var.google_kubernetes_zone}"
  additional_zones   = "${var.google_kubernetes_additional_zones}"
  initial_node_count = 2
}

# Kubernetes provider
provider "kubernetes" {
  host                   = "${google_container_cluster.secureskye.endpoint}"
  username               = "${var.google_kubernetes_username}"
  password               = "${var.google_kubernetes_password}"
  client_certificate     = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_certificate)}"
  client_key             = "${base64decode(google_container_cluster.secureskye.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.secureskye.master_auth.0.cluster_ca_certificate)}"
}

# Module UI
module "ui" {
  source = "./modules/ui"
}
My problem is: google_container_cluster was created successfully, but it fails on the module ui creation (which contains two resources, kubernetes_service and kubernetes_pod) with the error
* kubernetes_pod.ui: Post https://<ip>/api/v1/namespaces/default/pods: error executing access token command "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": err=exec: "<user_path>\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd": file does not exist output=
So, questions:
1. Do I need gcloud + kubectl installed? google_container_cluster was created successfully before I had either gcloud or kubectl installed.
2. I want to use credentials, project, and region that are independent of the ones used by the gcloud and kubectl CLIs. Am I doing this right?
I have been able to reproduce your scenario by running the Terraform config file you provided (except the Module UI part) on a Linux machine, so your issue should be related to that last part of the code.
Regarding your questions:
I am not sure, because I tried from Google Cloud Shell, where both gcloud and kubectl are already preinstalled, although I would recommend you install them just to make sure that is not the issue here.
For the credentials part, I added two new variables to the variables.tf Terraform configuration file, as in this example (those credentials do not need to be the same as the ones in gcloud or kubectl). Use your preferred credentials in this case:
variable "google_kubernetes_username" {
default = "<YOUR_USERNAME>"
}
variable "google_kubernetes_password" {
default = "<YOUR_PASSWORD>"
}
Maybe you could share more information regarding what can be found in your Module UI, in order to understand which file does not exist. I guess you are running the deployment from a Windows machine, judging by the notation in the paths to your files, but that should not be an important issue.

strongloop slc deploy env var complications

I've been deploying a loopback app via a custom init.d/app.conf script, using slc run --detach --cluster "cpu", but want to move to using strong-pm, as recommended.
But I've come across some limitations and am looking for any guidance on how to replicate the setup with which I'm currently familiar.
Currently I set app-specific configuration inside server/config.local.js and server/datasources.local.js, most importantly the PORT at which the app should listen for connections. This works perfectly using slc run for local development and remote deployment for staging; all I do is set different env vars for each distinct app:
datasources.local.js:
module.exports = {
  "mysqlDS": {
    name: "mysqlDS",
    connector: "mysql",
    host: process.env.PROTEUS_MYSQL_HOST,
    port: process.env.PROTEUS_MYSQL_PORT,
    database: process.env.PROTEUS_MYSQL_DB,
    username: process.env.PROTEUS_MYSQL_USER,
    password: process.env.PROTEUS_MYSQL_PW
  }
}
config.local.js:
module.exports = {
  port: process.env.PROTEUS_API_PORT
}
When I deploy using strong-pm, I am not able to control this port, and it always gets set to 3000+N, where N is just incremented based on the service ID assigned to the app when it's deployed.
So even when I deploy and then set env using
slc ctl -C http://localhost:8701 env-set proteus-demo PROTEUS_API_PORT=3033 PROTEUS_DB=demo APP_DOMAIN=demo.domain.com
I see that strong-pm completely ignores PROTEUS_API_PORT when it redeploys with the new env vars:
ENV has changed, restarting
Service "1" listening on 0.0.0.0:3001
Restarting next commit Runner: commit 1/deploy/default/demo-deploy
Start Runner: commit 1/deploy/default/demo-deploy
Request (status) of current Runner: child 20066 commit 1/deploy/default/demo-deploy
Request {"cmd":"status"} of Runner: child 20066 commit 1/deploy/default/demo-deploy
3001! Not 3033 like I want and specified in config.local.js. Is there a way to control this explicitly? I do not want to have to run an slc inspection command to determine the port for my nginx upstream block each time I deploy an app. It would be awesome to be able to specify the listen PORT by service name, too.
FWIW, this is on an aws instance that will host demo and staging apps pointing to separate DBs and on different PORTs.
strong-pm only sets a PORT environment variable, which the app is responsible for honouring.
Based on loopback-boot/lib/executor:109, it appears that loopback actually prefers the PORT environment variable over the value in the config file. In that case it seems your best bet is to either:
pass a port in to app.listen() yourself, or
set one of the higher-priority environment variables such as npm_config_port (which would normally be set via npm start --port 1234).