Refer to a resource created by a Terraform module (Kubernetes)

I've got a question about Terraform and referencing resources. Long story short: I have a module to create an AKS cluster (attached), and I create the cluster from one folder. In another folder I have another module to manage Kubernetes itself: creating namespaces, deployments, etc. How can I refer to this cluster from the other folder?

As Marko mentioned, you would use outputs within the same root module; however, if you are applying one plan and then the other, you will likely need to use data sources.
Typically in my root modules I have a data-source.tf file with any pre-existing resources that I need to reference in the root module.
data "kubernetes_service" "example" {
metadata {
name = "terraform-example"
}
}
With the above data source defined, you can use it like this in your root module: data.kubernetes_service.example.attribute-you-want-to-retrieve
Here's a good reference: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/service
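If the cluster itself was created with the azurerm provider, the same pattern applies in the second folder: look the cluster up with a data source and feed its credentials to the kubernetes provider. A minimal sketch, assuming the azurerm provider and illustrative names:
data "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks-cluster"
  resource_group_name = "my-resource-group"
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}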

Related

How to cache resources that haven't changed rather than rebuild or delete?

I have a Pulumi repository set up for an AWS project such that I have a directory of services:
index.ts
services/
  user-service/
  recommendation-service/
  chat-service/
  convert-service/
Each service has its own Dockerfile and application code (i.e. a Node or Go microservice).
There is a Pulumi script in the root index.ts that currently scans the services directory for directories whose names match the pattern *-service.
For each service directory, a Fargate-type ECS service is created.
These services are then added to their own target groups and attached to an Application Load Balancer using an ALB listener with path-based routing conditions, so that:
/user/* -> user service
/recommendation/* -> recommendation service
/chat/* -> chat service
...etc
This is all working fine and dandy!!
The only issue is that I wish to build a Git pipeline with incremental builds. Meaning: if there is no diff to user-service, I do not want to build the Docker image or have Pulumi calculate a diff of AWS resources; I want to skip all that without deleting the resource. It would be simple enough to check whether the files have been modified, either by using Git to see what has changed since the last commit or by using a checksum.
I can do that, but currently Pulumi will delete those resources if they are skipped in the "pulumi up" script.
I would like to do this without creating a separate stack for each service, as it is convenient to reproduce the entire environment by creating a single new stack for all resources.
I want those resources to stay as they were if there is no change, without Pulumi having to recreate all of them.

Terraform remote backend using Postgres

I am planning to use Postgres as the remote backend instead of S3, as the enterprise standard.
terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/schema_name"
  }
}
When we use the Postgres remote backend and run terraform init, we have to provide a schema specific to that Terraform folder, as the backend supports only one table and a new record is created per workspace name.
I am stuck now: I have 50 projects, each with 2 tiers maintained in different folders, so we would need to create 100 schemas in Postgres. It is also difficult to handle so many schemas in automated provisioning.
Can we handle this similarly to S3, where we have one bucket for all projects and multiple entries in the same bucket with different keys specified in each Terraform script? Can we use a single schema for all projects and multiple tables/records based on a key provided in the backend configuration of each Terraform folder?
You can use a single database, and the pg backend will automatically create the specified schema.
Something like this:
terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/terraform_backend"
    schema   = "fooapp"
  }
}
This keeps the projects unique, at least. You could append a tier to that, too, or use Terraform Workspaces.
If you specify the config on the command line (aka partial configuration), as the backend documentation recommends, it might be easier to set this dynamically for your use case:
terraform init \
  -backend-config="conn_str=postgres://user:pass@db.example.com/terraform_backend" \
  -backend-config="schema=fooapp-prod"
This works pretty well in my scenario, which is similar to yours. Each project has a unique schema in a shared database, and no tasks beyond the initial creation/configuration of the database are needed - the backend creates the schema as specified.
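Note that with partial configuration each folder still has to declare the backend block, just without the values; a minimal sketch of what stays in the code:
terraform {
  backend "pg" {}
}
The conn_str and schema then come entirely from the -backend-config flags passed to terraform init.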

How to convert/migrate existing google cloud platform infrastructure to terraform or other IaC

Currently we have our kubernetes cluster master set to zonal, and require it to be regional. My idea is to convert the existing cluster and all workloads/nodes/resources to some infrastructure-as-code - preferably terraform (but could be as simple as a set of gcloud commands).
I know with GCP I can generate raw command lines for commands I'm about to run, but I don't know how (or if I even can) to convert existing infrastructure to the same.
Based on my research, it looks like it isn't exactly possible to do what I'm trying to do [in a straight-forward fashion]. So I'm looking for any advice, even if it's just to read some other documentation (for a tool I'm not familiar with maybe).
TL;DR: I'm looking to take my existing Google Cloud Platform Kubernetes cluster and rebuild it in order to change the location type from zonal to regional - I don't actually care how this is done. What is a currently accepted best-practice way of doing this? If there isn't one, what is a quick and dirty way of doing this?
If you require me to specify further, I will - I have intentionally left out linking to specific research I've done.
Creating a Kubernetes cluster with Terraform is very straightforward, because ultimately making a Kubernetes cluster in GKE is straightforward: you'd just use the google_container_cluster and google_container_node_pool resources, like so:
resource "google_container_cluster" "primary" {
name = "${var.name}"
region = "${var.region}"
project = "${var.project_id}"
min_master_version = "${var.version}"
addons_config {
kubernetes_dashboard {
disabled = true
}
}
maintenance_policy {
daily_maintenance_window {
start_time = "03:00"
}
}
lifecycle {
ignore_changes = ["node_pool"]
}
node_pool {
name = "default-pool"
}
}
resource "google_container_node_pool" "default" {
name = "default"
project = "${var.project_id}"
region = "${var.region}"
cluster = "${google_container_cluster.primary.name}"
autoscaling {
min_node_count = "${var.node_pool_min_size}"
max_node_count = "${var.node_pool_max_size}"
}
management {
auto_repair = "${var.node_auto_repair}"
auto_upgrade = "${var.node_auto_upgrade}"
}
lifecycle {
ignore_changes = ["initial_node_count"]
}
node_config {
machine_type = "${var.node_machine_type}"
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform",
]
}
depends_on = ["google_container_cluster.primary"]
}
For a more fully featured experience, there are terraform modules available like this one
Converting an existing cluster is considerably more fraught. If you want to use terraform import, the command looks like this:
terraform import google_container_cluster.mycluster us-east1-a/my-cluster
However, in your comment, you mentioned wanting to convert a zonal cluster to a regional cluster. Unfortunately, that's not possible at this time:
You decide whether your cluster is zonal or regional when you create it. You cannot convert an existing zonal cluster to regional, or vice versa.
Your best bet, in my opinion, is to:
Create a regional cluster with terraform, giving the cluster a new name
Backup your existing zonal cluster, either using an etcd backup, or a more sophisticated backup using heptio-ark
Restore that backup to your regional cluster
I wanted to achieve exactly that: Take existing cloud infrastructure and bring it to infrastructure as code (IaC), i.e. put it in *.tf files
There were basically 2 options that I found and took into consideration:
terraform import (Documentation)
Because of the following limitation, terraform import did not achieve exactly what I was looking for, since it requires manually creating the resource configuration:
The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to write manually a resource configuration block for the resource, to which the imported object will be mapped.
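In practice that means hand-writing at least a skeleton block for the import target before running the import command shown earlier. A sketch (the name is illustrative, and the exact argument names vary with the google provider version):
resource "google_container_cluster" "mycluster" {
  # Skeleton only: terraform import maps the existing cluster onto this
  # address; afterwards you fill in arguments until `terraform plan` is clean.
  name = "my-cluster"
  zone = "us-east1-a"
}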
Terraformer (GitHub Repo)
A CLI tool that generates tf/json and tfstate files based on existing infrastructure (reverse Terraform).
This tool is provider-agnostic and follows the same flow as Terraform, i.e. plan and import. It was able to import specific resources or entire workspaces and convert them into *.tf files.

Copying directories into minikube and persisting them

I am trying to copy some directories into the minikube VM to be used by some of the pods that are running. These include API credential files and template files used at run time by the application. I have found you can copy files using scp into the /home/docker/ directory; however, these files are not persisted across reboots of the VM. I have read that files/directories are persisted if stored in the /data/ directory on the VM (among others), but I get permission denied when trying to copy files to these directories.
Are there:
A: Any directories in minikube that will persist data that aren't protected in this way
B: Any other ways of doing the above without running into this issue (could well be going about this the wrong way)
To clarify, I have already been able to mount the files from /home/docker/ into the pods using volumes, so it's just the persisting data I'm unclear about.
Kubernetes has dedicated object types for these sorts of things. API credential files you might store in a Secret, and template files (if they aren't already built into your Docker image) could go into a ConfigMap. Both of them can either get translated to environment variables or mounted as artificial volumes in running containers.
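Since the rest of this collection leans on Terraform, here is a rough sketch of what those two objects could look like using the hashicorp/kubernetes provider; the names and file paths are purely illustrative, and plain YAML manifests applied with kubectl work just as well:
resource "kubernetes_secret" "api_credentials" {
  metadata {
    name = "api-credentials" # illustrative name
  }

  data = {
    # The provider base64-encodes secret data for you.
    "credentials.json" = file("${path.module}/credentials.json")
  }
}

resource "kubernetes_config_map" "templates" {
  metadata {
    name = "app-templates" # illustrative name
  }

  data = {
    "report.tmpl" = file("${path.module}/templates/report.tmpl")
  }
}
Both can then be mounted into pods as volumes or exposed as environment variables, exactly as described above.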
In my experience, trying to store data directly on a node isn't a good practice. It's common enough to have multiple nodes, to not directly have login access to those nodes, and for them to be created and destroyed outside of your direct control (imagine an autoscaler running on a cloud provider that creates a new node when all of the existing nodes are 90% scheduled). There's a good chance your data won't (or can't) be on the host where you expect it.
This does lead to a proliferation of Kubernetes objects and associated resources, and you might find a Helm chart to be a good resource to tie them together. You can check the chart into source control along with your application, and deploy the whole thing in one shot. While it has a couple of useful features beyond just packaging resources together (a deploy-time configuration system, a templating language for the Kubernetes YAML itself) you can ignore these if you don't need them and just write a bunch of YAML files and a small control file.
For minikube, data kept in the $HOME/.minikube/files directory on the host is copied by minikube into the / directory of the VM.

Using Ansible to automatically configure AWS autoscaling group instances

I'm using Amazon Web Services to create an autoscaling group of application instances behind an Elastic Load Balancer. I'm using a CloudFormation template to create the autoscaling group + load balancer and have been using Ansible to configure other instances.
I'm having trouble wrapping my head around how to design things such that when new autoscaling instances come up, they can automatically be provisioned by Ansible (that is, without me needing to find out the new instance's hostname and run Ansible for it). I've looked into Ansible's ansible-pull feature but I'm not quite sure I understand how to use it. It requires a central git repository which it pulls from, but how do you deal with sensitive information which you wouldn't want to commit?
Also, the current way I'm using Ansible with AWS is to create the stack using a CloudFormation template, then get the hostnames as output from the stack and generate a hosts file for Ansible to use. This doesn't feel quite right – is there a "best practice" for this?
Yes, another way is simply to run your playbooks locally once the instance starts. For example, you can create an EC2 AMI for your deployment that, in its rc.local file (Linux), calls ansible-playbook -i <inventory-only-with-localhost-file> <your-playbook>.yml. rc.local is almost the last script run at startup.
You could just store that sensitive information in your EC2 AMI, but this is a very broad topic and really depends on what kind of sensitive information it is. (You can also use private Git repositories to store sensitive data.)
If, for example, your playbooks get updated regularly, you can create a cron entry in your AMI that runs every so often and re-runs your playbook to make sure your instance configuration is always up to date, thus avoiding having to "push" from a remote workstation.
This is just one approach; there could be many others, and it depends on what kind of service you are running, what kind of data you are using, etc.
I don't think you should use Ansible to configure new auto-scaled instances. Instead, use Ansible to configure a new image, from which you will create an AMI (Amazon Machine Image), and have AWS autoscaling launch from that instead.
On top of this, you should also use Ansible to easily update your existing running instances whenever you change your playbook.
Alternatives
There are a few ways to do this. First, I wanted to cover some alternative ways.
One option is to use Ansible Tower. This creates a dependency though: your Ansible Tower server needs to be up and running at the time autoscaling or similar happens.
The other option is to use something like packer.io and build fully-functioning server AMIs. You can install all your code into these using Ansible. This doesn't have any non-AWS dependencies, and has the advantage that it means servers start up fast. Generally speaking building AMIs is the recommended approach for autoscaling.
Ansible Config in S3 Buckets
The alternative route is a bit more complex, but has worked well for us when running a large site (millions of users). It's "serverless" and only depends on AWS services. It also supports multiple Availability Zones well, and doesn't depend on running any central server.
I've put together a GitHub repo that contains a fully-working example with Cloudformation. I also put together a presentation for the London Ansible meetup.
Overall, it works as follows:
Create S3 buckets for storing the pieces that you're going to need to bootstrap your servers.
Save your Ansible playbook and roles etc in one of those S3 buckets.
Have your Autoscaling process run a small shell script. This script fetches things from your S3 buckets and uses them to "bootstrap" Ansible.
Ansible then does everything else.
All secret values such as Database passwords are stored in CloudFormation Parameter values. The 'bootstrap' shell script copies these into an Ansible fact file.
So that you're not dependent on external services being up, you also need to save any build dependencies (e.g. any .deb files, package install files or similar) in an S3 bucket. You want this because you don't want to require ansible.com or similar to be up and running for your Autoscale bootstrap script to be able to run. Generally speaking, I've tried to depend only on Amazon services like S3.
In our case, we then also use AWS CodeDeploy to actually install the Rails application itself.
The key bits of the config relating to the above are:
S3 Bucket Creation
Script that copies things to S3
Script to copy Bootstrap Ansible. This is the core of the process. This also writes the Ansible fact files based on the CloudFormation parameters.
Use the Facts in the template.