How to manage multiple Kubernetes namespaces in a Terraform project - kubernetes

I've been using terraform for a while now and I have deployed everything into separate clusters. Now due to cost, we'd like to merge the different clusters into one cluster and use kubernetes namespaces.
My desired outcome would be that I could call terraform apply -var="kubernetes_namespace=my-namespace" and it would create the namespace, which could live alongside my other namespaces. However, due to how terraform remote state is managed, any new deployment overwrites the old one and I can't have co-existing namespaces.
When I try to redeploy another namespace I get
namespace = "deploy-pr-image" -> "test-second-branch-pr" # forces replacement
I can see why because it's writing everything to a single workspace file.
terraform {
  backend "s3" {
    bucket               = "my-tf-state"
    key                  = "terraform-services.tfstate"
    region               = "us-west-1"
    workspace_key_prefix = "workspaces"
    #dynamodb_table      = "terraform-state-lock-dynamo"
  }
}
Is there some way to use the workspace/namespace combination to keep terraform from overwriting my other namespaces?

Since you'll now be merging all your clusters into a single one, it makes sense to have a single backend for managing the state of that cluster, rather than multiple backends per Kubernetes namespace.
I suggest updating your module or root deployment to be flexible enough to create any number of Kubernetes namespace resources, rather than a single one, using count or for_each.
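As a sketch of that approach (the variable name namespaces and its default values are assumptions for illustration, and for_each over a set requires Terraform 0.12.6 or later), a single configuration can manage several namespaces side by side:

```hcl
variable "namespaces" {
  type    = set(string)
  default = ["deploy-pr-image", "test-second-branch-pr"]
}

resource "kubernetes_namespace" "this" {
  # each namespace in the set becomes its own tracked resource instance,
  # so adding a new entry never forces replacement of the existing ones
  for_each = var.namespaces

  metadata {
    name = each.value
  }
}
```

Adding a namespace to the set and running terraform apply then creates it alongside the existing ones, instead of the plan showing a forced replacement.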

Related

Add existing GKE cluster to terraform state file

Let's assume I have an existing GKE cluster that contains all my applications. They were all deployed using different methods. Now I want to deploy some resources to that cluster using Terraform. The trouble here is that terraform doesn't see it in its state file, so it can't interact with it. Another problem is that even if I get that cluster into my state file, terraform doesn't see all of the resources created in that cluster. This could lead to some conflicts, e.g. trying to deploy two resources with the same name. Is there a way to solve this problem, or do I just have to deal with the reality of my existence and create a new cluster for every new project that I deploy with terraform?
You can use the terraform import command to import your existing GKE cluster into the terraform state. Before running it, you need to have an adequate terraform configuration for your cluster.
Example of an import command:
terraform import google_container_cluster.<TF_RESOURCE_NAME> projects/<PROJECT_ID>/locations/<YOUR-CLUSTER-ZONE>/clusters/<CLUSTER_NAME>
for a terraform configuration :
resource "google_container_cluster" "<TF_RESOURCE_NAME>" {
  name     = "<CLUSTER_NAME>"
  location = "<YOUR-CLUSTER-ZONE>"
}
The CLUSTER_NAME is the name displayed in your GKE clusters list on Google Cloud Console.
You then also need to import the cluster's node pool(s) in the same way, using the terraform google_container_node_pool resource.
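For example, a minimal node pool configuration could look like the following, using the same placeholder convention as above (double-check the exact import ID format against the google provider documentation; the project/location/cluster/pool form shown in the comment is the commonly documented one):

```hcl
resource "google_container_node_pool" "<TF_RESOURCE_NAME>" {
  name     = "<NODE_POOL_NAME>"
  location = "<YOUR-CLUSTER-ZONE>"
  cluster  = "<CLUSTER_NAME>"
}

# matching import command:
# terraform import google_container_node_pool.<TF_RESOURCE_NAME> <PROJECT_ID>/<YOUR-CLUSTER-ZONE>/<CLUSTER_NAME>/<NODE_POOL_NAME>
```

After the import, run terraform plan and keep adjusting the configuration until it shows no diff against the real node pool.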

Update the node size of a digital ocean kubernetes cluster without replacing the whole cluster

I successfully maintain a kubernetes cluster in digital ocean through terraform. The core cluster configuration is the following:
resource "digitalocean_kubernetes_cluster" "cluster" {
  name     = var.name
  region   = var.region
  version  = var.k8s_version
  vpc_uuid = digitalocean_vpc.network.id

  node_pool {
    name       = "workers"
    size       = var.k8s_worker_size
    node_count = var.k8s_worker_count
  }
}
The problem is, I now need to increase the node size (stored in the variable k8s_worker_size).
If I simply change the variable to a new string, the terraform plan results in a full replace of the kubernetes cluster:
digitalocean_kubernetes_cluster.cluster must be replaced
This is not doable in our production environment.
The correct procedure to perform this operation inside digital ocean is to:
Create a new node pool, with the required size
Use kubectl drain to remove our pods from the 'old' nodes
Remove the previous node pool.
Of course, by doing this manually inside the digital ocean console, the terraform state is completely out-of-sync and is therefore unusable.
Is there a way to perform that operation through terraform?
As an alternative option, is it possible to "manually" update the terraform state in order to sync it with the real cluster state after I perform the migration manually?
Is there a way to perform that operation through terraform?
There might be some edge cases where there is a solution to this. Since I am not familiar with kubernetes inside DigitalOcean I can't share a specific solution.
As an alternative options, is it possible to "manually" update the terraform state in order to sync it with the real cluster state after I perform the migration manually?
Yes! Do as you proposed manually and then remove the out-of-sync cluster with
terraform state rm digitalocean_kubernetes_cluster.cluster
from the state. Please visit the corresponding documentation for state rm and update the address if your cluster is in a module etc. Then use
terraform import digitalocean_kubernetes_cluster.cluster <id of your cluster>
to reimport the cluster. Please consult the documentation for importing the cluster for the details. The documentation mentions something about tagging the default node pool.
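A longer-term option (an assumption on my part, not something verified against your exact setup, and the variable name k8s_worker_size_new is made up for illustration) is to manage additional worker pools as separate digitalocean_kubernetes_node_pool resources rather than inside the cluster's node_pool block. Resizing then becomes "add a new pool resource, drain, remove the old resource" instead of replacing the cluster:

```hcl
resource "digitalocean_kubernetes_node_pool" "workers_large" {
  cluster_id = digitalocean_kubernetes_cluster.cluster.id
  name       = "workers-large"
  # the new droplet size lives in its own variable (name is an assumption)
  size       = var.k8s_worker_size_new
  node_count = var.k8s_worker_count
}
```

This mirrors the manual procedure from the question, but keeps every step inside terraform.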

What is the difference between namespaces and contexts in Kubernetes?

I found many examples specifying kubectl --context dev --namespace default {other commands} before the rest of the kubectl command. Can I get a clear difference between them in a k8s environment?
The kubernetes concept (and term) context only applies on the kubernetes client side, i.e. the place where you run the kubectl command, e.g. your command prompt.
The kubernetes server side doesn't recognise the term 'context'.
As an example, in the command prompt, i.e. as the client:
when calling kubectl get pods -n dev, you're retrieving the list of pods located under the namespace 'dev'.
when calling kubectl get deployments -n dev, you're retrieving the list of deployments located under the namespace 'dev'.
If you know that you're basically targeting only the 'dev' namespace at the moment, then instead of adding "-n dev" to each of your kubectl commands, you can just:
Create a context named 'context-dev'.
Specify the namespace='dev' for this context.
Set the current-context='context-dev'.
This way, your commands above are simplified to the following:
kubectl get pods
kubectl get deployments
You can set up different contexts, such as 'context-dev', 'context-staging', etc., each targeting a different namespace. By the way, it's not obligatory to prefix the name with 'context-'. You can just name them 'dev', 'staging', etc.
Just as an analogy where a group of people are talking about films. So somewhere within the conversation the word 'Rocky' was used. Since they're talking about films, it's clear and there's no ambiguity that 'Rocky' here refers to the boxing film 'Rocky' and not about the "bumpy, stony" terrains. It's redundant and unnecessary to mention 'the movie Rocky' each time. Just one word, 'Rocky', is enough. The context is obviously about film.
The same thing with Kubernetes and with the example above. If the context is already set to a certain cluster and namespace, it's redundant and unnecessary to set and / or mention these parameters in each of your commands.
My explanation here is just revolving around namespace, but this is just an example. Other than specifying the namespace, within the context you will actually also specify which cluster you're targeting and the user info used to access the cluster. You can have a look inside the ~/.kube/config file to see what information other than the namespace is associated to each context.
In the sample command in your question above, both the namespace and the context are specified. In this case, kubectl will use whatever configuration values are set within the 'dev' context, but the namespace value specified within this context (if it exists) will be ignored, as it is overridden by the value explicitly set in the command, i.e. 'default'.
Meanwhile, the namespace concept is used in both sides: server and client sides. It's a logical grouping of Kubernetes objects. Just like how we group files inside different folders in Operating Systems.
You use multiple contexts to target multiple different Kubernetes clusters. You can quickly switch between clusters by using the kubectl config use-context command.
Namespaces are a way to divide cluster resources between multiple users (via resource quota). Namespaces are intended for use in environments with many users spread across multiple teams or projects.
A context in Kubernetes is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster. Each context that has been used will be available in your kubeconfig.
Meanwhile, a namespace is a way to support multiple virtual clusters within the same physical cluster. This is usually related to resource quotas as well as RBAC management.
A context is the connection to a specific cluster (username/apiserver host) used by kubectl. You can manage multiple clusters that way.
Namespace is a logical partition inside a specific cluster to manage resources and constraints.

Terraform apply, how to increment count and add kubernetes worker nodes to the existing workers?

I deployed a k8s cluster on bare metal using terraform, following this repository on github
Now I have three nodes:
ewr1-controller, ewr1-worker-0, ewr1-worker-1
Next, I would like to run terraform apply and increment the worker nodes (ewr1-worker-3, ewr1-worker-4, ...) while keeping the existing controller and worker nodes.
I tried incrementing the count.index to start from 3, however it still overwrites the existing workers.
resource "packet_device" "k8s_workers" {
  project_id       = "${packet_project.kubenet.id}"
  facilities       = "${var.facilities}"
  count            = "${var.worker_count}"
  plan             = "${var.worker_plan}"
  operating_system = "ubuntu_16_04"
  hostname         = "${format("%s-%s-%d", "${var.facilities[0]}", "worker", count.index+3)}"
  billing_cycle    = "hourly"
  tags             = ["kubernetes", "k8s", "worker"]
}
I haven't tried this, but if I do
terraform state rm 'packet_device.k8s_workers'
I am assuming these worker nodes will not be managed by the kubernetes master. I don't want to create all the nodes at beginning because the worker nodes that I am adding will have different specs(instance types).
The entire script I used is available here on this github repository.
I appreciate it if someone could tell what I am missing here and how to achieve this.
Thanks!
Node resizing is best addressed using an autoscaler. Using Terraform to scale a nodepool might not be the optimal approach as the tool is meant to declare the state of the system rather than dynamically change it. The best approach for this is to use a cloud auto scaler.
In bare metal, you can implement a CloudProvider interface (like the one provided by cloud such as AWS, GCP, Azure) as described here
After implementing that, you need to determine if your K8s implementation can be operated as a provider by Terraform, and if that's the case, find the nodepool autoscaler resource that allows the autoscaling.
Wrapping up, Terraform is not meant to be used as an autoscaler, given its nature as a declarative language that describes the infrastructure.
The autoscaling features in K8s are meant to tackle this kind of requirement.
I solved this issue by modifying and removing modules and resources from the terraform state: manually editing it and using terraform state rm <--->.
Leaving out (removing from state) the sections that I want to keep as they are.
Modifying the sections in terraform.state that I want to change when new servers are added.
Incrementing the counter to add new resources; see terraform interpolation.
I am using a bare-metal cloud provider to deploy k8s and it doesn't support k8s horizontal or vertical autoscaling. This may not be the optimal solution, as others have pointed out, but if it is not something you need to do often, terraform can do the job, albeit the hard way.
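Since the new workers will have different specs (instance types), one way to sketch this without touching the existing pool is a second resource block with its own count, rather than a bigger count on the first one (the resource name k8s_workers_large and the large_worker_* variables are assumptions for illustration):

```hcl
resource "packet_device" "k8s_workers_large" {
  project_id       = "${packet_project.kubenet.id}"
  facilities       = "${var.facilities}"
  # separate count and plan variables carry the new specs (names are assumptions)
  count            = "${var.large_worker_count}"
  plan             = "${var.large_worker_plan}"
  operating_system = "ubuntu_16_04"
  hostname         = "${format("%s-%s-%d", "${var.facilities[0]}", "large-worker", count.index)}"
  billing_cycle    = "hourly"
  tags             = ["kubernetes", "k8s", "worker"]
}
```

Because the existing packet_device.k8s_workers keeps its own count, terraform apply only creates the new devices and leaves the old workers untouched.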

terraforming with dependent providers

In my terraform infrastructure, I spin up several Kubernetes clusters based on parameters, then install some standard contents to those Kubernetes clusters using the kubernetes provider.
When I change the parameters and one of the clusters is no longer needed, terraform is unable to tear it down because the provider and resources are both in the module. I don't see an alternative, however, because I create the kubernetes cluster in that same module, and the kubernetes objects are all per kubernetes cluster.
All solutions I can think of involve adding a bunch of boilerplate to my terraform config. Should I consider generating my terraform config from a script?
I made a git repo that shows exactly the problems I'm having:
https://github.com/bukzor/terraform-gke-k8s-demo
TL;DR
Two solutions:
Create two separate modules with Terraform
Use interpolations and depends_on between the code that creates your Kubernetes cluster and the kubernetes resources:
resource "kubernetes_service" "example" {
  metadata {
    name = "my-service"
  }

  depends_on = ["aws_vpc.kubernetes"]
}

resource "aws_vpc" "kubernetes" {
  ...
}
When destroying resources
You are encountering a dependency lifecycle issue
PS: I don't know the code you've used to create / provision your Kubernetes cluster but I guess it looks like this
Write code for the Kubernetes cluster (creates a VPC)
Apply it
Write code for provisioning Kubernetes (create a Service that creates an ELB)
Apply it
Try to destroy everything => Error
What is happening is that by creating a LoadBalancer Service, Kubernetes provisions an ELB on AWS. But Terraform doesn't know that, and there is no link between the created ELB and any other resources managed by Terraform.
So when terraform tries to destroy the resources in the code, it will try to destroy the VPC. But it can't because there is an ELB inside that VPC that terraform doesn't know about.
The first thing would be to make sure that Terraform "deprovisions" the Kubernetes cluster first, and then destroys the cluster itself.
Two solutions here:
Use different modules so there is no dependency lifecycle issue. For example, the first module could be k8s-infra and the other could be k8s-resources. The first one manages the skeleton of Kubernetes and is applied first / destroyed last. The second one manages what is inside the cluster and is applied last / destroyed first.
Use the depends_on parameter to write the dependency lifecycle explicitly
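A minimal sketch of the two-module layout (the module names, paths, and the use of module-level depends_on, which requires Terraform 0.13 or later, are all assumptions):

```hcl
module "k8s_infra" {
  # creates the cluster, VPC, etc. — applied first, destroyed last
  source = "./k8s-infra"
}

module "k8s_resources" {
  # creates Services, Deployments, etc. inside the cluster —
  # applied last, destroyed first
  source     = "./k8s-resources"
  depends_on = [module.k8s_infra]
}
```

On older Terraform versions, the same ordering can be achieved implicitly by passing an output of the first module (e.g. a cluster endpoint) as an input variable to the second.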
When creating resources
You might also run into a dependency issue when terraform apply cannot create resources even though nothing is applied yet. I'll give another example, with Postgres:
Write code to create an RDS PostgreSQL server
Apply it with Terraform
Write code, in the same module, to provision that RDS instance with the postgres terraform provider
Apply it with Terraform
Destroy everything
Try to apply everything => ERROR
By debugging Terraform a bit, I've learned that all the providers are initialized at the beginning of the plan / apply, so if one has an invalid config (wrong API keys / unreachable endpoint) then Terraform will fail.
The solution here is to use the target parameter of a plan / apply command.
Terraform will only initialize providers that are related to the resources that are applied.
Apply the RDS code with the AWS provider: terraform apply -target=aws_db_instance
Apply everything with terraform apply. Because the RDS instance is already reachable, the PostgreSQL provider can also initialize itself.