override config file from helm chart with terraform - kubernetes

I'm trying to deploy ArgoCD in my k8s cluster using the Helm chart for ArgoCD. I deploy everything with Terraform. Now I want to change the config file from ArgoCD so that it can connect to my private repo. It works when I manually change the file using kubectl after ArgoCD is running in my cluster, but when I try to use Terraform, I get the message Error: configmaps "argocd-cm" already exists, meaning that I cannot overwrite the configmap that is created by ArgoCD. How do I change these variables?
terraform
resource "kubernetes_namespace" "argocd" {
metadata {
name = "argocd"
}
}
resource "kubernetes_secret" "argocd_registry_secret" {
metadata {
name = "argocd-repo-credentials"
namespace = "argocd"
}
data = {
username = "USERNAME"
password = "PASSWORD"
}
}
data "helm_repository" "argoproj" {
name = "argoproj"
url = "https://argoproj.github.io/argo-helm"
}
resource "helm_release" "argocd" {
name = "argocd"
chart = "argoproj/argo-cd"
version = "2.3.5"
namespace = kubernetes_namespace.argocd.metadata[0].name
timeout = 600
}
resource "kubernetes_config_map" "argocd-cm" {
depends_on = [helm_release.argocd]
metadata {
name = "argocd-cm"
namespace = "argocd"
}
data = {
config = file("${path.module}/configs/ingress/argo-configmap.yaml")
}
}

Instead of name, use generate_name in kubernetes_config_map. From the provider docs:
generate_name - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the name field has not been provided. This value will also be combined with a unique suffix.
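A minimal sketch of what that could look like, adapted from the kubernetes_config_map in the question (note that the API server will then create a config map whose name is the prefix plus a unique suffix, rather than one literally named argocd-cm):

resource "kubernetes_config_map" "argocd_cm_override" {
  depends_on = [helm_release.argocd]

  metadata {
    # generate_name is only used because name is not set; the server appends a unique suffix
    generate_name = "argocd-cm-"
    namespace     = "argocd"
  }

  data = {
    config = file("${path.module}/configs/ingress/argo-configmap.yaml")
  }
}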

You can add a private repo through the ArgoCD Helm chart; add this to the argocd helm_release resource in your TF file:
set {
  name  = "server.config.repositories"
  value = file("${path.module}/repositories.yml")
}
where repositories.yml is:
- url: ssh://abc#def.com/my-repo.git
  sshPrivateKeySecret:
    name: argo-cd-stash-key
    key: ssh-privatekey
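For context, a sketch of how that set block could sit inside the helm_release from the question:

resource "helm_release" "argocd" {
  name      = "argocd"
  chart     = "argoproj/argo-cd"
  version   = "2.3.5"
  namespace = kubernetes_namespace.argocd.metadata[0].name
  timeout   = 600

  # Pass the repository list into the chart's server.config.repositories value
  set {
    name  = "server.config.repositories"
    value = file("${path.module}/repositories.yml")
  }
}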

Related

Terraform helm_release destroys all other helm releases

We have three services that we are trying to deploy to a k8s cluster with Terraform helm_release. Each service has something similar to
release.tf
resource "helm_release" "someService" {
count = var.deploy_apps ? 1 : 0
name = "someService"
version = "2.1.3"
namespace = "app"
timeout = 600
values = [yamlencode({
image = {
tag = var.terraform_version
}
envVars = {
k8env = local.environment
}
})]
}
variables.tf
variable "deploy_apps" {
type = bool
default = true
description = "deploy someService"
}
With the only difference being the service name. Each service is in its own Git repo, each repo has its own variables.tf file which contains its own deploy_apps variable, and they are all deployed to the same namespace.
When any of the services is deployed, it destroys the other two.
How can we prevent that from happening?

How to install AGIC in Kubernetes cluster using Terraform

I am trying to install AGIC in AKS using Terraform. I am following this document https://learn.microsoft.com/en-us/azure/terraform/terraform-create-k8s-cluster-with-aks-applicationgateway-ingress, but it only shows a partial Terraform deployment; I want to fully automate it with Terraform. Is there any other document/way to do this?
Of course, you can use Terraform to deploy Helm charts to AKS. Here is an example of deploying a Helm chart through Terraform:
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "example" {
name = "my-redis-release"
repository = data.helm_repository.stable.metadata[0].name
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "cluster.enabled"
value = "true"
}
set {
name = "metrics.enabled"
value = "true"
}
set_string {
name = "service.annotations.prometheus\\.io/port"
value = "9127"
}
}
You can also configure the AKS cluster's certificate/credentials so that the charts are deployed through Terraform; take a look at the documentation for the details.
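For example, here is a minimal sketch of wiring the Helm provider to an AKS cluster's credentials (the cluster name and resource group below are assumptions, not from the question):

data "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks-cluster"    # assumed cluster name
  resource_group_name = "my-resource-group" # assumed resource group
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}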

Deploying helm charts via Terraform Helm provider and Azure DevOps while fetching the helm charts from ACR

I am trying to deploy Helm charts from ACR to an AKS cluster using the Terraform Helm provider and an Azure DevOps container job, but it fails while fetching the Helm chart from ACR. Please let me know what is going wrong.
helm provider tf module:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
name = "mcp-rbac-cluster"
url = "https://mcpshareddcr.azurecr.io"
}
# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
name = "mcp-rbac-cluster"
repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
chart = "mcp-rbac-cluster"
}
provider:
version = "=1.36.0"
tenant_id = var.ARM_TENANT_ID
subscription_id = var.ARM_SUBSCRIPTION_ID
client_id = var.ARM_CLIENT_ID
client_secret = var.ARM_CLIENT_SECRET
skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "resources/.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
module "aks_resources" {
source = "./modules/helm/aks-resources"
}
error:
Error: Looks like "" is not a valid chart repository or cannot be reached: Failed to fetch /index.yaml : 404 Not Found
As of now, Helm still doesn't support installing a chart directly from an OCI registry.
The recommended steps are:
helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
cd install && helm install myhelmtest ./hello-world
So my solution is:
resource "null_resource" "download_chart" {
provisioner "local-exec" {
command = <<-EOT
export HELM_EXPERIMENTAL_OCI=1
helm registry login mycontainerregistry.azurecr.io --username someuser --password somepass
helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
EOT
}
}
resource "helm_release" "chart" {
name = "hello_world"
repository = "./install"
chart = "hello-world"
version = "v1"
depends_on = [null_resource.download_chart]
}
Not perfect but works.
The problem is that you use the wrong URL in the Terraform helm_repository. The right URL for ACR looks like this:
https://acrName.azurecr.io/helm/v1/repo
ACR is also a private registry, which means you need to add the username and password for it. With version 2.0+ of the Helm provider, your Terraform code should look like this:
resource "helm_release" "my-chart" {
name = "my-chart"
chart = "my/chart"
repository = "https://${var.acr_name}.azurecr.io/helm/v1/repo"
repository_username = var.acr_user_name
repository_password = var.acr_user_password
}
Or with 1.x helm provider:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
name = "mcp-rbac-cluster"
url = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
username = "xxxxx"
password = "xxxxx"
}
# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
name = "mcp-rbac-cluster"
repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
chart = "mcp-rbac-cluster"
}
Update
Confirmed that this works well and deploys the charts to the AKS cluster (screenshot omitted).
A small enhancement to the above solution: include a trigger to force a download of the chart every time. Otherwise, it expects you to keep the local copy of the chart after the first deployment.
resource "null_resource" "download_chart" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = <<-EOT
export HELM_EXPERIMENTAL_OCI=1
helm registry login ${var.registry_fqdn} --username ${var.acr_client_id} --password ${var.acr_client_secret}
helm chart remove ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
helm chart pull ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
helm chart export ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag} --destination ./install
EOT
}
}
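The registry and chart coordinates above are assumed to come from input variables declared elsewhere; a minimal sketch of those declarations:

variable "registry_fqdn" {
  type        = string
  description = "FQDN of the ACR registry, e.g. mycontainerregistry.azurecr.io"
}

variable "acr_client_id" {
  type = string
}

variable "acr_client_secret" {
  type = string
}

variable "chart_name" {
  type = string
}

variable "chart_tag" {
  type = string
}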

How can I configure an AWS EKS autoscaler with Terraform?

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks) and following the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro
However, this does not seem to have autoscaling enabled; it appears to be missing the cluster-autoscaler pod/daemon.
Is Terraform able to provision this functionality, or do I need to set it up following a guide like https://eksworkshop.com/scaling/deploy_ca/ ?
You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.
data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
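The awsRegion value above references data.aws_region.current_region, which is assumed to be declared elsewhere; a minimal sketch:

# Region of the current AWS provider configuration, used for the awsRegion chart value above
data "aws_region" "current_region" {}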
This answer is still not complete, but at least it gets me partially further.
1.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
replace the following:
path: /etc/ssl/certs/ca-bundle.crt
With
path: /etc/ssl/certs/ca-certificates.crt
2.
In the AWS console, give AdministratorAccess policy to the terraform-eks-demo-node role.
3.
Update the --nodes parameter (via kubectl edit deployments my-release-aws-cluster-autoscaler):
- --nodes=1:10:terraform-eks-demo20190922124246790200000007

Terraform - staggered provider population

I have been looking at implementing Kubernetes with Terraform over the past week and I seem to have a lifecycle issue.
While I can make a Kubernetes resource depend on a cluster being spun up, the KUBECONFIG file isn't updated in the middle of the terraform apply.
The kubernetes_service resource:
resource "kubernetes_service" "example" {
...
depends_on = ["digitalocean_kubernetes_cluster.example"]
}
resource "digitalocean_kubernetes_cluster" "example" {
name = "example"
region = "${var.region}"
version = "1.12.1-do.2"
node_pool {
name = "woker-pool"
size = "s-1vcpu-2gb"
node_count = 1
}
provisioner "local-exec" {
command = "sh ./get-kubeconfig.sh" // gets KUBECONFIG file from digitalocean API.
environment = {
digitalocean_kubernetes_cluster_id = "${digitalocean_kubernetes_cluster.k8s.id}"
digitalocean_kubernetes_cluster_name = "${digitalocean_kubernetes_cluster.k8s.name}"
digitalocean_api_token = "${var.digitalocean_token}"
}
}
While I can pull the KUBECONFIG file down using the API, Terraform won't use this file because the plan is already in motion.
I've seen some examples using ternary operators (resource ? 1 : 0), but I haven't found a workaround for clusters not created with count, besides -target.
Ideally, I'd like to create this with one Terraform repo.
It turns out that the digitalocean_kubernetes_cluster resource has attributes which can be passed to the provider "kubernetes" {} block like so:
resource "digitalocean_kubernetes_cluster" "k8s" {
name = "k8s"
region = "${var.region}"
version = "1.12.1-do.2"
node_pool {
name = "woker-pool"
size = "s-1vcpu-2gb"
node_count = 1
}
}
provider "kubernetes" {
host = "${digitalocean_kubernetes_cluster.k8s.endpoint}"
client_certificate = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
client_key = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
This results in the provider being dependent on the cluster resource, and Terraform orders the operations accordingly.
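With the provider wired up this way, a resource like the kubernetes_service from the question no longer needs an explicit depends_on on the cluster; a minimal sketch (service name, selector, and ports are assumptions):

resource "kubernetes_service" "example" {
  metadata {
    name = "example"
  }

  spec {
    # Assumed selector; the provider configuration already ties this resource to the cluster above
    selector = {
      app = "example"
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}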