How to install AGIC in a Kubernetes cluster using Terraform

I am trying to install AGIC in AKS using Terraform. I am following this document: https://learn.microsoft.com/en-us/azure/terraform/terraform-create-k8s-cluster-with-aks-applicationgateway-ingress, but it only shows a partial Terraform deployment, and I want to fully automate it with Terraform. Is there any other document/way to do this?

Of course, you can use Terraform to deploy Helm charts to AKS. Here is an example of deploying a Helm chart through Terraform:
data "helm_repository" "stable" {
  name = "stable"
  url  = "https://kubernetes-charts.storage.googleapis.com"
}

resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = data.helm_repository.stable.metadata[0].name
  chart      = "redis"
  version    = "6.0.1"

  values = [
    "${file("values.yaml")}"
  ]

  set {
    name  = "cluster.enabled"
    value = "true"
  }

  set {
    name  = "metrics.enabled"
    value = "true"
  }

  set_string {
    name  = "service.annotations.prometheus\\.io/port"
    value = "9127"
  }
}
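Applied to AGIC itself, the same pattern installs the ingress controller chart. The sketch below is only a starting point: the repository URL, chart name, and parameter names are assumptions based on the AGIC Helm documentation and may have changed, and the var.* references are illustrative.

# Sketch only: chart/repo/parameter names are assumptions from the AGIC Helm docs.
resource "helm_release" "agic" {
  name       = "ingress-azure"
  repository = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
  chart      = "ingress-azure"
  namespace  = "default"

  set {
    name  = "appgw.subscriptionId"
    value = var.subscription_id
  }
  set {
    name  = "appgw.resourceGroup"
    value = var.resource_group_name
  }
  set {
    name  = "appgw.name"
    value = var.app_gateway_name
  }
  set {
    # with aadPodIdentity you would also set armAuth.identityResourceID
    # and armAuth.identityClientID (omitted here)
    name  = "armAuth.type"
    value = "aadPodIdentity"
  }
  set {
    name  = "rbac.enabled"
    value = "true"
  }
}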
You can also configure the AKS cluster's certificate so that the Helm provider can deploy the charts through Terraform; take a look at the document here.
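For the certificate part, the Helm provider can read the AKS credentials straight from the azurerm_kubernetes_cluster resource, so no kubeconfig file is needed. A minimal sketch, assuming the cluster resource is named azurerm_kubernetes_cluster.aks:

provider "helm" {
  kubernetes {
    # Credentials come from the AKS resource's kube_config block
    host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
  }
}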

Related

Terraform helm_release destroys all other helm releases

We have three services that we are trying to deploy to a k8s cluster with Terraform helm_release. Each service has something similar to
release.tf
resource "helm_release" "someService" {
  count     = var.deploy_apps ? 1 : 0
  name      = "someService"
  version   = "2.1.3"
  namespace = "app"
  timeout   = 600

  values = [yamlencode({
    image = {
      tag = var.terraform_version
    }
    envVars = {
      k8env = local.environment
    }
  })]
}
variables.tf
variable "deploy_apps" {
  type        = bool
  default     = true
  description = "deploy someService"
}
With the only difference being the service name. Each service is in its own Git repo, each repo has its own variables.tf file which contains its own deploy_apps variable, and they are all deployed to the same namespace.
When any of the services is deployed, it destroys the other two.
How can we prevent that from happening?

Override config file from Helm chart with Terraform

I'm trying to deploy ArgoCD in my k8s cluster using the Helm chart for ArgoCD. I deploy everything with Terraform. Now I want to change the config file of ArgoCD so that it can connect to my private repo. It works when I manually change the file using kubectl after ArgoCD is running in my cluster, but when I try to use Terraform, I get the message Error: configmaps "argocd-cm" already exists, meaning that I cannot overwrite the configmap that is created by ArgoCD. How do I change these variables?
terraform
resource "kubernetes_namespace" "argocd" {
  metadata {
    name = "argocd"
  }
}

resource "kubernetes_secret" "argocd_registry_secret" {
  metadata {
    name      = "argocd-repo-credentials"
    namespace = "argocd"
  }

  data = {
    username = "USERNAME"
    password = "PASSWORD"
  }
}

data "helm_repository" "argoproj" {
  name = "argoproj"
  url  = "https://argoproj.github.io/argo-helm"
}

resource "helm_release" "argocd" {
  name      = "argocd"
  chart     = "argoproj/argo-cd"
  version   = "2.3.5"
  namespace = kubernetes_namespace.argocd.metadata[0].name
  timeout   = 600
}

resource "kubernetes_config_map" "argocd-cm" {
  depends_on = [helm_release.argocd]

  metadata {
    name      = "argocd-cm"
    namespace = "argocd"
  }

  data = {
    config = file("${path.module}/configs/ingress/argo-configmap.yaml")
  }
}
Instead of name, use generate_name in kubernetes_config_map:
generate_name - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the name field has not been provided. This value will also be combined with a unique suffix.
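A minimal sketch of that suggestion, reusing the resource from the question (note that Argo CD itself still reads the ConfigMap literally named argocd-cm, so this mainly avoids the name collision on the Terraform side):

resource "kubernetes_config_map" "argocd-cm" {
  depends_on = [helm_release.argocd]

  metadata {
    # generate_name replaces name; the server appends a unique suffix
    generate_name = "argocd-cm-"
    namespace     = "argocd"
  }

  data = {
    config = file("${path.module}/configs/ingress/argo-configmap.yaml")
  }
}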
You can also add the private repo through the Argo CD Helm chart itself; add this to the argocd helm_release resource in the TF file:
set {
  name  = "server.config.repositories"
  value = "${file("${path.module}/repositories.yml")}"
}
where repositories.yml is:
- url: ssh://abc#def.com/my-repo.git
  sshPrivateKeySecret:
    name: argo-cd-stash-key
    key: ssh-privatekey

How to Create Terraform code for GKE cluster

We can create/see the gcloud command from the GKE cluster creation UI. Is there a way to convert that command into Terraform code for a GKE cluster? Thanks.
You can create a GKE cluster by using Terraform. There is an official link to follow; you need to write the .tf file and apply it.
resource "google_container_cluster" "primary" {
  name               = "my-gke-cluster"
  location           = "us-central1"
  initial_node_count = 1

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}
This is a useful tutorial.
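If you need more control than initial_node_count gives, node pools are usually managed as separate resources; a minimal sketch with illustrative names and sizes:

# Often combined with remove_default_node_pool = true on the cluster resource
resource "google_container_node_pool" "primary_nodes" {
  name       = "my-node-pool"
  location   = "us-central1"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    machine_type = "e2-medium"
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }
}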

Deploying helm charts via Terraform Helm provider and Azure DevOps while fetching the helm charts from ACR

I am trying to deploy Helm charts from ACR to an AKS cluster using the Terraform helm provider and an Azure DevOps container job, but it fails while fetching the Helm chart from ACR. Please let me know what is going wrong.
helm provider tf module:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
  name = "mcp-rbac-cluster"
  url  = "https://mcpshareddcr.azurecr.io"
}

# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name       = "mcp-rbac-cluster"
  repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
  chart      = "mcp-rbac-cluster"
}
provider:
provider "azurerm" {
  version                    = "=1.36.0"
  tenant_id                  = var.ARM_TENANT_ID
  subscription_id            = var.ARM_SUBSCRIPTION_ID
  client_id                  = var.ARM_CLIENT_ID
  client_secret              = var.ARM_CLIENT_SECRET
  skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = var.aks_cluster
  resource_group_name = var.resource_group_aks
}

locals {
  kubeconfig_path = "/tmp/kubeconfig"
}

resource "local_file" "kubeconfig" {
  filename = local.kubeconfig_path
  content  = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}

provider "helm" {
  home = "resources/.helm"

  kubernetes {
    load_config_file = true
    config_path      = local.kubeconfig_path
  }
}

module "aks_resources" {
  source = "./modules/helm/aks-resources"
}
error:
Error: Looks like "" is not a valid chart repository or cannot be reached: Failed to fetch /index.yaml : 404 Not Found
As of now, Helm still doesn't support installing a chart directly from an OCI registry.
The recommended steps are:
helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
cd install && helm install myhelmtest ./hello-world
So my solution is:
resource "null_resource" "download_chart" {
  provisioner "local-exec" {
    command = <<-EOT
      export HELM_EXPERIMENTAL_OCI=1
      helm registry login mycontainerregistry.azurecr.io --username someuser --password somepass
      helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
      helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
      helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
    EOT
  }
}

resource "helm_release" "chart" {
  name       = "hello-world"
  repository = "./install"
  chart      = "hello-world"
  version    = "v1"
  depends_on = [null_resource.download_chart]
}
Not perfect but works.
The problem is that you are using the wrong URL in the Terraform helm_repository. The right URL for ACR looks like this:
https://acrName.azurecr.io/helm/v1/repo
And ACR is a private registry, which means you need to add the username and password for it. Your Terraform code should look like this with version 2.0+ of the helm provider:
resource "helm_release" "my-chart" {
  name                = "my-chart"
  chart               = "my/chart"
  repository          = "https://${var.acr_name}.azurecr.io/helm/v1/repo"
  repository_username = var.acr_user_name
  repository_password = var.acr_user_password
}
Or with the 1.x helm provider:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
  name     = "mcp-rbac-cluster"
  url      = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
  username = "xxxxx"
  password = "xxxxx"
}

# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
  name       = "mcp-rbac-cluster"
  repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
  chart      = "mcp-rbac-cluster"
}
Update
It works well and deploys the charts in the AKS (screenshot omitted).
A small enhancement to the above solution: include a trigger to force a download of the chart every time. Otherwise, it expects you to always maintain the local copy of the chart after the first deployment.
resource "null_resource" "download_chart" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<-EOT
      export HELM_EXPERIMENTAL_OCI=1
      helm registry login ${var.registry_fqdn} --username ${var.acr_client_id} --password ${var.acr_client_secret}
      helm chart remove ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
      helm chart pull ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
      helm chart export ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag} --destination ./install
    EOT
  }
}
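For completeness, the matching helm_release then points at the exported chart directory, mirroring the earlier example and reusing the same illustrative variables:

resource "helm_release" "chart" {
  name       = var.chart_name
  repository = "./install"
  chart      = var.chart_name
  version    = var.chart_tag
  depends_on = [null_resource.download_chart]
}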

How can I configure an AWS EKS autoscaler with Terraform?

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks). I'm following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro
However, this does not seem to have autoscaling enabled... it seems to be missing the cluster-autoscaler pod/daemon?
Is Terraform able to provision this functionality? Or do I need to set this up following a guide like: https://eksworkshop.com/scaling/deploy_ca/
You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.
data "aws_eks_cluster_auth" "authentication" {
  name = "${var.cluster_id}"
}

provider "kubernetes" {
  # Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
  # see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
  host                   = "${var.cluster_endpoint}"
  cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
  token                  = "${data.aws_eks_cluster_auth.authentication.token}"
  load_config_file       = false
}

provider "helm" {
  install_tiller = "true"
  tiller_image   = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}

resource "helm_release" "cluster_autoscaler" {
  name       = "cluster-autoscaler"
  repository = "stable"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"
  version    = "0.12.2"

  set {
    name  = "autoDiscovery.enabled"
    value = "true"
  }

  set {
    name  = "autoDiscovery.clusterName"
    value = "${var.cluster_name}"
  }

  set {
    name  = "cloudProvider"
    value = "aws"
  }

  set {
    name  = "awsRegion"
    value = "${data.aws_region.current_region.name}"
  }

  set {
    name  = "rbac.create"
    value = "true"
  }

  set {
    name  = "sslCertPath"
    value = "/etc/ssl/certs/ca-bundle.crt"
  }
}
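Note that the snippet references data.aws_region.current_region, which has to be declared somewhere in the same configuration, for example:

# Resolves the region the provider is currently configured for
data "aws_region" "current_region" {}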
The answer below is still not complete... but at least it gets me partially further...
1. Bootstrap RBAC and install the chart manually:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
replace the following:
path: /etc/ssl/certs/ca-bundle.crt
With
path: /etc/ssl/certs/ca-certificates.crt
2. In the AWS console, give the AdministratorAccess policy to the terraform-eks-demo-node role (a Terraform sketch of this step follows after this list).
3. Update the nodes parameter with (kubectl edit deployments my-release-aws-cluster-autoscaler):
- --nodes=1:10:terraform-eks-demo20190922124246790200000007
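As a sketch of step 2 in Terraform rather than the console, assuming the worker node role created by the EKS tutorial is literally named terraform-eks-demo-node (a policy scoped to the Auto Scaling APIs would be preferable to AdministratorAccess):

# Assumption: the node role name matches the EKS intro tutorial
resource "aws_iam_role_policy_attachment" "demo_node_admin" {
  role       = "terraform-eks-demo-node"
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}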