I'm using the AWS EKS Terraform module (github.com/terraform-aws-modules/terraform-aws-eks). I'm following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro
However, this does not seem to have autoscaling enabled... it appears to be missing the cluster-autoscaler pod/deployment?
Is Terraform able to provision this functionality? Or do I need to set it up by following a guide like https://eksworkshop.com/scaling/deploy_ca/ ?
You can deploy Kubernetes resources using Terraform; there is both a Kubernetes provider and a Helm provider. For example, the cluster autoscaler chart can be installed like this:
data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
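Note that the helm provider also needs to know how to reach the cluster; the Helm 2-era provider shown here accepts that through a nested kubernetes block. A minimal sketch reusing the same variables and data source as above (treat the exact arguments as an assumption for your provider version):

provider "helm" {
  install_tiller = true
  tiller_image   = "gcr.io/kubernetes-helm/tiller:v2.12.3"

  kubernetes {
    # Same connection details as the kubernetes provider above
    host                   = "${var.cluster_endpoint}"
    cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
    token                  = "${data.aws_eks_cluster_auth.authentication.token}"
    load_config_file       = false
  }
}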
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
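One gotcha with autoDiscovery: the cluster autoscaler only discovers autoscaling groups that carry its discovery tags, so the worker group's ASG has to be tagged accordingly. A minimal sketch, assuming a recent AWS provider that has the aws_autoscaling_group_tag resource and a hypothetical var.worker_asg_name holding the ASG name created by the EKS module:

# var.worker_asg_name is a hypothetical variable: the name of the worker autoscaling group.
resource "aws_autoscaling_group_tag" "cluster_autoscaler_enabled" {
  autoscaling_group_name = var.worker_asg_name

  tag {
    key                 = "k8s.io/cluster-autoscaler/enabled"
    value               = "true"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_group_tag" "cluster_autoscaler_owned" {
  autoscaling_group_name = var.worker_asg_name

  tag {
    key                 = "k8s.io/cluster-autoscaler/${var.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}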
The answer below is still not complete, but at least it gets me partially further...
1.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
replace the following:
path: /etc/ssl/certs/ca-bundle.crt
with:
path: /etc/ssl/certs/ca-certificates.crt
2.
In the AWS console, attach the AdministratorAccess policy to the terraform-eks-demo-node role.
3.
Update the --nodes parameter (again via kubectl edit deployments my-release-aws-cluster-autoscaler) to:
- --nodes=1:10:terraform-eks-demo20190922124246790200000007
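The manual --nodes edit above can also be expressed in Terraform by passing the autoscaling group to the chart explicitly instead of relying on autoDiscovery; a sketch of extra set blocks for the helm_release (the ASG name is the one from this example and will differ in your cluster):

set {
  name  = "autoscalingGroups[0].name"
  value = "terraform-eks-demo20190922124246790200000007"
}

set {
  name  = "autoscalingGroups[0].minSize"
  value = "1"
}

set {
  name  = "autoscalingGroups[0].maxSize"
  value = "10"
}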
I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the kubectl provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers.
This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the kubectl_manifest resources:
Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused
Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!
The provider is trying to connect to localhost, which means you either need to provide a proper kubeconfig file or set the credentials dynamically in Terraform.
You didn't mention how you are setting up authentication, but here are two ways:
Poor way
resource "null_resource" "deploy-app" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
kubectl apply -f myapp.yaml ./temp/kube-config.yaml;
EOT
}
# will run always, its bad
triggers = {
always_run = "${timestamp()}"
}
depends_on = [
local_file.kube_config
]
}
resource "local_file" "kube_config" {
content = var.my_kube_config # pass the config file from ci variable
filename = "${path.module}/temp/kube-config.yaml"
}
Proper way
data "google_container_cluster" "cluster" {
name = "your_cluster_name"
}
data "google_client_config" "current" {
}
provider "kubernetes" {
host = data.google_container_cluster.cluster.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(
data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
)
}
data "kubectl_file_documents" "app_yaml" {
content = file("myapp.yaml")
}
resource "kubectl_manifest" "app_installer" {
for_each = data.kubectl_file_documents.app_yaml.manifests
yaml_body = each.value
}
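Note that kubectl_manifest comes from the separate kubectl provider, which needs the same cluster credentials as the kubernetes provider; a minimal sketch reusing the data sources above (assuming the gavinbunney/kubectl provider):

provider "kubectl" {
  host  = data.google_container_cluster.cluster.endpoint
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  )
  load_config_file = false
}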
If the cluster is created in the same module, then the provider should be:
provider "kubernetes" {
load_config_file = "false"
host = google_container_cluster.my_cluster.endpoint
client_certificate = google_container_cluster.my_cluster.master_auth.0.client_certificate
client_key = google_container_cluster.my_cluster.master_auth.0.client_key
cluster_ca_certificate = google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate
}
Fixed the issue by adding load_config_file = false to the kubectl provider config. My provider config now looks like this:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
}
provider "kubectl" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
load_config_file = false
}
I'm trying to deploy ArgoCD in my k8s cluster using the Helm chart for ArgoCD. I deploy everything with Terraform. Now I want to change the ArgoCD config so that it can connect to my private repo. It works when I manually change the file using kubectl after ArgoCD is running in my cluster, but when I try to use Terraform I get the message Error: configmaps "argocd-cm" already exists, meaning that I cannot overwrite the configmap created by ArgoCD. How do I change these values?
My Terraform config:
resource "kubernetes_namespace" "argocd" {
metadata {
name = "argocd"
}
}
resource "kubernetes_secret" "argocd_registry_secret" {
metadata {
name = "argocd-repo-credentials"
namespace = "argocd"
}
data = {
username = "USERNAME"
password = "PASSWORD"
}
}
data "helm_repository" "argoproj" {
name = "argoproj"
url = "https://argoproj.github.io/argo-helm"
}
resource "helm_release" "argocd" {
name = "argocd"
chart = "argoproj/argo-cd"
version = "2.3.5"
namespace = kubernetes_namespace.argocd.metadata[0].name
timeout = 600
}
resource "kubernetes_config_map" "argocd-cm" {
depends_on = [helm_release.argocd]
metadata {
name = "argocd-cm"
namespace = "argocd"
}
data = {
config = file("${path.module}/configs/ingress/argo-configmap.yaml")
}
}
Instead of name, use generate_name in kubernetes_config_map (see the sketch after the quoted documentation):
generate_name - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the name field has not been provided. This value will also be combined with a unique suffix.
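For illustration, a sketch of the config map from the question with generate_name instead of name:

resource "kubernetes_config_map" "argocd-cm" {
  depends_on = [helm_release.argocd]

  metadata {
    # generate_name is a prefix; the API server appends a unique suffix
    generate_name = "argocd-cm-"
    namespace     = "argocd"
  }

  data = {
    config = file("${path.module}/configs/ingress/argo-configmap.yaml")
  }
}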
You can add a private repo through the ArgoCD Helm chart; add this to the argocd helm_release resource in your TF file:
set {
  name  = "server.config.repositories"
  value = "${file("${path.module}/repositories.yml")}"
}
where repositories.yml is:
- url: ssh://abc@def.com/my-repo.git
  sshPrivateKeySecret:
    name: argo-cd-stash-key
    key: ssh-privatekey
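Put together with the helm_release from the question, that would look roughly like this:

resource "helm_release" "argocd" {
  name      = "argocd"
  chart     = "argoproj/argo-cd"
  version   = "2.3.5"
  namespace = kubernetes_namespace.argocd.metadata[0].name
  timeout   = 600

  # Injects the repository list into the argocd-cm config map managed by the chart
  set {
    name  = "server.config.repositories"
    value = file("${path.module}/repositories.yml")
  }
}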
I am trying to install AGIC in AKS using Terraform. I am following this document https://learn.microsoft.com/en-us/azure/terraform/terraform-create-k8s-cluster-with-aks-applicationgateway-ingress but it only shows a partial Terraform deployment; I want to fully automate it with Terraform. Is there another document or way to do this?
Of course, you can use Terraform to deploy Helm charts to AKS. Here is an example of deploying a Helm chart through Terraform:
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "example" {
name = "my-redis-release"
repository = data.helm_repository.stable.metadata[0].name
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "cluster.enabled"
value = "true"
}
set {
name = "metrics.enabled"
value = "true"
}
set_string {
name = "service.annotations.prometheus\\.io/port"
value = "9127"
}
}
You can also pass the AKS cluster's certificate to the helm provider so that the charts deploy to your cluster through Terraform; take a look at the provider documentation for details.
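A minimal sketch of wiring the AKS cluster's credentials into the helm provider (the cluster name, resource group, and 2.x-style nested kubernetes block are assumptions; adjust to your setup):

data "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks-cluster"        # hypothetical cluster name
  resource_group_name = "my-aks-resource-group" # hypothetical resource group
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}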
I am trying to deploy Helm charts from ACR to an AKS cluster using the Terraform helm provider and an Azure DevOps container job, but it fails while fetching the Helm chart from ACR. Please let me know what is going wrong.
helm provider tf module:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
name = "mcp-rbac-cluster"
url = "https://mcpshareddcr.azurecr.io"
}
# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
name = "mcp-rbac-cluster"
repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
chart = "mcp-rbac-cluster"
}
provider:
provider "azurerm" {
  version                    = "=1.36.0"
  tenant_id                  = var.ARM_TENANT_ID
  subscription_id            = var.ARM_SUBSCRIPTION_ID
  client_id                  = var.ARM_CLIENT_ID
  client_secret              = var.ARM_CLIENT_SECRET
  skip_provider_registration = true
}
data "azurerm_kubernetes_cluster" "aks_cluster" {
name = var.aks_cluster
resource_group_name = var.resource_group_aks
}
locals {
kubeconfig_path = "/tmp/kubeconfig"
}
resource "local_file" "kubeconfig" {
filename = local.kubeconfig_path
content = data.azurerm_kubernetes_cluster.aks_cluster.kube_admin_config_raw
}
provider "helm" {
home = "resources/.helm"
kubernetes {
load_config_file = true
config_path = local.kubeconfig_path
}
}
module "aks_resources" {
source = "./modules/helm/aks-resources"
}
error:
Error: Looks like "" is not a valid chart repository or cannot be reached: Failed to fetch /index.yaml : 404 Not Found
So far, Helm still doesn't support installing a chart directly from an OCI registry.
The recommended steps are:
helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
cd install & helm install myhelmtest ./hello-world
So my solution is:
resource "null_resource" "download_chart" {
provisioner "local-exec" {
command = <<-EOT
export HELM_EXPERIMENTAL_OCI=1
helm registry login mycontainerregistry.azurecr.io --username someuser --password somepass
helm chart remove mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart pull mycontainerregistry.azurecr.io/helm/hello-world:v1
helm chart export mycontainerregistry.azurecr.io/helm/hello-world:v1 --destination ./install
EOT
}
}
resource "helm_release" "chart" {
name = "hello_world"
repository = "./install"
chart = "hello-world"
version = "v1"
depends_on = [null_resource.download_chart]
}
Not perfect, but it works.
The problem is that you are using the wrong URL in the Terraform helm_repository. The right URL for ACR looks like this:
https://acrName.azurecr.io/helm/v1/repo
ACR is a private registry, so you need to add the username and password for it. Finally, with version 2.0+ of the helm provider, your Terraform code should look like this:
resource "helm_release" "my-chart" {
name = "my-chart"
chart = "my/chart"
repository = "https://${var.acr_name}.azurecr.io/helm/v1/repo"
repository_username = var.acr_user_name
repository_password = var.acr_user_password
}
Or with 1.x helm provider:
data "helm_repository" "cluster_rbac_helm_chart_repo" {
name = "mcp-rbac-cluster"
url = "https://mcpshareddcr.azurecr.io/helm/v1/repo"
username = "xxxxx"
password = "xxxxx"
}
# Deploy Cluster RBAC helm chart onto the cluster
resource "helm_release" "cluster_rbac_helm_chart_release" {
name = "mcp-rbac-cluster"
repository = data.helm_repository.cluster_rbac_helm_chart_repo.metadata[0].name
chart = "mcp-rbac-cluster"
}
Update: I tested this and it deploys the charts into AKS as expected.
A small enhancement to the above solution: include a trigger to force a download of the chart every time. Otherwise it expects you to maintain the local copy of the chart after the first deployment.
resource "null_resource" "download_chart" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = <<-EOT
export HELM_EXPERIMENTAL_OCI=1
helm registry login ${var.registry_fqdn} --username ${var.acr_client_id} --password ${var.acr_client_secret}
helm chart remove ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
helm chart pull ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag}
helm chart export ${var.registry_fqdn}/helm/${var.chart_name}:${var.chart_tag} --destination ./install
EOT
}
}
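To actually install the exported chart, you would still pair this with a helm_release like the one in the earlier answer; a sketch reusing the same (assumed) variables:

resource "helm_release" "chart" {
  name       = var.chart_name
  repository = "./install"
  chart      = var.chart_name
  version    = var.chart_tag
  depends_on = [null_resource.download_chart]
}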
I can use Terraform to deploy a Kubernetes cluster in GKE.
I then set up the Kubernetes provider as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user client, which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now. How do I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP communication with the cluster, which is insecure if it is done through the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR give permissions to the default "client" user.
But you need valid authentication against the GKE cluster provider to run this: oops, circular dependency here.
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea; a sketch follows below.
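Managed from Terraform, a narrowly scoped role and binding for the client user could look roughly like this (resource names and the namespace are illustrative):

resource "kubernetes_role" "deployment_manager" {
  metadata {
    name      = "deployment-manager"
    namespace = "default"
  }

  # Grants only what is needed to manage deployments in this namespace
  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

resource "kubernetes_role_binding" "client_deployment_manager" {
  metadata {
    name      = "client-deployment-manager"
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_manager.metadata[0].name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}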
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Here is an example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}