Create kubernetes secret for docker registry - Terraform - kubernetes

Using kubectl, we can create a Docker registry authentication secret as follows:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
How do I create this secret using Terraform? I saw this link; it has data. In my Terraform flow the Kubernetes cluster is created in Azure, I get the required data from there, and I created something like the below:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "registry-credentials"
}
data = {
docker-server = data.azurerm_container_registry.docker_registry_data.login_server
docker-username = data.azurerm_container_registry.docker_registry_data.admin_username
docker-password = data.azurerm_container_registry.docker_registry_data.admin_password
}
}
It seems this is wrong, as the images are not being pulled. What am I missing here?

If you run the following command:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
it will create a secret like the following:
$ kubectl get secrets regsecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-06-01T18:31:07Z"
  name: regsecret
  namespace: default
  resourceVersion: "42304"
  selfLink: /api/v1/namespaces/default/secrets/regsecret
  uid: 59054483-2789-4dd2-9321-74d911eef610
type: kubernetes.io/dockerconfigjson
If we decode .dockerconfigjson, we get:
{"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my@email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
So how can we do the same using Terraform?
I created a file config.json with the following content:
{"auths":{"${docker-server}":{"username":"${docker-username}","password":"${docker-password}","email":"${docker-email}","auth":"${auth}"}}}
Then, in the main.tf file:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "regsecret"
}
data = {
".dockerconfigjson" = "${data.template_file.docker_config_script.rendered}"
}
type = "kubernetes.io/dockerconfigjson"
}
data "template_file" "docker_config_script" {
template = "${file("${path.module}/config.json")}"
vars = {
docker-username = "${var.docker-username}"
docker-password = "${var.docker-password}"
docker-server = "${var.docker-server}"
docker-email = "${var.docker-email}"
auth = base64encode("${var.docker-username}:${var.docker-password}")
}
}
Then run:
$ terraform apply
This will generate the same secret. Hope it helps.
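If you are on Terraform 0.12 or later, a possible alternative is to skip the template file and build the JSON inline with jsonencode; a minimal sketch, assuming the same variables as above:
locals {
  dockerconfigjson = jsonencode({
    auths = {
      (var.docker-server) = {
        username = var.docker-username
        password = var.docker-password
        email    = var.docker-email
        auth     = base64encode("${var.docker-username}:${var.docker-password}")
      }
    }
  })
}

resource "kubernetes_secret" "docker-registry" {
  metadata {
    name = "regsecret"
  }

  data = {
    ".dockerconfigjson" = local.dockerconfigjson
  }

  type = "kubernetes.io/dockerconfigjson"
}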

I would suggest creating an azurerm_role_assignment to give AKS pull access to the ACR:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = var.service_principal_obj_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update
You can create the service principal in the Azure portal or with the az CLI and then use the client_id, client_secret and object_id in Terraform.
Get the client_id and object_id by running az ad sp list --filter "displayName eq '<name>'". The secret has to be created in the Certificates & secrets tab of the service principal. See this guide: https://pixelrobots.co.uk/2018/11/first-look-at-terraform-and-the-azure-cloud-shell/
Just set all three as variables, e.g. for the object id:
variable "service_principal_obj_id" {
default = "<object-id>"
}
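The client id and secret variables would look the same; for illustration (the names here simply match the AKS block below):
variable "service_principal_app_id" {
  default = "<client-id>"
}

variable "service_principal_password" {
  default = "<client-secret>"
}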
Now use the credentials with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = var.service_principal_app_id
client_secret = var.service_principal_password
}
...
}
And set the object id in the acr as described above.
Alternative
You can create the service principal with Terraform (this only works if you have the necessary permissions): https://www.terraform.io/docs/providers/azuread/r/service_principal.html, combined with a random_password resource:
resource "azuread_application" "aks_sp" {
name = "somename"
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
resource "azuread_service_principal" "aks_sp" {
application_id = azuread_application.aks_sp.application_id
depends_on = [
azuread_application.aks_sp
]
}
resource "azuread_service_principal_password" "aks_sp_pwd" {
service_principal_id = azuread_service_principal.aks_sp.id
value = random_password.aks_sp_pwd.result
end_date = "2099-01-01T01:02:03Z"
depends_on = [
azuread_service_principal.aks_sp
]
}
You need to assign the role "Conributer" to the sp and can use it directly in aks / acr.
resource "azurerm_role_assignment" "aks_sp_role_assignment" {
scope = var.subscription_id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.aks_sp.id
depends_on = [
azuread_service_principal_password.aks_sp_pwd
]
}
Use them with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = azuread_service_principal.aks_sp.app_id
client_secret = azuread_service_principal_password.aks_sp_pwd.value
}
...
}
and the role assignment:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.aks_sp.object_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
The random_password resource referenced above:
resource "random_password" "aks_sp_pwd" {
length = 32
special = true
}

Related

Authenticate to K8s cluster through AWS Lambda

We are using the following to authenticate:
import base64
import boto3
import string
import random
from botocore.signers import RequestSigner

class EKSAuth(object):
    METHOD = 'GET'
    EXPIRES = 60
    EKS_HEADER = 'x-k8s-aws-id'
    EKS_PREFIX = 'k8s-aws-v1.'
    STS_URL = 'sts.amazonaws.com'
    STS_ACTION = 'Action=GetCallerIdentity&Version=2011-06-15'

    def __init__(self, cluster_id, region='us-east-1'):
        self.cluster_id = cluster_id
        self.region = region

    def get_token(self):
        """
        Return bearer token
        """
        session = boto3.session.Session()
        # Get ServiceID required by class RequestSigner
        client = session.client("sts", region_name=self.region)
        service_id = client.meta.service_model.service_id
        signer = RequestSigner(
            service_id,
            session.region_name,
            'sts',
            'v4',
            session.get_credentials(),
            session.events
        )
        params = {
            'method': self.METHOD,
            'url': 'https://' + self.STS_URL + '/?' + self.STS_ACTION,
            'body': {},
            'headers': {
                self.EKS_HEADER: self.cluster_id
            },
            'context': {}
        }
        signed_url = signer.generate_presigned_url(
            params,
            region_name=session.region_name,
            expires_in=self.EXPIRES,
            operation_name=''
        )
        return (
            self.EKS_PREFIX +
            base64.urlsafe_b64encode(
                signed_url.encode('utf-8')
            ).decode('utf-8')
        )
And then we call it like this:
import os
import boto3
import yaml
from kubernetes import client, config
import auth  # the module containing the EKSAuth class above

KUBE_FILEPATH = '/tmp/kubeconfig'
CLUSTER_NAME = 'cluster'
REGION = 'us-east-2'

if not os.path.exists(KUBE_FILEPATH):
    kube_content = dict()
    # Get data from EKS API
    eks_api = boto3.client('eks', region_name=REGION)
    cluster_info = eks_api.describe_cluster(name=CLUSTER_NAME)
    certificate = cluster_info['cluster']['certificateAuthority']['data']
    endpoint = cluster_info['cluster']['endpoint']
    kube_content['apiVersion'] = 'v1'
    kube_content['clusters'] = [
        {
            'cluster': {
                'server': endpoint,
                'certificate-authority-data': certificate
            },
            'name': 'kubernetes'
        }]
    kube_content['contexts'] = [
        {
            'context': {
                'cluster': 'kubernetes',
                'user': 'aws'
            },
            'name': 'aws'
        }]
    kube_content['current-context'] = 'aws'
    kube_content['Kind'] = 'config'
    kube_content['users'] = [
        {
            'name': 'aws',
            'user': 'lambda'
        }]
    # Write kubeconfig
    with open(KUBE_FILEPATH, 'w') as outfile:
        yaml.dump(kube_content, outfile, default_flow_style=False)

# Get token
eks = auth.EKSAuth(CLUSTER_NAME)
token = eks.get_token()
print("Token here:")
print(token)

# Configure
config.load_kube_config(KUBE_FILEPATH)
configuration = client.Configuration()
configuration.api_key['authorization'] = token
configuration.api_key_prefix['authorization'] = 'Bearer'

# API
api = client.ApiClient(configuration)
v1 = client.CoreV1Api(api)
print("THIS IS GETTING 401!!")
ret = v1.list_namespaced_pod(namespace='default')
However, this is failing in the Lambda with:
[ERROR] ApiException: (401) Reason: Unauthorized
Is there some way I have to generate ~/.aws/credentials or a config file? I believe this might be why it is not able to authenticate.
Your EKSAuth class works; I just checked it with my cluster.
Here is a working (simpler) snippet to replace the second one:
import base64
import tempfile
import kubernetes
import boto3
from auth import EKSAuth
cluster_name = "my-cluster"
# Details from EKS
eks_client = boto3.client('eks')
eks_details = eks_client.describe_cluster(name=cluster_name)['cluster']
# Saving the CA cert to a temp file (working around the Kubernetes client limitations)
fp = tempfile.NamedTemporaryFile(delete=False)
ca_filename = fp.name
cert_bs = base64.urlsafe_b64decode(eks_details['certificateAuthority']['data'].encode('utf-8'))
fp.write(cert_bs)
fp.close()
# Token for the EKS cluster
eks_auth = EKSAuth(cluster_name)
token = eks_auth.get_token()
# Kubernetes client config
conf = kubernetes.client.Configuration()
conf.host = eks_details['endpoint']
conf.api_key['authorization'] = token
conf.api_key_prefix['authorization'] = 'Bearer'
conf.ssl_ca_cert = ca_filename
k8s_client = kubernetes.client.ApiClient(conf)
# Doing something with the client
v1 = kubernetes.client.CoreV1Api(k8s_client)
print(v1.list_pod_for_all_namespaces())
* Most of the code is taken from here.
You also have to make sure you've granted permissions inside the EKS cluster to the IAM role your Lambda runs with.
For that, run:
kubectl edit -n kube-system configmap/aws-auth
Add these lines under mapRoles. rolearn is the ARN of your role; username is the name you want to give to that role inside the k8s cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Add these lines ################################
    - rolearn: arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
      username: my-lambda-mapped-user
    ##################################################
And create a clusterrolebinding or rolebinding to grant this user permissions inside the cluster.
kubectl create clusterrolebinding --clusterrole cluster-admin --user my-lambda-mapped-user my-clusterrolebinding
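If you manage cluster RBAC with Terraform (as elsewhere on this page), a rough equivalent would be a kubernetes_cluster_role_binding resource; the binding name and user below are just the example values from above:
resource "kubernetes_cluster_role_binding" "lambda_admin" {
  metadata {
    name = "my-clusterrolebinding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = "my-lambda-mapped-user"
  }
}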

ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner

I read through the Karpenter documentation at https://karpenter.sh/v0.16.1/getting-started/getting-started-with-terraform/#install-karpenter-helm-chart and followed the instructions step by step, but I got errors at the end:
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z
ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
Below is the source code:
cat main.tf
terraform {
required_version = "~> 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.5"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "~> 1.14"
}
}
}
provider "aws" {
region = "us-east-1"
}
locals {
cluster_name = "karpenter-demo"
# Used to determine correct partition (i.e. - `aws`, `aws-gov`, `aws-cn`, etc.)
partition = data.aws_partition.current.partition
}
data "aws_partition" "current" {}
module "vpc" {
# https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest
source = "terraform-aws-modules/vpc/aws"
version = "3.14.4"
name = local.cluster_name
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
# https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest
source = "terraform-aws-modules/eks/aws"
version = "18.29.0"
cluster_name = local.cluster_name
cluster_version = "1.22"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
# Required for Karpenter role below
enable_irsa = true
node_security_group_additional_rules = {
ingress_nodes_karpenter_port = {
description = "Cluster API to Node group for Karpenter webhook"
protocol = "tcp"
from_port = 8443
to_port = 8443
type = "ingress"
source_cluster_security_group = true
}
}
node_security_group_tags = {
# NOTE - if creating multiple security groups with this module, only tag the
# security group that Karpenter should utilize with the following tag
# (i.e. - at most, only one security group should have this tag in your account)
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
# Only need one node to get Karpenter up and running.
# This ensures core services such as VPC CNI, CoreDNS, etc. are up and running
# so that Karpenter can be deployed and start managing compute capacity as required
eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]
# Not required nor used - avoid tagging two security groups with same tag as well
create_security_group = false
min_size = 1
max_size = 1
desired_size = 1
iam_role_additional_policies = [
"arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore", # Required by Karpenter
"arn:${local.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy",
"arn:${local.partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly", #for access to ECR images
"arn:${local.partition}:iam::aws:policy/CloudWatchAgentServerPolicy"
]
tags = {
# This will tag the launch template created for use by Karpenter
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
}
}
}
#The EKS module creates an IAM role for the EKS managed node group nodes. We’ll use that for Karpenter.
#We need to create an instance profile we can reference.
#Karpenter can use this instance profile to launch new EC2 instances and those instances will be able to connect to your cluster.
resource "aws_iam_instance_profile" "karpenter" {
name = "KarpenterNodeInstanceProfile-${local.cluster_name}"
role = module.eks.eks_managed_node_groups["initial"].iam_role_name
}
#Create the KarpenterController IAM Role
#Karpenter requires permissions like launching instances, which means it needs an IAM role that grants it access. The config
#below will create an AWS IAM Role, attach a policy, and authorize the Service Account to assume the role using IRSA. We will
#create the ServiceAccount and connect it to this role during the Helm chart install.
module "karpenter_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "5.3.3"
role_name = "karpenter-controller-${local.cluster_name}"
attach_karpenter_controller_policy = true
karpenter_tag_key = "karpenter.sh/discovery/${local.cluster_name}"
karpenter_controller_cluster_id = module.eks.cluster_id
karpenter_controller_node_iam_role_arns = [
module.eks.eks_managed_node_groups["initial"].iam_role_arn
]
oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["karpenter:karpenter"]
}
}
}
#Install Karpenter Helm Chart
#Use helm to deploy Karpenter to the cluster. We are going to use the helm_release Terraform resource to do the deploy and pass in the
#cluster details and IAM role Karpenter needs to assume.
provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", local.cluster_name]
}
}
}
resource "helm_release" "karpenter" {
namespace = "karpenter"
create_namespace = true
name = "karpenter"
repository = "https://charts.karpenter.sh"
chart = "karpenter"
version = "v0.16.1"
set {
name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value = module.karpenter_irsa.iam_role_arn
}
set {
name = "clusterName"
value = module.eks.cluster_id
}
set {
name = "clusterEndpoint"
value = module.eks.cluster_endpoint
}
set {
name = "aws.defaultInstanceProfile"
value = aws_iam_instance_profile.karpenter.name
}
}
#Provisioner
#Create a default provisioner using the command below. This provisioner configures instances to connect to your cluster’s endpoint and
#discovers resources like subnets and security groups using the cluster’s name.
#This provisioner will create capacity as long as the sum of all created capacity is less than the specified limit.
provider "kubectl" {
apply_retry_count = 5
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
load_config_file = false
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
}
}
resource "kubectl_manifest" "karpenter_provisioner" {
yaml_body = <<-YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
name: default
spec:
requirements:
- key: karpenter.sh/capacity-type
operator: In
values: ["spot"]
limits:
resources:
cpu: 1000
provider:
subnetSelector:
Name: "*private*"
securityGroupSelector:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
tags:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
ttlSecondsAfterEmpty: 30
YAML
depends_on = [
helm_release.karpenter
]
}
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
          resources:
            requests:
              cpu: 1
EOF
kubectl scale deployment inflate --replicas 5
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z
ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
I believe this is due to the pod topology spread constraints defined in the Karpenter deployment here:
https://github.com/aws/karpenter/blob/main/charts/karpenter/values.yaml#L73-L77
You can read further on what pod topologySpreadConstraints does here:
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
If you increase the desired_size to 2, which matches the default replica count of the deployment above, that should resolve the error.
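For example, a minimal sketch of that change in the eks_managed_node_groups block from the question (only the size fields change):
eks_managed_node_groups = {
  initial = {
    instance_types = ["m5.large"]

    min_size     = 2
    max_size     = 2
    desired_size = 2
  }
}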

Create an identity mapping for EKS with Terraform

I currently provision my EKS clusters using eksctl, and I want to use Terraform to provision them instead. I am using the Terraform EKS module to create the cluster. I have used eksctl to create an identity mapping with the following command:
eksctl create iamidentitymapping --region us-east-1 --cluster stage-cluster --arn arn:aws:iam::111222333444:role/developer --username dev-service
I want to convert this command to Terraform with the following, but it is not the best way:
resource "null_resource" "eks-identity-mapping" {
depends_on = [
module.eks,
aws_iam_policy_attachment.eks-policy-attachment
]
provisioner "local-exec" {
command = <<EOF
eksctl create iamidentitymapping \
--cluster ${var.eks_cluster_name} \
--arn ${data.aws_iam_role.mwaa_role.arn} \
--username ${var.mwaa_username} \
--profile ${var.aws_profile} \
--region ${var.mwaa_aws_region}
EOF
}
}
How can I use the Kubernetes provider to achieve this?
I haven't found an exact match for this particular command, but you can achieve something similar by setting the aws-auth config map in Kubernetes, adding all of the users/roles and their access rights in one go.
For example, we use something like the following to supply the list of admins to our cluster:
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = <<CONFIGMAPAWSAUTH
- rolearn: ${var.k8s-node-iam-arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111222333444:role/developer
username: dev-service
groups:
- system:masters
CONFIGMAPAWSAUTH
}
}
Note that this config map contains all of the role mappings, so you should make sure var.k8s-node-iam-arn is set to the superuser of the cluster; otherwise you can get locked out. You also have to decide what access each of these roles gets.
You can also add specific IAM users instead of roles:
- userarn: arn:aws:iam::1234:user/user.first
  username: user.first
  groups:
    - system:masters
You can do this directly in the eks module.
You create the list of roles you want to add, e.g.:
locals {
  aws_auth_roles = [
    {
      rolearn  = data.aws_iam_role.mwaa_role.arn
      username = var.mwaa_username
      groups = [
        "system:masters"
      ]
    },
  ]
}
and then in the module, you add:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
[...]
# aws-auth configmap
manage_aws_auth_configmap = true
aws_auth_roles = local.aws_auth_roles
[...]
}
Note: In older versions of terraform-aws-modules/eks/aws (14, 17) these inputs were called map_users and map_roles; in version 19 they are manage_aws_auth_configmap, aws_auth_users and aws_auth_roles.
See the documentation here: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/19.7.0#input_manage_aws_auth_configmap
UPDATE: For this to work and not fail with an error like Error: The configmap "aws-auth" does not exist, you also need to add this authentication part:
data "aws_eks_cluster_auth" "default" {
name = local.cluster_name
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}
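For completeness, individual IAM users can be mapped the same way through the module's aws_auth_users input; a minimal sketch (the ARN and username are just placeholders):
locals {
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::111222333444:user/user.first"
      username = "user.first"
      groups   = ["system:masters"]
    },
  ]
}

module "eks" {
  # ...
  manage_aws_auth_configmap = true
  aws_auth_users            = local.aws_auth_users
}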

override config file from helm chart with terraform

I'm trying to deploy Argo CD in my k8s cluster using the Helm chart for Argo CD, and I deploy everything with Terraform. Now I want to change the Argo CD config so that it can connect to my private repo. It works when I manually change the config map with kubectl after Argo CD is running in my cluster, but when I try to do it with Terraform, I get the message Error: configmaps "argocd-cm" already exists, meaning that I cannot overwrite the config map created by Argo CD. How do I change these settings?
terraform
resource "kubernetes_namespace" "argocd" {
metadata {
name = "argocd"
}
}
resource "kubernetes_secret" "argocd_registry_secret" {
metadata {
name = "argocd-repo-credentials"
namespace = "argocd"
}
data = {
username = "USERNAME"
password = "PASSWORD"
}
}
data "helm_repository" "argoproj" {
name = "argoproj"
url = "https://argoproj.github.io/argo-helm"
}
resource "helm_release" "argocd" {
name = "argocd"
chart = "argoproj/argo-cd"
version = "2.3.5"
namespace = kubernetes_namespace.argocd.metadata[0].name
timeout = 600
}
resource "kubernetes_config_map" "argocd-cm" {
depends_on = [helm_release.argocd]
metadata {
name = "argocd-cm"
namespace = "argocd"
}
data = {
config = file("${path.module}/configs/ingress/argo-configmap.yaml")
}
}
Instead of name, use generate_name in kubernetes_config_map:
generate_name - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the name field has not been provided. This value will also be combined with a unique suffix.
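A minimal sketch of that, reusing the config map from the question (generate_name only avoids the name collision; the resulting name gets a random suffix):
resource "kubernetes_config_map" "argocd-cm" {
  metadata {
    generate_name = "argocd-cm-"
    namespace     = "argocd"
  }

  data = {
    config = file("${path.module}/configs/ingress/argo-configmap.yaml")
  }
}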
You can add a private repo through the Argo CD Helm chart; add this to the argocd helm_release resource in your TF file:
set {
  name  = "server.config.repositories"
  value = "${file("${path.module}/repositories.yml")}"
}
where repositories.yml is:
- url: ssh://abc@def.com/my-repo.git
  sshPrivateKeySecret:
    name: argo-cd-stash-key
    key: ssh-privatekey

How can I configure an AWS EKS autoscaler with Terraform?

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks) and following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro.
However, this does not seem to have autoscaling enabled; it seems to be missing the cluster-autoscaler pod/daemon.
Is Terraform able to provision this functionality, or do I need to set it up following a guide like https://eksworkshop.com/scaling/deploy_ca/?
You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.
data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
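Note that this snippet references data.aws_region.current_region, which is not declared above; you would also need something like:
data "aws_region" "current_region" {}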
This answer is still not complete, but at least it gets me partially further...
1.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
replace the following:
path: /etc/ssl/certs/ca-bundle.crt
With
path: /etc/ssl/certs/ca-certificates.crt
2.
In the AWS console, give AdministratorAccess policy to the terraform-eks-demo-node role.
3.
Update the --nodes parameter (kubectl edit deployments my-release-aws-cluster-autoscaler) to:
- --nodes=1:10:terraform-eks-demo20190922124246790200000007