Create an identity mapping for EKS with Terraform - kubernetes

I am currently provisioning my EKS clusters using eksctl and I want to use Terraform to provision them instead. I am using the Terraform EKS module to create the cluster. I have used eksctl to create an identity mapping with the following command:
eksctl create iamidentitymapping --region us-east-1 --cluster stage-cluster --arn arn:aws:iam::111222333444:role/developer --username dev-service
I want to convert this command to Terraform with the following, but it does not feel like the best way:
resource "null_resource" "eks-identity-mapping" {
depends_on = [
module.eks,
aws_iam_policy_attachment.eks-policy-attachment
]
provisioner "local-exec" {
command = <<EOF
eksctl create iamidentitymapping \
--cluster ${var.eks_cluster_name} \
--arn ${data.aws_iam_role.mwaa_role.arn} \
--username ${var.mwaa_username} \
--profile ${var.aws_profile} \
--region ${var.mwaa_aws_region}
EOF
}
}
How can I use the Kubernetes provider to achieve this?

I haven't found a direct equivalent for this particular command, but you can achieve something similar by setting the aws-auth ConfigMap in Kubernetes, adding all of the users/roles and their access rights in one go.
For example, we use something like the following to supply the list of admins to our cluster:
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = <<CONFIGMAPAWSAUTH
- rolearn: ${var.k8s-node-iam-arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111222333444:role/developer
username: dev-service
groups:
- system:masters
CONFIGMAPAWSAUTH
}
}
Note that this ConfigMap contains all of the role mappings, so you should make sure var.k8s-node-iam-arn is set to the superuser of the cluster, otherwise you can get locked out. You also have to specify what access each of these roles gets.
You can also add specific IAM users instead of roles (see the sketch after this snippet for how mapUsers fits into the same ConfigMap):
- userarn: arn:aws:iam::1234:user/user.first
  username: user.first
  groups:
    - system:masters
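For illustration only, here is a minimal sketch of how the resource above could be extended with a mapUsers key next to mapRoles; the resource and variable names simply mirror the example above and the user/role entries are placeholders:
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Role mappings as shown above (node role, developer role, ...)
    mapRoles = <<MAPROLES
- rolearn: ${var.k8s-node-iam-arn}
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
MAPROLES

    # IAM user mappings live under a separate mapUsers key
    mapUsers = <<MAPUSERS
- userarn: arn:aws:iam::1234:user/user.first
  username: user.first
  groups:
    - system:masters
MAPUSERS
  }
}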

You can do this directly in the eks module.
You create the list of roles you want to add, e.g.:
locals {
  aws_auth_roles = [
    {
      rolearn  = data.aws_iam_role.mwaa_role.arn
      username = var.mwaa_username
      groups = [
        "system:masters"
      ]
    },
  ]
}
and then in the module, you add:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
[...]
# aws-auth configmap
manage_aws_auth_configmap = true
aws_auth_roles = local.aws_auth_roles
[...]
}
Note: in older versions of "terraform-aws-modules/eks/aws" (14, 17) these inputs were called map_users and map_roles; in version 19 they are manage_aws_auth_configmap, aws_auth_users and aws_auth_roles.
See the documentation here: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/19.7.0#input_manage_aws_auth_configmap
UPDATE: for this to work and not fail with an error like Error: The configmap "aws-auth" does not exist, you also need to add this authentication part:
data "aws_eks_cluster_auth" "default" {
name = local.cluster_name
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}
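As an aside, not part of the answer above: the kubernetes provider also supports an exec block that fetches a fresh token via the AWS CLI on every call, which avoids the token from aws_eks_cluster_auth expiring during long applies. A minimal sketch, assuming the same module outputs as above and that the aws CLI is available where Terraform runs:
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  # Generate a token at call time instead of baking one into the plan
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.eks_cluster_name]
  }
}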


Terraform kubectl provider error: failed to created kubernetes rest client for read of resource

I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the kubectl provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers.
This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the kubectl_manifest resources:
Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused
Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!
The provider is trying to connect to localhost, which means you either need to provide a proper kube-config file or set the credentials dynamically in Terraform.
You didn't mention how you are setting the auth, but here are two ways.
Poor way
resource "null_resource" "deploy-app" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
kubectl apply -f myapp.yaml ./temp/kube-config.yaml;
EOT
}
# will run always, its bad
triggers = {
always_run = "${timestamp()}"
}
depends_on = [
local_file.kube_config
]
}
resource "local_file" "kube_config" {
content = var.my_kube_config # pass the config file from ci variable
filename = "${path.module}/temp/kube-config.yaml"
}
Proper way
data "google_container_cluster" "cluster" {
name = "your_cluster_name"
}
data "google_client_config" "current" {
}
provider "kubernetes" {
host = data.google_container_cluster.cluster.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(
data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
)
}
data "kubectl_file_documents" "app_yaml" {
content = file("myapp.yaml")
}
resource "kubectl_manifest" "app_installer" {
for_each = data.kubectl_file_documents.app_yaml.manifests
yaml_body = each.value
}
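Since the failing resources are kubectl_manifest, the kubectl provider itself also needs credentials, not just the kubernetes provider. A minimal sketch reusing the same data sources as the block above (the attribute names simply mirror it, and the later answer below confirms the same shape):
provider "kubectl" {
  host  = data.google_container_cluster.cluster.endpoint
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  )
  load_config_file = false
}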
If the cluster is in the same module, then the provider should be:
provider "kubernetes" {
load_config_file = "false"
host = google_container_cluster.my_cluster.endpoint
client_certificate = google_container_cluster.my_cluster.master_auth.0.client_certificate
client_key = google_container_cluster.my_cluster.master_auth.0.client_key
cluster_ca_certificate = google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate
}
Fixed the issue by adding load_config_file = false to the kubectl provider config. My provider config now looks like this:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
}
provider "kubectl" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
load_config_file = false
}

Create kubernetes secret for docker registry - Terraform

Using kubectl, we can create a Docker registry authentication secret as follows:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
How do I create this secret using Terraform? I saw this link; it has data. In my Terraform flow, the Kubernetes instance is created in Azure, I get the required data from there, and I created something like the below:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "registry-credentials"
}
data = {
docker-server = data.azurerm_container_registry.docker_registry_data.login_server
docker-username = data.azurerm_container_registry.docker_registry_data.admin_username
docker-password = data.azurerm_container_registry.docker_registry_data.admin_password
}
}
It seems that this is wrong, as the images are not being pulled. What am I missing here?
If you run the following command:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
it will create a secret like the following:
$ kubectl get secrets regsecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-06-01T18:31:07Z"
  name: regsecret
  namespace: default
  resourceVersion: "42304"
  selfLink: /api/v1/namespaces/default/secrets/regsecret
  uid: 59054483-2789-4dd2-9321-74d911eef610
type: kubernetes.io/dockerconfigjson
If we decode .dockerconfigjson we get:
{"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my@email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
So, how can we do that using Terraform?
I created a file config.json with the following data:
{"auths":{"${docker-server}":{"username":"${docker-username}","password":"${docker-password}","email":"${docker-email}","auth":"${auth}"}}}
Then in the main.tf file:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "regsecret"
}
data = {
".dockerconfigjson" = "${data.template_file.docker_config_script.rendered}"
}
type = "kubernetes.io/dockerconfigjson"
}
data "template_file" "docker_config_script" {
template = "${file("${path.module}/config.json")}"
vars = {
docker-username = "${var.docker-username}"
docker-password = "${var.docker-password}"
docker-server = "${var.docker-server}"
docker-email = "${var.docker-email}"
auth = base64encode("${var.docker-username}:${var.docker-password}")
}
}
Then run:
$ terraform apply
This will generate the same secret. Hope it helps.
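Not part of the original answer, but on current Terraform versions you can skip the template_file data source (its provider is no longer maintained) and build the JSON inline with the built-in jsonencode function. A minimal sketch, assuming the same variables as above:
resource "kubernetes_secret" "docker-registry" {
  metadata {
    name = "regsecret"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    # jsonencode builds the same structure the template file produced
    ".dockerconfigjson" = jsonencode({
      auths = {
        (var.docker-server) = {
          username = var.docker-username
          password = var.docker-password
          email    = var.docker-email
          auth     = base64encode("${var.docker-username}:${var.docker-password}")
        }
      }
    })
  }
}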
I would suggest creating an azurerm_role_assignment to give AKS access to the ACR:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = var.service_principal_obj_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update
You can create the service principal in the Azure portal or with the az CLI and use the client_id, client_secret and object_id in Terraform.
Get the client_id and object_id by running az ad sp list --filter "displayName eq '<name>'". The secret has to be created in the Certificates & secrets tab of the service principal. See this guide: https://pixelrobots.co.uk/2018/11/first-look-at-terraform-and-the-azure-cloud-shell/
Just set all three as variables, e.g. for the object id:
variable "service_principal_obj_id" {
default = "<object-id>"
}
Now use the credentials with AKS:
resource "azurerm_kubernetes_cluster" "aks" {
  ...
  service_principal {
    client_id     = var.service_principal_app_id
    client_secret = var.service_principal_password
  }
  ...
}
And set the object id in the ACR role assignment as described above.
Alternative
You can create the service principal with Terraform (this only works if you have the necessary permissions): https://www.terraform.io/docs/providers/azuread/r/service_principal.html, combined with a random_password resource:
resource "azuread_application" "aks_sp" {
name = "somename"
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
resource "azuread_service_principal" "aks_sp" {
application_id = azuread_application.aks_sp.application_id
depends_on = [
azuread_application.aks_sp
]
}
resource "azuread_service_principal_password" "aks_sp_pwd" {
service_principal_id = azuread_service_principal.aks_sp.id
value = random_password.aks_sp_pwd.result
end_date = "2099-01-01T01:02:03Z"
depends_on = [
azuread_service_principal.aks_sp
]
}
You need to assign the "Contributor" role to the service principal and can then use it directly in AKS / ACR.
resource "azurerm_role_assignment" "aks_sp_role_assignment" {
scope = var.subscription_id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.aks_sp.id
depends_on = [
azuread_service_principal_password.aks_sp_pwd
]
}
Use them with AKS:
resource "azurerm_kubernetes_cluster" "aks" {
  ...
  service_principal {
    client_id     = azuread_service_principal.aks_sp.application_id
    client_secret = azuread_service_principal_password.aks_sp_pwd.value
  }
  ...
}
and the role assignment:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.aks_sp.object_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update: the random_password resource used for the service principal secret:
resource "random_password" "aks_sp_pwd" {
length = 32
special = true
}

How to Create Terraform code for GKE cluster

We can create/see the gcloud command from the GKE cluster creation UI. Is there a way to convert that command into Terraform code for a GKE cluster? Thanks.
You can create a GKE cluster using Terraform. There is an official guide; following it, you write the .tf file and apply it:
resource "google_container_cluster" "primary" {
name = "my-gke-cluster"
location = "us-central1"
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
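Not part of the original answer, but a common companion pattern from the provider docs is to manage the nodes in a separate node pool resource (optionally setting remove_default_node_pool = true and initial_node_count = 1 on the cluster above). A minimal sketch; the pool name and machine type are placeholders:
resource "google_container_node_pool" "primary_nodes" {
  name       = "my-node-pool" # placeholder name
  location   = "us-central1"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    machine_type = "e2-medium" # placeholder machine type
  }
}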
This is a useful tutorial.

How can I configure an AWS EKS autoscaler with Terraform?

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks). I'm following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro
However, this does not seem to have autoscaling enabled... it seems to be missing the cluster-autoscaler pod / daemon?
Is Terraform able to provision this functionality? Or do I need to set it up following a guide like https://eksworkshop.com/scaling/deploy_ca/ ?
You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.
data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
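One caveat worth adding: the autoDiscovery values above only work if the node Auto Scaling Groups carry the discovery tags the cluster-autoscaler searches for. A hedged sketch of tagging them from Terraform; var.node_asg_name is hypothetical and should be replaced with however your EKS module exposes the node group ASG name:
locals {
  # Tags the cluster-autoscaler's auto-discovery mode looks for
  autoscaler_tags = {
    "k8s.io/cluster-autoscaler/enabled"             = "true"
    "k8s.io/cluster-autoscaler/${var.cluster_name}" = "owned"
  }
}

resource "aws_autoscaling_group_tag" "cluster_autoscaler" {
  for_each = local.autoscaler_tags

  # Hypothetical input: the name of the node Auto Scaling Group
  autoscaling_group_name = var.node_asg_name

  tag {
    key                 = each.key
    value               = each.value
    propagate_at_launch = false
  }
}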
This answer is still not complete... but at least it gets me partially further:
1.
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
replace the following:
path: /etc/ssl/certs/ca-bundle.crt
With
path: /etc/ssl/certs/ca-certificates.crt
2.
In the AWS console, give the AdministratorAccess policy to the terraform-eks-demo-node role.
3.
Update the --nodes parameter (kubectl edit deployments my-release-aws-cluster-autoscaler):
- --nodes=1:10:terraform-eks-demo20190922124246790200000007

Managing GKE and its deployments with Terraform

I can use Terraform to deploy a Kubernetes cluster in GKE.
I have then set up the provider for Kubernetes as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the "client" user, which has no power to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how to proceed now. How can I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP communication with the cluster, which is insecure if it is done through the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there a better way of doing this?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR
give permissions to the default "client" user.
But you need valid authentication on the GKE cluster provider to run this, which creates a circular dependency here:
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user that you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea; a sketch of how that could look in Terraform is below.
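For illustration only (not from the original answer): a namespaced Role and RoleBinding that grant the "client" user the deployment verbs, expressed with the Terraform kubernetes provider. The resource names, namespace and verb list are assumptions you would adjust:
resource "kubernetes_role" "deployment_editor" {
  metadata {
    name      = "deployment-editor" # hypothetical name
    namespace = "default"
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

resource "kubernetes_role_binding" "client_deployment_editor" {
  metadata {
    name      = "client-deployment-editor" # hypothetical name
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_editor.metadata[0].name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}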
You need to provide both. Check this example of how to integrate the Kubernetes provider with the Google provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}