Granting RBAC roles in k8s cluster using terraform - kubernetes

I want to assign an RBAC rule to a user with Terraform, granting access to all resources except the 'create' and 'delete' verbs on the 'namespace' resource.
Currently we have the rule below:
rule {
  api_groups = ["*"]
  resources  = ["*"]
  verbs      = ["*"]
}

As we can find in the Role and ClusterRole documentation, permissions (rules) are purely additive - there are no "deny" rules:
Role and ClusterRole
An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules).
The list of possible verbs can be found in the Kubernetes RBAC documentation.
You need to provide all verbs that should be applied to the resources contained in the rule.
Instead of:
verbs = ["*"]
provide only the required verbs, e.g.:
verbs = ["get", "list", "patch", "update", "watch"]
As an example, I've created an example-role Role and an example_role_binding RoleBinding.
The example_role_binding RoleBinding grants the permissions defined in the example-role Role to user john.
NOTE: For details on using the following resources, see the kubernetes_role and kubernetes_role_binding resource documentation.
resource "kubernetes_role" "example_role" {
metadata {
name = "example-role"
namespace = "default"
}
rule {
api_groups = ["*"]
resources = ["*"]
verbs = ["get", "list", "patch", "update", "watch"]
}
}
resource "kubernetes_role_binding" "example_role_binding" {
metadata {
name = "example_role_binding"
namespace = "default"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = "example-role"
}
subject {
kind = "User"
name = "john"
api_group = "rbac.authorization.k8s.io"
}
}
Additionally, I've created the test_user.sh Bash script to quickly check if it works as expected:
NOTE: You may need to modify the variables namespace, resources, and user to fit your needs.
$ cat test_user.sh
#!/bin/bash
namespace=default
resources="pods deployments"
user=john

echo "=== NAMESPACE: ${namespace} ==="
for verb in create delete get list patch update watch; do
  echo "-- ${verb} --"
  for resource in ${resources}; do
    echo -n "${resource}: "
    kubectl auth can-i ${verb} ${resource} -n ${namespace} --as=${user}
  done
done
$ ./test_user.sh
=== NAMESPACE: default ===
-- create --
pods: no
deployments: no
-- delete --
pods: no
deployments: no
-- get --
pods: yes
deployments: yes
-- list --
pods: yes
deployments: yes
...

Related

Create an identity mapping for EKS with Terraform

I currently provision my EKS cluster(s) using eksctl and I want to use Terraform to provision them instead. I am using the Terraform EKS module to create the cluster. I have used eksctl to create an identity mapping with the following command:
eksctl create iamidentitymapping --region us-east-1 --cluster stage-cluster --arn arn:aws:iam::111222333444:role/developer --username dev-service
I want to convert this command to Terraform with the following, but it does not feel like the best way:
resource "null_resource" "eks-identity-mapping" {
depends_on = [
module.eks,
aws_iam_policy_attachment.eks-policy-attachment
]
provisioner "local-exec" {
command = <<EOF
eksctl create iamidentitymapping \
--cluster ${var.eks_cluster_name} \
--arn ${data.aws_iam_role.mwaa_role.arn} \
--username ${var.mwaa_username} \
--profile ${var.aws_profile} \
--region ${var.mwaa_aws_region}
EOF
}
}
How can I use the Kubernetes provider to achieve this?
I haven't found an exact match for this particular command, but you can achieve something similar by setting the aws-auth ConfigMap in Kubernetes, adding all of the users/roles and their access rights in one go.
For example, we use something like the following to supply the list of admins to our cluster:
resource "kubernetes_config_map" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = <<CONFIGMAPAWSAUTH
- rolearn: ${var.k8s-node-iam-arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111222333444:role/developer
username: dev-service
groups:
- system:masters
CONFIGMAPAWSAUTH
}
}
Note that this ConfigMap contains all of the role mappings, so you should make sure var.k8s-node-iam-arn is set to the superuser role of the cluster, otherwise you can get locked out. You also have to set what access each of these roles gets.
You can also add specific IAM users instead of roles as well:
- userarn: arn:aws:iam::1234:user/user.first
  username: user.first
  groups:
    - system:masters
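In Terraform that could look roughly like the sketch below, with a mapUsers entry alongside the mapRoles heredoc shown above (the user ARN is just an example, and yamlencode is used here instead of a heredoc):
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # mapRoles would be supplied here as well, as shown above
    mapUsers = yamlencode([
      {
        userarn  = "arn:aws:iam::1234:user/user.first"  # example user ARN
        username = "user.first"
        groups   = ["system:masters"]
      },
    ])
  }
}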
You can do this directly in the eks module.
You create the list of roles you want to add, e.g.:
locals {
  aws_auth_roles = [
    {
      rolearn  = data.aws_iam_role.mwaa_role.arn
      username = var.mwaa_username
      groups = [
        "system:masters"
      ]
    },
  ]
}
and then in the module, you add:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
[...]
# aws-auth configmap
manage_aws_auth_configmap = true
aws_auth_roles = local.aws_auth_roles
[...]
}
Note: in older versions of "terraform-aws-modules/eks/aws" (14, 17) the inputs were map_users and map_roles; in version 19 they are manage_aws_auth_configmap together with aws_auth_users and aws_auth_roles.
See the documentation here: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/19.7.0#input_manage_aws_auth_configmap
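For IAM users the newer module versions take an analogous aws_auth_users input; a rough sketch (the user ARN is an example):
locals {
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::1234:user/user.first"  # example user ARN
      username = "user.first"
      groups   = ["system:masters"]
    },
  ]
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"

  # ... other cluster settings as above ...

  manage_aws_auth_configmap = true
  aws_auth_users            = local.aws_auth_users
}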
UPDATE: for this to work and not fail with an error like Error: The configmap "aws-auth" does not exist, you also need to add this authentication part:
data "aws_eks_cluster_auth" "default" {
name = local.cluster_name
}
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.default.token
}

Scheduled restarts on kubernetes using terraform

I run a Kubernetes cluster on AWS managed by Terraform. I'd like to automatically restart the pods in the cluster at some regular interval, maybe weekly. Since the entire cluster is managed by Terraform, I'd like to run the auto-restart command through Terraform as well.
At first I assumed that Kubernetes would have some kind of TTL for its pods, but that does not seem to be the case.
Elsewhere on SO I've seen the ability to run auto restarts using a cron job managed by Kubernetes (e.g. How to schedule pods restart). Terraform has a relevant resource -- the kubernetes_cron_job -- but I can't fully understand how to set it up with the permissions necessary to actually run.
Would appreciate some feedback!
Below is what I've tried:
resource "kubernetes_cron_job" "deployment_restart" {
metadata {
name = "deployment-restart"
}
spec {
concurrency_policy = "Forbid"
schedule = "0 8 * * *"
starting_deadline_seconds = 10
successful_jobs_history_limit = 10
job_template {
metadata {}
spec {
backoff_limit = 2
active_deadline_seconds = 600
template {
metadata {}
spec {
service_account_name = var.service_account.name
container {
name = "kubectl"
image = "bitnami/kubectl"
command = ["kubectl rollout restart deploy"]
}
}
}
}
}
}
}
resource "kubernetes_role" "deployment_restart" {
metadata {
name = "deployment-restart"
}
rule {
api_groups = ["apps", "extensions"]
resources = ["deployments"]
verbs = ["get", "list", "patch", "watch"]
}
}
resource "kubernetes_role_binding" "deployment_restart" {
metadata {
name = "deployment-restart"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deployment_restart.metadata[0].name
}
subject {
kind = "ServiceAccount"
name = var.service_account.name
api_group = "rbac.authorization.k8s.io"
}
}
This was based on a combination of Granting RBAC roles in k8s cluster using terraform and How to schedule pods restart.
Currently getting the following error:
Error: RoleBinding.rbac.authorization.k8s.io "deployment-restart" is invalid: subjects[0].apiGroup: Unsupported value: "rbac.authorization.k8s.io": supported values: ""
As per the official documentation, rolebinding.subjects.apiGroup for ServiceAccount subjects should be empty:
$ kubectl explain rolebinding.subjects.apiGroup
KIND:     RoleBinding
VERSION:  rbac.authorization.k8s.io/v1

FIELD:    apiGroup

DESCRIPTION:
     APIGroup holds the API group of the referenced subject. Defaults to "" for
     ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User
     and Group subjects.
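A minimal sketch of the corrected binding, assuming everything lives in the default namespace; api_group is simply left out of the ServiceAccount subject, and the subject also carries the service account's namespace:
resource "kubernetes_role_binding" "deployment_restart" {
  metadata {
    name      = "deployment-restart"
    namespace = "default"  # assumption: same namespace as the role and service account
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_restart.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = var.service_account.name
    namespace = "default"  # assumption
    # api_group omitted: it defaults to "" for ServiceAccount subjects
  }
}
Separately, the container command in the CronJob above would most likely need to be split into individual arguments, e.g. ["kubectl", "rollout", "restart", "deployment"], for the container to start at all.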

Create kubernetes secret for docker registry - Terraform

Using kubectl, we can create a Docker registry authentication secret as follows:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
How do I create this secret using Terraform? I saw this link, which uses data. In my Terraform flow the Kubernetes instance is being created in Azure, I get the required data from there, and I created something like the below:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "registry-credentials"
}
data = {
docker-server = data.azurerm_container_registry.docker_registry_data.login_server
docker-username = data.azurerm_container_registry.docker_registry_data.admin_username
docker-password = data.azurerm_container_registry.docker_registry_data.admin_password
}
}
It seems that this is wrong, as the images are not being pulled. What am I missing here?
If you run the following command:
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=kube \
  --docker-password=PW_STRING \
  --docker-email=my@email.com
It will create a secret like the following:
$ kubectl get secrets regsecret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-06-01T18:31:07Z"
  name: regsecret
  namespace: default
  resourceVersion: "42304"
  selfLink: /api/v1/namespaces/default/secrets/regsecret
  uid: 59054483-2789-4dd2-9321-74d911eef610
type: kubernetes.io/dockerconfigjson
If we decode .dockerconfigjson, we get:
{"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my@email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
So, how can we do that using terraform?
I created a file config.json with the following data:
{"auths":{"${docker-server}":{"username":"${docker-username}","password":"${docker-password}","email":"${docker-email}","auth":"${auth}"}}}
Then, in the main.tf file:
resource "kubernetes_secret" "docker-registry" {
metadata {
name = "regsecret"
}
data = {
".dockerconfigjson" = "${data.template_file.docker_config_script.rendered}"
}
type = "kubernetes.io/dockerconfigjson"
}
data "template_file" "docker_config_script" {
template = "${file("${path.module}/config.json")}"
vars = {
docker-username = "${var.docker-username}"
docker-password = "${var.docker-password}"
docker-server = "${var.docker-server}"
docker-email = "${var.docker-email}"
auth = base64encode("${var.docker-username}:${var.docker-password}")
}
}
Then run:
$ terraform apply
This will generate the same secret. Hope it helps.
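As a variation, the same secret can be built without the external config.json template by using Terraform's built-in jsonencode and base64encode functions; a rough sketch, assuming the same var.docker-* variables as above:
resource "kubernetes_secret" "docker-registry" {
  metadata {
    name = "regsecret"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    # The provider base64-encodes values in data, so the raw JSON string goes here
    ".dockerconfigjson" = jsonencode({
      auths = {
        (var.docker-server) = {
          username = var.docker-username
          password = var.docker-password
          email    = var.docker-email
          auth     = base64encode("${var.docker-username}:${var.docker-password}")
        }
      }
    })
  }
}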
I would suggest creating an azurerm_role_assignment to give AKS access to the ACR:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = var.service_principal_obj_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update
You can create the service principal in the Azure portal or with the az CLI and use client_id, client_secret and object_id in Terraform.
Get the client_id and object_id by running az ad sp list --filter "displayName eq '<name>'". The secret has to be created in the Certificates & secrets tab of the service principal. See this guide: https://pixelrobots.co.uk/2018/11/first-look-at-terraform-and-the-azure-cloud-shell/
Just set all three as variables, e.g. for the object id:
variable "service_principal_obj_id" {
  default = "<object-id>"
}
Now use the credentials with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = var.service_principal_app_id
client_secret = var.service_principal_password
}
...
}
And set the object id in the acr as described above.
Alternative
You can create the service principal with Terraform (this only works if you have the necessary permissions): https://www.terraform.io/docs/providers/azuread/r/service_principal.html, combined with a random_password resource:
resource "azuread_application" "aks_sp" {
name = "somename"
available_to_other_tenants = false
oauth2_allow_implicit_flow = false
}
resource "azuread_service_principal" "aks_sp" {
application_id = azuread_application.aks_sp.application_id
depends_on = [
azuread_application.aks_sp
]
}
resource "azuread_service_principal_password" "aks_sp_pwd" {
service_principal_id = azuread_service_principal.aks_sp.id
value = random_password.aks_sp_pwd.result
end_date = "2099-01-01T01:02:03Z"
depends_on = [
azuread_service_principal.aks_sp
]
}
You need to assign the "Contributor" role to the service principal, and then you can use it directly in AKS / ACR.
resource "azurerm_role_assignment" "aks_sp_role_assignment" {
scope = var.subscription_id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.aks_sp.id
depends_on = [
azuread_service_principal_password.aks_sp_pwd
]
}
Use them with aks:
resource "azurerm_kubernetes_cluster" "aks" {
...
service_principal {
client_id = azuread_service_principal.aks_sp.app_id
client_secret = azuread_service_principal_password.aks_sp_pwd.value
}
...
}
and the role assignment:
resource "azurerm_role_assignment" "aks_sp_acr" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azuread_service_principal.aks_sp.object_id
depends_on = [
azurerm_kubernetes_cluster.aks,
azurerm_container_registry.acr
]
}
Update: the random_password resource referenced above:
resource "random_password" "aks_sp_pwd" {
length = 32
special = true
}

Vault policy path with wildcard

I have the below policy in Vault:
path "/secrets/global/*" {
  capabilities = ["read", "create", "update", "delete", "list"]
}
Will this policy grant me access to all the paths under global, like:
/secrets/global/common/*
/secrets/global/notsocommoon/app1/*
/secrets/global/notsocommoon/app1/module1/*
Yes. Vault will grant all of those capabilities on /secrets/global/ and its child paths.
Since we can add multiple paths to the same policy, if we want to restrict the capabilities on a particular path, we can do that like this:
# mypolicy.hcl
path "/secrets/global/*" {
  capabilities = ["read", "create", "update", "delete", "list"]
}

path "/secrets/global/myteam/passwords/*" {
  capabilities = ["read"]
}

How can I configure an AWS EKS autoscaler with Terraform?

I'm using the AWS EKS module (github.com/terraform-aws-modules/terraform-aws-eks). I'm following along with the tutorial at https://learn.hashicorp.com/terraform/aws/eks-intro
However, this does not seem to have autoscaling enabled... It seems to be missing the cluster-autoscaler pod / daemon?
Is Terraform able to provision this functionality? Or do I need to set this up following a guide like: https://eksworkshop.com/scaling/deploy_ca/
You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider.
data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
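Note that the awsRegion value above references a data source that is not defined in the snippet; presumably something like this is needed alongside it:
# Looks up the region of the current AWS provider configuration,
# referenced above as data.aws_region.current_region.name
data "aws_region" "current_region" {}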
This answer is still not complete... but at least it gets me partially further.
1.
kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default

helm install stable/cluster-autoscaler --name my-release \
  --set "autoscalingGroups[0].name=demo,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1" \
  --set rbac.create=true
And then manually fix the certificate path:
kubectl edit deployments my-release-aws-cluster-autoscaler
Replace the following:
path: /etc/ssl/certs/ca-bundle.crt
with:
path: /etc/ssl/certs/ca-certificates.crt
2.
In the AWS console, attach the AdministratorAccess policy to the terraform-eks-demo-node role.
3.
Update the --nodes parameter (via kubectl edit deployments my-release-aws-cluster-autoscaler):
- --nodes=1:10:terraform-eks-demo20190922124246790200000007
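If you'd rather keep steps 1 and 3 in Terraform instead of running helm/kubectl by hand, the same chart values can be passed through a helm_release resource like the one in the first answer; a rough sketch, reusing the release name and the example ASG name from above:
resource "helm_release" "cluster_autoscaler" {
  name       = "my-release"
  repository = "stable"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  set {
    name  = "autoscalingGroups[0].name"
    value = "terraform-eks-demo20190922124246790200000007"
  }

  set {
    name  = "autoscalingGroups[0].minSize"
    value = "1"
  }

  set {
    name  = "autoscalingGroups[0].maxSize"
    value = "10"
  }

  set {
    name  = "rbac.create"
    value = "true"
  }
}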