How to change the default k8s cluster StorageClass with Terraform?

On EKS the default storage class is called gp2 and is configured with:
allow_volume_expansion = false
parameters = {
  "encrypted" = "false"
  "fsType"    = "ext4"
  "type"      = "gp2"
}
and I would like to change the default storage class like so:
allow_volume_expansion = true
parameters = {
  "encrypted" = "true"
  "fsType"    = "ext4"
  "type"      = "gp3"
}
How can this be done using Terraform?

Following this Kubernetes guide I created the following config:
# Remove the default-class annotation from the existing (non-encrypted) gp2 storage class
resource "kubernetes_annotations" "default-storageclass" {
  api_version = "storage.k8s.io/v1"
  kind        = "StorageClass"
  force       = "true"

  metadata {
    name = "gp2"
  }
  annotations = {
    "storageclass.kubernetes.io/is-default-class" = "false"
  }
}

# Create the desired StorageClass and make it the default
resource "kubernetes_storage_class" "gp3-enc" {
  metadata {
    name = "gp3-enc"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }
  storage_provisioner    = "ebs.csi.aws.com"
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = true
  parameters = {
    "encrypted" = "true"
    "fsType"    = "ext4"
    "type"      = "gp3"
  }
}
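Optionally, an explicit ordering makes sure gp2 loses the default annotation before gp3-enc is created with it, so there is never a window with two default classes. A small sketch, keeping the resource exactly as above and only adding a depends_on:

resource "kubernetes_storage_class" "gp3-enc" {
  # ... same arguments as in the resource above ...

  # Un-default gp2 first, then create the new default class.
  depends_on = [kubernetes_annotations.default-storageclass]
}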

Related

Why is Terraform not allowing me to use image_pull_secrets?

I have an image to pull from a private registry. I did all the configuration and added the secret to the pod config under pod.spec.image_pull_secrets, but I am getting an error like:
An argument named "image_pull_secrets" is not expected here. Did you mean to define a block of type "image_pull_secrets"?
As per the documentation this should be OK:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/pod#nested-blocks
resource "kubernetes_pod" "main" {
count = data.coder_workspace.me.start_count
metadata {
name = "coder-${lower(data.coder_workspace.me.owner)}-${lower(data.coder_workspace.me.name)}"
namespace = var.workspaces_namespace
}
spec {
image_pull_secrets = {
name = ["coder-ocir-secret"]
}
security_context {
# run_as_user = "1000"
fs_group = "1000"
}
init_container {
name = "init-eclipse"
image = "busybox:latest"
command = [ "chown","-R","1000:1000","/data"]
security_context {
run_as_user = "0"
privileged = "true"
allow_privilege_escalation = "true"
read_only_root_filesystem = "false"
run_as_non_root = "false"
capabilities {
add = ["CAP_SYS_ADMIN","CHOWN",
"FOWNER",
"DAC_OVERRIDE"]
drop = [
"ALL"]
}
}
volume_mount {
mount_path = "/data"
name = "home-coder-vol-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
}
}
container {
name = "eclipse"
image = "docker.io/manumaan/eclipsevncv2.2:latest"
command = ["sh", "-c", coder_agent.coder.init_script]
image_pull_policy = "Always"
security_context {
run_as_user = "1000"
# fs_group = "1000"
}
env {
name = "CODER_AGENT_TOKEN"
value = coder_agent.coder.token
}
resources {
requests = {
cpu = "${var.cpu}"
memory = "${var.memory}G"
ephemeral-storage = "2Gi"
}
limits = {
cpu = "${var.cpu}"
memory = "${var.memory}G"
ephemeral-storage = "4Gi"
}
}
volume_mount {
mount_path = "/home/coder"
name = "home-coder-vol-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
}
}
I also tried placing it inside container, after all the containers, etc. inside spec, but it does not accept it. I am going crazy!
I also tried making it not a list, with no difference:
image_pull_secrets = {
  name = "coder-ocir-secret"
}
This might be caused by a typo: image_pull_secrets is a block, so you don't need the =, nor the square brackets ([]) here:
image_pull_secrets = {
  name = ["coder-ocir-secret"]
}
It should instead be:
image_pull_secrets {
  name = "coder-ocir-secret"
}
If you need several pull secrets you can simply repeat the image_pull_secrets block, or generate them with a dynamic block (see the example below).
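For instance, a dynamic block driven by a hypothetical pull_secret_names variable (not part of the original config) could look like this:

variable "pull_secret_names" {
  type    = list(string)
  default = ["coder-ocir-secret"]
}

# Inside the spec block of the kubernetes_pod resource,
# one image_pull_secrets block is generated per entry:
dynamic "image_pull_secrets" {
  for_each = var.pull_secret_names
  content {
    name = image_pull_secrets.value
  }
}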
In any case, make sure the block structure and indentation are correct; this one is working for me:
resource "kubernetes_pod" "main" {
metadata {
name = "coder-name"
namespace = "default"
}
spec {
image_pull_secrets {
name = "coder-ocir-secret"
}
security_context {
# run_as_user = "1000"
fs_group = "1000"
}
init_container {
name = "init-eclipse"
image = "busybox:latest"
command = [ "chown","-R","1000:1000","/data"]
security_context {
run_as_user = "0"
privileged = "true"
allow_privilege_escalation = "true"
read_only_root_filesystem = "false"
run_as_non_root = "false"
capabilities {
add = ["CAP_SYS_ADMIN","CHOWN",
"FOWNER",
"DAC_OVERRIDE"]
drop = [
"ALL"]
}
}
volume_mount {
mount_path = "/data"
name = "home-coder-vol-fake-name"
}
}
container {
name = "eclipse"
image = "docker.io/manumaan/eclipsevncv2.2:latest"
command = ["sh", "-c", "command"]
image_pull_policy = "Always"
security_context {
run_as_user = "1000"
# fs_group = "1000"
}
env {
name = "CODER_AGENT_TOKEN"
value = "value"
}
resources {
requests = {
cpu = "1"
memory = "1G"
ephemeral-storage = "2Gi"
}
limits = {
cpu = "1"
memory = "2G"
ephemeral-storage = "4Gi"
}
}
volume_mount {
mount_path = "/home/coder"
name = "home-coder-vol-fake-name"
}
}
}
}

Create Pods using Rancher with Terraform

I created this simple Terraform script with Rancher to create a namespace in an imported Kubernetes cluster:
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "1.24.1"
    }
  }
}

provider "rancher2" {
  api_url   = "https://192.168.1.128/v3"
  token_key = "token-n4fxx:4qcgctvph7qh2sdnn762zpzg889rgw8xpd2nvcnpnr4v4wpb9zljtd"
  insecure  = true
}

resource "rancher2_namespace" "zone-1" {
  name        = "zone-1"
  project_id  = "c-m-xmhbjzdt:p-sd86v"
  description = "zone-1 namespace"
  resource_quota {
    limit {
      limits_cpu       = "100m"
      limits_memory    = "100Mi"
      requests_storage = "1Gi"
    }
  }
  container_resource_limit {
    limits_cpu      = "20m"
    limits_memory   = "20Mi"
    requests_cpu    = "1m"
    requests_memory = "1Mi"
  }
}
The question is: how can I create Pods in the Kubernetes cluster, again using a Terraform script?
Terraform offers the Kubernetes provider, which allows you to create all kinds of Kubernetes objects.
To quote the documentation of the kubernetes_pod resource:
resource "kubernetes_pod" "test" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.21.6"
name = "example"
env {
name = "environment"
value = "test"
}
port {
container_port = 80
}
liveness_probe {
http_get {
path = "/"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
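Note that the kubernetes_pod resource is created by the Kubernetes provider, so the provider has to be pointed at the imported cluster; a minimal sketch, assuming a kubeconfig for that cluster (for example one downloaded from the Rancher UI) is available locally:

provider "kubernetes" {
  # Path to a kubeconfig that targets the imported cluster; adjust to your setup.
  config_path = "~/.kube/config"
}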

Terraform fails for Kubernetes storage class when changing type from gp2 to gp3

I have 2 storage classes deployed using Terraform (storage and expandable-storage). Below is the code for both.
Before
resource "kubernetes_storage_class" "expandable-storage" {
metadata {
name = "expandable-storage"
}
storage_provisioner = "kubernetes.io/aws-ebs"
reclaim_policy = "Retain"
parameters = {
type = "gp2"
fsType: "ext4"
encrypted: "true"
}
allow_volume_expansion = true
volume_binding_mode = "Immediate"
}
resource "kubernetes_storage_class" "storage" {
metadata {
name = "storage"
}
storage_provisioner = "ebs.csi.aws.com"
reclaim_policy = "Retain"
parameters = {
type = "gp2"
fsType: "ext4"
encrypted: "true"
}
allow_volume_expansion = true
volume_binding_mode = "Immediate"
}
resource "null_resource" "k8s_storage_class_patch" {
depends_on = [kubernetes_storage_class.expandable-storage]
provisioner "local-exec" {
command = "/bin/bash scripts/storage_class_patch.sh"
}
}
After this I tried to update the type parameter for both storage classes from gp2 to gp3.
After
resource "kubernetes_storage_class" "expandable-storage" {
metadata {
name = "expandable-storage"
}
storage_provisioner = "kubernetes.io/aws-ebs"
reclaim_policy = "Retain"
parameters = {
type = "gp3"
fsType: "ext4"
encrypted: "true"
}
allow_volume_expansion = true
volume_binding_mode = "Immediate"
}
resource "kubernetes_storage_class" "storage" {
metadata {
name = "storage"
}
storage_provisioner = "ebs.csi.aws.com"
reclaim_policy = "Retain"
parameters = {
type = "gp3"
fsType: "ext4"
encrypted: "true"
}
allow_volume_expansion = true
volume_binding_mode = "Immediate"
}
resource "null_resource" "k8s_storage_class_patch" {
depends_on = [kubernetes_storage_class.expandable-storage]
provisioner "local-exec" {
command = "/bin/bash scripts/storage_class_patch.sh"
}
}
After applying, the "storage" resource was updated to gp3, but for the "expandable-storage" resource I am getting this error:
Error: storageclasses.storage.k8s.io "expandable-storage" already exists
I am not sure what is causing this, as the same change worked for the other storage class.
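One thing worth checking: changing parameters forces Terraform to replace the StorageClass (destroy, then create), and the "already exists" error means the create step found the old object still present in the cluster while it is no longer tracked in state. Assuming Terraform 1.5+ is available, one way to bring the live object back under management before re-applying is an import block, roughly:

import {
  # Re-adopt the existing StorageClass so the next apply can replace it cleanly.
  to = kubernetes_storage_class.expandable-storage
  id = "expandable-storage"
}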

Post "https://***.eks.amazonaws.com/api/v1/persistentvolumes": dial tcp *****:443: i/o timeout

I'm getting this error when creating the persistent volume through Terraform for EKS.
I created the EBS volume through Terraform as well, and it was created successfully. But when I try to create the persistent volume I get the above error.
Please check the code below:
resource "kubernetes_persistent_volume" "api-application-pv" {
metadata {
name = "api-application-pv"
}
spec {
capacity = {
storage = "2Gi"
}
access_modes = ["ReadWriteMany"]
persistent_volume_source {
aws_elastic_block_store {
volume_id = aws_launch_template.default.arn
}
}
}
}
resource "kubernetes_persistent_volume_claim" "api-application-pvc" {
metadata {
name = "api-application-pvc"
}
spec {
resources {
requests = {
storage = "2Gi"
}
}
access_modes = ["ReadWriteMany"]
storage_class_name = "gp2"
volume_name = "${kubernetes_persistent_volume.api-application-pv.metadata.0.name}"
}
wait_until_bound = false
depends_on = [kubernetes_persistent_volume.api-application-pv]
}
resource "aws_launch_template" "default" {
name_prefix = "eks-stage-template"
description = "eks-stage-template"
update_default_version = true
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = 50
volume_type = "gp2"
delete_on_termination = true
encrypted = true
}
}
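The i/o timeout happens before any Kubernetes object is validated: the Kubernetes provider simply cannot reach the EKS API endpoint from wherever Terraform runs (for example a private-only endpoint without network access, or a provider block that is not configured for this cluster). A typical provider setup for EKS looks roughly like the sketch below; "my-cluster" is a placeholder for the real cluster name:

data "aws_eks_cluster" "this" {
  # Placeholder; replace with the actual EKS cluster name.
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # Fetch a short-lived token with the AWS CLI (must be available where Terraform runs).
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", "my-cluster"]
  }
}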

Can't create secrets in Kubernetes with Terraform Cloud

I am trying to create a secret in my Kubernetes cluster with Terraform Cloud.
I can create the cluster with no problems, but problems arise when I try to inject a secret into the cluster.
Here is a simplified version of my Terraform manifest:
terraform {
  backend "remote" {
    organization = "my-org"
    // Workspaces separate deployment envs (like prod, stage, or UK, Italy)
    workspaces {
      name = "my-workspace-name"
    }
  }
}

resource "google_container_cluster" "demo-k8s-cluster" {
  name               = "demo-cluster"
  location           = var.region
  initial_node_count = 1
  project            = var.project-id
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
    // service_account = var.service-account
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
  timeouts {
    create = "30m"
    update = "40m"
  }
}

provider "kubernetes" {
  host                   = google_container_cluster.demo-k8s-cluster.endpoint
  username               = google_container_cluster.demo-k8s-cluster.master_auth.0.username
  password               = google_container_cluster.demo-k8s-cluster.master_auth.0.password
  client_certificate     = base64decode(google_container_cluster.demo-k8s-cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.demo-k8s-cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.demo-k8s-cluster.master_auth.0.cluster_ca_certificate)
  load_config_file       = "false"
}

resource "kubernetes_secret" "cloudsql-db-credentials" {
  metadata {
    name = "cloudsql-instance-credentials-test"
  }
  data = {
    "stack-creds.json" = var.service-account
  }
}
The plan works fine, but I get the following error at the apply stage:
Error: secrets is forbidden: User "system:anonymous" cannot create resource "secrets" in API group "" in the namespace "default"
on infrastructure.tf line 149, in resource "kubernetes_secret" "cloudsql-db-credentials":
149: resource "kubernetes_secret" "cloudsql-db-credentials" {
As per @mario's comment, it turns out Terraform Cloud can't get the right identity and can't connect to the cluster to inject the secret. Instead of using Terraform Cloud I opted for the GCS backend and managed to get it working. The following configuration works:
terraform {
  backend "gcs" {
    bucket = "infrastructure-state-bucket"
    prefix = "test/so_simple2"
  }
}

// The project-id variable contains the project id to use.
variable "project-id" {
  type = string
}

variable "region" {
  type = string
}

variable "cluster-name" {
  type = string
}

provider "google" {
  project = var.project-id
  region  = var.region
}

provider "random" {}

resource "random_id" "id" {
  byte_length = 4
  prefix      = "${var.cluster-name}-"
}

resource "google_container_cluster" "cluster" {
  name               = random_id.id.hex
  location           = var.region
  initial_node_count = 1
  project            = var.project-id
}

provider "kubernetes" {
  host                   = google_container_cluster.cluster.endpoint
  username               = google_container_cluster.cluster.master_auth.0.username
  password               = google_container_cluster.cluster.master_auth.0.password
  client_certificate     = base64decode(google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
  // This is a deal breaker: if load_config_file is set to false I get the same error.
  // load_config_file = "false"
}

resource "kubernetes_secret" "example" {
  metadata {
    name = "basic-auth"
  }
  data = {
    username = "admin"
    password = "P4ssw0rd"
  }
  type = "kubernetes.io/basic-auth"
}