Sidecar containers with the Terraform Kubernetes provider

I have to add two containers as sidecars in a k8s Deployment, and I am using the Terraform Kubernetes provider. Is this possible with the provider? If yes, any example would be helpful.
resource "kubernetes_deployment" "test_deployment" {
metadata {
name = test_nginx
namespace = test
labels = {
app = test_nginx
}
}
spec {
replicas = "2"
selector {
match_labels = {
app = test_nginx
}
}
template {
metadata {
labels = {
app = test_nginx
}
}
spec {
container {
image = nginx
name = local_nginx
.
.
.
image = logrotate
name = local_logrotate
.
.
.
}
}
}
}
}
Error:
Error: Attribute redefined

  on deployment\deployment.tf line 84:

  The argument "image" was already set at deployment\deployment.tf:28,11-16.
  Each argument may be set only once.

You need to use a separate container block for each container:
resource "kubernetes_deployment" "test_deployment" {
metadata {
name = test_nginx
namespace = test
labels = {
app = test_nginx
}
}
spec {
replicas = "2"
selector {
match_labels = {
app = test_nginx
}
}
template {
metadata {
labels = {
app = test_nginx
}
}
spec {
container {
image = nginx
name = local_nginx
.
.
.
}
# new container block
container {
image = logrotate
name = local_logrotate
.
.
.
}
}
}
}
}
I would also suggest moving to the newer version of the deployment resource [1], i.e., kubernetes_deployment_v1.
[1] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment_v1
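For reference, the v1 resource accepts the same blocks; a minimal sketch using the names from the question (the metadata labels and the trimmed container settings are omitted):
resource "kubernetes_deployment_v1" "test_deployment" {
  metadata {
    name      = "test_nginx"
    namespace = "test"
  }
  spec {
    replicas = "2"
    selector {
      match_labels = {
        app = "test_nginx"
      }
    }
    template {
      metadata {
        labels = {
          app = "test_nginx"
        }
      }
      spec {
        # one container block per container, exactly as above
        container {
          image = "nginx"
          name  = "local_nginx"
        }
        container {
          image = "logrotate"
          name  = "local_logrotate"
        }
      }
    }
  }
}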

Related

Create Pods using Rancher with Terraform

I created this simple Terraform script with Rancher to create a namespace in an imported Kubernetes cluster:
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "1.24.1"
    }
  }
}

provider "rancher2" {
  api_url   = "https://192.168.1.128/v3"
  token_key = "token-n4fxx:4qcgctvph7qh2sdnn762zpzg889rgw8xpd2nvcnpnr4v4wpb9zljtd"
  insecure  = true
}

resource "rancher2_namespace" "zone-1" {
  name        = "zone-1"
  project_id  = "c-m-xmhbjzdt:p-sd86v"
  description = "zone-1 namespace"
  resource_quota {
    limit {
      limits_cpu       = "100m"
      limits_memory    = "100Mi"
      requests_storage = "1Gi"
    }
  }
  container_resource_limit {
    limits_cpu      = "20m"
    limits_memory   = "20Mi"
    requests_cpu    = "1m"
    requests_memory = "1Mi"
  }
}
The question is: how can I create Pods in the Kubernetes cluster, again using a Terraform script?
Terraform offers the Kubernetes provider, which allows you to create all kinds of Kubernetes objects.
To quote the example from the documentation of the kubernetes_pod resource:
resource "kubernetes_pod" "test" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.21.6"
name = "example"
env {
name = "environment"
value = "test"
}
port {
container_port = 80
}
liveness_probe {
http_get {
path = "/"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
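To point that example at the cluster imported into Rancher, the Kubernetes provider also needs credentials. A minimal sketch, assuming you have a kubeconfig for that cluster (for example, downloaded from the Rancher UI); the path is only an illustration:
provider "kubernetes" {
  # Assumption: a kubeconfig for the imported cluster lives at this path.
  config_path = "~/.kube/config"
}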

Terraform is successful but I cannot find the resources

I'm trying to set up my AWS EKS cluster via Terraform and I'm having issues creating resources within the EKS cluster. My cluster is created and I was able to create node groups using aws_eks_node_group. However, when I attempt to create kubernetes_daemonset and kubernetes_namespace, Terraform succeeds and says that the resources were created, but I do not see them in the console. I can view the node groups in this cluster, and when I switch to another cluster I can view its namespaces and daemonsets, so it's not a permissions issue.
Terraform output
[docker] [deploy-terraform] [terraform apply] kubernetes_namespace.jupyterhub: Creation complete after 0s [id=jupyterhub]
[docker] [deploy-terraform] [terraform apply] kubernetes_daemon_set_v1.example: Creation complete after 0s [id=kube-system/nvidia-device-plugin-daemonset-0.9.0]
TF resources
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
command = "aws"
args = [
"eks",
"get-token",
"--cluster-name",
data.aws_eks_cluster.cluster.name
]
}
}
resource "kubernetes_daemonset" "nvidia-device-plugin" {
metadata {
name = "nvidia-device-plugin-daemonset-0.9.0"
namespace = "kube-system"
}
spec {
selector {
match_labels = { name : "nvidia-device-plugin-ds" }
}
template {
metadata {
labels = {
name = "nvidia-device-plugin-ds"
}
}
spec {
toleration {
key = "nvidia.com/gpu"
operator = "Exists"
effect = "NoSchedule"
}
toleration {
key = "hub.jupyter.org/dedicated"
operator = "Exists"
effect = "NoSchedule"
}
container {
image = "nvcr.io/nvidia/k8s-device-plugin:v0.9.0"
name = "nvidia-device-plugin-ctr"
security_context {
allow_privilege_escalation = false
capabilities {
drop = ["ALL"]
}
}
volume_mount {
name = "device-plugin"
mount_path = "/var/lib/kubelet/device-plugins"
}
}
volume {
name = "device-plugin"
host_path {
path = "/var/lib/kubelet/device-plugins"
}
}
affinity {
node_affinity {
required_during_scheduling_ignored_during_execution {
node_selector_term {
match_expressions {
key = "beta.kubernetes.io/instance-type"
operator = "In"
values = ["p2.xlarge"]
}
}
}
}
}
}
}
}
}
resource "kubernetes_namespace" "jupyterhub" {
metadata {
name = "jupyterhub"
labels = {
name = "jupyterhub"
}
}
}
My problem was that I did not have the manage_aws_auth_configmap boolean set to true in the eks module.
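For illustration, a sketch of where that flag lives, assuming the terraform-aws-modules/eks/aws module at a v19.x release (where this input exists); every other argument here is a placeholder:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # assumption: a version that exposes manage_aws_auth_configmap

  cluster_name    = "my-cluster" # placeholder
  cluster_version = "1.24"       # placeholder

  # Let the module manage the aws-auth ConfigMap so the mapped IAM principals
  # (e.g. the one you use in the console) can see the cluster's namespaces
  # and daemonsets.
  manage_aws_auth_configmap = true
}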

Terraform fails to create ingress (could not find the requested resource ingresses.extensions)

I'm using minikube locally.
The following is the .tf file I use to create my kubernetes cluster:
provider "kubernetes" {
config_path = "~/.kube/config"
}
resource "kubernetes_namespace" "tfs" {
metadata {
name = "tfs" # terraform-sandbox
}
}
resource "kubernetes_deployment" "golang_webapp" {
metadata {
name = "golang-webapp"
namespace = "tfs"
labels = {
app = "webapp"
}
}
spec {
replicas = 3
selector {
match_labels = {
app = "webapp"
}
}
template {
metadata {
labels = {
app = "webapp"
}
}
spec {
container {
image = "golang-docker-example"
name = "golang-webapp"
image_pull_policy = "Never" # this is set so that kuberenetes wont try to download the image but use the localy built one
liveness_probe {
http_get {
path = "/"
port = 8080
}
initial_delay_seconds = 15
period_seconds = 15
}
readiness_probe {
http_get {
path = "/"
port = 8080
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
resource "kubernetes_service" "golang_webapp" {
metadata {
name = "golang-webapp"
namespace = "tfs"
labels = {
app = "webapp_ingress"
}
}
spec {
selector = {
app = kubernetes_deployment.golang_webapp.metadata.0.labels.app
}
port {
port = 8080
target_port = 8080
protocol = "TCP"
}
# type = "ClusterIP"
type = "NodePort"
}
}
resource "kubernetes_ingress" "main_ingress" {
metadata {
name = "main-ingress"
namespace = "tfs"
}
spec {
rule {
http {
path {
backend {
service_name = "golang-webapp"
service_port = 8080
}
path = "/golang-webapp"
}
}
}
}
}
When executing terraform apply, I am successfully able to create all of the resources except for the ingress.
The error is:
Error: Failed to create Ingress 'tfs/main-ingress' because: the server could not find the requested resource (post ingresses.extensions)

  with kubernetes_ingress.main_ingress,
  on main.tf line 86, in resource "kubernetes_ingress" "main_ingress":
  86: resource "kubernetes_ingress" "main_ingress" {
When I try to create the ingress with kubectl using the same configuration as above (only as .yaml, applied with kubectl apply), it works. So kubectl and minikube are able to create this type of ingress, but Terraform can't for some reason...
Thanks in advance for any help!
Edit 1:
Adding the .yaml with which I am able to create the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: tfs
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: golang-webapp
            port:
              number: 8080
The kubernetes_ingress resource generates an Ingress with an apiVersion that is not supported by your Kubernetes cluster. You have to use the kubernetes_ingress_v1 resource, which looks similar to kubernetes_ingress with some differences. For your example, it will look like this:
resource "kubernetes_ingress_v1" "jenkins-ingress" {
metadata {
name = "example-ingress"
namespace = "tfs"
annotations = {
"nginx.ingress.kubernetes.io/rewrite-target" = "/$1"
}
}
spec {
rule {
http {
path {
path = "/"
backend {
service {
name = "golang-webapp"
port {
number = 8080
}
}
}
}
}
}
}
}
I think the issue could be related to the ingress class name. Maybe you need to provide it explicitly in your .tf:
metadata {
  name = "example"
  annotations = {
    "kubernetes.io/ingress.class" = "nginx or your classname"
  }
}
Or maybe it's ingresses.extensions that does not exist in your cluster. Can you provide the .yaml that executed correctly?
Something like this should help, using kubernetes_ingress_v1:
locals {
  ingress_rules = [
    {
      service_path = "/"
      service_name = "golang-webapp"
      service_port = 8080
    }
  ]
}
resource "kubernetes_ingress_v1" "jenkins-ingress" {
metadata {
annotations = var.ingress_annotations
name = "example-ingress"
namespace = "tfs"
labels = var.labels
}
spec {
ingress_class_name = var.ingress_class_name
rule {
http {
dynamic "path" {
for_each = local.ingress_rules
content {
backend {
service {
name = path.value.service_name
port {
number = path.value.service_port
}
}
}
path = path.value.service_path
}
}
}
}
tls {
secret_name = "tls-secret"
}
}
}
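The variables referenced above are not shown in the answer; a sketch of what they might look like (the names come from the snippet, the types and defaults are assumptions):
variable "ingress_annotations" {
  type    = map(string)
  default = {}
}

variable "labels" {
  type    = map(string)
  default = {}
}

variable "ingress_class_name" {
  type    = string
  default = "nginx" # assumption: the nginx ingress controller class
}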

Deployment invalid Terraform + Kubernetes: spec.template.spec.containers[0].envFrom: Invalid value: ""

I'm experimenting with Terraform to deploy k8s resources.
I created a MongoDB deployment:
provider "kubernetes" {
config_context = "kubernetes-admin#kubernetes"
}
resource "kubernetes_namespace" "demo-namespace" {
metadata {
name = "my-demo-namespace"
}
}
// mongodb
resource "kubernetes_deployment" "mongodb" {
metadata {
name = "mongodb"
namespace = kubernetes_namespace.demo-namespace.metadata[0].name
labels = {
app = "mongodb"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "mongodb"
}
}
template {
metadata {
labels = {
app = "mongodb"
}
}
spec {
container {
image = "mongo"
name = "mongodb"
env_from {
secret_ref {
name = kubernetes_secret.scrt-mongodb.metadata[0].name
}
config_map_ref {
name = kubernetes_config_map.cm-mongodb.metadata[0].name
}
}
resources {
limits {
cpu = "500m"
memory = "1Gi"
}
requests {
cpu = "150m"
memory = "256Mi"
}
}
liveness_probe {
exec {
command = ["bash", "-c", "mongo -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval db.adminCommand(\"ping\")"]
}
initial_delay_seconds = 3
period_seconds = 1
}
}
}
}
}
}
// mongodb configmap
resource "kubernetes_config_map" "cm-mongodb" {
metadata {
name = "cm-mongodb"
namespace = kubernetes_namespace.demo-namespace.metadata.0.name
}
// improve creds with secret
data = {
MONGO_INITDB_DATABASE = "movies"
}
}
// monbodb secret
resource "kubernetes_secret" "scrt-mongodb" {
metadata {
name = "mongodb-creds"
}
data = {
MONGO_INITDB_ROOT_USERNAME = "root-user"
MONGO_INITDB_ROOT_PASSWORD = "secret"
}
type = "opaque"
}
This fails with:
kubernetes_config_map.cm-mongodb: Creation complete after 0s [id=my-demo-namespace/cm-mongodb]
kubernetes_deployment.mongodb: Creating...
Error: Failed to create deployment: Deployment.apps "mongodb" is invalid: spec.template.spec.containers[0].envFrom: Invalid value: "": may not have more than one field specified at a time
on template.tf line 12, in resource "kubernetes_deployment" "mongodb":
12: resource "kubernetes_deployment" "mongodb" {
What is wrong here?
You missed this line:
namespace = kubernetes_namespace.demo-namespace.metadata.0.name
You did not define the resource in the desired namespace, so Terraform failed to "find" the desired value.
// mongodb secret
resource "kubernetes_secret" "scrt-mongodb" {
  metadata {
    name = "mongodb-creds"
    # -------------------------------------------------------------
    # Add the namespace here
    namespace = kubernetes_namespace.demo-namespace.metadata.0.name
    # -------------------------------------------------------------
  }
  data = {
    MONGO_INITDB_ROOT_USERNAME = "root-user"
    MONGO_INITDB_ROOT_PASSWORD = "secret"
  }
  type = "opaque"
}
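Separately, the error text ("may not have more than one field specified at a time") points at env_from itself: a single envFrom entry in the Kubernetes API may reference either a Secret or a ConfigMap, not both, so splitting the env_from block is also worth checking. A sketch of the container section under that reading:
container {
  image = "mongo"
  name  = "mongodb"

  # One env_from block per source: one for the Secret, one for the ConfigMap.
  env_from {
    secret_ref {
      name = kubernetes_secret.scrt-mongodb.metadata[0].name
    }
  }
  env_from {
    config_map_ref {
      name = kubernetes_config_map.cm-mongodb.metadata[0].name
    }
  }

  # resources and liveness_probe unchanged
}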

terraform azurerm - cannot destroy public ip

New to Terraform, so I'm hoping this is an easy issue. I'm creating some resources in Azure and deploying a simple Flask application to AKS. Creating works fine; I can see that Azure is provisioned correctly and I can hit the Flask app.
When I try to run terraform destroy I get the error: "StatusCode=400 ... In order to delete the public IP, disassociate/detach the Public IP address from the resource."
Main.tf
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}
provider "azurerm" {
version = "=1.28.0"
tenant_id = "${var.tenant_id}"
subscription_id = "${var.subscription_id}"
}
resource "azurerm_resource_group" "aks" {
name = "${var.name_prefix}"
location = "${var.location}"
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = "${var.name_prefix}-aks"
kubernetes_version = "${var.kubernetes_version}"
location = "${azurerm_resource_group.aks.location}"
resource_group_name = "${azurerm_resource_group.aks.name}"
dns_prefix = "AKS-${var.dns_prefix}"
agent_pool_profile {
name = "${var.node_pool_name}"
count = "${var.node_pool_size}"
vm_size = "${var.node_pool_vmsize}"
os_type = "${var.node_pool_os}"
os_disk_size_gb = 30
}
service_principal {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
}
tags = {
environment = "${var.env_tag}"
}
}
provider "helm" {
install_tiller = true
kubernetes {
host = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
}
# Create Static Public IP Address to be used by Nginx Ingress
resource "azurerm_public_ip" "nginx_ingress" {
name = "nginx-ingress-public-ip"
location = "${azurerm_kubernetes_cluster.k8s.location}"
resource_group_name = "${azurerm_kubernetes_cluster.k8s.node_resource_group}"
allocation_method = "Static"
domain_name_label = "${var.name_prefix}"
}
# Add Kubernetes Stable Helm charts repo
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
# Install Nginx Ingress using Helm Chart
resource "helm_release" "nginx_ingress" {
name = "nginx-ingress"
repository = "${data.helm_repository.stable.metadata.0.name}"
chart = "nginx-ingress"
set {
name = "rbac.create"
value = "false"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.loadBalancerIP"
value = "${azurerm_public_ip.nginx_ingress.ip_address}"
}
}
I'm also deploying my Kubernetes resources in this file, k8s.tf:
provider "kubernetes" {
host = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
username = "${azurerm_kubernetes_cluster.k8s.kube_config.0.username}"
password = "${azurerm_kubernetes_cluster.k8s.kube_config.0.password}"
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
resource "kubernetes_deployment" "flask-api-deployment" {
metadata {
name = "flask-api-deployment"
}
spec {
replicas = 2
selector {
match_labels {
component = "api"
}
}
template {
metadata {
labels = {
component = "api"
}
}
spec {
container {
image = "xxx.azurecr.io/sampleflask:0.1.0"
name = "flask-api"
port {
container_port = 5000
}
}
}
}
}
}
resource "kubernetes_service" "api-cluster-ip-service" {
metadata {
name = "flask-api-cluster-ip-service"
}
spec {
selector {
component = "api"
}
port {
port = 5000
target_port = 5000
}
}
}
resource "kubernetes_ingress" "flask-ingress-service" {
metadata {
name = "flask-ingress-service"
}
spec {
backend {
service_name = "flask-api-cluster-ip-service"
service_port = 5000
}
}
}
For your issue, this is a problem with the ordering of the resources. When you create the nginx ingress with the public IP, the public IP has to be created first. But when you delete the public IP, it is still in use by the nginx ingress, and that causes the error.
The solution is to detach the public IP from the resource that uses it, and then destroy the resource with Terraform. You can take a look at the explanation in the issue.
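One way to sequence that with Terraform itself, assuming the public IP is only held by the LoadBalancer service created by the nginx-ingress Helm release, is a targeted destroy (a sketch, not the only option):
# Remove the Helm release first so the load balancer releases the public IP...
terraform destroy -target=helm_release.nginx_ingress
# ...then destroy the rest, including the public IP.
terraform destroy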
The user #4c74356b41 is right, but to give more information assuming a config like this:
resource "azurerm_kubernetes_cluster" "k8s" {
name = "aks-e2x-nuffield-uat"
resource_group_name = azurerm_resource_group.core_rg.name
location = azurerm_resource_group.core_rg.location
dns_prefix = "aks-e2x-nuffield-uat-dns"
kubernetes_version = var.k8s_version
# NOTE currently only a single node pool, default, is configured
private_cluster_enabled = true
...
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "standard"
service_cidr = var.k8s_service_subnet
pod_cidr = var.k8s_pod_subnet
docker_bridge_cidr = "172.17.0.1/16"
dns_service_ip = "40.0.8.10" # within the service subnet
}
}
Where the load_balancer_sku is set to standard, you can access the public IP to be used elsewhere like this:
data "azurerm_public_ip" "k8s_load_balancer_ip" {
name = reverse(split("/", tolist(azurerm_kubernetes_cluster.k8s.network_profile.0.load_balancer_profile.0.effective_outbound_ips)[0]))[0]
resource_group_name = azurerm_kubernetes_cluster.k8s.node_resource_group
}
output "ingress_public_ip" {
# value = azurerm_public_ip.ingress.ip_address
value = data.azurerm_public_ip.k8s_load_balancer_ip.ip_address
}