Error when creating a namespace with the Terraform kubernetes provider - kubernetes

I'm struggling to create a namespace with the kubernetes provider.
This is the simple Terraform code I'm using:
provider "kubernetes" {
host = "https://ocp-test-1.srv.xxxx.it:8443"
username = "admin"
password = "admin"
load_config_file = "false" # when you wish not to load the local config file
}
resource "kubernetes_namespace" "gfexample" {
metadata {
annotations = {
name = "exampleannotation"
}
labels = {
mylabel = "labelvalue"
}
name = "terraformspace"
}
}
And here is the error:
kubernetes_namespace.gfexample: Creating...
Error: namespaces is forbidden: User "system:anonymous" cannot create namespaces at the cluster scope: no RBAC policy matched
on create_nm.tf line 14, in resource "kubernetes_namespace" "gfexample":
14: resource "kubernetes_namespace" "gfexample" {
Any suggestion will be welcome.
Gian Filippo
Finally I found the solution: add the client certificates to the provider configuration.
client_certificate     = file("/terraform/certificates/admin.crt")
client_key             = file("/terraform/certificates/admin.key")
cluster_ca_certificate = file("/terraform/certificates/ca.crt")
This worked fine. I found the certificates mentioned above under /etc/origin/master (I'm running OpenShift 3.11).
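For reference, the full provider block then looks roughly like this (a sketch combining the host from the question with the certificate paths above; adjust the paths to your environment):
provider "kubernetes" {
  host             = "https://ocp-test-1.srv.xxxx.it:8443"
  load_config_file = "false"

  # certificates copied from /etc/origin/master on the OpenShift master
  client_certificate     = file("/terraform/certificates/admin.crt")
  client_key             = file("/terraform/certificates/admin.key")
  cluster_ca_certificate = file("/terraform/certificates/ca.crt")
}
With a client certificate presented, the API server authenticates the request as that certificate's user instead of system:anonymous, which is why the RBAC error goes away.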

Related

Resource "kubernetes_service_account" cannot be made depend_on resource "aws_eks_cluster"

I'm trying to create an EKS cluster using Terraform and also want to create a few services after the creation of the cluster. But when I tried to add depends_on it didn't work.
resource "aws_eks_cluster" "eks" {
name = "${var.clustername}"
version = "${var.kubeversion}"
role_arn = aws_iam_role.eks-iam-role.arn
enabled_cluster_log_types = ["api", "authenticator", "audit", "scheduler", "controllerManager"]
vpc_config {
endpoint_private_access = true
endpoint_public_access = false
subnet_ids = [var.subnet_id_1, var.subnet_id_2]
}
kubernetes_network_config {
ip_family = "ipv4"
}
depends_on = [
aws_iam_role.eks-iam-role,
]
}
I also wanted to create a namespace after the cluster has been created, so I added the code below, which fails with an error even after adding depends_on.
# Creating namespace as test
resource "kubernetes_namespace" "test" {
  metadata {
    annotations = {
      name = "test-annotation"
    }
    labels = {
      mylabel = "test-label"
    }
    name = "test"
  }

  depends_on = [
    aws_eks_cluster.eks,
  ]
}
Error:
Error: Get "http://localhost/api/v1/namespaces/test": dial tcp 127.0.0.1:80: connect: connection refused
I've added the authentication info in the kubernetes provider, but the namespace creation should be skipped until the cluster has been created. Any thoughts?
I tried adding depends_on in the "kubernetes_namespace" resource, which should make it wait until the cluster has been created, but it doesn't. I'm expecting the resources to be created only after the cluster has been created.
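For what it's worth, the localhost error usually means the kubernetes provider ends up with no configuration and falls back to its default. A minimal sketch of wiring the provider to the cluster attributes, assuming the resource names from the question and using the aws_eks_cluster_auth data source for a token:
# Short-lived token for authenticating against the new cluster
data "aws_eks_cluster_auth" "eks" {
  name = aws_eks_cluster.eks.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}
Because the provider arguments reference aws_eks_cluster.eks directly, the provider picks up the real endpoint from the cluster instead of defaulting to localhost.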

Kubernetes cluster unreachable when the vm_size was changed in azurerm_kubernetes_cluster

Terraform version
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.16.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "=2.11.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.6.0"
    }
  }
  required_version = "=1.2.6"
}
Terraform Code
resource "azurerm_kubernetes_cluster" "my_cluster" {
name = local.cluster_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = local.dns_prefix
node_resource_group = local.resource_group_node_name
kubernetes_version = "1.24.3"
automatic_channel_upgrade = "patch"
sku_tier = var.sku_tier
default_node_pool {
name = "default"
type = "VirtualMachineScaleSets"
vm_size = var.default_pool_vm_size
enable_auto_scaling = true
max_count = var.default_pool_max_count
min_count = var.default_pool_min_count
os_disk_type = "Ephemeral"
os_disk_size_gb = var.default_pool_os_disk_size_gb
}
identity {
type = "SystemAssigned"
}
network_profile {
network_plugin = "kubenet"
}
}
provider "helm" {
kubernetes {
host = azurerm_kubernetes_cluster.my_cluster.kube_admin_config.0.host
client_certificate = base64decode(azurerm_kubernetes_cluster.my_cluster.kube_admin_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.my_cluster.kube_admin_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.my_cluster.kube_admin_config.0.cluster_ca_certificate)
}
}
resource "helm_release" "argocd" {
name = "argocd"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-cd"
version = "4.10.5"
create_namespace = true
namespace = "argocd"
}
Steps to Reproduce
All resources were created successfully when I executed the Terraform code for the first time.
But terraform plan failed when I changed the vm_size in the default node pool.
$ terraform plan
Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│   with helm_release.argocd,
│   on argocd.tf line 1, in resource "helm_release" "argocd":
│    1: resource "helm_release" "argocd" {
Expected Behavior
The cluster should be reachable even if the vm_size was changed.
Actual Behavior
The Kubernetes cluster is unreachable for the other Terraform providers (e.g. kubernetes, helm).
Test
I removed the argocd resource to avoid the above situation; then Terraform could plan and apply successfully.
I got the cluster config data from the Azure portal, and the azurePortalFQDN is different from the one at first-time creation.
Question
Will the cluster be recreated if I change a default node pool setting that is documented as "Changing this forces a new resource to be created" in the Terraform docs? Or will only the default node pool be deleted and recreated while the AKS cluster itself stays unchanged?
Why could the helm provider connect to the cluster at first-time creation, but fail to connect when the resource is recreated?
Thanks for your reply.

Terraform kubectl provider error: failed to create kubernetes rest client for read of resource

I have a Terraform config that (among other resources) creates a Google Kubernetes Engine cluster on Google Cloud. I'm using the kubectl provider to add YAML manifests for a ManagedCertificate and a FrontendConfig, since these are not part of the kubernetes or google providers.
This works as expected when applying the Terraform config from my local machine, but when I try to execute it in our CI pipeline, I get the following error for both of the kubectl_manifest resources:
Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp 127.0.0.1:80: connect: connection refused
Since I'm only facing this issue during CI, my first guess is that the service account is missing the right scopes, but as far as I can tell, all scopes are present. Any suggestions and ideas are greatly appreciated!
The provider is trying to connect to localhost, which means you either need to provide a proper kube-config file or set the configuration dynamically in Terraform.
You didn't mention how you are setting up auth, but here are two ways.
Poor way
resource "null_resource" "deploy-app" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
kubectl apply -f myapp.yaml ./temp/kube-config.yaml;
EOT
}
# will run always, its bad
triggers = {
always_run = "${timestamp()}"
}
depends_on = [
local_file.kube_config
]
}
resource "local_file" "kube_config" {
content = var.my_kube_config # pass the config file from ci variable
filename = "${path.module}/temp/kube-config.yaml"
}
Proper way
data "google_container_cluster" "cluster" {
name = "your_cluster_name"
}
data "google_client_config" "current" {
}
provider "kubernetes" {
host = data.google_container_cluster.cluster.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(
data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
)
}
data "kubectl_file_documents" "app_yaml" {
content = file("myapp.yaml")
}
resource "kubectl_manifest" "app_installer" {
for_each = data.kubectl_file_documents.app_yaml.manifests
yaml_body = each.value
}
If the cluster is in the same module, then the provider should be:
provider "kubernetes" {
  load_config_file       = "false"
  host                   = google_container_cluster.my_cluster.endpoint
  client_certificate     = google_container_cluster.my_cluster.master_auth.0.client_certificate
  client_key             = google_container_cluster.my_cluster.master_auth.0.client_key
  cluster_ca_certificate = google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate
}
Fixed the issue by adding load_config_file = false to the kubectl provider config. My provider config now looks like this:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
}
provider "kubectl" {
host = "https://${endpoint from GKE}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(CA certificate from GKE)
load_config_file = false
}

Add secret to freshly created Azure AKS using Terraform Kubernetes provider fails

I am creating a Kubernetes cluster with the Azure Terraform provider and trying to add a secret to it. The cluster gets created fine, but I am getting errors authenticating to the cluster when creating the secret. I tried 2 different Terraform Kubernetes provider configurations. Here is the main configuration:
variable "client_id" {}
variable "client_secret" {}
resource "azurerm_resource_group" "rg-example" {
name = "rg-example"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "k8s-example" {
name = "k8s-example"
location = azurerm_resource_group.rg-example.location
resource_group_name = azurerm_resource_group.rg-example.name
dns_prefix = "k8s-example"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
}
resource "kubernetes_secret" "secret_example" {
metadata {
name = "mysecret"
}
data = {
"something" = "super secret"
}
depends_on = [
azurerm_kubernetes_cluster.k8s-example
]
}
provider "azurerm" {
version = "=2.29.0"
features {}
}
output "host" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
}
output "cluster_username" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
}
output "cluster_password" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
output "client_key" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
Here is the first Kubernetes provider configuration using certificates:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
client_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
client_key = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
cluster_ca_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Failed to configure client: tls: failed to find any PEM data in certificate input
Here is the second Kubernetes provider configuration using HTTP Basic Authorization:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
username = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
password = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Post "https://k8s-example-c4a78c03.hcp.eastus.azmk8s.io:443/api/v1/namespaces/default/secrets": x509: certificate signed by unknown authority
ANALYSIS
I checked the outputs of azurerm_kubernetes_cluster.k8s-example and the data seems valid (username, password, host, etc.). Maybe I need an SSL certificate on my Kubernetes cluster, however I'm not certain, as I'm new to this. Can someone help me out?
According to this issue in hashicorp/terraform-provider-kubernetes, you need to use base64decode(). The example that author used:
provider "kubernetes" {
host = "${google_container_cluster.k8sexample.endpoint}"
username = "${var.master_username}"
password = "${var.master_password}"
client_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.cluster_ca_certificate)}"
}
That author said they got the same error as you if they left out the base64decode. You can read more about that function here: https://www.terraform.io/docs/configuration/functions/base64decode.html
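Applied to the AKS outputs from the question, the same fix would look roughly like this (a sketch; only the certificate fields need decoding):
provider "kubernetes" {
  version          = "=1.13.2"
  load_config_file = "false"

  host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host

  # kube_config certificate values from azurerm are base64-encoded, so decode them here
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate)
}
The "failed to find any PEM data" error comes from handing the provider the still-encoded certificate values; the basic-auth variant fails separately because it passes no cluster_ca_certificate at all.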

Managing GKE and its deployments with Terraform

I can use terraform to deploy a Kubernetes cluster in GKE.
Then I have set up the provider for Kubernetes as follows:
provider "kubernetes" {
host = "${data.google_container_cluster.primary.endpoint}"
client_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(data.google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(data.google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
By default, Terraform interacts with Kubernetes as the user "client", which has no permission to create (for example) deployments. So I get this error when I try to apply my changes with Terraform:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_deployment.foo: 1 error(s) occurred:
* kubernetes_deployment.foo: Failed to create deployment: deployments.apps is forbidden: User "client" cannot create deployments.apps in the namespace "default"
I don't know how I should proceed now; how should I give these permissions to the client user?
If the following fields are added to the provider, I am able to perform deployments, although after reading the documentation it seems these credentials are used for HTTP communication with the cluster, which is insecure if it is done through the internet.
username = "${data.google_container_cluster.primary.master_auth.0.username}"
password = "${data.google_container_cluster.primary.master_auth.0.password}"
Is there any other better way of doing so?
You can use the service account that is running Terraform:
data "google_client_config" "default" {}
provider "kubernetes" {
host = "${google_container_cluster.default.endpoint}"
token = "${data.google_client_config.default.access_token}"
cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
load_config_file = false
}
OR
give permissions to the default "client" user.
But you need valid authentication on the GKE cluster provider to run this :/ oops, circular dependency here:
resource "kubernetes_cluster_role_binding" "default" {
metadata {
name = "client-certificate-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "User"
name = "client"
api_group = "rbac.authorization.k8s.io"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = "kube-system"
}
subject {
kind = "Group"
name = "system:masters"
api_group = "rbac.authorization.k8s.io"
}
}
It looks like the user you are using is missing the required RBAC role for creating deployments. Make sure that user has the correct verbs for the deployments resource. You can take a look at these Role examples to get an idea.
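For example, a namespaced Role granting the usual verbs on deployments, bound to the "client" user from the error message, could be sketched in Terraform like this (the names and namespace are illustrative, and the circular-dependency caveat above still applies: something with sufficient rights has to create it first):
# Role allowing full management of Deployments in the default namespace
resource "kubernetes_role" "deployment_editor" {
  metadata {
    name      = "deployment-editor"
    namespace = "default"
  }

  rule {
    api_groups = ["apps"]
    resources  = ["deployments"]
    verbs      = ["get", "list", "watch", "create", "update", "patch", "delete"]
  }
}

# Bind the role to the "client" user
resource "kubernetes_role_binding" "deployment_editor" {
  metadata {
    name      = "deployment-editor-binding"
    namespace = "default"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.deployment_editor.metadata.0.name
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}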
You need to provide both. Check this example on how to integrate the Kubernetes provider with the Google Provider.
Example of how to configure the Kubernetes provider:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}