I've created a GKE cluster with Terraform and I want to manage Kubernetes resources with Terraform as well. However, I don't know how to pass the GKE cluster's credentials to the kubernetes provider.
I followed the example in the google_client_config data source documentation and I got
data.google_container_cluster.cluster.endpoint is null
Here is my failed attempt https://github.com/varshard/gke-cluster-terraform/tree/title-terraform
cluster.tf is responsible for creating a GKE cluster, which works fine.
kubernetes.tf is responsible for managing Kubernetes, which fails to get the GKE credentials.
You don't need the google_container_cluster data source here at all, because the relevant information is also available on the google_container_cluster resource that you are creating in the same context.
Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (e.g. a different state file and a different directory that is terraform apply'd).
I'm not sure how you ended up in your current state, where the data source selects an existing container cluster and you then define a resource to create that same cluster using the outputs of the data source, but this is overcomplicated and slightly broken: if you destroyed everything and reapplied, it wouldn't work as is.
Instead you should remove the google_container_cluster data source and amend your google_container_cluster resource to instead be:
resource "google_container_cluster" "cluster" {
name = "${var.project}-cluster"
location = var.region
# ...
}
And then refer to this resource in your kubernetes provider:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
Another answer to the question above:
When creating a cluster this way you need to use the kubernetes provider together with the google_client_config data source.
Check my code below; it is working fine for me.
resource "google_container_cluster" "primary" {
project = var.project_id
name = var.cluster-name
location = var.region
remove_default_node_pool = true
initial_node_count = 1
}
data "google_client_config" "current" {}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
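Once the provider is wired up like this, any Kubernetes resource can be managed through it. A minimal usage sketch (the namespace name is just an illustration, not part of the original answer):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example" # illustrative name
  }
}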
Related
I'm trying to bootstrap an HA Kubernetes cluster with Terraform and the Hetzner Cloud provider. In my setup the load balancer in front of the control plane nodes needs to know the IP addresses of the master nodes in the cluster, so that I can register the master nodes as targets for the load balancer.
Similarly, when bootstrapping the master nodes, knowledge of the load balancer's IP address is required to populate their configuration.
I could use a DNS name inside the masters' configuration and later create the association between the LB IP and that name, but I wanted to avoid using DNS names. Is there some other way of achieving this result?
For some context, here is an extract from my code:
resource "hcloud_load_balancer" "cluster-lb" {
name = "my-load-balancer"
load_balancer_type = "lb11"
location = "nbg1"
dynamic "target" {
for_each = var.master_node_ids # this is an input parameter
content { # that requires the master servers to exist.
type = "server"
server_id = target.value["id"]
}
}
}
locals {
  # Here I must create both an InitConfiguration and a ClusterConfiguration. These config files are used
  # by kubeadm to bootstrap the cluster. Among other things, ClusterConfiguration requires the
  # controlPlaneEndpoint argument to be specified. This represents the shared endpoint to access the
  # cluster. In a HA scenario it is the IP address of the load balancer.
  kubeadm_init = templatefile(
    "kubeadm_init.tmpl",
    {
      controlPlaneEndpoint = ???
    }
  )
}
# Later on the kubeadm_init is incorporated in a cloud-init write_files attribute so it is copied to
# the server. I've omitted this section as it is quite verbose and not really useful in answering the
# question. If necessary I can provide it as well.

# Here I create the master nodes:
resource "hcloud_server" "cluster-masters" {
for_each = local.masters
name = "server-${each.key}"
server_type = "cpx11"
image = "ubuntu-20.04"
location = each.value["availability_zone"]
user_data = local.cloud_init_data
network {
network_id = var.network_id
ip = each.value["ip"]
}
}
It seems to me that there is a cyclic dependency between the cluster load balancer and the servers: the load balancer must await the creation of the master nodes so as to add them as targets, while the master nodes must await the load balancer in order to get its IP and populate their configuration files before being created. How could I go about solving this issue, and is it an actual issue in the first place?
Thanks in advance to everyone, and let me know how to improve my question!
1) Create the hcloud_load_balancer cluster-lb resource without the optional target list:
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer#target
2) Use hcloud_load_balancer.cluster-lb.ipv4 for controlPlaneEndpoint:
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer#ipv4
3) Create an hcloud_load_balancer_target resource with type = "label_selector", load_balancer_id = hcloud_load_balancer.cluster-lb.id, and a label_selector that matches the labels of hcloud_server.cluster-masters, as sketched below:
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer_target
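Putting those pieces together, a minimal sketch of how the cycle can be broken; the role=master label and the omitted server arguments are assumptions for illustration, not taken from the question:

resource "hcloud_load_balancer" "cluster-lb" {
  name               = "my-load-balancer"
  load_balancer_type = "lb11"
  location           = "nbg1"
  # no target block here, so the load balancer no longer depends on the servers
}

locals {
  # the load balancer IP is known before any master node exists
  kubeadm_init = templatefile(
    "kubeadm_init.tmpl",
    {
      controlPlaneEndpoint = hcloud_load_balancer.cluster-lb.ipv4
    }
  )
}

resource "hcloud_server" "cluster-masters" {
  for_each    = local.masters
  name        = "server-${each.key}"
  server_type = "cpx11"
  image       = "ubuntu-20.04"
  labels      = { role = "master" } # assumed label, matched by the selector below
  user_data   = local.cloud_init_data
}

# registers every server matching the label, without referencing hcloud_server directly
resource "hcloud_load_balancer_target" "masters" {
  type             = "label_selector"
  load_balancer_id = hcloud_load_balancer.cluster-lb.id
  label_selector   = "role=master"
}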
I would also suggest using an existing Terraform module for deploying Kubernetes on Hetzner Cloud.
How can I change an existing GKE cluster to a GKE private cluster? Will I be able to connect to the Kubernetes API with kubectl from the internet based on firewall rules, or should I have a bastion host? I don't want to implement Cloud NAT or a NAT gateway. I have a Squid proxy VM that can handle internet access for the pods. I just need to be able to connect with kubectl to apply or modify anything.
I'm unsure how to modify the existing module I wrote to make the nodes private, and I'm not sure whether the cluster will get deleted if I try to apply the new changes related to the private GKE cluster.
resource "google_container_cluster" "primary" {
name = "prod"
network = "prod"
subnetwork = "private-subnet-a"
location = "us-west1-a"
remove_default_node_pool = true
initial_node_count = 1
depends_on = [var.depends_on_vpc]
}
resource "google_container_node_pool" "primary_nodes" {
depends_on = [var.depends_on_vpc]
name = "prod-node-pool"
location = "us-west1-a"
cluster = google_container_cluster.primary.name
node_count = 2
node_config {
preemptible = false
machine_type = "n1-standard-2"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/compute",
]
}
}
Answering the part of the question:
How to change the existing GKE cluster to GKE private cluster?
The GKE private cluster setting is immutable; it can only be set during GKE cluster provisioning.
To create your cluster as a private one you can either:
Create a new GKE private cluster.
Duplicate the existing cluster and set the copy to private:
This option is available in the GCP Cloud Console -> Kubernetes Engine -> CLUSTER-NAME -> Duplicate
Duplicating will clone the configuration of your previous cluster but not the workload (Pods, Deployments, etc.)
Will I be able to connect to the Kubectl API from internet based on firewall rules or should I have a bastion host?
Yes, you can, but it will depend heavily on the configuration you chose during the GKE cluster creation process.
As for ability to connect to your GKE private cluster, there is a dedicated documentation about it:
Cloud.google.com: Kubernetes Engine: Docs: How to: Private clusters
As for how to create a private cluster with Terraform, there is dedicated documentation with the configuration options specific to GKE, including the parameters responsible for provisioning a private cluster:
Registry.terraform.io: Providers: Hashicorp: Google: Latest: Docs: Resources: Container cluster
As for a basic example of creating a private GKE cluster with Terraform:
main.tf
provider "google" {
project = "INSERT_PROJECT_HERE"
region = "europe-west3"
zone = "europe-west3-c"
}
gke.tf
resource "google_container_cluster" "primary-cluster" {
name = "gke-private"
location = "europe-west3-c"
initial_node_count = 1
private_cluster_config {
enable_private_nodes = "true"
enable_private_endpoint = "false" # this option will make your cluster available through public endpoint
master_ipv4_cidr_block = "172.16.0.0/28"
}
ip_allocation_policy {
cluster_secondary_range_name = ""
services_secondary_range_name = ""
}
node_config {
machine_type = "e2-medium"
}
}
A side note!
I created a public GKE cluster, then modified the .tf responsible for its creation to support a private cluster. After running $ terraform plan, Terraform responded that the cluster would be recreated.
You will have to recreate the cluster since the private/public option is immutable; Terraform will handle the recreation.
To access the private cluster endpoint, you can choose the appropriate method:
1) Public endpoint access disabled: creates a private cluster with no client access to the public endpoint.
2) Public endpoint access enabled, authorized networks enabled: creates a private cluster with limited access to the public endpoint.
3) Public endpoint access enabled, authorized networks disabled: creates a private cluster with unrestricted access to the public endpoint.
To SSH into a node or pod from an authorized network, you can set up access via IAP.
I am using this Terraform module to manage multiple clusters with the 2nd option; it is fully configurable.
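Not that module itself, but a minimal hand-written sketch of option 2 (private nodes, public endpoint limited to authorized networks); the name, location and CIDR ranges below are placeholders:

resource "google_container_cluster" "private" {
  name               = "gke-private"
  location           = "us-west1-a"
  initial_node_count = 1

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false # public endpoint stays available...
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24" # ...but only from this range (e.g. your office or VPN), placeholder value
      display_name = "office"
    }
  }
}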
I am somewhat new to Kubernetes, and I am trying to learn about deploying Airflow to Kubernetes.
My objective is to deploy an "out-of-the-box" (or at least close to that) Airflow deployment on Kubernetes. I have created the Kubernetes cluster via Terraform (on EKS) and would like to deploy Airflow to the cluster. I found that Helm can help me deploy Airflow more easily than other solutions.
Here is what I have tried so far (a snippet, not the complete code):
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
data "helm_repository" "airflow" {
name = "airflow"
url = "https://airflow-helm.github.io/charts"
}
resource "helm_release" "airflow" {
name = "airflow-helm"
repository = data.helm_repository.airflow.metadata[0].name
chart = "airflow-chart"
}
I am not necessarily fixed on using Terraform (I just thought it might be easier and I wanted to keep state), so I am also happy to discover other solutions that will help me deploy Airflow with all the pods needed.
You can install it using Helm from the official repository, but there is a lot of additional configuration to consider. The Airflow config is described in the chart's values.yaml. You can take a look at this article for an example configuration.
For installation using Terraform you can take a look at this article, where both the Terraform config and the Helm chart's values are described in detail.
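As a rough sketch only (not taken from those articles): the helm provider can reuse the same EKS data sources as the kubernetes provider, and the chart can be installed straight from the repository URL given in the question. The chart name, namespace and values file below are assumptions:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

resource "helm_release" "airflow" {
  name             = "airflow"
  repository       = "https://airflow-helm.github.io/charts" # community chart repo from the question
  chart            = "airflow"                               # assumed chart name in that repo
  namespace        = "airflow"
  create_namespace = true

  # chart configuration lives in a values file; this path is hypothetical
  values = [file("${path.module}/airflow-values.yaml")]
}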
I want to create a secret in several Kubernetes clusters in Google Kubernetes Engine using Terraform.
I know that I can use "host", "token" and some other parameters in the "kubernetes" provider, but I can describe these parameters only once, and I don't know how to connect to another cluster within the same Terraform file.
My question is how to create a secret (or do other operations) in multiple Kubernetes clusters via Terraform. Maybe you know some tools on GitHub or other tips for doing this via a single Terraform file?
You can use an alias for a provider in Terraform, as described in the documentation.
So you can define multiple providers for multiple Kubernetes clusters and then refer to them by alias.
e.g.
provider "kubernetes" {
config_context_auth_info = "ops1"
config_context_cluster = "mycluster1"
alias = "cluster1"
}
provider "kubernetes" {
config_context_auth_info = "ops2"
config_context_cluster = "mycluster2"
alias = "cluster2"
}
resource "kubernetes_secret" "example" {
...
provider = kubernetes.cluster1
}
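For instance, a minimal sketch that creates the same secret in both clusters (the secret name and data are purely illustrative):

resource "kubernetes_secret" "example_cluster1" {
  provider = kubernetes.cluster1

  metadata {
    name = "example" # illustrative name
  }

  data = {
    password = "changeme" # illustrative value
  }
}

resource "kubernetes_secret" "example_cluster2" {
  provider = kubernetes.cluster2

  metadata {
    name = "example"
  }

  data = {
    password = "changeme"
  }
}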
If you're using Terraform submodules, the setup is a bit more involved. See this Terraform GitHub issue comment.
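One common pattern is to pass the aliased provider into the child module explicitly via the providers meta-argument; a minimal sketch under the assumption that the secret resources live in a child module (the module path is hypothetical):

module "cluster1_resources" {
  source = "./k8s-resources" # hypothetical child module containing the kubernetes_secret resources

  providers = {
    kubernetes = kubernetes.cluster1
  }
}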
In my Terraform config files I create a Kubernetes cluster on GKE and, once it is created, set up a Kubernetes provider to access said cluster and perform various actions such as setting up namespaces.
The problem is that some new namespaces were created in the cluster without Terraform, and now my attempts to import these namespaces into my state fail due to an inability to connect to the cluster, which I believe is caused by the following (taken from Terraform's official documentation of the import command):
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
The command I used to import the namespaces is pretty straightforward:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace
I also tried using the -provider="" and -config="" flags, but to no avail.
My Kubernetes provider configuration is this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
An example of a namespace resource I am trying to import:
resource "kubernetes_namespace" "my_new_namespace" {
  metadata {
    name = "my_new_namespace"
  }
}
The import command results in the following:
Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused
It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster IP with the actual credentials.
Is there any workaround for this use case?
Thanks in advance.
The issue lies with the provider configuration depending on dynamic data sources; the import command doesn't have access to them.
For the import, you have to hardcode the provider values.
Change this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
to:
provider "kubernetes" {
version = "~> 1.8"
host = "https://<ip-of-cluster>"
token = "<token>"
cluster_ca_certificate = base64decode(<cert>)
load_config_file = false
}
The token can be retrieved from gcloud auth print-access-token.
The IP and cert can be retrieved by inspecting the created container resource using terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here
For provider version 2+ you have to drop load_config_file.
Once these values are in place, run the import and then revert the changes to the provider.
(1) Create an entry in your kubeconfig file for your GKE cluster.
gcloud container clusters get-credentials cluster-name
see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry
(2) Point the Terraform Kubernetes provider at your kubeconfig:
provider "kubernetes" {
config_path = "~/.kube/config"
}