I am using a Terraform script to spin up a GKE cluster and then Helm 3 to install the Splunk connector on the cluster.
How do I connect to the newly created cluster dynamically in the Terraform kubernetes provider?
Let the provider reference the cluster resource directly, so it depends on the cluster's endpoint and certificate:
data "google_client_config" "terraform_config" {
provider = google
}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
token = data.google_client_config.terraform_config.access_token
}
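The helm provider can be wired to the same cluster outputs so the Splunk connector chart is only installed once the cluster exists. A minimal sketch, assuming the Splunk Connect for Kubernetes chart; the repository URL, chart name, and release name are assumptions, not taken from the question:

provider "helm" {
  kubernetes {
    load_config_file       = false
    host                   = "https://${google_container_cluster.my_cluster.endpoint}"
    cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
    token                  = data.google_client_config.terraform_config.access_token
  }
}

# Assumed chart location and name for the Splunk connector; adjust to the chart you actually use.
resource "helm_release" "splunk_connect" {
  name       = "splunk-connect"
  repository = "https://splunk.github.io/splunk-connect-for-kubernetes"
  chart      = "splunk-connect-for-kubernetes"
}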
How do I change the existing GKE cluster to a private GKE cluster? Will I be able to connect to the Kubernetes API (kubectl) from the internet based on firewall rules, or should I have a bastion host? I don't want to implement Cloud NAT or a NAT gateway. I have a Squid proxy VM that can handle internet access for the pods; I just need to be able to connect with kubectl to apply or modify anything.
I'm unsure how to modify the existing module I wrote to make the nodes private, and I'm not sure whether the cluster will get deleted if I try to apply the new changes for a private GKE cluster.
resource "google_container_cluster" "primary" {
name = "prod"
network = "prod"
subnetwork = "private-subnet-a"
location = "us-west1-a"
remove_default_node_pool = true
initial_node_count = 1
depends_on = [var.depends_on_vpc]
}
resource "google_container_node_pool" "primary_nodes" {
depends_on = [var.depends_on_vpc]
name = "prod-node-pool"
location = "us-west1-a"
cluster = google_container_cluster.primary.name
node_count = 2
node_config {
preemptible = false
machine_type = "n1-standard-2"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/compute",
]
}
}
Answering this part of the question:
How do I change the existing GKE cluster to a private GKE cluster?
The GKE "private cluster" setting is immutable: it can only be set while the cluster is being provisioned.
To create your cluster as a private one, you can either:
1) Create a new private GKE cluster.
2) Duplicate the existing cluster and set the copy to private. This option is available in the GCP Cloud Console -> Kubernetes Engine -> CLUSTER-NAME -> Duplicate. It clones the infrastructure configuration of your previous cluster, but not the workload (Pods, Deployments, etc.)
Will I be able to connect to the Kubernetes API (kubectl) from the internet based on firewall rules, or should I have a bastion host?
Yes, you could, but it will heavily depend on the configuration you chose during the GKE cluster creation process.
As for connecting to your GKE private cluster, there is dedicated documentation about it:
Cloud.google.com: Kubernetes Engine: Docs: How to: Private clusters
As for creating a private cluster with Terraform, the provider documentation lists the configuration options specific to GKE, including the parameters responsible for provisioning a private cluster:
Registry.terraform.io: Providers: Hashicorp: Google: Latest: Docs: Resources: Container cluster
As for a basic example of creating a private GKE cluster with Terraform:
main.tf
provider "google" {
project = "INSERT_PROJECT_HERE"
region = "europe-west3"
zone = "europe-west3-c"
}
gke.tf
resource "google_container_cluster" "primary-cluster" {
name = "gke-private"
location = "europe-west3-c"
initial_node_count = 1
private_cluster_config {
enable_private_nodes = "true"
enable_private_endpoint = "false" # this option will make your cluster available through public endpoint
master_ipv4_cidr_block = "172.16.0.0/28"
}
ip_allocation_policy {
cluster_secondary_range_name = ""
services_secondary_range_name = ""
}
node_config {
machine_type = "e2-medium"
}
}
A side note!
I created a public GKE cluster and then modified the .tf responsible for its creation to support a private cluster. After running $ terraform plan, Terraform responded that the cluster would be recreated.
You will have to recreate the cluster, since the private/public option is immutable; Terraform will destroy and recreate it.
To access a private cluster's endpoints, you can choose the appropriate method:
1) Public endpoint access disabled: creates a private cluster with no client access to the public endpoint.
2) Public endpoint access enabled, authorized networks enabled: creates a private cluster with limited access to the public endpoint.
3) Public endpoint access enabled, authorized networks disabled: creates a private cluster with unrestricted access to the public endpoint.
To SSH into a node or pod from an authorized network, you can set up access via IAP.
I am using this Terraform module to manage multiple clusters with the 2nd option; it is fully configurable. A sketch of that option is shown below.
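A minimal Terraform sketch of option 2 (public endpoint enabled, authorized networks enabled), assuming the google provider is already configured; the resource name, CIDR ranges, and display name are placeholders, not values taken from the question:

resource "google_container_cluster" "private_authorized" {
  name               = "gke-private-authorized"
  location           = "europe-west3-c"
  initial_node_count = 1

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false # keep the public endpoint, but restrict who may reach it
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  # Private clusters are VPC-native; empty names let GKE pick the secondary ranges.
  ip_allocation_policy {
    cluster_secondary_range_name  = ""
    services_secondary_range_name = ""
  }

  # Only these source ranges may reach the public control-plane endpoint.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24" # placeholder: e.g. your office or proxy VM range
      display_name = "allowed-network"
    }
  }
}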
I have a Terraform configuration that creates a GKE cluster and node pools and then uses the kubernetes provider to set up my app. When I run this configuration in a new project which doesn't have the cluster created yet, the kubernetes provider throws the errors below:
Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/rabbitmq": dial tcp [::1]:80: connect: connection refused
If I comment out all the kubernetes parts, run terraform apply to create the cluster, and then uncomment the kubernetes parts and apply again, it works fine and creates all the Kubernetes resources.
I checked the docs for the kubernetes provider and they say the cluster should already exist.
k8s getting started
There are at least 2 steps involved in scheduling your first container on a Kubernetes cluster. You need the Kubernetes cluster with all its components running somewhere and then schedule the Kubernetes resources, like Pods, Replication Controllers, Services etc.
How can I tell Terraform to wait for the cluster to be created before planning the kubernetes resources?
My config looks like this:
main.tf
.
.
.
module "gke" {
source = "./modules/gke"
name = var.gke_cluster_name
project_id = data.google_project.project.project_id
gke_location = var.gke_zone
.
.
.
}
data "google_client_config" "provider" {}
provider "kubernetes" {
version = "~> 1.13.3"
alias = "my-kuber"
host = "https://${module.gke.endpoint}"
token = data.google_client_config.provider.access_token
cluster_ca_certificate = module.gke.cluster_ca_certificate
load_config_file = false
}
resource "kubernetes_namespace" "ns" {
provider = kubernetes.my-kuber
depends_on = [module.gke]
metadata {
name = var.namespace
}
}
I am somewhat new to Kubernetes, and I am trying to learn about deploying Airflow to Kubernetes.
My objective is to deploy an "out-of-the-box" (or at least close to that) Airflow setup on Kubernetes. I have created the Kubernetes cluster via Terraform (on EKS) and would like to deploy Airflow to it. I found that Helm can help me deploy Airflow more easily than other solutions.
Here is what I have tried so far (snippet and not complete code):
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
data "helm_repository" "airflow" {
name = "airflow"
url = "https://airflow-helm.github.io/charts"
}
resource "helm_release" "airflow" {
name = "airflow-helm"
repository = data.helm_repository.airflow.metadata[0].name
chart = "airflow-chart"
}
I am not necessarily fixed on using Terraform (I just thought it might be easier and I wanted to keep state), so I am also happy to discover other solutions that will help me deploy Airflow with all the pods it needs.
You can install it using Helm from the official repository, but there is a lot of additional configuration to consider. The Airflow configuration is described in the chart's values.yaml. You can take a look at this article for an example configuration.
For installation using Terraform, you can take a look at this article, where both the Terraform config and the Helm chart's values are described in detail.
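As a rough sketch of what the Terraform side can look like, using the chart repository from the question above; the chart name, release name, namespace, and the value shown are assumptions to check against the chart's own documentation:

resource "helm_release" "airflow" {
  name             = "airflow"
  repository       = "https://airflow-helm.github.io/charts" # repository from the question above
  chart            = "airflow"                               # assumed chart name in that repository
  namespace        = "airflow"
  create_namespace = true

  # Values normally kept in the chart's values.yaml can be set inline; this key is illustrative.
  set {
    name  = "airflow.executor"
    value = "CeleryExecutor"
  }
}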
I've created a GKE cluster with Terraform and I want to manage Kubernetes with Terraform as well. However, I don't know how to pass GKE's credentials to the kubernetes provider.
I followed the example in the google_client_config data source documentation and I got:
data.google_container_cluster.cluster.endpoint is null
Here is my failed attempt https://github.com/varshard/gke-cluster-terraform/tree/title-terraform
cluster.tf is responsible for creating the GKE cluster, which works fine.
kubernetes.tf is responsible for managing Kubernetes, which fails to get the GKE credentials.
You don't need the google_container_cluster data source here at all, because the relevant information is also available from the google_container_cluster resource that you are creating in the same context.
Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (e.g. a different state file in a different directory that is terraform apply'd separately).
I'm not sure how you ended up in your current state, where the data source selects an existing container cluster and a resource then creates that same cluster from the data source's outputs, but this is overcomplicated and slightly broken: if you destroyed everything and reapplied, it wouldn't work as is.
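For contrast, a data source would only be appropriate if the cluster were managed elsewhere; a minimal sketch (the cluster name and location are purely illustrative):

# Reads an existing cluster that some other configuration (or the console) manages.
data "google_container_cluster" "existing" {
  name     = "cluster-managed-elsewhere"
  location = "europe-west1"
}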
Instead you should remove the google_container_cluster data source and amend your google_container_cluster resource to instead be:
resource "google_container_cluster" "cluster" {
name = "${var.project}-cluster"
location = var.region
# ...
}
And then refer to this resource in your kubernetes provider:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
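Note that the token above comes from a google_client_config data source, which still needs to be declared somewhere in the configuration (the name current matches the reference above):

# Exposes the access token of the credentials Terraform itself is running with.
data "google_client_config" "current" {}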
The answer to the question above is below:
While creating the cluster you need to use the kubernetes provider together with the google_client_config data source.
Check my code below; it is working fine for me.
resource "google_container_cluster" "primary" {
project = var.project_id
name = var.cluster-name
location = var.region
remove_default_node_pool = true
initial_node_count = 1
}
data "google_client_config" "current" {}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
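With the provider wired to the cluster resource like this, Kubernetes objects can be declared in the same configuration; a minimal sketch (the namespace name is just an example):

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example" # placeholder namespace name
  }
}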
In my Terraform config files I create a Kubernetes cluster on GKE and, once it is created, set up a Kubernetes provider to access that cluster and perform various actions, such as setting up namespaces.
The problem is that some new namespaces were created in the cluster without Terraform, and my attempts to import these namespaces into my state fail because Terraform cannot connect to the cluster. I believe this is due to the following (taken from Terraform's official documentation of the import command):
The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.
The command I used to import the namespaces is pretty straightforward:
terraform import kubernetes_namespace.my_new_namespace my_new_namespace
I also tried using the -provider="" and -config="" flags, but to no avail.
My Kubernetes provider configuration is this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
An example for a namespace resource I am trying to import is this:
resource "kubernetes_namespace" "my_new_namespace" {
metadata {
name = "my_new_namespace"
}
}
The import command results in the following:
Error: Get http://localhost/api/v1/namespaces/my_new_namespace: dial tcp [::1]:80: connect: connection refused
It's obviously doomed to fail, since it's trying to reach localhost instead of the actual cluster IP and configuration.
Is there any workaround for this use case?
Thanks in advance.
The issue lies with the dynamically configured provider: the import command doesn't have access to the data source values.
For the import process, you have to hardcode the provider values.
Change this:
provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
to:
provider "kubernetes" {
  version                = "~> 1.8"
  host                   = "https://<ip-of-cluster>"
  token                  = "<token>"
  cluster_ca_certificate = base64decode(<cert>)
  load_config_file       = false
}
The token can be retrieved with gcloud auth print-access-token.
The IP and certificate can be retrieved by inspecting the created container cluster in state: terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here
For provider version 2+ you have to drop load_config_file.
Once that is in place, run the import and then revert the changes to the provider.
(1) Create an entry in your kubeconfig file for your GKE cluster.
gcloud container clusters get-credentials cluster-name
see: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry
(2) Point the Terraform kubernetes provider at your kubeconfig:
provider "kubernetes" {
config_path = "~/.kube/config"
}