How do I change an existing GKE cluster to a GKE private cluster? Will I be able to connect to the Kubernetes API (kubectl) from the internet based on firewall rules, or should I have a bastion host? I don't want to implement Cloud NAT or a NAT gateway. I have a Squid proxy VM that can handle internet access for pods. I just need to be able to connect with kubectl to apply or modify anything.
I'm unsure how to modify the existing module I wrote to make the nodes private, and I'm not sure whether the cluster will get deleted if I apply the new changes related to a private GKE cluster.
resource "google_container_cluster" "primary" {
name = "prod"
network = "prod"
subnetwork = "private-subnet-a"
location = "us-west1-a"
remove_default_node_pool = true
initial_node_count = 1
depends_on = [var.depends_on_vpc]
}
resource "google_container_node_pool" "primary_nodes" {
depends_on = [var.depends_on_vpc]
name = "prod-node-pool"
location = "us-west1-a"
cluster = google_container_cluster.primary.name
node_count = 2
node_config {
preemptible = false
machine_type = "n1-standard-2"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/compute",
]
}
}
Answering the part of the question:
How to change the existing GKE cluster to GKE private cluster?
The GKE private cluster setting is immutable: it can only be set during cluster provisioning.
To create your cluster as a private one you can either:
Create a new GKE private cluster.
Duplicate existing cluster and set it to private:
This setting is available in GCP Cloud Console -> Kubernetes Engine -> CLUSTER-NAME -> Duplicate
This will clone the infrastructure configuration of your previous cluster but not the workloads (Pods, Deployments, etc.)
Will I be able to connect to the Kubernetes API (kubectl) from the internet based on firewall rules, or should I have a bastion host?
Yes, you can, but it will depend heavily on the configuration you choose during the GKE cluster creation process.
As for the ability to connect to your GKE private cluster, there is dedicated documentation about it:
Cloud.google.com: Kubernetes Engine: Docs: How to: Private clusters
As for creating a private cluster with Terraform, the provider documentation lists the configuration options specific to GKE, including the parameters responsible for provisioning a private cluster:
Registry.terraform.io: Providers: Hashicorp: Google: Latest: Docs: Resources: Container cluster
As for a basic example of creating a private GKE cluster with Terraform:
main.tf
provider "google" {
project = "INSERT_PROJECT_HERE"
region = "europe-west3"
zone = "europe-west3-c"
}
gke.tf
resource "google_container_cluster" "primary-cluster" {
name = "gke-private"
location = "europe-west3-c"
initial_node_count = 1
private_cluster_config {
enable_private_nodes = "true"
enable_private_endpoint = "false" # this option will make your cluster available through public endpoint
master_ipv4_cidr_block = "172.16.0.0/28"
}
ip_allocation_policy {
cluster_secondary_range_name = ""
services_secondary_range_name = ""
}
node_config {
machine_type = "e2-medium"
}
}
A side note!
I created a public GKE cluster and modified the .tf responsible for its creation to make it private. After running $ terraform plan, Terraform responded that the cluster would be recreated.
You will have to recreate the cluster, since the private/public option is immutable; Terraform will recreate the cluster when you apply.
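Applied to the resource from the question, the change would look roughly like this (a sketch only; the master CIDR is an example value that must not overlap your VPC ranges):
resource "google_container_cluster" "primary" {
  name                     = "prod"
  network                  = "prod"
  subnetwork               = "private-subnet-a"
  location                 = "us-west1-a"
  remove_default_node_pool = true
  initial_node_count       = 1

  # Making the nodes private forces the cluster to be replaced.
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false # keep the public endpoint so kubectl works from the internet
    master_ipv4_cidr_block  = "172.16.0.0/28" # example value
  }

  # VPC-native (alias IP) networking is required for private clusters.
  ip_allocation_policy {}

  depends_on = [var.depends_on_vpc]
}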
To access the private cluster's control plane endpoint, you can choose one of the following methods:
1) Public endpoint access disabled: creates a private cluster with no client access to the public endpoint.
2) Public endpoint access enabled, authorized networks enabled: creates a private cluster with limited access to the public endpoint.
3) Public endpoint access enabled, authorized networks disabled: creates a private cluster with unrestricted access to the public endpoint.
To SSH into a node or pod from the authorized network, you can set up access via IAP.
I am using this Terraform module to manage multiple clusters with the 2nd option; it is fully configurable. A sketch of that option using the plain resource is shown below.
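For reference, a minimal sketch of the 2nd option using the google_container_cluster resource directly (the names and CIDR ranges are placeholders, not values from the question):
resource "google_container_cluster" "private" {
  name               = "gke-private"
  location           = "europe-west3-c"
  initial_node_count = 1

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false # the public endpoint stays enabled
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  ip_allocation_policy {}

  # Only these ranges may reach the public control-plane endpoint.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.0/24" # e.g. your office, VPN or bastion range
      display_name = "trusted-range"
    }
  }
}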
Related
We use Terraform to create all of our infrastructure resources, then we use Helm to deploy apps in our cluster.
We're looking for a way to streamline the creation of infra and apps, so currently this is what we do:
Terraform creates the Kubernetes cluster, VPC network, etc., and a couple of static public IP addresses
We have to wait for the dynamic creation of these static IPs by Terraform to complete
We find out which public IP has been created and manually add it to the loadBalancerIP: spec in our ingress controller Helm chart
If at all possible, I'd like to store the generated public IP somewhere via Terraform (a ConfigMap would be nice) and then reference it in the ingress service loadBalancerIP: spec, so the end-to-end process is sorted.
I know ConfigMaps are for pods and I don't think they can be used for Kubernetes Service objects - does anyone have any thoughts/ideas on how I could achieve this?
I suggest creating a static public IP in GCP using Terraform, specifying the name you want, like this:
module "address" {
source = "terraform-google-modules/address/google"
version = "3.0.0"
project_id = "your-project-id"
region = "your-region"
address_type = "EXTERNAL"
names = [ "the-name-you-want" ]
global = true
}
You can then refer to this static public IP by name in the Kubernetes Ingress resource via the annotation kubernetes.io/ingress.global-static-ip-name: "the-name-you-want", like this:
resource "kubernetes_ingress_v1" "example" {
wait_for_load_balancer = true
metadata {
name = "example"
namespace = "default"
annotations = {
"kubernetes.io/ingress.global-static-ip-name" = "the-name-you-want"
}
}
spec {
....
This will create the Ingress resource 'example' in GKE and attach the static public IP named 'the-name-you-want' to it.
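For completeness, a minimal spec block might look like the following (the backend Service name and port are placeholders, not part of the original answer):
  spec {
    default_backend {
      service {
        name = "my-backend-service" # hypothetical Service name
        port {
          number = 80
        }
      }
    }
  }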
I'm trying to bootstrap an HA Kubernetes cluster with Terraform and the Hetzner Cloud provider. In my setup the load balancer in front of the control plane nodes needs to know the IP addresses of the master nodes in the cluster, so that I can register the master nodes as targets for the load balancer.
Similarly, when bootstrapping the master nodes, knowledge of the load balancer IP address is required to populate their configuration.
I could use a DNS name inside the masters' configuration and later create the association between the LB IP and that name, but I wanted to avoid using DNS names. Is there some other way of achieving this result?
For some context, here is an extract from my code:
resource "hcloud_load_balancer" "cluster-lb" {
name = "my-load-balancer"
load_balancer_type = "lb11"
location = "nbg1"
dynamic "target" {
for_each = var.master_node_ids # this is an input parameter
content { # that requires the master servers to exist.
type = "server"
server_id = target.value["id"]
}
}
}
locals {
  # Here I must create both an InitConfiguration and a ClusterConfiguration. These config files are used
  # by kubeadm to bootstrap the cluster. Among other things, ClusterConfiguration requires the
  # controlPlaneEndpoint argument to be specified. This represents the shared endpoint to access the
  # cluster. In an HA scenario it is the IP address of the load balancer.
  kubeadm_init = templatefile(
    "kubeadm_init.tmpl",
    {
      controlPlaneEndpoint = ???
    }
  )
}
# Later on the kubeadm_init is incorporated in a cloud-init write_files attribute so it is copied to
# the server. I've omitted this section as it is quite verbose and not really useful in answering the
# question. If necessary, I can provide it as well.

# Here I create the master nodes:
resource "hcloud_server" "cluster-masters" {
for_each = local.masters
name = "server-${each.key}"
server_type = "cpx11"
image = "ubuntu-20.04"
location = each.value["availability_zone"]
user_data = local.cloud_init_data
network {
network_id = var.network_id
ip = each.value["ip"]
}
}
It seems to me that there is a cyclic dependency between the cluster load balancer and the servers: the first must await the creation of the master nodes so it can add them as targets, while the master nodes must await the load balancer in order to get its IP and populate their configuration files before being created. How could I go about solving this, and is it an actual issue in the first place?
Thanks in advance to everyone and let me know how to improve my question!
1) Create the hcloud_load_balancer cluster-lb resource without the optional target list:
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer#target
2) Use hcloud_load_balancer.cluster-lb.ipv4 for controlPlaneEndpoint:
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer#ipv4
3) Create an hcloud_load_balancer_target resource with type = "label_selector", load_balancer_id = hcloud_load_balancer.cluster-lb.id and a label_selector matching the master nodes' labels (see the sketch after this list):
https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs/resources/load_balancer_target
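A minimal sketch of that arrangement, assuming the master servers carry a label such as role = "master" (the label key/value and selector string are illustrative, not from the question):
resource "hcloud_load_balancer" "cluster-lb" {
  name               = "my-load-balancer"
  load_balancer_type = "lb11"
  location           = "nbg1"
  # no target block here; targets are attached separately below
}

locals {
  kubeadm_init = templatefile(
    "kubeadm_init.tmpl",
    {
      # the LB no longer depends on the servers, so this reference is safe
      controlPlaneEndpoint = hcloud_load_balancer.cluster-lb.ipv4
    }
  )
}

resource "hcloud_load_balancer_target" "masters" {
  type             = "label_selector"
  load_balancer_id = hcloud_load_balancer.cluster-lb.id
  label_selector   = "role=master" # assumes each master server sets labels = { role = "master" }
}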
I suggest using an existing Terraform module for deploying Kubernetes on Hetzner Cloud.
I am using a Terraform script to spin up a GKE cluster and then Helm 3 to install the Splunk connector on the cluster.
How do I connect to the newly created cluster in the Terraform kubernetes provider dynamically?
Let the provider depend on the cluster certificate:
data "google_client_config" "terraform_config" {
provider = google
}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
token = data.google_client_config.terraform_config.access_token
}
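Since the question also mentions Helm 3, the helm provider can be wired up the same way (a sketch assuming the same my_cluster resource name as above):
provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.my_cluster.endpoint}"
    cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
    token                  = data.google_client_config.terraform_config.access_token
  }
}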
I want to create a secret in several Kubernetes clusters in Google Kubernetes Engine using Terraform.
I know that I can use "host", "token" and some other parameters in the "kubernetes" provider, but I can describe these parameters only once, and I don't know how to connect to another cluster within the same Terraform configuration.
My question is how to create a secret (or perform other operations) in multiple Kubernetes clusters via Terraform. Maybe you know some tools on GitHub or other tips for doing this with a single Terraform file?
You can use an alias for the provider in Terraform, as described in the documentation.
So you can define multiple providers for multiple Kubernetes clusters and then refer to them by alias.
e.g.
provider "kubernetes" {
config_context_auth_info = "ops1"
config_context_cluster = "mycluster1"
alias = "cluster1"
}
provider "kubernetes" {
config_context_auth_info = "ops2"
config_context_cluster = "mycluster2"
alias = "cluster2"
}
resource "kubernetes_secret" "example" {
...
provider = kubernetes.cluster1
}
If you're using Terraform submodules, the setup is a bit more involved; see this Terraform GitHub issue comment. A rough sketch is shown below.
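A rough sketch of passing a different aliased provider to each module instance (the module path is a hypothetical placeholder; the child module simply uses the default kubernetes provider):
# Root module: each instance of the module talks to a different cluster.
module "cluster1_secret" {
  source = "./modules/k8s-secret" # hypothetical module path
  providers = {
    kubernetes = kubernetes.cluster1
  }
}

module "cluster2_secret" {
  source = "./modules/k8s-secret"
  providers = {
    kubernetes = kubernetes.cluster2
  }
}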
I've created a GKE cluster with Terraform, and I want to manage Kubernetes with Terraform as well. However, I don't know how to pass GKE's credentials to the kubernetes provider.
I followed the example in the google_client_config data source documentation and I got:
data.google_container_cluster.cluster.endpoint is null
Here is my failed attempt https://github.com/varshard/gke-cluster-terraform/tree/title-terraform
cluster.tf is responsible for creating a GKE cluster, which works fine.
kubernetes.tf is responsible for managing Kubernetes, which fails to get the GKE credentials.
You don't need the google_container_cluster data source here at all because the relevant information is also in the google_container_cluster resource that you are creating in the same context.
Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (e.g. a different state file and a different directory that is terraform apply'd).
I'm not sure how you got to your current state, where the data source selects an existing container cluster and you then define a resource to create that container cluster using the outputs of the data source, but this is overcomplicated and slightly broken: if you destroyed everything and reapplied, it wouldn't work as is.
Instead you should remove the google_container_cluster data source and amend your google_container_cluster resource to instead be:
resource "google_container_cluster" "cluster" {
name = "${var.project}-cluster"
location = var.region
# ...
}
And then refer to this resource in your kubernetes provider:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
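Note that this references a google_client_config data source, so a declaration like the following (shown again in the next answer) is also needed:
data "google_client_config" "current" {}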
The answer to the above question is below.
While creating the cluster, you need to use the kubernetes provider and the google_client_config data source.
Check my code below; it's working fine for me.
resource "google_container_cluster" "primary" {
project = var.project_id
name = var.cluster-name
location = var.region
remove_default_node_pool = true
initial_node_count = 1
}
data "google_client_config" "current" {}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}