Terraform: GKE Cluster with Nodes in different zones

I have this Terraform GKE cluster with 3 nodes. When I deploy this cluster, all nodes are placed in the same zone, which is europe-west1-b.
gke-cluster.tf
resource "google_container_cluster" "primary" {
name = var.cluster_name
initial_node_count = var.initial_node_count
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
//machine_type = "e2-medium"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = var.app_name
}
tags = ["app", var.app_name]
}
timeouts {
create = "30m"
update = "40m"
}
}
variables.tf
variable "cluster_name" {
default = "cluster"
}
variable "app_name" {
default = "my-app"
}
variable "initial_node_count" {
default = 3
}
variable "kubernetes_min_ver" {
default = "latest"
}
variable "kubernetes_max_ver" {
default = "latest"
}
variable "remove_default_node_pool" {
default = false
}
variable "project" {
default = "your-project-name"
}
variable "credentials" {
default = "terraform-key.json"
}
variable "region" {
default = "europe-west1"
}
variable "zone" {
type = list(string)
description = "The zones to host the cluster in."
default = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
}
I would like to know whether it's possible to deploy each node in a different zone.
If yes, how can I do it using Terraform?

Simply set location to a region in order to create a regional cluster:
resource "google_container_cluster" "primary" {
  name               = "cluster"
  location           = "us-central1"
  initial_node_count = "3"
}
The above will bring up 9 nodes, with each of the zones used (us-central1-f, -a and -b here) containing 3 nodes, because initial_node_count applies per zone. If you only want 1 node per zone, just change initial_node_count to 1.
More info in the google_container_cluster argument reference in the Terraform docs.
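Applied to the variables from the question, a minimal sketch (the remaining arguments stay as in the original gke-cluster.tf):
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  location           = var.region   # regional cluster in europe-west1
  node_locations     = var.zone     # one set of nodes in each listed zone
  initial_node_count = 1            # counted per zone: 3 zones x 1 = 3 nodes total
}
If node_locations is omitted, GKE picks three zones in the region by default.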

Related

Is it possible to create a zone only node pool in a regional cluster in GKE?

I have a regional cluster for redundancy. In this cluster I want to create a node pool in just one zone of that region. Is this configuration possible? The reason I am trying this is that I want to run a service like RabbitMQ in just one zone to avoid split-brain, while my application services run in all zones of the region for redundancy.
I am using Terraform to create the cluster and node pools; below is my config for creating the regional cluster and the zonal node pool.
resource "google_container_cluster" "regional_cluster" {
provider = google-beta
project = "my-project"
name = "my-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b", "us-central1-c"]
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "one_zone" {
project = google_container_cluster.regional_cluster.project
name = "zone-pool"
location = "us-central1-b"
cluster = google_container_cluster.regional_cluster.name
node_config {
machine_type = var.machine_type
image_type = var.image_type
disk_size_gb = 100
disk_type = "pd-standard"
}
}
This throws an error message:
error creating NodePool: googleapi: Error 404: Not found: projects/my-project/zones/us-central1-b/clusters/my-cluster., notFound
It turned out that location in google_container_node_pool should specify the cluster master's region or zone; to set where the node pool's nodes actually run, node_locations should be used. Below is the config that worked:
resource "google_container_cluster" "regional_cluster" {
provider = google-beta
project = "my-project"
name = "my-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b", "us-central1-c"]
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "one_zone" {
project = google_container_cluster.regional_cluster.project
name = "zone-pool"
location = google_container_cluster.regional_cluster.location
node_locations = ["us-central1-b"]
cluster = google_container_cluster.regional_cluster.name
node_config {
machine_type = var.machine_type
image_type = var.image_type
disk_size_gb = 100
disk_type = "pd-standard"
}
}
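To actually pin something like RabbitMQ to those nodes (the stated goal), one option is to label and taint the single-zone pool so that only pods which explicitly tolerate it are scheduled there. A sketch under that assumption; the dedicated/rabbitmq key and value are hypothetical placeholders:
resource "google_container_node_pool" "one_zone" {
  project        = google_container_cluster.regional_cluster.project
  name           = "zone-pool"
  location       = google_container_cluster.regional_cluster.location
  node_locations = ["us-central1-b"]
  cluster        = google_container_cluster.regional_cluster.name

  node_config {
    machine_type = var.machine_type
    image_type   = var.image_type
    disk_size_gb = 100
    disk_type    = "pd-standard"

    labels = {
      dedicated = "rabbitmq"   # hypothetical label for the pods' nodeSelector
    }

    taint {
      key    = "dedicated"     # hypothetical taint; RabbitMQ pods need a matching toleration
      value  = "rabbitmq"
      effect = "NO_SCHEDULE"
    }
  }
}
The RabbitMQ pods then carry a nodeSelector on the label plus a toleration for the taint, while everything else stays off the zone-pinned nodes.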

Autoscaling GKE node pool stuck at 0 instances even with autoscaling set at min 3 max 5?

I've created a cluster using terraform with:
provider "google" {
credentials = "${file("gcp.json")}"
project = "${var.gcp_project}"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_container_cluster" "primary" {
name = "${var.k8s_cluster_name}"
location = "us-central1-a"
project = "${var.gcp_project}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
project = "${var.gcp_project}"
name = "my-node-pool"
location = "us-central1-a"
cluster = "${google_container_cluster.primary.name}"
# node_count = 3
autoscaling {
min_node_count = 3
max_node_count = 5
}
node_config {
# preemptible = true
machine_type = "g1-small"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only"
]
}
}
Surprisingly, this node pool seems to be 'stuck' at 0 instances. Why? How can I diagnose this?
You should add initial_node_count (e.g. initial_node_count = 3) to the google_container_node_pool resource.
The official documentation says you should not use node_count together with autoscaling.
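Per that advice, a corrected sketch of the node pool from the question; initial_node_count seeds the pool and the autoscaler then keeps it between 3 and 5:
resource "google_container_node_pool" "primary_preemptible_nodes" {
  project            = "${var.gcp_project}"
  name               = "my-node-pool"
  location           = "us-central1-a"
  cluster            = "${google_container_cluster.primary.name}"
  initial_node_count = 3   # starting size; leave node_count unset when autoscaling is on

  autoscaling {
    min_node_count = 3
    max_node_count = 5
  }

  node_config {
    machine_type = "g1-small"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only"
    ]
  }
}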

Terraform: Create single node GKE cluster

I am trying to create a GKE cluster with a node size of 1. However, it always creates a cluster of 3 nodes. Why is that?
resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "gke-node-pool" {
name = "sonarqube"
location = "asia-southeast1"
cluster = google_container_cluster.gke-cluster.name
node_count = 1
node_config {
machine_type = "n1-standard-1"
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = "sonarqube"
}
}
}
OK, I found I can do so using node_locations:
resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
node_locations = [
"asia-southeast1-a"
]
remove_default_node_pool = true
initial_node_count = 1
}
Without that, GKE will create 1 node per zone: a regional location spans three zones by default, hence the three nodes.
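Alternatively, if you don't need a regional control plane at all, making the cluster zonal has the same effect; a sketch (the node pool's location then has to name the same zone):
resource "google_container_cluster" "gke-cluster" {
  name                     = "sonarqube"
  location                 = "asia-southeast1-a"   # a zone, not a region: zonal cluster
  remove_default_node_pool = true
  initial_node_count       = 1
}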

Alibaba Cloud Managed Kubernetes Terraform

I want to create a Kubernetes cluster with Terraform, following the doc page here: https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
variable "name" {
default = "my-first-k8s"
}
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
}
data "alicloud_instance_types" "default" {
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
cpu_core_count = 1
memory_size = 2
}
Where do I insert the vswitch ID? And how do I set the region?
You can insert the vswitch id in the resource definition:
resource "alicloud_cs_managed_kubernetes" "k8s" {
name = "${var.name}"
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
new_nat_gateway = true
worker_instance_types = ["${data.alicloud_instance_types.default.instance_types.0.id}"]
worker_numbers = [2]
password = "Test12345"
pod_cidr = "172.20.0.0/16"
service_cidr = "172.21.0.0/20"
install_cloud_monitor = true
worker_disk_category = "cloud_efficiency"
vswitch_ids = ["your-alibaba-vswitch-id"]
}
For the zones: alicloud_zones only takes filter arguments; its zones list (each element with an id, a local_name, and so on) is exported by the data source, not set on it. To use a different zone than the first, pick it by index:
data "alicloud_zones" "main" {
  available_resource_creation = "VSwitch"
}
and then reference, say, the region's second zone as "${data.alicloud_zones.main.zones.1.id}" wherever a zone is needed.
To set the region:
You can set it while configuring the Alicloud provider in Terraform itself:
provider "alicloud" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
For instance, let me consider Beijing as the region:
provider "alicloud" {
access_key = "accesskey"
secret_key = "secretkey"
region = "cn-beijing"
}
To set vswitch IDs:
While defining the resource you can insert the desired vswitch:
resource "alicloud_instance" "example" {
  # ...
  instance_name = "in-the-vpc"
  vswitch_id    = "${data.alicloud_vswitches.vswitches_ds.vswitches.0.id}"
  # ...
}
For instance, with vsw-25naue4gz as the vswitch ID:
resource "alicloud_instance" "example" {
  # ...
  vswitch_id = "vsw-25naue4gz"
  # ...
}

how to create a multi-node redshift cluster only for prod using Terraform

I have two Redshift clusters, prod and dev, and I am using the same Terraform module for both.
How can I have two nodes only for the prod cluster? Please let me know what interpolation syntax I should be using.
variable "node_type" {
default = "dc1.large"
}
resource "aws_redshift_cluster" "****" {
cluster_identifier = "abc-${var.env}"
node_type = "${var.node_type}"
cluster_type = "single-node" ==> multi node
number_of_nodes = 2 ==> only for prod
Use the map type:
variable "node_type" {
default = "dc1.large"
}
variable "env" {
default = "development"
}
variable "redshift_cluster_type" {
type = "map"
default = {
development = "single-node"
production = "multi-node"
}
}
variable "redshift_node" {
type = "map"
default = {
development = "1"
production = "2"
}
}
resource "aws_redshift_cluster" "****" {
cluster_identifier = "abc-${var.env}"
node_type = "${var.node_type}"
cluster_type = "${var.redshift_cluster_type[var.env]}"
number_of_nodes = "${var.redshift_node[var.env]}"
}
Sometimes I am lazy and just do this:
resource "aws_redshift_cluster" "****" {
  cluster_identifier = "abc-${var.env}"
  node_type          = "${var.node_type}"
  cluster_type       = "${var.env == "production" ? "multi-node" : "single-node"}"
  number_of_nodes    = "${var.env == "production" ? 2 : 1}"
}