I am trying to create a GKE cluster with a node size of 1. However, it always creates a cluster of 3 nodes. Why is that?
resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "gke-node-pool" {
name = "sonarqube"
location = "asia-southeast1"
cluster = google_container_cluster.gke-cluster.name
node_count = 1
node_config {
machine_type = "n1-standard-1"
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = "sonarqube"
}
}
}
OK, I found I can do so using node_locations:
resource "google_container_cluster" "gke-cluster" {
name = "sonarqube"
location = "asia-southeast1"
node_locations = [
"asia-southeast1-a"
]
remove_default_node_pool = true
initial_node_count = 1
}
Without that, GKE creates one node per zone: because location is set to a region (asia-southeast1), the cluster is regional and the node count applies per zone, which is why three nodes came up.
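As an alternative sketch (not from the original post, and reusing the same names): making the cluster zonal by pointing location at a single zone also keeps the nodes in that one zone.

# Alternative sketch only: a zonal cluster instead of a regional one.
resource "google_container_cluster" "gke-cluster" {
  name                     = "sonarqube"
  location                 = "asia-southeast1-a" # a zone, so the cluster is zonal
  remove_default_node_pool = true
  initial_node_count       = 1
}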
I'm running a Kubernetes cluster on GKE. I'd like to enable auto_upgrade for each node pool, and I'd like to do this in Terraform, but I'm not sure how.
The node pools are defined with Terraform like this:
module "main-gke-cluster" {
source = "../modules/gke-cluster"
cluster_name = local.stage_main_cluster_name
// SNIP...
node_pools = {
default-pool = {
machine_type = "e2-standard-2"
image_type = "UBUNTU"
initial_node_count = 1
min_nodes = 0
max_nodes = 10
preemptible = true
node_locations = [
"europe-west4-a"
]
labels = {}
taints = []
oauth_scopes = local.default_pool_scopes
has_gpu = false
}
I attempted to set auto_upgrade on the node pool like so
module "main-gke-cluster" {
source = "../modules/gke-cluster"
cluster_name = local.stage_main_cluster_name
// SNIP...
node_pools = {
default-pool = {
machine_type = "e2-standard-2"
image_type = "UBUNTU"
initial_node_count = 1
min_nodes = 0
max_nodes = 10
auto_upgrade = true
preemptible = true
node_locations = [
"europe-west4-a"
]
labels = {}
taints = []
oauth_scopes = local.default_pool_scopes
has_gpu = false
}
i.e. I added an auto_upgrade parameter.
This appears to have no effect on the terraform plan.
Any idea what I'm missing here?
Assuming you are using the Jetstack gke-module, node pools are set to use auto-upgrade by default. If you want to enable auto-upgrade explicitly, I believe the parameter you want is "management_auto_upgrade".
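For reference, a minimal sketch of how this looks on the raw google_container_node_pool resource (i.e. without a wrapper module); the pool name and cluster reference below are illustrative:

# Sketch only: on the bare resource, auto-upgrade is configured via the
# management block. The cluster reference is a placeholder.
resource "google_container_node_pool" "default_pool" {
  name    = "default-pool"
  cluster = google_container_cluster.main.name

  management {
    auto_repair  = true
    auto_upgrade = true
  }

  node_config {
    machine_type = "e2-standard-2"
  }
}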
I have a Kubernetes setup with a cluster and two pools (nodes), and I have also set up an (nginx) ingress server for Kubernetes with Helm. All of this is written in Terraform for Scaleway. What I am struggling with is how to configure the ingress server to route to my Kubernetes pools/nodes depending on the URL path.
For example, I want [url]/api to go to my scaleway_k8s_pool.api and [url]/auth to go to my scaleway_k8s_pool.auth.
This is my terraform code
provider "scaleway" {
zone = "fr-par-1"
region = "fr-par"
}
resource "scaleway_registry_namespace" "main" {
name = "main_container_registry"
description = "Main container registry"
is_public = false
}
resource "scaleway_k8s_cluster" "main" {
name = "main"
description = "The main cluster"
version = "1.20.5"
cni = "calico"
tags = ["i'm an awsome tag"]
autoscaler_config {
disable_scale_down = false
scale_down_delay_after_add = "5m"
estimator = "binpacking"
expander = "random"
ignore_daemonsets_utilization = true
balance_similar_node_groups = true
expendable_pods_priority_cutoff = -5
}
}
resource "scaleway_k8s_pool" "api" {
cluster_id = scaleway_k8s_cluster.main.id
name = "api"
node_type = "DEV1-M"
size = 1
autoscaling = true
autohealing = true
min_size = 1
max_size = 5
}
resource "scaleway_k8s_pool" "auth" {
cluster_id = scaleway_k8s_cluster.main.id
name = "auth"
node_type = "DEV1-M"
size = 1
autoscaling = true
autohealing = true
min_size = 1
max_size = 5
}
resource "null_resource" "kubeconfig" {
depends_on = [scaleway_k8s_pool.api, scaleway_k8s_pool.auth] # at least one pool here
triggers = {
host = scaleway_k8s_cluster.main.kubeconfig[0].host
token = scaleway_k8s_cluster.main.kubeconfig[0].token
cluster_ca_certificate = scaleway_k8s_cluster.main.kubeconfig[0].cluster_ca_certificate
}
}
output "cluster_url" {
value = scaleway_k8s_cluster.main.apiserver_url
}
provider "helm" {
kubernetes {
host = null_resource.kubeconfig.triggers.host
token = null_resource.kubeconfig.triggers.token
cluster_ca_certificate = base64decode(
null_resource.kubeconfig.triggers.cluster_ca_certificate
)
}
}
resource "helm_release" "ingress" {
name = "ingress"
chart = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
namespace = "kube-system"
}
How would I go about configuring the nginx ingress server for routing to my Kubernetes pools?
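For what it's worth, an nginx ingress routes by URL path to Kubernetes Services rather than to node pools directly; a rough sketch of the path rules using the kubernetes provider is shown below (the api-svc and auth-svc Service names are hypothetical, and pinning those workloads to a given pool would be done with node selectors on the Deployments, not in the Ingress).

# Rough sketch only: path-based routing to two hypothetical Services.
resource "kubernetes_ingress_v1" "routes" {
  metadata {
    name = "path-routes"
  }

  spec {
    ingress_class_name = "nginx"

    rule {
      http {
        path {
          path      = "/api"
          path_type = "Prefix"
          backend {
            service {
              name = "api-svc" # hypothetical Service backed by pods on the api pool
              port {
                number = 80
              }
            }
          }
        }

        path {
          path      = "/auth"
          path_type = "Prefix"
          backend {
            service {
              name = "auth-svc" # hypothetical Service backed by pods on the auth pool
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}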
I have a regional cluster for redundancy. In this cluster I want to create a node pool in just one zone of the region. Is this configuration possible? The reason I am trying this is that I want to run a service like RabbitMQ in just one zone to avoid split-brain, while my application services run in all zones of the region for redundancy.
I am using Terraform to create the cluster and node pools; below is my config for creating the regional cluster and the zonal node pool.
resource "google_container_cluster" "regional_cluster" {
provider = google-beta
project = "my-project"
name = "my-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b", "us-central1-c"]
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "one_zone" {
project = google_container_cluster.regional_cluster.project
name = "zone-pool"
location = "us-central1-b"
cluster = google_container_cluster.regional_cluster.name
node_config {
machine_type = var.machine_type
image_type = var.image_type
disk_size_gb = 100
disk_type = "pd-standard"
}
}
This throws an error message
error creating NodePool: googleapi: Error 404: Not found: projects/my-project/zones/us-central1-b/clusters/my-cluster., notFound
It turns out that location in google_container_node_pool should specify the cluster master's region/zone; to specify where the node pool's nodes actually run, node_locations should be used. Below is the config that worked:
resource "google_container_cluster" "regional_cluster" {
provider = google-beta
project = "my-project"
name = "my-cluster"
location = "us-central1"
node_locations = ["us-central1-a", "us-central1-b", "us-central1-c"]
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "one_zone" {
project = google_container_cluster.regional_cluster.project
name = "zone-pool"
location = google_container_cluster.regional_cluster.location
node_locations = ["us-central1-b"]
cluster = google_container_cluster.regional_cluster.name
node_config {
machine_type = var.machine_type
image_type = var.image_type
disk_size_gb = 100
disk_type = "pd-standard"
}
}
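As a quick sanity check, the zones the pool actually ends up in can be surfaced with a Terraform output (node_locations is also exported as an attribute); for the config above this should print ["us-central1-b"]:

# Optional check: echoes the zones used by zone-pool after apply.
output "one_zone_pool_locations" {
  value = google_container_node_pool.one_zone.node_locations
}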
I have this Terraform GKE cluster with 3 nodes. When I deploy this cluster, all nodes are located in the same zone, europe-west1-b.
gke-cluster.yml
resource "google_container_cluster" "primary" {
name = var.cluster_name
initial_node_count = var.initial_node_count
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
//machine_type = "e2-medium"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = var.app_name
}
tags = ["app", var.app_name]
}
timeouts {
create = "30m"
update = "40m"
}
}
variables.tf
variable "cluster_name" {
default = "cluster"
}
variable "app_name" {
default = "my-app"
}
variable "initial_node_count" {
default = 3
}
variable "kubernetes_min_ver" {
default = "latest"
}
variable "kubernetes_max_ver" {
default = "latest"
}
variable "remove_default_node_pool" {
default = false
}
variable "project" {
default = "your-project-name"
}
variable "credentials" {
default = "terraform-key.json"
}
variable "region" {
default = "europe-west1"
}
variable "zone" {
type = list(string)
description = "The zones to host the cluster in."
default = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
}
I would like to know if it's possible to deploy each node in a different zone.
If so, how can I do it using Terraform?
Simply add the following line
resource "google_container_cluster" "primary" {
name = "cluster"
location = "us-central1"
initial_node_count = "3"
in order to create a regional cluster. The above will bring up 9 nodes with each zone (f a b) containing 3 nodes. If you only want 1 node per zone, then just change initial_node_count to 1.
More info in the Argument Reference section of the google_container_cluster documentation.
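Tying this back to the zone variable from the question, a minimal sketch (assuming the variables shown above are in scope) that spreads exactly one node into each of the three listed zones:

# Sketch: regional cluster whose nodes live only in the listed zones,
# one node per zone (3 nodes total), based on the question's variables.
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  location           = var.region # regional cluster
  node_locations     = var.zone   # explicit zones instead of GKE's default pick
  initial_node_count = 1          # per zone
}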
I've created a cluster using terraform with:
provider "google" {
credentials = "${file("gcp.json")}"
project = "${var.gcp_project}"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_container_cluster" "primary" {
name = "${var.k8s_cluster_name}"
location = "us-central1-a"
project = "${var.gcp_project}"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
project = "${var.gcp_project}"
name = "my-node-pool"
location = "us-central1-a"
cluster = "${google_container_cluster.primary.name}"
# node_count = 3
autoscaling {
min_node_count = 3
max_node_count = 5
}
node_config {
# preemptible = true
machine_type = "g1-small"
metadata = {
disable-legacy-endpoints = "true"
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only"
]
}
}
Surprisingly, this node pool seems to be 'stuck' at 0 instances. Why? How can I diagnose this?
You should add initial_node_count (e.g. initial_node_count = 3) to the google_container_node_pool resource.
The official documentation says you should not use node_count together with autoscaling.
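A minimal sketch of the corrected pool along those lines (only the relevant arguments shown):

# Sketch of the fix: seed the pool with initial_node_count and let the
# autoscaler manage its size; node_count is intentionally omitted.
resource "google_container_node_pool" "primary_preemptible_nodes" {
  project            = "${var.gcp_project}"
  name               = "my-node-pool"
  location           = "us-central1-a"
  cluster            = "${google_container_cluster.primary.name}"
  initial_node_count = 3

  autoscaling {
    min_node_count = 3
    max_node_count = 5
  }

  node_config {
    machine_type = "g1-small"
  }
}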