Alibaba Cloud Managed Kubernetes Terraform - kubernetes

I want to create a Kubernetes cluster with Terraform, following the doc page here: https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
variable "name" {
  default = "my-first-k8s"
}

data "alicloud_zones" "main" {
  available_resource_creation = "VSwitch"
}

data "alicloud_instance_types" "default" {
  availability_zone = "${data.alicloud_zones.main.zones.0.id}"
  cpu_core_count    = 1
  memory_size       = 2
}
Where do I insert the vswitch ID, and how do I set the region ID?

You can insert the vswitch id in the resource definition:
resource "alicloud_cs_managed_kubernetes" "k8s" {
  name                  = "${var.name}"
  availability_zone     = "${data.alicloud_zones.main.zones.0.id}"
  new_nat_gateway       = true
  worker_instance_types = ["${data.alicloud_instance_types.default.instance_types.0.id}"]
  worker_numbers        = [2]
  password              = "Test12345"
  pod_cidr              = "172.20.0.0/16"
  service_cidr          = "172.21.0.0/20"
  install_cloud_monitor = true
  worker_disk_category  = "cloud_efficiency"
  vswitch_ids           = ["your-alibaba-vswitch-id"]
}
For the zones: note that zones is an attribute exported by the alicloud_zones data source, not an argument you can set directly. Based on this and the docs, you declare the data source and it returns a structure like this:
data "alicloud_zones" "main" {
  available_resource_creation = "VSwitch"
}

# exported attribute:
# zones = [
#   {
#     id         = "..."
#     local_name = "..."
#     ...
#   },
#   ...
# ]

To set region:
You can set the region while configuring the Alicloud provider itself:
provider "alicloud" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}
For instance, let me consider Beijing as the region:
provider "alicloud" {
  access_key = "accesskey"
  secret_key = "secretkey"
  region     = "cn-beijing"
}
To set vswitch IDs:
When defining the resource, we can insert the desired vswitch:
resource "alicloud_instance" "example" {
  # ...
  instance_name = "in-the-vpc"
  vswitch_id    = "${data.alicloud_vswitches.vswitches_ds.vswitches.0.id}"
  # ...
}
For instance, let me consider vsw-25naue4gz as the vswitch ID:
resource "alicloud_instance" "example" {
  # ...
  vswitch_id = "vsw-25naue4gz"
  # ...
}
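The data source referenced above has to be declared as well; a minimal sketch (the cidr_block filter value is an assumption, adjust it to match your VPC):

```hcl
# Look up existing vswitches by CIDR; the first match is used above as
# data.alicloud_vswitches.vswitches_ds.vswitches.0.id
data "alicloud_vswitches" "vswitches_ds" {
  cidr_block = "172.16.0.0/12"
}
```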

Related

Best way to create several complex resources of the same type with terraform variables

I am converting existing Kubernetes infrastructure to Terraform. I ran terraform import on the Kubernetes cluster that I wanted to convert, and now that I have the Terraform code I'm trying to make it reusable. My organization has several clusters, and they all have different node pools. I'm working on the variables.tf file and I am unsure of the best approach. I want it to be possible to create any number of node_pools, each with its own settings, and ideally without maintaining separate files/variables for each node pool. Is there a way to define 6 different node_pools as variables without creating individual variables and resources for each one?
For simpler objects I could see count being a viable solution but this might be too complicated. Below are 2 of the 6 node pools I am working with.
node_pool {
  initial_node_count = 12
  # instance_group_urls = []
  max_pods_per_node = 16
  name              = "test-pool"
  node_count        = 12
  node_locations = [
    "us-central1-b",
    "us-central1-c",
    "us-central1-f",
  ]
  version = "1.21.14-gke.700"

  management {
    auto_repair  = true
    auto_upgrade = true
  }

  node_config {
    disk_size_gb      = 50
    disk_type         = "pd-standard"
    guest_accelerator = []
    image_type        = "COS_CONTAINERD"
    labels = {
      "integrationtestnode" = "true"
    }
    local_ssd_count = 0
    machine_type    = "n1-standard-2"
    metadata = {
      "disable-legacy-endpoints" = "true"
    }
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
    preemptible     = false
    service_account = "svcs-dev@megacorp-dev-project.iam.gserviceaccount.com"
    spot            = false
    tags            = []
    taint = [
      {
        effect = "NO_SCHEDULE"
        key    = "integrationtest"
        value  = "true"
      },
    ]

    shielded_instance_config {
      enable_integrity_monitoring = true
      enable_secure_boot          = true
    }
  }

  upgrade_settings {
    max_surge       = 1
    max_unavailable = 0
  }
}
node_pool {
  initial_node_count = 1
  max_pods_per_node  = 110
  name               = "promop-n2s8"
  node_count         = 1
  node_locations = [
    "us-central1-b",
    "us-central1-c",
    "us-central1-f",
  ]
  version = "1.21.13-gke.900"

  management {
    auto_repair  = true
    auto_upgrade = true
  }

  node_config {
    disk_size_gb      = 100
    disk_type         = "pd-standard"
    guest_accelerator = []
    image_type        = "COS_CONTAINERD"
    labels = {
      "megacorp.reserved" = "promop-dev"
    }
    local_ssd_count = 0
    machine_type    = "n2-standard-8"
    metadata = {
      "disable-legacy-endpoints" = "true"
    }
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
    preemptible     = false
    service_account = "svcs-dev@megacorp-dev-project.iam.gserviceaccount.com"
    spot            = false
    tags            = []
    taint = [
      {
        effect = "NO_SCHEDULE"
        key    = "app"
        value  = "prometheus-operator-dev"
      },
    ]

    shielded_instance_config {
      enable_integrity_monitoring = true
      enable_secure_boot          = false
    }

    workload_metadata_config {
      mode = "GKE_METADATA"
    }
  }

  upgrade_settings {
    max_surge       = 2
    max_unavailable = 0
  }
}
...
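One way to avoid a separate variable per pool (a sketch, not from the thread: the node_pools variable name and the attribute subset shown are assumptions; extend the object type with whatever settings differ between your pools) is a single list(object) variable expanded with a dynamic block:

```hcl
variable "node_pools" {
  type = list(object({
    name         = string
    node_count   = number
    machine_type = string
    disk_size_gb = number
  }))
}

resource "google_container_cluster" "primary" {
  # ... cluster-level arguments ...

  # One node_pool block is generated per element of var.node_pools
  dynamic "node_pool" {
    for_each = var.node_pools
    content {
      name       = node_pool.value.name
      node_count = node_pool.value.node_count
      node_config {
        machine_type = node_pool.value.machine_type
        disk_size_gb = node_pool.value.disk_size_gb
      }
    }
  }
}
```

Each pool then becomes one element in terraform.tfvars, so adding a seventh pool is a data change rather than new resource code.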

Dynamic creation of kubernetes manifest in Terraform

I'm trying to create multiple K8s manifests based on VPC subnets as the following code suggests:
resource "aws_subnet" "pod_subnets" {
  for_each = module.pods_subnet_addrs.network_cidr_blocks
  depends_on = [
    aws_vpc_ipv4_cidr_block_association.pod_cidr
  ]
  vpc_id            = data.terraform_remote_state.vpc.outputs.vpc_id
  availability_zone = each.key
  cidr_block        = each.value
  tags = merge(
    local.common_tags,
    {
      "Name" = format(
        "${var.environment_name}-pods-network-%s",
      each.key)
    }
  )
}

resource "kubernetes_manifest" "ENIconfig" {
  for_each = module.pods_subnet_addrs.network_cidr_blocks
  manifest = {
    "apiVersion" = "crd.k8s.amazonaws.com/v1alpha1"
    "kind"       = "ENIConfig"
    "metadata" = {
      "name" = each.key
    }
    "spec" = {
      "securityGroups" = [
        aws_security_group.worker_node.id,
      ]
      "subnet" = aws_subnet.pod_subnets[each.key].id
    }
  }
}
However, when I'm running Terraform I'm getting the following error:
Provider "registry.terraform.io/hashicorp/kubernetes" planned an invalid value for kubernetes_manifest.ENIconfig["eu-west-3a"].manifest: planned value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"), "kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.NullVal(cty.String)})}) does not match config value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"),"kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.UnknownVal(cty.String)})}).
Any idea what I'm doing wrong here?
It turns out that kubernetes_manifest cannot be planned with values that are unknown until its dependencies have been created; only values already known at plan time work.
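A common workaround for that limitation (a sketch; the resource address is taken from the question) is a two-phase apply, creating the subnets first so their IDs are known state by the time the manifests are planned:

```sh
# Phase 1: create only the subnets, so their IDs land in state
terraform apply -target='aws_subnet.pod_subnets'

# Phase 2: plan/apply everything else, including kubernetes_manifest
terraform apply
```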

Terraform: GKE Cluster with Nodes in different zones

I have this Terraform GKE cluster with 3 nodes. When I deploy this cluster all nodes are localised in the same zones which is europe-west1-b.
gke-cluster.yml
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  initial_node_count = var.initial_node_count

  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  node_config {
    //machine_type = "e2-medium"
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
    metadata = {
      disable-legacy-endpoints = "true"
    }
    labels = {
      app = var.app_name
    }
    tags = ["app", var.app_name]
  }

  timeouts {
    create = "30m"
    update = "40m"
  }
}
variables.tf
variable "cluster_name" {
  default = "cluster"
}

variable "app_name" {
  default = "my-app"
}

variable "initial_node_count" {
  default = 3
}

variable "kubernetes_min_ver" {
  default = "latest"
}

variable "kubernetes_max_ver" {
  default = "latest"
}

variable "remove_default_node_pool" {
  default = false
}

variable "project" {
  default = "your-project-name"
}

variable "credentials" {
  default = "terraform-key.json"
}

variable "region" {
  default = "europe-west1"
}

variable "zone" {
  type        = list(string)
  description = "The zones to host the cluster in."
  default     = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
}
And would like to know if it's possible to deploy each node in a different zone.
If yes how can I do it using Terraform?
Simply set location to a region instead of a zone:
resource "google_container_cluster" "primary" {
  name               = "cluster"
  location           = "us-central1"
  initial_node_count = "3"
  # ...
}
in order to create a regional cluster. The above will bring up 9 nodes, with each zone (f, a, b) containing 3 nodes. If you only want 1 node per zone, just change initial_node_count to 1.
More info here at Argument reference.
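If you want to control exactly which zones the nodes land in (rather than the region's default three), a sketch reusing the zone list variable from the question:

```hcl
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  location           = var.region # regional cluster
  node_locations     = var.zone   # ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
  initial_node_count = 1          # per zone, so 3 nodes total
  # ...
}
```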

Terraform .12 InvalidSubnetID.NotFound

I have two public subnets declared in my VPC, and now I want to create an EC2 instance in each of them, but Terraform doesn't properly resolve the subnet IDs.
Here is what I have defined:
resource "aws_subnet" "archer-public-1" {
  vpc_id                  = aws_vpc.archer.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AZ1}"
}

resource "aws_subnet" "archer-public-2" {
  vpc_id                  = aws_vpc.archer.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "${var.AZ2}"
}
Here is my EC2 resource definition with the subnet expression that I tried unsuccessfully.
resource "aws_instance" "nginx" {
  count         = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"
  subnet_id     = "aws_subnet.archer-public-${count.index+1}.id" # <== why doesn't this work?!
}
The variable interpolation does produce the proper names for the two subnets, archer-public-1 and archer-public-2, yet Terraform produces these errors:
Error: Error launching source instance: InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.archer-public-1.id' does not exist
status code: 400, request id: 26b4f710-e968-484d-a17a-6faa5a9d15d5
Yet when I invoke the terraform console, I can see that it properly resolves these objects as expected:
> aws_subnet.archer-public-1
{
  "arn" = "arn:aws:ec2:us-west-2:361879417564:subnet/subnet-0fb47d0d30f501585"
  "assign_ipv6_address_on_creation" = false
  "availability_zone" = "us-west-2a"
  "availability_zone_id" = "usw2-az1"
  "cidr_block" = "10.0.1.0/24"
  "id" = "subnet-0fb47d0d30f501585"
  "ipv6_cidr_block" = ""
  "ipv6_cidr_block_association_id" = ""
  "map_public_ip_on_launch" = true
  "outpost_arn" = ""
  "owner_id" = "361879417564"
  "tags" = {
    "Name" = "archer-public-1"
  }
  "vpc_id" = "vpc-074637b06747e227b"
}
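The root cause: a quoted string like "aws_subnet.archer-public-${count.index+1}.id" is plain text with one interpolated number, not a resource reference, so AWS receives that literal text as the subnet ID. A sketch of a fix (resource names taken from the question) that indexes real references instead of building a string:

```hcl
resource "aws_instance" "nginx" {
  count         = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # Index into actual resource references; instance 0 gets subnet 1, etc.
  subnet_id = [
    aws_subnet.archer-public-1.id,
    aws_subnet.archer-public-2.id,
  ][count.index]
}
```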

how to create a multi-node redshift cluster only for prod using Terraform

I have 2 Redshift clusters, prod and dev, and I am using the same Terraform module for both.
How can I have 2 nodes only for the prod cluster? Please let me know what interpolation syntax I should be using.
variable "node_type" {
default = "dc1.large"
}
resource "aws_redshift_cluster" "****" {
  cluster_identifier = "abc-${var.env}"
  node_type          = "${var.node_type}"
  cluster_type       = "single-node" # ==> multi-node
  number_of_nodes    = 2             # ==> only for prod
}
Use the map type:
variable "node_type" {
  default = "dc1.large"
}

variable "env" {
  default = "development"
}

variable "redshift_cluster_type" {
  type = "map"
  default = {
    development = "single-node"
    production  = "multi-node"
  }
}

variable "redshift_node" {
  type = "map"
  default = {
    development = "1"
    production  = "2"
  }
}

resource "aws_redshift_cluster" "****" {
  cluster_identifier = "abc-${var.env}"
  node_type          = "${var.node_type}"
  cluster_type       = "${var.redshift_cluster_type[var.env]}"
  number_of_nodes    = "${var.redshift_node[var.env]}"
}
Sometimes I am lazy and just do this:
resource "aws_redshift_cluster" "****" {
  cluster_identifier = "abc-${var.env}"
  node_type          = "${var.node_type}"
  cluster_type       = "${var.env == "production" ? "multi-node" : "single-node"}"
  number_of_nodes    = "${var.env == "production" ? 2 : 1}"
}