I am using Terraform to check whether the latest Kubernetes version is installed on an EKS cluster.
There is a way to find the latest desired AMI for EC2 using data sources:
data "aws_ami" "latest_ami" {
most_recent = True
owners = ["amazon"]
filter {
name = "name"
values = ["amazon-ami-hvm*"]
}
}
I can get the current version of the running cluster:
data "aws_eks_cluster" "example" {
name = "example"
}
output "current_version" {
value = data.aws_eks_cluster.example.version
}
I am looking for a way to compare it with the latest available version.
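The kind of comparison I have in mind would look something like the sketch below, assuming the latest version has to be supplied externally (for example by CI), since I have not found a data source that exposes it directly:

variable "latest_eks_version" {
  description = "Latest available EKS Kubernetes version, supplied externally (for example by CI)"
  type        = string
}

output "cluster_is_up_to_date" {
  # true when the running cluster already matches the supplied latest version
  value = data.aws_eks_cluster.example.version == var.latest_eks_version
}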
I have created a VPC in AWS manually, and I would like to create an EKS cluster inside this existing VPC via Terraform.
Here is part of the vpc.tf file:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.2.0"
cidr = "10.11.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["10.11.1.0/24", "10.11.2.0/24", "10.11.3.0/24"]
public_subnets = ["10.11.4.0/24", "10.11.5.0/24", "10.11.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
tags = {
"Name" = "10.11.0.0/16 - < name of existing VPC >" --> **if i provide here this tag EKS will be created under the existing VPC ?**
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
To restate the question from the tags block above: if I provide that Name tag, will the EKS cluster be created under the existing VPC? Will this tag do the job?
I don't understand the relevance of the tags here. You have created a VPC. Now, when you create the EKS cluster, just reference those values in the vpc_config block:
resource "aws_eks_cluster" "example" {
name = "example"
vpc_config {
subnet_ids = module.vpc.private_subnets
}
}
Here is a good guide, How to Provision an EKS Cluster, that shows great examples of the VPC configuration and the EKS configuration using the provided module.
I suspect you are afraid of screwing up something in an existing VPC. I would suggest trying this in a separate account: simply create a VPC manually and then play around with your EKS Terraform. If you destroy the cluster and corresponding resources immediately after provisioning, your cost for testing should be minimal or nothing.
I don't think the tags you provided will put your cluster in an existing VPC. You need to configure more than that. You could potentially end up with an additional VPC.
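To make that concrete, here is a minimal sketch of pointing the cluster at an existing VPC with data sources instead of creating a new one with the module (the Name tag value and the IAM role are placeholders for whatever already exists in your account):

data "aws_vpc" "existing" {
  tags = {
    Name = "name-of-existing-vpc" # placeholder for your existing VPC's Name tag
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }

  tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.example.arn # assumes a cluster IAM role defined elsewhere

  vpc_config {
    subnet_ids = data.aws_subnets.private.ids
  }
}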
This should be fairly easy, or I might be doing something wrong, but after digging into it for a while I couldn't find a solution.
I have a Terraform configuration that contains a Kubernetes Secret resource whose data comes from Vault. The resource configuration looks like this:
resource "kubernetes_secret" "external-api-token" {
metadata {
name = "external-api-token"
namespace = local.platform_namespace
annotations = {
"vault.security.banzaicloud.io/vault-addr" = var.vault_addr
"vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
}
data = {
"EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
}
}
Everything is working fine so far, but every time I do terraform plan or terraform apply, it marks that resource as "changed" and updates it, even when I didn't touch the resource or other resources related to it. E.g.:
... (other actions to be applied, unrelated to the offending resource) ...

  # kubernetes_secret.external-api-token will be updated in-place
  ~ resource "kubernetes_secret" "external-api-token" {
      ~ data = (sensitive value)
        id   = "platform/external-api-token"
        type = "Opaque"

        metadata {
            annotations      = {
                "vault.security.banzaicloud.io/vault-addr" = "https://vault.infra.megacorp.io:8200"
                "vault.security.banzaicloud.io/vault-path" = "kubernetes/gke-pipe-stg-2"
                "vault.security.banzaicloud.io/vault-role" = "reader"
            }
            generation       = 0
            labels           = {}
            name             = "external-api-token"
            namespace        = "platform"
            resource_version = "160541784"
            self_link        = "/api/v1/namespaces/platform/secrets/external-api-token"
            uid              = "40e93d16-e8ef-47f5-92ac-6d859dfee123"
        }
    }

Plan: 3 to add, 1 to change, 0 to destroy.
It says that the data for this resource has changed, yet the data in Vault remains the same; nothing has been modified there. This update now happens every single time.
I was thinking of using the ignore_changes lifecycle feature, but I assume that would make Terraform ignore any changes made to the secret in Vault as well, which I also don't want. I would like the resource to be updated only when the secret in Vault has actually changed.
Is there a way to do this? What am I missing or doing wrong?
You need to add the Terraform lifecycle ignore_changes meta-argument to your code. For data containing API token values, and also for annotations, Terraform for some reason seems to assume the data changes every time a plan, apply, or even destroy is run. I had a similar issue with Azure Key Vault.
Here is the code with the lifecycle ignore_changes meta-argument included:
resource "kubernetes_secret" "external-api-token" {
metadata {
name = "external-api-token"
namespace = local.platform_namespace
annotations = {
"vault.security.banzaicloud.io/vault-addr" = var.vault_addr
"vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
}
data = {
"EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
}
lifecycle {
ignore_changes = [
# Ignore changes to data, and annotations e.g. because a management agent
# updates these based on some ruleset managed elsewhere.
data,annotations,
]
}
}
Link to the documentation for the lifecycle meta-arguments:
https://www.terraform.io/language/meta-arguments/lifecycle
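If the annotations are stable in your setup and the spurious diff only shows up on data, a narrower variant of the same lifecycle block (just a sketch, trimmed to the relevant part of the resource) keeps annotation changes tracked:

resource "kubernetes_secret" "external-api-token" {
  # ... metadata and data blocks exactly as above ...

  lifecycle {
    ignore_changes = [
      # only the secret data diff is suppressed; annotation changes still get applied
      data,
    ]
  }
}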
Note: the code here is Go, but I'm happy to see answers in any CDK language.
In AWS CDK, you can create Launch Configurations:
// Create the launch configuration
lc := awsautoscaling.NewCfnLaunchConfiguration(
    stack,
    jsii.String("asg-lc"),
    &awsautoscaling.CfnLaunchConfigurationProps{
        ...
    },
)
But there is no obvious parameter or function in the Auto-Scaling Group props to attach it.
I have set the update policy:
UpdatePolicy: awsautoscaling.UpdatePolicy_RollingUpdate,
What I want to do is be able to call an auto-refresh in the CI system when an AMI configuration has changed:
aws autoscaling start-instance-refresh --cli-input-json file://asg-refresh.json
The problem is that the launch configuration appears to have been created automatically when the stack was first created; it doesn't change on update, and it has incorrect values (the AMI ID is outdated).
Is there a way to define/refresh the launch config using CDK to update the AMI ID? It's a simple change in the UI.
If you use the L2 AutoScalingGroup Construct, you can run cdk deploy after updating the AMI and it should launch a new one for you. Also with this Construct, the Launch Configuration is created for you. You don't really need to worry about it.
IMachineImage image = MachineImage.Lookup(new LookupMachineImageProps()
{
    Name = "MY-AMI", // this can be updated on subsequent deploys
});

AutoScalingGroup asg = new AutoScalingGroup(this, $"MY-ASG", new AutoScalingGroupProps()
{
    AllowAllOutbound = false,
    AssociatePublicIpAddress = false,
    AutoScalingGroupName = $"MY-ASG",
    Vpc = network.Vpc,
    VpcSubnets = new SubnetSelection() { Subnets = network.Vpc.PrivateSubnets },
    MinCapacity = 1,
    MaxCapacity = 2,
    MachineImage = image,
    InstanceType = new InstanceType("m5.xlarge"),
    SecurityGroup = sg,
    UpdatePolicy = UpdatePolicy.RollingUpdate(new RollingUpdateOptions()
    {
    }),
});
I would like to manage configuration for a service with Terraform, targeting a GKE cluster that is defined in a separate Terraform script.
I created the configuration using kubernetes_secret.
Something like the below:
resource "kubernetes_secret" "service_secret" {
metadata {
name = "my-secret"
namespace = "my-namespace"
}
data = {
username = "admin"
password = "P4ssw0rd"
}
}
I also added this Google client configuration to set up the Kubernetes provider:
data "google_client_config" "current" {
}
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.cluster.endpoint}"
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
When I apply the Terraform configuration, it shows the error message below:
data.google_container_cluster.cluster.endpoint is null
Am I missing some steps here?
I just had the same/similar issue when trying to initialize the kubernetes provider from a google_container_cluster data source. terraform show just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.:
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
project = "my-project"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project
project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.
In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.
In addition, you should be able to remove the zone attribute from that block. location should refer to the zone if it is a zonal cluster or the region if it is a regional cluster.
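For example, for a zonal cluster the data source could look like this (the zone and project values here are placeholders):

data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1-a" # the zone for a zonal cluster, or the region (e.g. "asia-southeast1") for a regional one
  project  = "my-project"
}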
I had the same problem when I wanted to create the cluster from scratch; no value was populated yet at that point.
I solved it with the depends_on argument in the google_container_cluster data block.
data "google_container_cluster" "my_cluster" {
name = var.project_name
depends_on = [
google_container_node_pool.e2-small
]
}
In my case the provider was waiting on the nodes to be deployed; the Kubernetes provider starts working after the nodes are provisioned and the values are populated.
My problem: the name wasn't right, so the data source returned nothing.
I tried:
data "google_container_cluster" "my_cluster" {
name = "gke_my-project-name_us-central1_my-cluster-name"
location = "us-central1"
}
But that name is wrong; it needs to be just my-cluster-name.
And most importantly, the data block didn't tell me that this cluster didn't exist --sad_beeps--
Solution (in my case): double check that the cluster exists
data "google_container_cluster" "my_cluster" {
name = "my-cluster-name"
location = "us-central1"
}
Hope it saves someone else some time.
I need to boot a PostgreSQL instance (Cloud SQL, GCP's equivalent of RDS) inside a VPC, and I need the code for the Google provider.
I have tried the official terraform.io docs and also this GitHub example https://github.com/GoogleCloudPlatform/terraform-google-sql-db/blob/227b1ec7a830622560bff85194a816638be1c7c5/examples/mysql-and-postgres/main.tf#L82 but didn't have any luck:
name = "name"
project = "project_name"
region = "us-east-1"
database_version = "${var.database_version}"
settings {
tier = "${var.machine_type}"
ip_configuration {
ipv4_enabled = true
authorized_networks = {
name = "${data.terraform_remote_state.vpc.outputs.network_name}"
value = "10.10.22.0/24"
}
}
I had also tried:
ip_configuration = [{
  private_network = "${var.network_cird_range}"
}]
"I expect the RDS to be boot inside a VPC , but i couldn't find any luck. can anyone help me out here
Thanks in Advance
If I understood your issue correctly, you are getting an error while creating a Google Cloud SQL instance with a private IP using Terraform, am I right? If so, here is my code to achieve what you want:
provider "google" {
credentials = "${file("CREDENTIALS.json")}"
project = "PROJECT-ID"
region = "us-central1"
}
resource "google_compute_network" "private_network" {
name = "testnw"
}
resource "google_compute_global_address" "private_ip_address" {
provider="google"
name = "${google_compute_network.private_network.name}"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = "${google_compute_network.private_network.name}"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider="google"
network = "${google_compute_network.private_network.self_link}"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"]
}
resource "google_sql_database_instance" "instance" {
provider="google"
depends_on = ["google_service_networking_connection.private_vpc_connection"]
name = "privateinstance"
region = "us-central1"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = "false"
private_network = "projects/PROJECT-ID/global/networks/${google_compute_network.private_network.name}"
}
}
}
This code will create a VPC network, automatically allocate an IP range, create a service networking peering, and after that a Cloud SQL instance with a private IP.
I faced the same issue when I tried to use the Google Terraform module to create a managed PostgreSQL instance. In my case, I fixed it with this:
private_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
Try this and let me know if it solves your problem.
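For context, here is a minimal sketch of where that attribute sits inside the google_sql_database_instance resource (the name, region, tier, and database_version values are placeholders, not taken from your setup):

resource "google_sql_database_instance" "postgres" {
  name             = "private-postgres"    # placeholder instance name
  region           = "us-central1"         # placeholder region
  database_version = "POSTGRES_11"         # placeholder version

  settings {
    tier = "db-f1-micro"                   # placeholder machine tier

    ip_configuration {
      ipv4_enabled    = false
      private_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
    }
  }
}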