Creation of AWS-EKS with Terraform in an existing VPC - kubernetes

I have created a VPC in AWS manually, and I would like to create an EKS cluster in this existing VPC via Terraform.
Here is part of my vpc.tf file:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.2.0"

  cidr            = "10.11.0.0/16"
  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.11.1.0/24", "10.11.2.0/24", "10.11.3.0/24"]
  public_subnets  = ["10.11.4.0/24", "10.11.5.0/24", "10.11.6.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "Name" = "10.11.0.0/16 - < name of existing VPC >" --> **if I provide this tag here, will the EKS cluster be created under the existing VPC?**
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
Will providing this "Name" tag place the EKS cluster under the existing VPC? Is that tag enough to do the job?

I don't understand the relevance of the tags here. You have created a VPC; now, when you create the EKS cluster, just reference its values in the vpc_config.
resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.example.arn # required: a cluster service role, assumed defined elsewhere

  vpc_config {
    subnet_ids = module.vpc.private_subnets
  }
}
Here is a good guide, How to Provision an EKS Cluster, that shows great examples of the VPC configuration and the EKS configuration using the provided module.

I suspect you are afraid of breaking something in an existing VPC. I would suggest that you try this in a separate account: simply create a VPC manually and then play around with your EKS Terraform. If you destroy the cluster and its corresponding resources immediately after provisioning, your cost for testing should be minimal or zero.
I don't think the tags you provided will put your cluster in an existing VPC. You need to configure more than that; you could end up with an additional VPC instead.
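If the goal is to use a VPC that already exists outside of Terraform, one option is to read it with data sources instead of (re)creating it with the vpc module. A minimal sketch, where the tag values ("my-existing-vpc", a "Tier" subnet tag) and the IAM role reference are assumptions to adjust for your environment:

```hcl
# Look up the manually created VPC by its Name tag
# (tag values here are placeholders -- adjust to your VPC).
data "aws_vpc" "existing" {
  filter {
    name   = "tag:Name"
    values = ["my-existing-vpc"]
  }
}

# Collect the private subnet IDs in that VPC
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.existing.id]
  }

  filter {
    name   = "tag:Tier"
    values = ["private"]
  }
}

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.cluster.arn # cluster service role, assumed defined elsewhere

  vpc_config {
    subnet_ids = data.aws_subnets.private.ids
  }
}
```

Note that with this approach Terraform no longer manages the subnets, so the kubernetes.io/cluster/... and ELB role tags would need to be applied to the existing subnets by hand or with aws_ec2_tag resources.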

Related

Find the latest available Kubernetes version for EKS using Terraform

I am using Terraform and want to check whether the latest Kubernetes version is installed on an EKS cluster.
There is a way to find the desired latest AMI for EC2 using data sources:
data "aws_ami" "latest_ami" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*"]
  }
}
I can get the current version of the running cluster:
data "aws_eks_cluster" "example" {
  name = "example"
}

output "current_version" {
  value = data.aws_eks_cluster.example.version
}
I am looking for a way to compare it with the latest available version.
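One possible workaround, since the provider has no obvious data source for available EKS control-plane versions: infer the newest version from the name of the latest EKS-optimized AMI. This is only a sketch; it assumes the amazon-eks-node-<version>-v<date> naming convention, which is not guaranteed to be stable:

```hcl
data "aws_ami" "latest_eks" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-*"]
  }
}

locals {
  # regex() returns the capture groups, e.g. "amazon-eks-node-1.21-v20211008" -> "1.21"
  latest_eks_version = regex("amazon-eks-node-([0-9]+\\.[0-9]+)", data.aws_ami.latest_eks.name)[0]
}

output "upgrade_available" {
  value = local.latest_eks_version != data.aws_eks_cluster.example.version
}
```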

Assigning a launch configuration to an Auto-Scaling Group using CDK

Note: the code here is Go but happy to see answers in any CDK language.
In AWS CDK, you can create Launch Configurations:
// Create the launch configuration
lc := awsautoscaling.NewCfnLaunchConfiguration(
    stack,
    jsii.String("asg-lc"),
    &awsautoscaling.CfnLaunchConfigurationProps{
        ...
    },
)
But there is no obvious parameter or function in the Auto-Scaling Group props to attach it.
I have set the update policy:
UpdatePolicy: awsautoscaling.UpdatePolicy_RollingUpdate,
What I want to do is be able to call an auto-refresh in the CI system when an AMI configuration has changed:
aws autoscaling start-instance-refresh --cli-input-json file://asg-refresh.json
The problem is that the launch configuration appears to be created automatically when the stack is first deployed, doesn't change on update, and has incorrect values (the AMI ID is outdated).
Is there a way to define/refresh the launch config using CDK to update the AMI ID? It's a simple change in the UI.
If you use the L2 AutoScalingGroup Construct, you can run cdk deploy after updating the AMI and it should launch a new one for you. Also with this Construct, the Launch Configuration is created for you. You don't really need to worry about it.
IMachineImage image = MachineImage.Lookup(new LookupMachineImageProps()
{
    Name = "MY-AMI", // this can be updated on subsequent deploys
});

AutoScalingGroup asg = new AutoScalingGroup(this, $"MY-ASG", new AutoScalingGroupProps()
{
    AllowAllOutbound = false,
    AssociatePublicIpAddress = false,
    AutoScalingGroupName = $"MY-ASG",
    Vpc = network.Vpc,
    VpcSubnets = new SubnetSelection() { Subnets = network.Vpc.PrivateSubnets },
    MinCapacity = 1,
    MaxCapacity = 2,
    MachineImage = image,
    InstanceType = new InstanceType("m5.xlarge"),
    SecurityGroup = sg,
    UpdatePolicy = UpdatePolicy.RollingUpdate(new RollingUpdateOptions()
    {
    }),
});

Terraform data.google_container_cluster.cluster.endpoint is null

I would like to use Terraform to manage configuration for a service on a GKE cluster that is defined in a separate Terraform script.
I created the configuration using kubernetes_secret.
Something like below
resource "kubernetes_secret" "service_secret" {
  metadata {
    name      = "my-secret"
    namespace = "my-namespace"
  }

  data = {
    username = "admin"
    password = "P4ssw0rd"
  }
}
I also added this Google client configuration to set up the Kubernetes provider:
data "google_client_config" "current" {
}

data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1"
  zone     = "asia-southeast1-a"
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.cluster.endpoint}"
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
When I apply the Terraform, it shows the error message below:
data.google_container_cluster.cluster.endpoint is null
Am I missing a step here?
I just had the same/similar issue when trying to initialize the kubernetes provider from a google_container_cluster data source. terraform show just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.,
data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1"
  zone     = "asia-southeast1-a"
  project  = "my-project"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project
project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.
In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.
In addition, you should be able to remove the zone attribute from that block. location should refer to the zone if it is a zonal cluster or the region if it is a regional cluster.
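Putting both suggestions together, the data block might look like this (the project and location values are placeholders):

```hcl
data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1-a" # the zone for a zonal cluster, or the region for a regional one
  project  = "my-project"
}
```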
I had the same problem when I wanted to stand the cluster up from scratch; at that point none of the data source's attributes were populated yet. I solved it with the depends_on parameter in the google_container_cluster data block.
data "google_container_cluster" "my_cluster" {
  name = var.project_name

  depends_on = [
    google_container_node_pool.e2-small
  ]
}
In my case the data source was waiting on the nodes to be deployed; the Kubernetes provider starts working after the nodes are provisioned and the attributes are populated.
My problem: the name wasn't right, so the data source returned nothing. I tried
data "google_container_cluster" "my_cluster" {
  name     = "gke_my-project-name_us-central1_my-cluster-name"
  location = "us-central1"
}
But that name is wrong; it needs to be just my-cluster-name. And most importantly, the data block didn't say that this cluster didn't exist --sad_beeps--.
Solution (in my case): double-check that the cluster exists:
data "google_container_cluster" "my_cluster" {
  name     = "my-cluster-name"
  location = "us-central1"
}
Hope it saves someone else some time.

Unable to boot the RDS inside a VPC in GCP

I need to boot a Postgres instance (Cloud SQL, GCP's equivalent of RDS) inside a VPC, and I need the code for the Google provider.
I have tried the official terraform.io docs and also this GitHub example https://github.com/GoogleCloudPlatform/terraform-google-sql-db/blob/227b1ec7a830622560bff85194a816638be1c7c5/examples/mysql-and-postgres/main.tf#L82 but didn't have any luck:
resource "google_sql_database_instance" "instance" {
  name             = "name"
  project          = "project_name"
  region           = "us-east1"
  database_version = "${var.database_version}"

  settings {
    tier = "${var.machine_type}"

    ip_configuration {
      ipv4_enabled = true

      authorized_networks {
        name  = "${data.terraform_remote_state.vpc.outputs.network_name}"
        value = "10.10.22.0/24"
      }
    }
  }
}
I had also tried
ip_configuration = [{
  private_network = "${var.network_cird_range}"
}]
I expect the instance to be booted inside the VPC, but I couldn't find any luck. Can anyone help me out here?
Thanks in advance.
If I understood your issue correctly, you are getting an error while creating a Google Cloud SQL instance with a private IP using Terraform. Am I right? If so, here is my code to achieve what you want:
provider "google" {
  credentials = "${file("CREDENTIALS.json")}"
  project     = "PROJECT-ID"
  region      = "us-central1"
}

resource "google_compute_network" "private_network" {
  name = "testnw"
}

resource "google_compute_global_address" "private_ip_address" {
  provider      = "google"
  name          = "${google_compute_network.private_network.name}"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = "${google_compute_network.private_network.name}"
}

resource "google_service_networking_connection" "private_vpc_connection" {
  provider                = "google"
  network                 = "${google_compute_network.private_network.self_link}"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"]
}

resource "google_sql_database_instance" "instance" {
  provider   = "google"
  depends_on = ["google_service_networking_connection.private_vpc_connection"]
  name       = "privateinstance"
  region     = "us-central1"

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = "false"
      private_network = "projects/PROJECT-ID/global/networks/${google_compute_network.private_network.name}"
    }
  }
}
This code will create a VPC NW, then will allocate an IP range automatically, then will create a "service networking peering", and after that, a Cloud SQL instance with private IP.
I faced the same issue when I tried to use the Google Terraform module to create a PostgreSQL managed instance. In my case, I fixed it using this:
private_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
Try this and let me know if it solves your problem.

How can I attach multiple pre-existing AWS managed policies to a role?

I want to attach existing AWS managed policies to a role; I am using the Terraform tool.
These are the policies I want to attach (this list comes from the AWS CloudFormation tool):
AWSCodeCommitFullAccess
AWSCodeBuildAdminAccess
AWSCodeDeployFullAccess
AWSCodePipelineFullAccess
AWSElasticBeanstalkFullAccess
I tried with an attachment:
data "aws_iam_policy" "attach-policy" {
  arn = ["arn:aws:iam::aws:policy/AWSCodeCommitFullAccess", "arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess", "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess", "arn:aws:iam::aws:policy/AWSCodePipelineFullAccess"]
}

resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
  role       = "${aws_iam_role.toolchain-role.name}"
  policy_arn = "${data.aws_iam_policy.attach-policy.arn}"
}
You are going in the right direction with the Terraform resource aws_iam_role_policy_attachment, but it needs some adjustment.
AWS managed policies' ARNs already exist in the system. For example, if you need to attach the first managed policy to an IAM role:
resource "aws_iam_role_policy_attachment" "test-policy-AWSCodeCommitFullAccess" {
  policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitFullAccess"
  role       = "${aws_iam_role.toolchain-role.name}"
}
You can attach the other managed policies one by one. If you want to attach them all together, you can try the code below:
variable "managed_policies" {
  default = [
    "arn:aws:iam::aws:policy/AWSCodeCommitFullAccess",
    "arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess",
    "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess",
    "arn:aws:iam::aws:policy/AWSCodePipelineFullAccess",
  ]
}

resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
  count      = "${length(var.managed_policies)}"
  policy_arn = "${element(var.managed_policies, count.index)}"
  role       = "${aws_iam_role.toolchain-role.name}"
}