Unable to boot the RDS inside a VPC in GCP - postgresql

I need to boot an RDS-style managed Postgres instance (Cloud SQL) inside a VPC. I need the code for the Google provider.
I have tried the official terraform.io docs and also the GitHub example https://github.com/GoogleCloudPlatform/terraform-google-sql-db/blob/227b1ec7a830622560bff85194a816638be1c7c5/examples/mysql-and-postgres/main.tf#L82 but didn't have any luck:
name = "name"
project = "project_name"
region = "us-east-1"
database_version = "${var.database_version}"
settings {
tier = "${var.machine_type}"
ip_configuration {
ipv4_enabled = true
authorized_networks = {
name = "${data.terraform_remote_state.vpc.outputs.network_name}"
value = "10.10.22.0/24"
}
}
I had also tried:
ip_configuration = [{
  private_network = "${var.network_cird_range}"
}]
"I expect the RDS to be boot inside a VPC , but i couldn't find any luck. can anyone help me out here
Thanks in Advance

If I understood your issue correctly, you are getting an error while creating a Google Cloud SQL instance with a private IP using Terraform, am I right? If so, here is my code to achieve what you want:
provider "google" {
credentials = "${file("CREDENTIALS.json")}"
project = "PROJECT-ID"
region = "us-central1"
}
resource "google_compute_network" "private_network" {
name = "testnw"
}
resource "google_compute_global_address" "private_ip_address" {
provider="google"
name = "${google_compute_network.private_network.name}"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = "${google_compute_network.private_network.name}"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider="google"
network = "${google_compute_network.private_network.self_link}"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = ["${google_compute_global_address.private_ip_address.name}"]
}
resource "google_sql_database_instance" "instance" {
provider="google"
depends_on = ["google_service_networking_connection.private_vpc_connection"]
name = "privateinstance"
region = "us-central1"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = "false"
private_network = "projects/PROJECT-ID/global/networks/${google_compute_network.private_network.name}"
}
}
}
This code will create a VPC network, allocate an IP range for peering, create a service networking peering, and then create a Cloud SQL instance with a private IP.
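If you then need the instance's private address elsewhere in your configuration, a small sketch (the output name here is my own, not part of the original answer) would be:
output "cloudsql_private_ip" {
  # private_ip_address is exported by google_sql_database_instance once the instance is created
  value = "${google_sql_database_instance.instance.private_ip_address}"
}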

I had faced the same issue when I tried to use the Google Terraform module to create a managed PostgreSQL instance. In my case, I fixed it using this:
private_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
Try this and let me know if it solves your problem.
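For context, a minimal sketch of where that line sits, assuming an instance resource like the one in the answer above (the resource name, tier, and PROJECT_ID/NETWORK_NAME placeholders are mine to illustrate, not from the original answer):
resource "google_sql_database_instance" "postgres" {
  name             = "private-postgres" # hypothetical instance name
  region           = "us-central1"
  database_version = "POSTGRES_11"

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = false
      private_network = "projects/PROJECT_ID/global/networks/NETWORK_NAME"
    }
  }
}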

Related

Terraform Kubernetes Secrets not applying due to Namespace

I am learning Terraform and trying to translate Kubernetes infrastructure over to Terraform.
I have a Terraform script which creates a given namespace and then creates secrets from local files. Most of the secrets do not create properly because the namespace is not created fast enough.
Is there a correct way to create the namespace and wait for confirmation of it before continuing within the Terraform script, such as depends_on, etc.?
My current approach:
resource "kubernetes_namespace" "namespace" {
metadata {
name = "specialNamespace"
}
}
resource "kubernetes_secret" "api-env" {
metadata {
name = var.k8s_name_api_env
namespace = "specialNamespace"
}
data = {
".api" = file("${path.cwd},${var.local_dir_path_api_env_file}")
}
}
resource "kubernetes_secret" "password-env" {
metadata {
name = var.k8s_name_password_env
namespace = "specialNamespace"
}
data = {
".password" = file("${path.cwd},${var.local_dir_path_password_env_file}")
}
}
resource "kubernetes_secret" "tls-crt-env" {
metadata {
name = var.k8s_name_tls_crt_env
namespace = "specialNamespace"
}
data = {
"server.crt" = file("${path.cwd},${var.local_dir_path_tls_crt_env_file}")
}
}
resource "kubernetes_secret" "tls-key-env" {
metadata {
name = var.k8s_name_tls_key_env
namespace = "specialNamespace"
}
data = {
"server.key" = file("${path.cwd},${var.local_dir_path_tls_key_env_file}")
}
}
Since there is a way to get the name property of the metadata from the kubernetes_namespace resource, I would advise going with that. For example, for the kubernetes_secret resource:
resource "kubernetes_secret" "api-env" {
metadata {
name = var.k8s_name_api_env
namespace = kubernetes_namespace.namespace.metadata[0].name
}
data = {
".api" = file("${path.cwd},${var.local_dir_path_api_env_file}")
}
}
Also, note that most of the resources also have a _v1 version (e.g., namespace [1], secret [2], etc.), so I would strongly suggest going with those.
[1] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace_v1
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1
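If you switch to the _v1 resources, a minimal sketch of the namespace plus one of the secrets from the question (same names and variables, just the _v1 resource types) would be:
resource "kubernetes_namespace_v1" "namespace" {
  metadata {
    name = "specialNamespace"
  }
}

resource "kubernetes_secret_v1" "api-env" {
  metadata {
    name      = var.k8s_name_api_env
    # Referencing the namespace resource creates the implicit dependency
    namespace = kubernetes_namespace_v1.namespace.metadata[0].name
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}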
Such as depends_on, etc.?
Exactly. Here, you should use depends_on:
resource "kubernetes_secret" "api-env" {
depends_on = [resource.kubernetes_namespace.namespace]
...
}
...

Creation of AWS-EKS with Terraform in an existing VPC

I have created a VPC in AWS manually and I would like to create an EKS cluster in this existing VPC via Terraform.
Here is part of my vpc.tf file:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.2.0"
cidr = "10.11.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["10.11.1.0/24", "10.11.2.0/24", "10.11.3.0/24"]
public_subnets = ["10.11.4.0/24", "10.11.5.0/24", "10.11.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
tags = {
"Name" = "10.11.0.0/16 - < name of existing VPC >" --> **if i provide here this tag EKS will be created under the existing VPC ?**
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
If I provide the "Name" tag above with the name of the existing VPC, will the EKS cluster be created under the existing VPC? Will this tag do the job?
I don't understand the relevance of the tags here. You have created a VPC. Now, when you create the EKS cluster, just reference these values in the vpc_config:
resource "aws_eks_cluster" "example" {
name = "example"
vpc_config {
subnet_ids = module.vpc.private_subnets
}
}
Here is a good guide, How to Provision an EKS Cluster, that shows great examples of the VPC configuration and the EKS configuration using the provided modules.
I suspect you are afraid of breaking something in an existing VPC. I would suggest that you try this in a separate account: simply create a VPC manually and then play around with your EKS Terraform. If you destroy the cluster and the corresponding resources immediately after provisioning, your cost should be minimal or zero.
I don't think the tags you provided will put your cluster in the existing VPC. You need to configure more than that; you could potentially end up with an additional VPC.
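If the goal really is to use the manually created VPC rather than a new one from the module, a minimal sketch would look up the existing subnets by VPC ID and feed them to vpc_config. The VPC ID, the tag filter, and the IAM role reference below are placeholders of mine, not from the original answer:
data "aws_subnets" "existing_private" {
  filter {
    name   = "vpc-id"
    values = ["vpc-0123456789abcdef0"] # hypothetical ID of the existing VPC
  }

  tags = {
    "kubernetes.io/role/internal-elb" = "1" # hypothetical tag on the private subnets
  }
}

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.cluster.arn # assumes a cluster IAM role defined elsewhere

  vpc_config {
    subnet_ids = data.aws_subnets.existing_private.ids
  }
}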

AWS Route53 Recovery Controller error when getting or updating the control state using .net

I am trying to get Amazon's Route53 Recovery Controller to update routing control states from a .NET application, and I keep getting an error. I see in the documentation that I need to set the region and cluster endpoint, but I can't figure out how to do it.
Here is a sample of the code I am using:
AmazonRoute53RecoveryControlConfigConfig configConfig = new AmazonRoute53RecoveryControlConfigConfig();
configConfig.RegionEndpoint = RegionEndpoint.USWest2;
AmazonRoute53RecoveryControlConfigClient configClient = new AmazonRoute53RecoveryControlConfigClient(_awsCredentials, configConfig);

DescribeClusterResponse describeClusterResponse = await configClient.DescribeClusterAsync(new DescribeClusterRequest()
{
    ClusterArn = "arn:aws:route53-recovery-control::Account:cluster/data"
});

foreach (ClusterEndpoint clusterEndpoint in describeClusterResponse.Cluster.ClusterEndpoints)
{
    AmazonRoute53RecoveryClusterConfig clusterConfig = new AmazonRoute53RecoveryClusterConfig();
    clusterConfig.RegionEndpoint = RegionEndpoint.GetBySystemName(clusterEndpoint.Region);
    AmazonRoute53RecoveryClusterClient client = new AmazonRoute53RecoveryClusterClient(_awsCredentials, clusterConfig);

    GetRoutingControlStateResponse getRoutingControlStateResponseWest = await client.GetRoutingControlStateAsync(new GetRoutingControlStateRequest()
    {
        RoutingControlArn = "arn:aws:route53-recovery-control::Account:controlpanel/data/routingcontrol/data"
    });
    GetRoutingControlStateResponse getRoutingControlStateResponseEast = await client.GetRoutingControlStateAsync(new GetRoutingControlStateRequest()
    {
        RoutingControlArn = "arn:aws:route53-recovery-control::Account:controlpanel/data/routingcontrol/data"
    });

    UpdateRoutingControlStatesRequest request = new UpdateRoutingControlStatesRequest();
    request.UpdateRoutingControlStateEntries = new List<UpdateRoutingControlStateEntry>()
    {
        new UpdateRoutingControlStateEntry()
        {
            RoutingControlArn = "arn:aws:route53-recovery-control::Account:controlpanel/data/routingcontrol/data",
            RoutingControlState = getRoutingControlStateResponseWest.RoutingControlState == RoutingControlState.On ? RoutingControlState.Off : RoutingControlState.On
        },
        new UpdateRoutingControlStateEntry()
        {
            RoutingControlArn = "arn:aws:route53-recovery-control::Account:controlpanel/data/routingcontrol/data",
            RoutingControlState = getRoutingControlStateResponseEast.RoutingControlState == RoutingControlState.On ? RoutingControlState.Off : RoutingControlState.On
        }
    };

    UpdateRoutingControlStatesResponse response = await client.UpdateRoutingControlStatesAsync(request);
    if (response.HttpStatusCode == HttpStatusCode.OK)
    {
        break;
    }
}
When this code executes, I get this error when it tries to get the control state: The requested name is valid, but no data of the requested type was found.
I see in the Java example that you can set the region and the data plane URL endpoint, but I don't see the equivalent in .NET:
https://docs.aws.amazon.com/r53recovery/latest/dg/example_route53-recovery-cluster_UpdateRoutingControlState_section.html
This works when I use the CLI, where I can also set the region and URL endpoint:
https://docs.aws.amazon.com/r53recovery/latest/dg/getting-started-cli-routing.control-state.html
What am I doing wrong here?
There is a solution to this question here: https://github.com/aws/aws-sdk-net/issues/1978.
Essentially, use the ServiceURL on the configuration object and add a trailing / to the endpoint URL:
AmazonRoute53RecoveryClusterConfig clusterRecoveryConfig = new AmazonRoute53RecoveryClusterConfig();
clusterRecoveryConfig.ServiceURL = $"{clusterEndpoint.Endpoint}/";
AmazonRoute53RecoveryClusterClient client = new AmazonRoute53RecoveryClusterClient(_awsCredentials, clusterRecoveryConfig);

Terraform data.google_container_cluster.cluster.endpoint is null

I would like to use Terraform to manage the configuration for a service on a GKE cluster that is defined in an external Terraform script.
I created the configuration using kubernetes_secret, something like the below:
resource "kubernetes_secret" "service_secret" {
metadata {
name = "my-secret"
namespace = "my-namespace"
}
data = {
username = "admin"
password = "P4ssw0rd"
}
}
And I also added this Google client configuration to configure the Kubernetes provider:
data "google_client_config" "current" {
}
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.cluster.endpoint}"
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
When I apply the Terraform, it shows the error message below:
data.google_container_cluster.cluster.endpoint is null
Am I missing some steps here?
I just had the same/similar issue when trying to initialize the Kubernetes provider from a google_container_cluster data source. terraform show just displayed null values for all of the data source attributes. The fix for me was to specify the project in the data source, e.g.:
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
project = "my-project"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project
project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.
In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.
In addition, you should be able to remove the zone attribute from that block. location should refer to the zone if it is a zonal cluster or the region if it is a regional cluster.
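Putting both points together, a minimal sketch (assuming a zonal cluster in asia-southeast1-a and a project called my-project, as in the thread above) would be:
data "google_container_cluster" "cluster" {
  name     = "my-container"
  location = "asia-southeast1-a" # the zone for a zonal cluster, or the region for a regional one
  project  = "my-project"
}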
I had the same problem when I wanted to build the cluster from scratch; none of the values were populated yet.
I solved it with the depends_on parameter in the google_container_cluster data block:
data "google_container_cluster" "my_cluster" {
name = var.project_name
depends_on = [
google_container_node_pool.e2-small
]
}
In my case the data source was waiting on the nodes to be deployed; the Kubernetes provider starts working after the nodes are provisioned and the values are populated.
My problem: the name wasn't right, so the data source returned nothing. I tried:
data "google_container_cluster" "my_cluster" {
name = "gke_my-project-name_us-central1_my-cluster-name"
location = "us-central1"
}
But that name is wrong; it needs to be just my-cluster-name.
And most importantly, the data block didn't say that this cluster didn't exist --sad_beeps--
Solution (in my case): double-check that the cluster exists:
data "google_container_cluster" "my_cluster" {
name = "my-cluster-name"
location = "us-central1"
}
Hope it will save time for someone else.

How can I attach multiple pre-existing AWS managed policies to a role?

I want to attach existing AWS managed policies to a role using Terraform.
I want to attach these policies; the list below comes from an AWS CloudFormation template:
AWSCodeCommitFullAccess
AWSCodeBuildAdminAccess
AWSCodeDeployFullAccess
AWSCodePipelineFullAccess
AWSElasticBeanstalkFullAccess
I tried it with the attachment:
data "aws_iam_policy" "attach-policy" {
arn = ["arn:aws:iam::aws:policy/AWSCodeCommitFullAccess", "arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess", "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess", "arn:aws:iam::aws:policy/AWSCodePipelineFullAccess"]
}
resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
role = "${aws_iam_role.toolchain-role.name}"
policy_arn = "${data.aws_iam_policy.attach-policy.arn}"
}
You are going in the right direction with the Terraform resource aws_iam_role_policy_attachment, but it needs some adjustment.
The ARNs of AWS managed policies already exist in the system. For example, to attach the first managed policy to an IAM role:
resource "aws_iam_role_policy_attachment" "test-policy-AWSCodeCommitFullAccess" {
policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitFullAccess"
role = "${aws_iam_role.toolchain-role.name}"
}
You can add the other managed policies one by one.
If you want to attach them all together, you can try the code below:
variable "managed_policies" {
default = ["arn:aws:iam::aws:policy/AWSCodeCommitFullAccess",
"arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess",
"arn:aws:iam::aws:policy/AWSCodeDeployFullAccess",
"arn:aws:iam::aws:policy/AWSCodePipelineFullAccess",
]
}
resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
count = "${length(var.managed_policies)}"
policy_arn = "${element(var.managed_policies, count.index)}"
role = "${aws_iam_role.toolchain-role.name}"
}