How to get internal ip of postgreSQL DB in GCP created by Terraform - postgresql

I am learning Terraform deployments coupled with GCP to streamline deployments.
I have successfully deployed a PostgreSQL DB.
Now I am trying to use Terraform outputs to write the private IP generated by the PostgreSQL DB server to the directory where Terraform is initiated from.
What is not clear to me is:
(1) Is the output defined within the same main.tf file?
(2) Where are the output parameters referenced from? I cannot find the documentation to properly align this, so I keep getting this error upon applying: Error: Reference to undeclared resource
My main.tf looks like this:
resource "google_sql_database_instance" "main" {
  name             = "db"
  database_version = "POSTGRES_12"
  region           = "us-west1"

  settings {
    availability_type = "REGIONAL"
    tier              = "db-custom-2-8192"
    disk_size         = "10"
    disk_type         = "PD_SSD"
    disk_autoresize   = "true"
  }
}

output "instance_ip_addr" {
  value       = google_sql_database_instance.private_network.id
  description = "The private IP address of the main server instance."
}

As for code style, usually there would be a separate file called outputs.tf where you would add all the values you want Terraform to output after a successful apply. The second part of the question is two-fold:
You have to understand how references to resource attributes/arguments work [1][2]
You have to reference the correct logical ID of the resource, i.e., the name you assigned to it, followed by the argument/attribute [3]
So, in your case that would be:
output "instance_ip_addr" {
  value       = google_sql_database_instance.main.private_ip_address # <RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
  description = "The private IP address of the main server instance."
}
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes
[2] https://www.terraform.io/language/resources/behavior#accessing-resource-attributes
[3] https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#attributes-reference

To reference an attribute of a resource, you should put something like:
[resource type].[resource name].[attribute]
In this case, the output should be:
output "instance_ip_addr" {
  value       = google_sql_database_instance.main.private_ip_address
  description = "The private IP address of the main server instance."
}
The output attributes are listed in the documentation. It's fine to put that in main.tf.
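If the goal is to also write that IP to a file in the directory Terraform runs from, one option is the local_file resource from the hashicorp/local provider. This is a minimal sketch, assuming that provider is acceptable in your setup and that the instance actually has a private network configured (otherwise private_ip_address will be empty); the file name is arbitrary:

resource "local_file" "db_private_ip" {
  # Writes the instance's private IP to a file next to the configuration after apply
  content  = google_sql_database_instance.main.private_ip_address
  filename = "${path.module}/db_private_ip.txt"
}

On recent Terraform versions you can also just run terraform output -raw instance_ip_addr after a successful apply and redirect the value to a file yourself.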

What does this error mean when trying to use an AppRole from Vault on an Ingress deployment?

Context
We were trying to fix an inconsistency between Terraform and our cloud provider because a database was deleted through the cloud's UI console and the changes were not properly imported into Terraform.
For our own reasons we preferred not to do terraform import, and instead edited the state file to remove all references to that database, hoping that would allow us to run things like plan. That did work, but we came across other issues...
Oh, I should add that we run things like Helm through Terraform to set up our Kubernetes infra as well.
The problem
Now Terraform plans to remove a Google Container Node Pool (the desired outcome) and to update a Kubernetes resource of kind Ingress. The latter change is not really intended, although it may be expected because there is a Terraform module dependency between the module that sets up the whole cluster (including node pools) and the module that sets up the Ingress.
Now the issue comes from updating that Ingress. Here's the plan:
# Terraform will read AppRole from Vault
data "vault_approle_auth_backend_role_id" "role" {
  - backend   = "approle" -> null
  ~ id        = "auth/approle/role/nginx-ingress/role-id" -> (known after apply)
  ~ role_id   = "<some UUID>" -> (known after apply)
    role_name = "nginx-ingress"
}

# Now this is the resource that makes everything blow up
resource "helm_release" "nginx-ingress" {
    atomic = false
    chart  = ".terraform/modules/nginx-ingress/terraform/../helm"
    ...
    ...

  - set_sensitive {
      - name  = "appRole.roleId" -> null
      - value = (sensitive value)
    }
  + set_sensitive {
      + name  = "appRole.roleId"
      + value = (sensitive value)
    }

  - set_sensitive {
      - name  = "appRole.secretId" -> null
      - value = (sensitive value)
    }
  + set_sensitive {
      + name  = "appRole.secretId"
      + value = (sensitive value)
    }
}
And here's the error message we get:
When expanding the plan for module.nginx-ingress.helm_release.nginx-ingress to
include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/helm" produced an invalid new value for
.set_sensitive: planned set element
cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("appRole.secretId"),
"type":cty.NullVal(cty.String),
"value":cty.StringVal("<some other UUID>")}) does not
correlate with any element in actual.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
What we tried
We thought that maybe the AppRole's secretId had rotated or changed, so we took the secretId from the state of another environment that uses the same AppRole from the same Vault and set it in our modified state file. That didn't work.
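For context on why the re-read of that data source cascades into the set_sensitive changes: the release is wired up roughly like this (an illustrative guess at the module's shape, not the actual code; the variable holding the secret ID is hypothetical):

data "vault_approle_auth_backend_role_id" "role" {
  backend   = "approle"
  role_name = "nginx-ingress"
}

resource "helm_release" "nginx-ingress" {
  name  = "nginx-ingress"
  chart = "${path.module}/../helm"

  set_sensitive {
    name  = "appRole.roleId"
    value = data.vault_approle_auth_backend_role_id.role.role_id # becomes (known after apply) once the data source is deferred
  }

  set_sensitive {
    name  = "appRole.secretId"
    value = var.approle_secret_id # hypothetical; wherever the secret ID is sourced from
  }
}

When the role_id becomes (known after apply), both set_sensitive blocks get re-planned, which appears to be where the "does not correlate with any element in actual" error surfaces.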

fetch and update particular field using terraform

I have a scenario:
How can I fetch a particular field's value and also update a particular field's value?
For example:
I'm deploying an application using the Terraform "kubernetes_deployment" resource, configured with environment variables (endpoint=abc) and replicas=2.
resource "kubernetes_deployment" "app" {
  ...
  spec {
    replicas = 2
    template {
      spec {
        ...
        env {
          name  = "ENDPOINT"
          value = "abc"
        }
      }
Once I have deployed it using the Terraform script, another script might change the configuration to replicas=5 and different environment values (endpoint=xyz).
Now I need to update only the replicas to 20 (if replicas < 20) through the Terraform script, without changing the environment values (endpoint=abc).
resource "kubernetes_deployment" "app" {
  ...
  spec {
    replicas = 20 # only this change has to be reflected in the apply
    template {
      spec {
        ...
        env {
          name  = "ENDPOINT"
          value = "abc"
        }
      }
How can I fetch a particular field (replicas) to compare whether the replica count is > 20, and update only the replica count?
Can someone with more Terraform experience help me with this?
Inside the "kubernetes_deployment" resource block, consider adding a lifecycle block. Use it to ignore changes to resource attributes that can be made outside of Terraform's knowledge.
Provide a list of resource attributes to "ignore_changes", which Terraform would then ignore in subsequent runs. The arguments are the relative addresses of the attributes within the resource. Map and list elements can be referenced using index notation.
lifecycle {
  # Relative address of the env blocks, nested under spec -> template -> spec -> container
  ignore_changes = [spec[0].template[0].spec[0].container[0].env]
}
Reference: https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes
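A minimal sketch of how this could fit together (the image name and labels are placeholders, and the exact ignore_changes path is an assumption based on how the kubernetes provider nests its blocks; adjust it to your real spec):

resource "kubernetes_deployment" "app" {
  metadata {
    name = "app"
  }

  spec {
    replicas = 20

    selector {
      match_labels = {
        app = "app"
      }
    }

    template {
      metadata {
        labels = {
          app = "app"
        }
      }

      spec {
        container {
          name  = "app"
          image = "example/app:latest" # placeholder image

          env {
            name  = "ENDPOINT"
            value = "abc"
          }
        }
      }
    }
  }

  lifecycle {
    # Env changes made outside Terraform are ignored on later plans;
    # changes to replicas are still detected and applied.
    ignore_changes = [spec[0].template[0].spec[0].container[0].env]
  }
}

With this in place, a terraform apply after the external script has modified the Deployment should only reconcile the replica count and leave the environment variables as they are on the cluster.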

409 (request "Conflict") when creating second Endpoint connection in MongoDB Atlas using Terraform

I need to create many MongoDB Atlas endpoint connections using Terraform.
I successfully created the first one using this code:
#Private endpoint connection
resource "mongodbatlas_private_endpoint" "dbpe" {
  project_id    = var.prj_id
  provider_name = "AWS"
  region        = var.aws_region
}

#AWS endpoint for secure connect to mongo db
resource "aws_vpc_endpoint" "ec2" {
  vpc_id = var.sh_vpc
  #service_name = "com.amazonaws.${var.aws_region}.ec2"
  service_name      = mongodbatlas_private_endpoint.dbpe.endpoint_service_name
  vpc_endpoint_type = "Interface"
  security_group_ids = [
    aws_security_group.lb_sg.id,
  ]
  subnet_ids = [
    aws_subnet.subnet1.id,
    var.sh_subnet
  ]
  tags = {
    "Name" = local.tname
  }
  #private_dns_enabled = true
}
But when I try to use this code a second time in another folder (another tfstate), it fails with this error:
Error: error creating MongoDB Private Endpoints Connection: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/***/privateEndpoint: 409 (request "Conflict") A PrivateLink Endpoint Service already exists for AWS region US_EAST_2.
As I understand it, the second "mongodbatlas_private_endpoint" "dbpe" tries to create another Endpoint service. But when I create the second Endpoint manually through the web UI, it uses the same service as the first Endpoint.
How can I tell the second Endpoint to use the existing service?
Or maybe it all wrong?
Please, help!
Thank you!
I found the solution.
Creating the "Endpoint Connection" really creates an Endpoint only the first time you do it. Every subsequent time, it only creates an association between the Atlas endpoint and a new AWS Endpoint.
In Terraform I tried to create an Atlas endpoint a second time and got an error (because of the limit of 1 endpoint per region). All I need to do is create a "Basic Endpoint" once (in a separate folder with its own tfstate) and never delete it. Then, for each new AWS endpoint, I create a new link from the AWS Endpoint to the "Basic" one. I do this with the Terraform resource:
mongodbatlas_private_endpoint_interface_link
The "mongodbatlas_private_endpoint" resource is not needed now. The "service_name" parameter in "aws_vpc_endpoint" can be hardcoded from the "Basic" Endpoint. Use an "output" to see mongodbatlas_private_endpoint.test.private_link_id - this is the value you need.
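A rough sketch of how that can fit together, based on the mongodbatlas provider documentation from around that time (variable names are placeholders, the private_link_id is assumed to be passed in from the "Basic" endpoint's output, and newer provider versions have since renamed these resources):

# In the folder that owns the single "Basic" Atlas endpoint
output "private_link_id" {
  value = mongodbatlas_private_endpoint.dbpe.private_link_id
}

# In each additional folder/tfstate: only an AWS endpoint plus a link to the existing service
resource "aws_vpc_endpoint" "ec2" {
  vpc_id             = var.sh_vpc
  service_name       = var.basic_endpoint_service_name # hardcoded/passed from the "Basic" endpoint
  vpc_endpoint_type  = "Interface"
  security_group_ids = [aws_security_group.lb_sg.id]
  subnet_ids         = [aws_subnet.subnet1.id, var.sh_subnet]
}

resource "mongodbatlas_private_endpoint_interface_link" "link" {
  project_id            = var.prj_id
  private_link_id       = var.basic_private_link_id # from the "Basic" endpoint's output
  interface_endpoint_id = aws_vpc_endpoint.ec2.id
}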

Terraform - AWS - API Gateway dependency conundrum

I am trying to provision some AWS resources, specifically an API Gateway which is connected to a Lambda. I am using Terraform v0.8.8.
I have a module which provisions the Lambda and returns the lambda function ARN as an output, which I then provide as a parameter to the following API Gateway provisioning code (which is based on the example in the TF docs):
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

# Variables
variable "myregion" { default = "eu-west-2" }
variable "accountId" { default = "" }
variable "lambdaArn" { default = "" }
variable "stageName" { default = "lab" }

# API Gateway
resource "aws_api_gateway_rest_api" "api" {
  name = "myapi"
}

resource "aws_api_gateway_method" "method" {
  rest_api_id   = "${aws_api_gateway_rest_api.api.id}"
  resource_id   = "${aws_api_gateway_rest_api.api.root_resource_id}"
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "integration" {
  rest_api_id             = "${aws_api_gateway_rest_api.api.id}"
  resource_id             = "${aws_api_gateway_rest_api.api.root_resource_id}"
  http_method             = "${aws_api_gateway_method.method.http_method}"
  integration_http_method = "POST"
  type                    = "AWS"
  uri                     = "arn:aws:apigateway:${var.myregion}:lambda:path/2015-03-31/functions/${var.lambdaArn}/invocations"
}

# Lambda
resource "aws_lambda_permission" "apigw_lambda" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.lambdaArn}"
  principal     = "apigateway.amazonaws.com"
  source_arn    = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}/resourcepath/subresourcepath"
}

resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  stage_name  = "${var.stageName}"
}
When I run the above from scratch (i.e. when none of the resources exist) I get the following error:
Error applying plan:
1 error(s) occurred:
* aws_api_gateway_deployment.deployment: Error creating API Gateway Deployment: BadRequestException: No integration defined for method
status code: 400, request id: 15604135-03f5-11e7-8321-f5a75dc2b0a3
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
If I perform a 2nd TF apply it consistently succeeds, but every time I destroy everything I then receive the above error again upon the first apply.
This caused me to wonder if there's a dependency that I need to explicitly declare somewhere. I discovered #7486, which describes a similar pattern (although relating to an aws_api_gateway_integration_response rather than an aws_api_gateway_deployment). I tried manually adding an explicit dependency from the aws_api_gateway_deployment to the aws_api_gateway_integration, but this had no effect.
Grateful for any thoughts, including whether this may indeed be a TF bug in which case I will raise it in the issue tracker. I thought I'd check with the community before doing so in case I'm missing something obvious.
Many thanks,
Edd
P.S. I've asked this question on the Terraform user group, but that seems to get very little in the way of responses; I'm yet to figure out the cause of the issue, hence now asking here.
You are right about the explicit dependency declaration.
Normally Terraform is able to figure out the relationships and schedule create/update/delete operations accordingly - this is mostly possible because of the interpolation mechanisms under the hood (${resource_type.ref_name.attribute}). You can display the relationships in a graph via terraform graph.
Unfortunately in this specific case there's no direct relationship between API Gateway Deployments and Integrations - the AWS API for managing API Gateway resources doesn't require you to reference an integration ID (or anything like that) to create a deployment, and the aws_api_gateway_deployment resource in turn doesn't require it either.
The documentation for aws_api_gateway_deployment does mention this caveat at the top of the page. Admittedly the Deployment not only requires the method to exist, but the integration too.
Here's how you can modify your code to get around it:
resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = "${aws_api_gateway_rest_api.api.id}"
  stage_name  = "${var.stageName}"

  depends_on = ["aws_api_gateway_method.method", "aws_api_gateway_integration.integration"]
}
Theoretically the "aws_api_gateway_method.method" is redundant since the integration already references the method in the config:
http_method = "${aws_api_gateway_method.method.http_method}"
so it will be scheduled for creation/update prior to the integration either way, but if you were to change that to something like
http_method = "GET"
then it would become necessary.
I have submitted a PR to update the docs accordingly.
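As a side note for readers on newer Terraform versions (the question uses v0.8.8): since Terraform 0.12, depends_on takes bare resource references instead of quoted strings, so the equivalent would look roughly like this:

resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  stage_name  = var.stageName

  # Make sure both the method and the integration exist before creating the deployment
  depends_on = [
    aws_api_gateway_method.method,
    aws_api_gateway_integration.integration,
  ]
}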

Ganglia No matching metrics detected

We are getting the error "No matching metrics detected". Cluster-level metrics are visible.
ganglia core 3.6.0
ganglia web 3.5.12
Please help to resolve this issue.
Regards,
Jayendra
Somewhere, in a .conf file (or .pyconf, etc.), you must specify a 'collection_group' with a list of the metrics you want to collect. From the default gmond.conf, it should look similar to this:
collection_group {
  collect_once = yes
  time_threshold = 1200
  metric {
    name = "cpu_num"
    title = "CPU Count"
  }
  metric {
    name = "cpu_speed"
    title = "CPU Speed"
  }
  metric {
    name = "mem_total"
    title = "Memory Total"
  }
}
You may use wildcards to match the name.
You'll also need to include the module that provides the metrics you are looking to collect. Again, the example gmond.conf contains something like this:
modules {
  module {
    name = "core_metrics"
  }
  module {
    name = "cpu_module"
    path = "modcpu.so"
  }
}
among others.
You can generate an example gmond.conf by typing
gmond -t > /usr/local/etc/gmond.conf
This path is correct for ganglia-3.6.0, I know that many file paths have changed several times since 3.0...
A good reference book is 'Monitoring with Ganglia.' I'd recommend getting a copy if you're going to be getting very deeply involved with configuring / maintaining a ganglia installation.
When summary/cluster graphs are visible, but individual host graph data is not, this might be caused by a mismatch of hostname case (between reported hostname and rrd graph directory names).
Check /var/lib/ganglia/rrds/CLUSTER-NAME/HOSTNAME
This will show you what case the hostnames are getting their graphs generated as.
If the case does not match their hostname, edit: /etc/ganglia/conf.php (this allows overrides to defaults at: /usr/share/ganglia/conf_default.php)
Add the following line:
$conf['case_sensitive_hostnames'] = false;
Another place to check for case sensitivity is the gmetad settings at /etc/ganglia/gmetad
case_sensitive_hostnames 0
Versions This Was Fixed On:
OS: CentOS 6
Ganglia Core: 3.7.2-2
Ganglia Web: 3.7.1-2
Installed via EPEL