Local conf in Play Framework - Scala

I have application.conf:
{
name {
postgres {
host = ""
username = ""
password = ""
}
}
}
And I want to add my local.conf:
{
name {
postgres {
host = "blabla"
username = "aa"
password = "bb"
}
}
}
I tried name.postgres.host.override = "", but that doesn't work.

Your application.conf will remain the same:
{
name {
postgres {
host = ""
username = ""
password = ""
}
}
}
And in your local.conf, you should include application.conf like this:
include "application.conf"
{
name {
postgres {
host = "blabla"
username = "aa"
password = "bb"
}
}
}
When running sbt, you should specifically tell it to load local.conf like this (otherwise application.conf will get loaded by default):
sbt run -Dconfig.resource=local.conf
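Note: config.resource expects the file to be on the classpath (for a Play app, the conf/ directory is on the classpath); if your local.conf lives outside the project, -Dconfig.file=/path/to/local.conf should be used instead.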
With that, local.conf extends application.conf. If a key exists in both files, the value from local.conf is picked.
Now, you would get:
name.postgres.host=blabla
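Also worth knowing: JVM system properties take precedence over both files, so passing something like -Dname.postgres.host=other on the command line would override the value from local.conf as well, which can be handy for one-off changes.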

You can include one conf file in another conf file, as shown above (include "application.conf" at the top of local.conf); this will automatically override the variables.

Related

Jenkins dynamic choice parameter to read a ansible host file in github

I have an Ansible host file stored in GitHub and was wondering if there is a way to list out all the hosts in Jenkins with choice parameters. Right now, every time I update the host file in GitHub, I have to go into each Jenkins job and update the choice parameter manually. Thanks!
I'm assuming your host file has content similar to the one below.
[client-app]
client-app-preprod-01.aws-xxxx
client-app-preprod-02.aws
client-app-preprod-03.aws
client-app-preprod-04.aws
[server-app]
server-app-preprod-01.aws
server-app-preprod-02.aws
server-app-preprod-03.aws
server-app-preprod-04.aws
Option 1
You can do something like the one below. Here you first check out the repo and then ask for the user input. I have implemented the function getHostList() to parse the host file and filter out the host entries.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                git 'https://github.com/jglick/simple-maven-project-with-tests.git'
                script {
                    def selectedHost = input message: 'Please select the host', ok: 'Next',
                        parameters: [
                            choice(name: 'PRODUCT', choices: getHostList("client-app","ansible/host/location"), description: 'Please select the host')]
                    echo "Host:::: $selectedHost"
                }
            }
        }
    }
}

def getHostList(def appName, def filePath) {
    def hosts = []
    def content = readFile(file: filePath)
    def startCollect = false
    for(def line : content.split('\n')) {
        if(line.contains("["+ appName +"]")){ // This is the starting point of the host entries
            startCollect = true
            continue
        } else if(startCollect) {
            if(!line.allWhitespace && !line.contains('[')){
                hosts.add(line.trim())
            } else {
                break
            }
        }
    }
    return hosts
}
Option 2
If you want to do this without checking out the source, and with job parameters, you can do something like the one below using the Active Choice Parameter plugin. If your repository is private, you need to figure out a way to generate an access token to access the raw GitHub link.
properties([
    parameters([
        [$class: 'ChoiceParameter',
         choiceType: 'PT_SINGLE_SELECT',
         description: 'Select the Host',
         name: 'Host',
         script: [
             $class: 'GroovyScript',
             fallbackScript: [
                 classpath: [],
                 sandbox: false,
                 script:
                     'return [\'Could not get Host\']'
             ],
             script: [
                 classpath: [],
                 sandbox: false,
                 script:
                     '''
                     def appName = "client-app"
                     def content = new URL ("https://raw.githubusercontent.com/xxx/sample/main/testdir/hosts").getText()
                     def hosts = []
                     def startCollect = false
                     for(def line : content.split("\\n")) {
                         if(line.contains("["+ appName +"]")){ // This is the starting point of the host entries
                             startCollect = true
                             continue
                         } else if(startCollect) {
                             if(!line.allWhitespace && !line.contains("[")){
                                 hosts.add(line.trim())
                             } else {
                                 break
                             }
                         }
                     }
                     return hosts
                     '''
             ]
         ]
        ]
    ])
])

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Host:::: ${params.Host}"
                }
            }
        }
    }
}
Update
When you are calling a private repo, you need to send a Basic Auth header with the access token, so use the following Groovy script instead.
def accessToken = "ACCESS_TOKEN".bytes.encodeBase64().toString()
def get = new URL("https://raw.githubusercontent.com/xxxx/something/hosts").openConnection();
get.setRequestProperty("authorization", "Basic " + accessToken)
def content = get.getInputStream().getText()

Error: error detecting capabilities: error PostgreSQL version: pq: password authentication failed for user "postgres"

I am trying to create a Docker/PostgreSQL infrastructure with Terraform.
I used the examples in the terraform_postgres_provider:
+--modules
| +--docker
| +--main.tf
| +--postgres
| +--main.tf
+--root.tf
This is the code for modules/docker/main.tf:
variable "name" {
default = "postgres"
}
terraform {
required_providers {
docker = {
source = "kreuzwerker/docker"
version = "2.19.0"
}
}
}
provider "docker" {
host = "unix:///var/run/docker.sock"
}
resource "docker_image" "postgres" {
name = "postgres:latest"
}
resource "docker_container" "postgres" {
image = "${docker_image.postgres.latest}"
name = "${var.name}"
restart = "always"
hostname = "${var.name}"
env=[ "host=localhost","port=5432","POSTGRES_USER=postgres","POSTGRES_PASSWORD:123456"]
ports {
internal = "5432"
external = "5432"
}
}
This is the code for modules/postgres/main.tf:
variable "name" {
default = "postgres"
}
terraform {
required_providers {
postgresql = {
source = "cyrilgdn/postgresql"
version = "1.16.0"
}
}
}
provider "postgresql" {
alias = "pg1"
# host = "${var.name}" // will not work because 'postgres' is only resolved within the docker dns
host = "localhost"
port = 5432
username = "postgres"
password = "123456"
sslmode = "disable"
connect_timeout = 15
}
resource "postgresql_database" "mutualfunds" {
provider = postgresql.pg1
name = "mutualfunds"
}
This is the code for root.tf:
module "docker" {
source = "./modules/docker"
name = "postgres"
}
module "postgres" {
source = "./modules/postgres"
name = "postgres"
}
I got the error mentioned in the title above.
In Docker, the container log shows this error:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
See PostgreSQL documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
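The log above says POSTGRES_PASSWORD was never given to the container. Going by the modules/docker/main.tf shown earlier, one likely cause is the colon in "POSTGRES_PASSWORD:123456": the Docker provider's env list takes plain "KEY=value" strings, so that entry never sets an environment variable named POSTGRES_PASSWORD. Below is a minimal, untested sketch of the container resource with = throughout; the lowercase host/port entries from the original are dropped here because, as far as I can tell, the postgres image itself doesn't read them.
resource "docker_container" "postgres" {
  image    = docker_image.postgres.latest
  name     = var.name
  restart  = "always"
  hostname = var.name

  # Each env entry is a "KEY=value" string, the same form as `docker run -e KEY=value`.
  env = [
    "POSTGRES_USER=postgres",
    "POSTGRES_PASSWORD=123456",
  ]

  ports {
    internal = 5432
    external = 5432
  }
}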

Unable to connect to RDS Aurora DB locally

I have a somewhat basic understanding of cloud architecture and thought I would try to spin up a PostgreSQL DB in Terraform. I am using Secrets Manager to store credentials...
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
locals {
db_credentials = jsondecode(data.aws_secretsmanager_secret_version.credentials.secret_string)
}
And an Aurora DB instance, which should be publicly accessible, with the following code:
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
When I terraform apply this, everything gets created as expected, but when I run psql -h <ENDPOINT_TO_CLUSTER> I get prompted to enter the password for admin. Going to the Secrets Manager console, copying the password, and entering it yields:
FATAL: password authentication failed for user "admin"
Similarly, if I try:
psql --username=db_user --host=<ENDPOINT_TO_CLUSTER> --port=5432
I am prompted, as expected, to enter the password for db_user, which yields:
psql: FATAL: database "db_user" does not exist
Edit 1
secrets.tf
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
database.tf
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = "db_user"
master_password = random_password.password.result
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
You're doing a data lookup named data.aws_secretsmanager_secret_version.credentials, but you don't show the Terraform code for that. Terraform is going to do that lookup before it updates the aws_secretsmanager_secret_version. So the username and password it is configuring the DB with are going to be pulled from the previous version of the secret, not the new version you are creating when you run apply.
You should never have both a data and a resource in your Terraform that refer to the same thing. Always use the resource if you have it, and only use data for things that aren't being managed by Terraform.
Since you have the resource itself available in your Terraform code (and also the random_password resource), you shouldn't be using a data lookup at all. If you pull the value from one of the resources, then Terraform will handle the order of creation/updates correctly.
For example:
locals {
db_credentials = jsondecode(aws_secretsmanager_secret_version.version.secret_string)
}
resource "aws_rds_cluster" "cluster-demo" {
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
Or just simplify it and get rid of the jsondecode step:
resource "aws_rds_cluster" "cluster-demo" {
master_username = "db_user"
master_password = random_password.password.result
I also suggest adding a few Terraform outputs to help you diagnose this type of issue. The following will let you see exactly what username and password Terraform applied to the database:
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
output "db_password" {
value = aws_rds_cluster.cluster-demo.master_password
sensitive = true
}
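As a follow-up, after terraform apply, running terraform output db_user and terraform output -raw db_password (the -raw flag is useful here because the output is marked sensitive) should print exactly the credentials Terraform applied, which you can then compare against what psql is rejecting.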

PowerShell script with parameter for Windows VM instance on Google Cloud Platform

I am trying to deploy a Windows VM on Google Cloud through Terraform. The VM gets deployed, and I am able to execute PowerShell scripts by using windows-startup-script-url.
With this approach, I can only use scripts that are already stored in Google Cloud Storage. If the script has parameters/variables, how do I pass them? Any clue?
provider "google" {
project = "my-project"
region = "my-location"
zone = "my-zone"
}
resource "google_compute_instance" "default" {
name = "my-name"
machine_type = "n1-standard-2"
zone = "my-zone"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2019"
}
}
metadata {
windows-startup-script-url = "gs://<my-storage>/<my-script.ps1>"
}
network_interface {
network = "default"
access_config {
}
}
tags = ["http-server", "windows-server"]
}
resource "google_compute_firewall" "http-server" {
name = "default-allow-http"
network = "default"
allow {
protocol = "tcp"
ports = ["80"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["http-server"]
}
resource "google_compute_firewall" "windows-server" {
name = "windows-server"
network = "default"
allow {
protocol = "tcp"
ports = ["3389"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["windows-server"]
}
output "ip" {
value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}
Terraform doesn't necessarily require startup scripts to be pulled from GCS buckets.
The example in the google_compute_instance documentation shows:
resource "google_compute_instance" "default" {
  # ...

  metadata = {
    foo = "bar"
  }

  metadata_startup_script = "echo hi > /test.txt"

  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
More in the official docs for GCE and PowerShell scripting.
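If you want to keep windows-startup-script-url, one common pattern (a sketch under my own assumptions, not from the docs above) is to put the parameters into extra metadata keys next to the script URL; the key names app_env and app_port below are made up for illustration. Inside the VM, the PowerShell script can read them back from the metadata server at http://metadata.google.internal/computeMetadata/v1/instance/attributes/<key>, sending the Metadata-Flavor: Google header. This keeps the script in the bucket generic while each instance gets its own values.
resource "google_compute_instance" "default" {
  name         = "my-name"
  machine_type = "n1-standard-2"
  zone         = "my-zone"

  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2019"
    }
  }

  metadata = {
    windows-startup-script-url = "gs://<my-storage>/<my-script.ps1>"

    # Hypothetical parameters for the startup script; the script fetches them
    # at boot from the instance metadata server (e.g. Invoke-RestMethod with
    # the "Metadata-Flavor: Google" header).
    app_env  = "staging"
    app_port = "8080"
  }

  network_interface {
    network = "default"
    access_config {
    }
  }

  tags = ["http-server", "windows-server"]
}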

Terraform: module outputs not being recognised as variables

I think this is just a quick sanity check; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.
My main.tf calls just two modules: gke, for the Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created previously.
Module gke has an outputs.tf which outputs the following:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
Then in the main.tf for the storage module, I have:
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
Then in the root main.tf I have the following:
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
From what I see, it looks right. The values for the certs, key and host should be output from the gke module by outputs.tf, picked up by the root main.tf, and then delivered to storage as regular variables.
Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.
When I run a plan, I get prompted for those variables because they aren't being filled in.
EDIT:
Adding some additional information, including my code.
If I manually add dummy entries for the variables it's asking for, I get the following error:
Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But it shouldn't; that isn't a valid attribute for the resource. It is for the provider, but it's already filled out for the provider.
Code below:
Folder structure:
root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
root-folder/gke/main.tf:
provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
root-folder/gke/outputs.tf:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
root-folder/gke/variables.tf:
variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
/root-folder/storage/main.tf:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
/root-folder/storage/variables.tf:
variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
/root-folder/main.tf:
module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
/root-folder/variables.tf:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
I won't paste the contents of my staging.json and terraform.tfvars for obvious reasons :)
In your /root-folder/variables.tf, delete the following entries:
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
Those are not variables that the Terraform code at the root level needs per se. Instead, they are being passed as one module's outputs into the second module's inputs.
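For reference, after deleting those four entries, the remaining /root-folder/variables.tf (based on the file shown above) would look like this:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "cluster_name" {}
variable "zone" {}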