mock outputs in Terragrunt dependency - kubernetes

I want to use Terragrunt to deploy this example: https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/complete-kubernetes-addons/main.tf
So far I have been able to create the VPC/EKS resources without a problem: I separated each module into its own directory, and everything worked as expected.
When I tried to do the same for the kubernetes-addons module, I ran into an issue with the data source trying to call the cluster and failing, since the cluster hadn't been created at that point.
Here's my terragrunt.hcl which I'm trying to execute for this specific module:
...
terraform {
  source = "git::git@github.com:aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons?ref=v4.6.1"
}

locals {
  # Extract needed variables for reuse
  cluster_version = "${include.envcommon.locals.cluster_version}"
  name            = "${include.envcommon.locals.name}"
}

dependency "eks" {
  config_path = "../eks"

  mock_outputs = {
    eks_cluster_endpoint = "https://000000000000.gr7.eu-west-3.eks.amazonaws.com"
    eks_oidc_provider    = "something"
    eks_cluster_id       = "something"
  }
}

inputs = {
  eks_cluster_id       = dependency.eks.outputs.eks_cluster_id
  eks_cluster_endpoint = dependency.eks.outputs.eks_cluster_endpoint
  eks_oidc_provider    = dependency.eks.outputs.eks_oidc_provider
  eks_cluster_version  = local.cluster_version
  ...
}
Here's the error I'm getting:
`
INFO[0035]
Error: error reading EKS Cluster (something): couldn't find resource
with data.aws_eks_cluster.eks_cluster,
on data.tf line 7, in data "aws_eks_cluster" "eks_cluster":
7: data "aws_eks_cluster" "eks_cluster" {
`

The kubernetes-addons module deploys addons into an existing Kubernetes cluster. If there is no cluster running (and there isn't one while you're mocking the cluster id output), the aws_eks_cluster data source has nothing to read and fails with the error above.
You need to create the EKS cluster first, before you can start deploying the addons.
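If you want to keep the mocks so that plan/validate can run before the cluster exists, one option is to restrict them to non-apply commands. A minimal sketch: mock_outputs_allowed_terraform_commands is a standard Terragrunt dependency option, and the mock values below are placeholders.

dependency "eks" {
  config_path = "../eks"

  # Only fall back to the mocked values for commands that don't need a real cluster.
  mock_outputs_allowed_terraform_commands = ["init", "validate", "plan"]

  mock_outputs = {
    eks_cluster_id       = "mock-cluster"
    eks_cluster_endpoint = "https://mock.gr7.eu-west-3.eks.amazonaws.com"
    eks_oidc_provider    = "mock-oidc-provider"
  }
}

With that in place, terragrunt run-all apply (apply-all on older Terragrunt versions) from the parent directory applies the eks module first because of the dependency, and only then the kubernetes-addons module, so the aws_eks_cluster data source always sees a real cluster at apply time.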

Related

GCP: using an image from one account's Artifact Registry in another account

Hello, and I wish you a great time!
I've got the following Terraform service account definition:
resource "google_service_account" "gke_service_account" {
project = var.context
account_id = var.gke_account_name
display_name = var.gke_account_description
}
I use it in a GCP Kubernetes node pool:
resource "google_container_node_pool" "gke_node_pool" {
name = "${var.context}-gke-node"
location = var.region
project = var.context
cluster = google_container_cluster.gke_cluster.name
management {
auto_repair = "true"
auto_upgrade = "true"
}
autoscaling {
min_node_count = var.gke_min_node_count
max_node_count = var.gke_max_node_count
}
initial_node_count = var.gke_min_node_count
node_config {
machine_type = var.gke_machine_type
service_account = google_service_account.gke_service_account.email
metadata = {
disable-legacy-endpoints = "true"
}
# Needed for correctly functioning cluster, see
# https://www.terraform.io/docs/providers/google/r/container_cluster.html#oauth_scopes
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
However, the current setup requires the prod and dev environments to live in different GCP accounts while using the same image from the prod Artifact Registry.
For now I have a JSON key file for a service account in prod that has access to its registry. Is there a clean way to use that JSON file as a second service account for Kubernetes, or to update the current K8s service account with the JSON file so it gets additional permissions on the remote registry?
I've seen solutions like putting the key into a secret or using a cross-account service account.
But that's not how I want to resolve it, since we have some internal restrictions.
I hope someone has faced a similar task and has a solution to share - it would save me real time.
Thanks in advance!
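One direction that matches the last idea (extending the permissions of the service account the nodes already use, rather than introducing a key file or a second account) would be a cross-project IAM grant on the prod registry. A minimal sketch, where the provider alias, project, location, and repository names are placeholders:

# Hypothetical: a google provider alias authenticated against the prod account
# is assumed, e.g. provider "google" { alias = "prod" ... }.
resource "google_artifact_registry_repository_iam_member" "dev_nodes_pull" {
  provider   = google.prod
  project    = "prod-project-id"   # placeholder
  location   = "europe-west1"      # placeholder
  repository = "prod-images"       # placeholder
  role       = "roles/artifactregistry.reader"
  member     = "serviceAccount:${google_service_account.gke_service_account.email}"
}

Whether that fits the internal restrictions is another question, but it keeps the nodes authenticating as the service account they already run as, without mounting the JSON key or switching to a second service account.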

Probable causes for idempotent error by terraform for infra generation

We are using Terraform to launch ECS containers in AWS using a custom task definition.
Since we didn't need the full infrastructure to be launched every time, the part that only launches the ECS container was split out on its own.
The launch was working correctly until the ECS code was segregated; after that, the ECS service creation started failing with an idempotency error.
│ Error: error creating target service: error waiting for ECS service (sandbox) creation: InvalidParameterException: Creation of service was not idempotent.
│
│ with aws_ecs_service.ecs_service_target,
│ on aws_infra_ecs.tf line 100, in resource "aws_ecs_target" "ecs_service_target":
│ 100: resource "aws_ecs_target" "ecs_service_target" {
│
The ECS service is defined somewhat like below:
resource "aws_ecs_service" "ecs_service_target" {
  desired_count    = 1
  name             = "target"
  launch_type      = "FARGATE"
  cluster          = data.aws_ecs_cluster.cluster_target.id
  enable_ecs_managed_tags = true
  task_definition  = aws_ecs_task_definition.target_taskdef.arn
  platform_version = "1.4.0"
  ...
  load_balancer {
    ...
    target_group_arn = data.aws_lb_target_group.aws_target.arn
  }
  ...
  network_configuration {
    ...
    security_groups = [data.aws_security_group.target_sg.id]
    subnets         = ["subnet-5767c3c2"] # A dynamic subnet reference id is used here
  }

  depends_on = [
    var.second_service_name,
    aws_ecs_task_definition.target_taskdef,
    data.aws_efs_access_point.target_ap
  ]
  ...
}
I was expecting the problem to be one of the following:
The subnet selected may differ because of the variable-based selection
The use of indirect data references (rather than direct resource references) may cause the issue (see the sketch after this list)
A task definition JSON encoding issue
What might be other causes for such a problem?
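To illustrate the second point: if the target group were managed in the same configuration, it could be referenced directly instead of being re-read through a data source, which keeps the dependency ordering explicit. A minimal sketch of a reworked version of the same service, reusing the cluster and task definition references from the snippet above; the vpc_id/subnet_ids variables, container name, and port are placeholders:

# Hypothetical sketch: manage the target group in the same configuration and
# reference it directly, so Terraform orders the ECS service after it without
# a data-source lookup.
resource "aws_lb_target_group" "target" {
  name        = "target"
  port        = 8080
  protocol    = "HTTP"
  target_type = "ip"       # required for Fargate tasks with awsvpc networking
  vpc_id      = var.vpc_id # placeholder variable
}

resource "aws_ecs_service" "ecs_service_target" {
  name             = "target"
  launch_type      = "FARGATE"
  platform_version = "1.4.0"
  cluster          = data.aws_ecs_cluster.cluster_target.id
  task_definition  = aws_ecs_task_definition.target_taskdef.arn
  desired_count    = 1

  load_balancer {
    target_group_arn = aws_lb_target_group.target.arn # direct resource reference
    container_name   = "target"                       # placeholder
    container_port   = 8080                           # placeholder
  }

  network_configuration {
    security_groups = [data.aws_security_group.target_sg.id]
    subnets         = var.subnet_ids # placeholder variable
  }
}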

How to install AGIC in Kubernetes cluster using Terraform

I am trying to install AGIC in AKS using Terraform. I am following this document https://learn.microsoft.com/en-us/azure/terraform/terraform-create-k8s-cluster-with-aks-applicationgateway-ingress, but that document only shows a partial Terraform deployment, and I want to fully automate it with Terraform. Is there any other document/way to do this?
Of course, you can use Terraform to deploy Helm charts to AKS. Here is an example of deploying a Helm chart through Terraform:
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "example" {
name = "my-redis-release"
repository = data.helm_repository.stable.metadata[0].name
chart = "redis"
version = "6.0.1"
values = [
"${file("values.yaml")}"
]
set {
name = "cluster.enabled"
value = "true"
}
set {
name = "metrics.enabled"
value = "true"
}
set_string {
name = "service.annotations.prometheus\\.io/port"
value = "9127"
}
}
And you can also configure the Helm provider with the AKS cluster's credentials so that the charts are deployed into your cluster through Terraform; take a look at the document here.
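A rough sketch of what that could look like for AGIC specifically is below. It assumes existing azurerm_kubernetes_cluster.example, azurerm_resource_group.example, and azurerm_application_gateway.example resources (those names are placeholders), and the chart name and repository URL are the ones published in the AGIC docs, so verify them against the current documentation:

data "azurerm_client_config" "current" {}

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
  }
}

resource "helm_release" "agic" {
  name       = "ingress-azure"
  repository = "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/"
  chart      = "ingress-azure"

  # Tell AGIC which Application Gateway to manage.
  set {
    name  = "appgw.subscriptionId"
    value = data.azurerm_client_config.current.subscription_id
  }
  set {
    name  = "appgw.resourceGroup"
    value = azurerm_resource_group.example.name
  }
  set {
    name  = "appgw.name"
    value = azurerm_application_gateway.example.name
  }

  # armAuth.* / rbac.* values also need to be set according to the identity
  # setup you use (pod identity, workload identity, or a service principal);
  # see the AGIC Helm chart documentation for the exact keys.
}

On recent azurerm provider versions there is also the option of enabling the AGIC add-on directly on the cluster via the ingress_application_gateway block of azurerm_kubernetes_cluster, which avoids managing the Helm release yourself.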

How to integrate PostgreSQL with Corda 3.0 or with Corda 4.0?

I tried to configure PostgreSQL as the node's database in Corda 3.0 and in Corda 4.0.
I have added the following to the build.gradle file. (Testdb1 is the database name; I have also tried with postgres.)
node {
  ...
  // this part I have added
  extraConfig = [
    jarDirs: ['path'],
    'dataSourceProperties': [
      'dataSourceClassName': 'org.postgresql.ds.PGSimpleDataSource',
      '"dataSource.url"'     : 'jdbc:postgresql://127.0.0.1:5432/Testdb1',
      '"dataSource.user"'    : 'postgres',
      '"dataSource.password"': 'admin#123'
    ],
    'database': [
      'transactionIsolationLevel': 'READ_COMMITTED'
    ]
  ]
  // till here
}
And this part in the reference.conf file:
dataSourceProperties = {
  dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
  dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
  dataSource.user = postgres
  dataSource.password = "admin#123"
}
database = {
  transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
I got the following error while deploying the nodes:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':java-source:deployNodes'.
The node-info-gen.log file showed a CAPSULE EXCEPTION. I then updated my JDK to 8u191 but still got the same error.
I have gone through the following to get things done; one can use these as references:
https://docs.corda.net/node-database.html
https://github.com/corda/corda/issues/4037
How can the Corda node be extended to work with databases other than H2?
You need to add those properties to the node.conf of each of your Corda nodes after you run "deployNodes".
Once you have added these properties to the node.conf file, just run the Corda jar and it will start automatically. But before that you need to create the tables (the migration to other DBs is already covered in the Corda documentation).
I have added the following to the .conf file of each node and to one reference.conf file. I have given the user postgres all the privileges that are mentioned in the Corda documentation:
https://docs.corda.r3.com/node-database.html
(Previously I used the postgresql-42.2.5.jar file, but that didn't work, so I used a downgraded version of it, postgresql-42.1.4.jar.
The jar files can be downloaded from https://jdbc.postgresql.org/download.html.)
After the nodes are deployed successfully, add the following:
dataSourceProperties = {
  dataSourceClassName = org.postgresql.ds.PGSimpleDataSource
  dataSource.url = "jdbc:postgresql://127.0.0.1:5432/Testdb1"
  dataSource.user = postgres
  dataSource.password = "admin#123"
}
database = {
  transactionIsolationLevel = "READ_COMMITTED"
}
jarDirs = ["path"]
(path = the jar file's location). After adding this configuration, run the file called runnodes.bat.

Terraform: How to create a Kubernetes cluster on Google Cloud (GKE) with namespaces?

I'm after an example that would do the following:
Create a Kubernetes cluster on GKE via Terraform's google_container_cluster
... and continue creating namespaces in it, I suppose via kubernetes_namespace
The thing I'm not sure about is how to connect the newly created cluster and the namespace definition. For example, when adding google_container_node_pool, I can do something like cluster = "${google_container_cluster.hosting.name}" but I don't see anything similar for kubernetes_namespace.
In theory it is possible to reference resources from the GCP provider in the K8S (or any other) provider in the same way you'd reference resources or data sources within the context of a single provider.
provider "google" {
region = "us-west1"
}
data "google_compute_zones" "available" {}
resource "google_container_cluster" "primary" {
name = "the-only-marcellus-wallace"
zone = "${data.google_compute_zones.available.names[0]}"
initial_node_count = 3
additional_zones = [
"${data.google_compute_zones.available.names[1]}"
]
master_auth {
username = "mr.yoda"
password = "adoy.rm"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
provider "kubernetes" {
host = "https://${google_container_cluster.primary.endpoint}"
username = "${google_container_cluster.primary.master_auth.0.username}"
password = "${google_container_cluster.primary.master_auth.0.password}"
client_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
However, in practice it may not work as expected due to a known core bug breaking cross-provider dependencies; see https://github.com/hashicorp/terraform/issues/12393 and https://github.com/hashicorp/terraform/issues/4149.
The alternative solutions would be:
1. Use a 2-staged apply and target the GKE cluster first, then anything else that depends on it, i.e. terraform apply -target=google_container_cluster.primary and then terraform apply.
2. Separate the GKE cluster config from the K8S configs, give them completely isolated workflows, and connect them via remote state.
/terraform-gke/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name               = "the-only-marcellus-wallace"
  zone               = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}"
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

output "gke_host" {
  value = "https://${google_container_cluster.primary.endpoint}"
}

output "gke_username" {
  value = "${google_container_cluster.primary.master_auth.0.username}"
}

output "gke_password" {
  value = "${google_container_cluster.primary.master_auth.0.password}"
}

output "gke_client_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}

output "gke_client_key" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
}

output "gke_cluster_ca_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}
Here we're exposing all the necessary configuration via outputs and using a backend to store the state, along with these outputs, in a remote location (GCS in this case). This enables us to reference it in the config below.
/terraform-k8s/main.tf
data "terraform_remote_state" "foo" {
backend = "gcs"
config {
bucket = "tf-state-prod"
prefix = "terraform/state"
}
}
provider "kubernetes" {
host = "https://${data.terraform_remote_state.foo.gke_host}"
username = "${data.terraform_remote_state.foo.gke_username}"
password = "${data.terraform_remote_state.foo.gke_password}"
client_certificate = "${base64decode(data.terraform_remote_state.foo.gke_client_certificate)}"
client_key = "${base64decode(data.terraform_remote_state.foo.gke_client_key)}"
cluster_ca_certificate = "${base64decode(data.terraform_remote_state.foo.gke_cluster_ca_certificate)}"
}
resource "kubernetes_namespace" "n" {
metadata {
name = "blablah"
}
}
What may or may not be obvious here is that the cluster has to be created/updated before creating/updating any K8S resources (if such an update relies on updates to the cluster).
Taking the 2nd approach is generally advisable either way (even if the bug were not a factor and cross-provider references worked), as it reduces the blast radius and defines much clearer responsibilities. It's (IMO) common for such deployments to have one person/team responsible for managing the cluster and a different one for managing K8S resources.
There may certainly be overlaps though - e.g. ops wanting to deploy logging & monitoring infrastructure on top of a fresh GKE cluster - so cross-provider dependencies aim to satisfy such use cases. For that reason I'd recommend subscribing to the GH issues mentioned above.