Red Hat OpenShift Service on AWS (ROSA) cluster creation using Terraform

I need to create a Red Hat OpenShift Service on AWS (ROSA) cluster using Terraform. Can someone point me to sample scripts or modules, if any are available?

One option is to install the rosa CLI and use the local-exec provisioner to run the CLI installer.
resource "null_resource" "rosa_provisioner" {
provisioner "local-exec" {
command = rosa create cluster $ARGS_LIST
}
environment = {
ARGS_LIST = "--cluster-name=cluster --sts --mode=auto"
}
}
If you need to accept variable argument inputs it gets harder because of how the rosa CLI is structured; happy to share a couple of examples if you need them.
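For example, a hypothetical cluster_name variable could be wired into the argument list like this (a sketch only; adjust the arguments for your cluster):
variable "cluster_name" {
  type    = string
  default = "my-cluster"
}

resource "null_resource" "rosa_provisioner" {
  provisioner "local-exec" {
    # Terraform interpolates ${var.cluster_name}; the shell expands $ARGS_LIST at run time
    command = "rosa create cluster $ARGS_LIST"

    environment = {
      ARGS_LIST = "--cluster-name=${var.cluster_name} --sts --mode=auto"
    }
  }
}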

Related

Add existing GKE cluster to Terraform state file

Let's assume I have an existing GKE cluster that contains all my applications. They were all deployed using different methods. Now I want to deploy some resources to that cluster using Terraform. The trouble is that Terraform doesn't see the cluster in its state file, so it can't interact with it. Another problem is that even if I get that cluster into my state file, Terraform doesn't see all of the resources already created in that cluster, which could lead to conflicts, e.g. trying to deploy two resources with the same name. Is there a way to solve this, or do I just have to accept reality and create a new cluster for every new project that I deploy with Terraform?
You can use the terraform import command to import your existing GKE cluster into the Terraform state. Before running it, you need to have the corresponding Terraform configuration for your cluster.
Example of the import command:
terraform import google_container_cluster.<TF_RESOURCE_NAME> projects/<PROJECT_ID>/locations/<YOUR-CLUSTER-ZONE>/clusters/<CLUSTER_NAME>
For a Terraform configuration such as:
resource "google_container_cluster" "<TF_RESOURCE_NAME>" {
  name     = "<CLUSTER_NAME>"
  location = "<YOUR-CLUSTER-ZONE>"
}
The CLUSTER_NAME is the name displayed in your GKE clusters list in the Google Cloud Console.
Then you also need to import the cluster's node pool(s) in the same way, using the Terraform google_container_node_pool resource.
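The node pool import follows the same pattern, for example (resource and node pool names are illustrative; the exact import ID format is worth double-checking in the provider docs):
terraform import google_container_node_pool.<TF_RESOURCE_NAME> projects/<PROJECT_ID>/locations/<YOUR-CLUSTER-ZONE>/clusters/<CLUSTER_NAME>/nodePools/<NODE_POOL_NAME>
With a matching configuration:
resource "google_container_node_pool" "<TF_RESOURCE_NAME>" {
  name     = "<NODE_POOL_NAME>"
  location = "<YOUR-CLUSTER-ZONE>"
  cluster  = "<CLUSTER_NAME>"
}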

Editing Vault High Availability configuration via the Helm chart at installation

I am currently having issues updating the Vault server HA (high-availability) storage to use PostgreSQL upon Vault installation via Helm 3.
Things I have tried:
Setting the values needed for HA (high-availability) manually, using the --set= Helm flag, by running the following command:
helm install vault hashicorp/vault \
  --set='server.ha.enabled=true' \
  --set='server.ha.replicas=4' \
  --set='server.ha.raft.config= |
    ui = true
    listener "tcp" {
      address         = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "postgresql" {
      connection_url = "postgres://<pg_user>:<pg_pw>@<pg_host>:5432/<pg_db>"
    }
    service_registration "kubernetes" {}'
This would be great if it worked, but the storageconfig.hcl was not updated on installation.
I have tried creating a Helm override config file, replacing the storage section from raft to postgresql, as described here: Vault on Kubernetes Deployment Guide | Vault - HashiCorp Learn. (A sketch of this kind of override file is shown below.)
I have also tried editing the storageconfig.hcl directly in the running pod. I can delete the file, but I cannot use vim to edit it or replace it with a config from my machine, and in any case this seems like bad practice since it is not linked to the Helm installation.
Looking for general information about what I might be doing wrong, or maybe some other ideas of what I could try to get this working as intended.
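For reference, the override-file approach mentioned above might look roughly like this. It is only a sketch, assuming the hashicorp/vault chart's server.ha.config key for non-Raft storage backends, and has not been verified against the problem described:
# override-values.yaml (placeholders left as-is)
server:
  ha:
    enabled: true
    replicas: 4
    config: |
      ui = true

      listener "tcp" {
        address         = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "postgresql" {
        # ha_enabled is a Vault postgresql storage option; included here on the
        # assumption that HA coordination via PostgreSQL is wanted
        connection_url = "postgres://<pg_user>:<pg_pw>@<pg_host>:5432/<pg_db>"
        ha_enabled     = "true"
      }

      service_registration "kubernetes" {}
It would then be installed with: helm install vault hashicorp/vault -f override-values.yaml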

How can I deploy airflow on Kubernetes "out of the box"?

I am somewhat new to Kubernetes, and I am trying to learn about deploying airflow to Kubernetes.
My objective is to deploy an "out-of-the-box" (or at least close to that) Airflow deployment on Kubernetes. I have created the Kubernetes cluster via Terraform (on EKS) and would like to deploy Airflow to the cluster. I found that Helm can help me deploy Airflow more easily than other solutions.
Here is what I have tried so far (a snippet, not the complete code):
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
data "helm_repository" "airflow" {
name = "airflow"
url = "https://airflow-helm.github.io/charts"
}
resource "helm_release" "airflow" {
name = "airflow-helm"
repository = data.helm_repository.airflow.metadata[0].name
chart = "airflow-chart"
}
I am not necessarily fixed on using Terraform (I just thought it might be easier and wanted to keep state), so I am also happy to discover other solutions that will help me deploy Airflow with all the pods it needs.
You can install it using Helm from the official repository, but there is a lot of additional configuration to consider. The Airflow config is described in the chart's values.yaml. You can take a look at this article for an example configuration.
For installation using Terraform you can take a look at this article, where both the Terraform config and the Helm chart's values are described in detail.
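As an illustration only, a minimal helm_release for the official Apache Airflow chart (https://airflow.apache.org) might look like this; the release name, namespace and executor override are assumptions to adapt:
resource "helm_release" "airflow" {
  name             = "airflow"
  namespace        = "airflow"
  create_namespace = true

  repository = "https://airflow.apache.org"
  chart      = "airflow"

  # Chart defaults live in its values.yaml; override only what you need
  values = [
    yamlencode({
      executor = "KubernetesExecutor"
    })
  ]
}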

How to manage multiple kubernetes clusters via Terraform?

I want to create a secret in several k8s clusters in Google Kubernetes Engine using Terraform.
I know that I can use "host", "token" and some other parameters in the "kubernetes" provider, but I can describe these parameters only once, and I don't know how to connect to another cluster within the same Terraform file.
My question is how to create a secret (or perform other operations) in multiple k8s clusters via Terraform. Maybe you know some tools on GitHub or other tips for doing this with a single Terraform file?
You can use an alias for the provider in Terraform, as described in the documentation.
You can define multiple providers for multiple k8s clusters and then refer to them by alias.
e.g.
provider "kubernetes" {
config_context_auth_info = "ops1"
config_context_cluster = "mycluster1"
alias = "cluster1"
}
provider "kubernetes" {
config_context_auth_info = "ops2"
config_context_cluster = "mycluster2"
alias = "cluster2"
}
resource "kubernetes_secret" "example" {
...
provider = kubernetes.cluster1
}
If you're using terraform submodules, the setup is a bit more involved. See this terraform github issue comment.
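For example, a fuller sketch that writes the same (hypothetical) secret into both clusters could look like this:
resource "kubernetes_secret" "example_cluster1" {
  provider = kubernetes.cluster1

  metadata {
    name = "example-secret" # hypothetical name
  }

  data = {
    api_key = "changeme" # placeholder value
  }
}

resource "kubernetes_secret" "example_cluster2" {
  provider = kubernetes.cluster2

  metadata {
    name = "example-secret"
  }

  data = {
    api_key = "changeme"
  }
}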

Automate Retrieving and Storing the Kubeconfig File After Creating a Cluster with Terraform/GKE

When I use Terraform to create a cluster in GKE everything works fine and as expected.
After the cluster is created, I want to then use Terraform to deploy a workload.
My issue is how to point at the correct cluster, and I'm not sure I understand the best way of achieving this.
I want to automate the retrieval of the cluster's kubeconfig file, which is generally stored at ~/.kube/config. This file is updated when users manually run the gcloud container clusters get-credentials command to authenticate to the correct cluster.
I am aware that if this file is stored on the host machine (the one Terraform is running on), it's possible to point at it to authenticate to the cluster like so:
provider "kubernetes" {
  # leave blank to pick up config from the local system's kubectl config
  config_path = "~/.kube/config"
}
However, running this command to generate the kubeconfig requires Cloud SDK to be installed on the same machine that Terraform is running on, and its manual execution doesn't exactly seem very elegant.
I am sure I must be missing something in how to achieve this.
Is there a better way to retrieve the kubeconfig file via Terraform from a cluster created by Terraform?
Actually, there is another way to access a freshly created GKE cluster.
data "google_client_config" "client" {}
provider "kubernetes" {
load_config_file = false
host = google_container_cluster.main.endpoint
cluster_ca_certificate = base64decode(google_container_cluster.main.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.client.access_token
}
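With that provider block in place, workloads can be declared against the new cluster in the same configuration, for example (a hypothetical namespace):
resource "kubernetes_namespace" "workloads" {
  metadata {
    name = "workloads"
  }

  # Ensure the cluster exists before Terraform talks to its API
  depends_on = [google_container_cluster.main]
}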
Basically, in one step create your cluster and export the kubeconfig file to S3, for example.
In another step, retrieve the file and move it to the default folder. Terraform should work following these steps; then you can apply your objects to the previously created cluster.
I am deploying using a GitLab CI pipeline: I have one repository with the code for the k8s cluster (infra) and another with the k8s objects, and the first pipeline triggers the second.
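A rough sketch of that two-stage pipeline (bucket name, cluster name and zone are placeholders; it assumes the Cloud SDK and AWS CLI are available in the job images):
# .gitlab-ci.yml in the infra repository
create-cluster:
  script:
    - terraform init && terraform apply -auto-approve
    - gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>
    - aws s3 cp ~/.kube/config s3://<BUCKET>/kubeconfig

# .gitlab-ci.yml in the k8s-objects repository (triggered by the first pipeline)
deploy-objects:
  script:
    - aws s3 cp s3://<BUCKET>/kubeconfig ~/.kube/config
    - terraform init && terraform apply -auto-approve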