I'm installing the Airflow Helm chart and I want to use Vault as the secrets backend, so that Airflow gets the database connection URI from Vault secrets. Has anyone succeeded in configuring it this way? Please help. I couldn't find how to pass the Vault namespace to Airflow, and I didn't understand the documentation.
This can be accomplished through a customized airflow.cfg configuration file. According to the documentation, for the HashiCorp Vault secrets backend you need to add the following to your config:
[secrets]
backend = airflow.providers.hashicorp.secrets.vault.VaultBackend
backend_kwargs = {"connections_path": "connections", "url": "http://127.0.0.1:8200", "mount_point": "airflow"}
Note that backend_kwargs is parsed as a JSON dictionary; keeping it on a single line avoids configparser line-continuation pitfalls.
You can read how to set these parameters here in the docs.
Since you are installing via the Helm chart, you need a way to inject custom configuration into the Airflow installation. The Airflow Helm chart provides for this in its values.yaml file, which you can see here.
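For example, assuming you use the official Apache Airflow chart, which exposes a top-level config section in values.yaml that is rendered into airflow.cfg, the secrets backend could be wired up roughly as below. The Vault URL is a placeholder, and the namespace remark is an assumption to verify against your provider version:

config:
  secrets:
    backend: airflow.providers.hashicorp.secrets.vault.VaultBackend
    # backend_kwargs must be a JSON string. The URL below is a placeholder;
    # if you are on Vault Enterprise, the provider's client options may also
    # accept a "namespace" key (verify for your provider version).
    backend_kwargs: '{"connections_path": "connections", "mount_point": "airflow", "url": "http://vault.example.svc:8200"}'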
I hope this is helpful.
I installed a HashiCorp Vault server via Helm with my custom values.yaml file (I used this as a reference: https://developer.hashicorp.com/vault/docs/platform/k8s/helm/configuration).
I know I can enable different secrets engines after I initialize and unseal Vault (via the UI, CLI or API).
However, I am wondering whether it is possible to enable secrets engines via values.yaml before initializing and unsealing Vault, i.e., when I open the UI after initializing and unsealing Vault, I would like to see these engines already enabled and listed among the secrets engines (without enabling them manually).
I searched online for a way to do this but my efforts were in vain. I would really appreciate any answer on this subject.
Thanks in advance!
I'm trying to connect a KSQLDB Helm chart, cp-ksqldb-server, to an SSL-secured Kafka broker. The chart I used for the broker is bitnami/kafka.
I used this script to create keystore and truststore JKS files, created a secret from these files, and passed the secret to the auth.tls.existingSecrets parameter in the bitnami/kafka Helm chart, which is working fine. I followed this doc for the whole process.
Now I'm trying to configure the cp-ksqldb-server Helm chart to connect to the broker so that the connection is encrypted with SSL. I'm using SASL_SSL to connect to the broker. Per the KSQLDB docs, I have to pass configuration values like this:
security.protocol=SASL_SSL
ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
ssl.truststore.password=<password>
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=\
    org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="<user>" \
    password="<password>";
I have to pass these values in the configurationOverrides parameter.
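For reference, a rough, untested sketch of what that could look like in the chart's values.yaml, based on the cp-helm-charts convention of turning configurationOverrides entries into container environment variables (the user and password placeholders are illustrative and should not be committed in plain text):

configurationOverrides:
  "security.protocol": SASL_SSL
  "ssl.truststore.location": /etc/kafka/secrets/kafka.client.truststore.jks
  "ssl.truststore.password": "<password>"
  "sasl.mechanism": SCRAM-SHA-512
  "sasl.jaas.config": 'org.apache.kafka.common.security.scram.ScramLoginModule required username="<user>" password="<password>";'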
My questions (for the KSQLDB chart) are:
How do I pass the ssl.truststore.location value, given that my truststore file is on my local machine and what I'm trying to run is a KSQLDB Helm chart in a Kubernetes cluster?
Is there any way to pass secret values, as with bitnami/kafka?
Is there any way to pass the truststore file by volume binding in this chart?
Thanks!
The linked chart does not support custom volume mounts for external files, which is what you'd need here; the file itself would be loaded into the cluster with kubectl create secret ... --from-file.
Ref.
https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-ksql-server/templates/deployment.yaml#L72-L76
https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-ksql-server/templates/deployment.yaml#L105
These charts are no longer maintained, so you'd be better off cloning and editing the chart to your needs, anyway.
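If you do clone the chart, a minimal sketch of the kind of edit needed in templates/deployment.yaml is below. The secret name ksqldb-truststore is hypothetical (created beforehand with kubectl create secret ... --from-file), and the mount path matches ssl.truststore.location above:

# Hypothetical additions to the cloned chart's pod spec
spec:
  containers:
    - name: cp-ksql-server
      volumeMounts:
        - name: truststore
          mountPath: /etc/kafka/secrets   # where ssl.truststore.location points
          readOnly: true
  volumes:
    - name: truststore
      secret:
        secretName: ksqldb-truststore     # hypothetical, created via --from-file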
I am trying to enable SSO on ArgoCD.
I am trying the OIDC way.
I was able to make things work by editing the ConfigMap and the Secret.
But if we run helm upgrade again, there is a chance that we might lose that configuration and have to modify the Secret and ConfigMap again.
So we want to follow the GitOps pattern of passing oidc.config during the Helm upgrade.
I tried a few ways of setting it. Currently I have:
--set-file argo-cd.server.config.oidc.config="$(ARGO_CD_SSO_CONFIG_FILE)"
If I set it this way and run helm template before deploying, we get the below pattern:
oidc:
  config: |
But Argo CD expects the key to be the literal "oidc.config" (a single key containing a dot), not a nested oidc: block.
Can anyone help me with this?
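One GitOps-friendly sketch, assuming the chart is consumed as an umbrella dependency named argo-cd and a chart version where server.config is rendered into the argocd-cm ConfigMap: in a values file, YAML lets you quote the key so the dot stays literal, and on the CLI Helm accepts a backslash-escaped dot (oidc\.config) in --set/--set-file keys. The issuer and client ID below are placeholders:

argo-cd:
  server:
    config:
      # Quoting makes "oidc.config" a single literal key, not nested maps.
      "oidc.config": |
        name: SSO                         # placeholder
        issuer: https://sso.example.com   # placeholder
        clientID: argocd                  # placeholder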
I am currently having issues updating the Vault server HA (high-availability) storage to use PostgreSQL upon Vault installation via Helm 3.
Things I have tried:
Setting the values needed for HA (high-availability) manually, using the --set= Helm flag, by running the following command:
helm install vault hashicorp/vault \
  --set='server.ha.enabled=true' \
  --set='server.ha.replicas=4' \
  --set='server.ha.raft.config=
    ui = true
    listener "tcp" {
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "postgresql" {
      connection_url = "postgres://<pg_user>:<pg_pw>@<pg_host>:5432/<pg_db>"
    }
    service_registration "kubernetes" {}'
This would be great if it worked, but the storageconfig.hcl was not updated on installation.
I have tried creating a Helm override config file and replacing the storage section, changing it from raft to postgresql, as mentioned here: Vault on Kubernetes Deployment Guide | Vault - HashiCorp Learn.
I also tried editing the storageconfig.hcl directly in the running pod. I can delete the file, but I cannot use vim to edit or replace it with a config from my machine; besides, I think this is bad practice since it is not linked to the Helm installation.
Looking for general information about what I might be doing wrong, or maybe some other ideas of what I could try to get this working as intended.
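For what it's worth, here is a minimal override-file sketch, under the assumption that when raft is not enabled, the hashicorp/vault chart renders server.ha.config (rather than server.ha.raft.config) into the HA storage configuration; hostnames and credentials are placeholders, so verify the key names against your chart version:

# override-values.yaml (sketch; placeholders throughout)
server:
  ha:
    enabled: true
    replicas: 4
    # Rendered into the server's HCL config when HA is enabled without raft.
    config: |
      ui = true
      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "postgresql" {
        connection_url = "postgres://<pg_user>:<pg_pw>@<pg_host>:5432/<pg_db>"
      }
      service_registration "kubernetes" {}

You would then install with helm install vault hashicorp/vault -f override-values.yaml.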
I deployed Kong via Helm on my Kubernetes cluster, but I can't configure it as I want.
helm install stable/kong -f values.yaml
values.yaml:
{
"persistence.size":"1Gi",
"persistence.storageClass":"my-kong-storage"
}
Unfortunately, the created PersistentVolumeClaim stays at 8Gi instead of 1Gi. Even adding "persistence.enabled": false has no effect on the deployment, so I think all my configuration is bad.
What should be a good configuration file?
I am using a Rancher Kubernetes deployment on bare-metal servers.
I use Local Persistent Volumes (working well with a mongo-replicaset deployment).
What you are trying to do is configure a dependency chart (a.k.a. subchart), which is a little different from configuring a main chart when it comes to writing values.yaml. Here is how you can do it:
Since postgresql is a dependency chart of kong, you have to use the name of the dependency chart as a key and nest the options you need to modify under it, in the following form:
<dependency-chart-name>:
  <configuration-key-name>: <configuration-value>
Also note that the content of values.yaml does not need to be surrounded by curly braces, so you should remove them from the code you posted in the question.
For Rancher you have to write it as follows:
#values.yaml for rancher
postgresql.persistence.storageClass: "my-kong-storage"
postgresql.persistence.size: "1Gi"
By contrast, if you are using Helm itself with vanilla Kubernetes, you can write values.yaml as below:
#values.yaml for helm
postgresql:
  persistence:
    storageClass: "my-kong-storage"
    size: "1Gi"
More about Dealing with SubChart values
More about Postgresql chart configuration
Please tell us which cluster setup you are using: a cloud-managed service? A custom Kubernetes setup?
The problem you are facing is that there is a "minimum size" of storage to be provisioned. For example, in IBM Cloud it is 20 GB.
So even if 2 GB are requested in the PVC, you will end up with a 20 GB PV.
Please check the documentation of your NFS provisioner / storage class.