How can sslcert and sslkey be passed as environment variables in Kubernetes? - postgresql

I'm trying to make my app connect to my PostgreSQL instance through an encrypted and secure connection.
I've configured my server certificate and generated the client cert and key files.
The following command connects without problems:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=<instance_ip> \
port=5432 \
user=db dbname=dbname"
Unfortunately, I couldn't find a way to pass the client key as a value; I can only pass a file path. Even with libpq's default environment variables this is not possible: https://www.postgresql.org/docs/current/libpq-envars.html
Go's lib/pq driver follows the same specification, and there is no way to pass the cert and key contents directly: https://pkg.go.dev/github.com/lib/pq?tab=doc#hdr-Connection_String_Parameters.
I want to store the client cert and key in environment variables for security reasons; I don't want to store sensitive files in GitHub/GitLab.

Just set the values in your environment and you can read them in an init function.
func init() {
    someKey := os.Getenv("SOME_KEY")
    _ = someKey // use the value however your app needs it
}
When you want to set these with K8s, you would just do this in a YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
stringData:   # stringData accepts plain text; the data field would require base64-encoded values
  SOME_KEY: the-value-of-the-key
Then, to inject it into the container's environment:
envFrom:
  - secretRef:
      name: my-secret
Now when your init function runs it will be able to see SOME_KEY.
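If you take this environment-variable route, lib/pq still needs file paths for sslrootcert/sslcert/sslkey, so one option is to write the injected PEM content out to temporary files at startup and point the connection string at those. Below is a minimal sketch assuming hypothetical variable names SSL_ROOT_CERT, SSL_CLIENT_CERT, SSL_CLIENT_KEY and DB_HOST injected from the Secret; adjust the names to whatever your Secret actually contains.
package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/lib/pq"
)

// writeTemp dumps the PEM content of an environment variable into a temp file
// and returns its path, since lib/pq only accepts file paths for sslrootcert,
// sslcert and sslkey. os.CreateTemp creates the file with mode 0600, which
// also keeps the client key from being group- or world-readable (lib/pq, like
// libpq, is picky about key file permissions).
func writeTemp(envVar, pattern string) (string, error) {
    f, err := os.CreateTemp("", pattern)
    if err != nil {
        return "", err
    }
    defer f.Close()
    if _, err := f.WriteString(os.Getenv(envVar)); err != nil {
        return "", err
    }
    return f.Name(), nil
}

func main() {
    // Error handling elided for brevity; these env var names are assumptions.
    rootCert, _ := writeTemp("SSL_ROOT_CERT", "server-ca-*.pem")
    clientCert, _ := writeTemp("SSL_CLIENT_CERT", "client-cert-*.pem")
    clientKey, _ := writeTemp("SSL_CLIENT_KEY", "client-key-*.pem")

    dsn := fmt.Sprintf(
        "host=%s port=5432 user=db dbname=dbname sslmode=verify-ca sslrootcert=%s sslcert=%s sslkey=%s",
        os.Getenv("DB_HOST"), rootCert, clientCert, clientKey,
    )
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }
    defer db.Close()
    // use db ...
}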
If you want to pass a secret as a file, you do something like this:
kubectl create secret generic my-secret-files --from-file=my-secret-file-1.stuff --from-file=my-secret-file-2.stuff
Then in your deployment:
volumes:
  - name: my-secret-files
    secret:
      secretName: my-secret-files
Also in your deployment, under your container:
volumeMounts:
  - name: my-secret-files
    mountPath: /config/
Now your init function would be able to see:
/config/my-secret-file-1.stuff
/config/my-secret-file-2.stuff
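With the files mounted, the Go side just references the mounted paths in the connection string. A minimal sketch, assuming the Secret holds keys named server-ca.pem, client-cert.pem and client-key.pem and a hypothetical DB_HOST variable; note that lib/pq may reject a client key that is group- or world-readable, so you may want to set defaultMode (e.g. 0400) on the secret volume.
package main

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/lib/pq"
)

func main() {
    // The file names below are assumptions; match them to the keys in your
    // Secret (e.g. /config/my-secret-file-1.stuff).
    dsn := fmt.Sprintf(
        "host=%s port=5432 user=db dbname=dbname sslmode=verify-ca "+
            "sslrootcert=/config/server-ca.pem sslcert=/config/client-cert.pem sslkey=/config/client-key.pem",
        os.Getenv("DB_HOST"),
    )
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }
    defer db.Close()
    // use db ...
}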

Related

Kubernetes - Create custom secret holding SSL certificates

I have a problem. In my Kubernetes cluster I am running a GitLab image for my own project. This image requires a .crt and a .key as certificates for HTTPS usage. I have set up an Ingress resource with a letsencrypt issuer, which successfully obtains the certificates. But to use them, they need to be named my.dns.com.crt and my.dns.com.key. So I manually ran the following 3 commands:
kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.crt}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.crt
kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.key}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.key
kubectl create secret generic gitlab-registry-certs \
--from-file=gitlab.project.com.crt=/mnt/data/project/gitlab/certs/tls.crt \
--from-file=gitlab.project.com.key=/mnt/data/project/gitlab/certs/tls.key \
--namespace project-utility
The first 2 commands write the decoded crt/key content to files, so that the third command can use those files to create a custom mapping to the specific DNS names. Then in the GitLab deployment I mount this gitlab-registry-certs like this:
volumeMounts:
  - mountPath: /etc/gitlab/ssl
    name: registry-certs
volumes:
  - name: registry-certs
    secret:
      secretName: gitlab-registry-certs
This all works, but I want this process to be automated, because I am using ArgoCD as my deployment tool. I thought about a Job, but a Job runs an Ubuntu image which is not allowed to make changes to the cluster, so I would need to call a bash script on the external host. How can I achieve this? I can only find things about Jobs which run an image, not how to execute host commands. If there is a way easier method to use the certificates that I am not seeing, please let me know, because I kinda feel weird about this way of using the certificates, but GitLab requires the naming convention of <DNS>.crt and <DNS>.key, so that's why I am doing the remapping.
So the question is how to automate this remapping process so that on cluster generation a job will be executed after obtaining the certificates but before the deployment gets created?
Why are you bothering with this complicated process of creating a new secret? Just rename them in your volumeMounts section by using a subPath:
containers:
  - ...
    volumeMounts:
      - name: registry-certs
        mountPath: /etc/gitlab/ssl/my.dns.com.crt
        subPath: tls.crt
      - name: registry-certs
        mountPath: /etc/gitlab/ssl/my.dns.com.key
        subPath: tls.key
volumes:
  - name: registry-certs
    secret:
      secretName: project-gitlab-tls
More info in the documentation.

How to add protocol prefix in Kubernetes ConfigMap

In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in one of my Deployment's configuration:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that intends to communicate with the database. Thus it reads the database's URL from the DB_ADDRESS environment variable. (ignore the default values, those are used only during development)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml I would need to concatenate the prefix protocol string with the variable. Any idea how to do it in yml or suggestion of some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.

How to create keycloak with operator and external database

I followed this, but it is not working.
I created custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the db. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using my value from the secret?
UPDATE
When I check the keycloak pod which was created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
So now I know why I cannot connect to the db: it uses a different DB_ADDR. How can I use the address my-app.postgres (a db in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; it will go like:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I have set up both Keycloak and Postgres in the same namespace, so it works like a charm.
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this is pointing to my internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoints:
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason why it fails is due to the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB Pod has a different label, the selector will not work.
I reported this issue to the community. If they reply to me, I will try to fix this bug by submitting a patch.
I was having this same issue, and after looking at #JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this, and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (like #Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently, it is not only being ignored, but also breaking the keycloak pod initialization process somehow.
After I applied these changes the issue was solved for me.
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external db.
Since Keycloak somehow internally translates the real external db address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the db certificate, it worked as advertised.

Load env variables into helm chart from ready made kubernetes secret

I am currently creating pods on AKS from a .NET Core project. The problem is that I have a secret generated from appsettings.json that I created previously in the pipeline. During the deployment phase I load this secret into a volume of the pod itself. What I want to achieve is to read the values from the Kubernetes secret and load them as env variables inside the Helm chart. Any help is appreciated. Thanks :)
Please see how you can use a secret as an environment variable.
As a single variable:
containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
Or the whole secret:
containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
      - secretRef:
          name: mysecret
Your secrets should not be in your appsettings.json because they will end up in your source control repository.
Reading secrets from k8s into a Helm chart is something you should never attempt to do.
Ideally your secrets sit in a secure secret store (a vault) that either has an API your k8s-hosted app(s) can call into, or has an integration with k8s which mounts your secrets as a volume in your pods (the volume being in-memory, read-only storage).
This way your secrets are only kept in the vault, which ensures the secrets are encrypted both at rest and in transit.
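To illustrate the volume-mount option: whichever vault integration you use, the application only ever sees a file inside the pod and reads it at startup. A minimal sketch in Go (the mount path /secrets/appsettings.json is a made-up example; a .NET app would do the equivalent with its configuration loader):
package main

import (
    "fmt"
    "os"
)

func main() {
    // Hypothetical path: wherever your vault integration (or a plain Secret
    // volume) mounts the file inside the pod.
    raw, err := os.ReadFile("/secrets/appsettings.json")
    if err != nil {
        panic(err)
    }
    fmt.Println(len(raw), "bytes of configuration loaded")
}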

How to set kubernetes secret key name when using --from-file other than filename?

Is there a way to set a kubernetes secret key name when using --from-file other than the filename?
I have a bunch of different configuration files that I use as secrets.json within my containers. However, to organize my files, none of them are named secrets.json on my host. For example secrets.dev.json or secrets.test.json. My apps only know to read in secrets.json.
When I create a secret with kubectl create secret generic my-app-secrets --from-file=secrets.dev.json, this results in the key name being secrets.dev.json and not secrets.json.
I'm mounting in my secret contents as a file (this is a carry-over from migrating from Docker swarm).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      volumes:
        - name: my-secret
          secret:
            secretName: my-app-secrets
      containers:
        - name: my-app
          volumeMounts:
            - name: my-secret
              mountPath: "/run/secrets/secrets.json"
              subPath: "secrets.json"
Because secrets.json doesn't exist as a key (the key took the filename, secrets.dev.json), it ends up getting turned into a directory instead. I end up getting this mount path: /run/secrets/secrets.json/secrets.dev.json.
I'd like to be able to set the key name to secrets.json instead of using the filename of secrets.dev.json.
You can specify the key name with --from-file=[key=]source:
kubectl create secret generic my-app-secrets --from-file=secrets.json=secrets.dev.json
Here, secrets.json is the key name and secrets.dev.json is the source file.