Kubernetes - Create custom secret holding SSL certificates

I have a problem. In my Kubernetes cluster I am running a GitLab image for my own project. This image requires a .crt and a .key file as certificates for HTTPS usage. I have set up an Ingress resource with a letsencrypt issuer, which successfully obtains the certificates. But to use them, they need to be named my.dns.com.crt and my.dns.com.key. So I manually ran the following 3 commands:
kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.crt}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.crt
kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.key}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.key
kubectl create secret generic gitlab-registry-certs \
--from-file=gitlab.project.com.crt=/mnt/data/project/gitlab/certs/tls.crt \
--from-file=gitlab.project.com.key=/mnt/data/project/gitlab/certs/tls.key \
--namespace project-utility
The first 2 commands write the decoded crt/key content to files, so that the third command can use those files to create a new secret whose keys follow the required DNS-based naming. Then in the GitLab deployment I mount this gitlab-registry-certs like this:
volumeMounts:
  - mountPath: /etc/gitlab/ssl
    name: registry-certs
volumes:
  - name: registry-certs
    secret:
      secretName: gitlab-registry-certs
This all works, but I want this process to be automated, because I am using ArgoCD as my deployment tool. I thought about a Job, but a Job runs an Ubuntu image which is not allowed to make changes to the cluster, so I would need to call a bash script on the external host. How can I achieve this? I can only find material about Jobs that run an image, not about how to execute host commands. If there is a far easier way to use the certificates that I am not seeing, please let me know, because I feel a bit weird about this way of using the certificates, but GitLab requires the naming convention of <DNS>.crt and <DNS>.key, so that's why I am doing the remapping.
So the question is: how can I automate this remapping process so that, on cluster creation, a job is executed after the certificates are obtained but before the deployment gets created?

Why are you bothering with this complicated process of creating a new secret? Just rename them in your volumeMounts section by using a subPath:
containers:
  - ...
    volumeMounts:
      - name: registry-certs
        mountPath: /etc/gitlab/ssl/my.dns.com.crt
        subPath: tls.crt
      - name: registry-certs
        mountPath: /etc/gitlab/ssl/my.dns.com.key
        subPath: tls.key
volumes:
  - name: registry-certs
    secret:
      secretName: project-gitlab-tls
More info in the documentation.
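If you would rather keep a single mount for the whole /etc/gitlab/ssl directory, a roughly equivalent sketch uses the secret volume's items field to rename the keys at mount time (secret and file names are taken from the question above; adjust as needed):
volumeMounts:
  - name: registry-certs
    mountPath: /etc/gitlab/ssl   # whole directory, keys renamed below
volumes:
  - name: registry-certs
    secret:
      secretName: project-gitlab-tls
      items:
        - key: tls.crt
          path: my.dns.com.crt   # appears as /etc/gitlab/ssl/my.dns.com.crt
        - key: tls.key
          path: my.dns.com.key   # appears as /etc/gitlab/ssl/my.dns.com.key
Note that mounting the secret volume over /etc/gitlab/ssl replaces that directory's contents with only the listed files.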

Related

How can sslcert and sslkey be passed as environment variables in Kubernetes?

I'm trying to make my app connect to my PostgreSQL instance through an encrypted and secure connection.
I've configured my server certificate and generated the client cert and key files.
The following command connects without problems:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=<instance_ip> \
port=5432 \
user=db dbname=dbname"
Unfortunately, I couldn't find a way to pass the client key as a value; I can only pass a file path. Even with psql's default environment variables this is not possible: https://www.postgresql.org/docs/current/libpq-envars.html
The Go lib/pq driver follows the same specification, and there is no way to pass the cert and key values either: https://pkg.go.dev/github.com/lib/pq?tab=doc#hdr-Connection_String_Parameters.
I want to store the client cert and key in environment variables for security reasons; I don't want to store sensitive files in GitHub/GitLab.
Just set the values in your environment and you can read them in an init function.
func init() {
    someKey := os.Getenv("SOME_KEY") // requires the "os" import
    _ = someKey                      // use the value as needed
}
When you want to set these with Kubernetes you would just do this in a YAML file.
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
stringData:   # plain-text values; use data: instead if you base64-encode the value yourself
  SOME_KEY: the-value-of-the-key
Then, to inject it into the environment:
envFrom:
  - secretRef:
      name: my-secret
Now when your init function runs it will be able to see SOME_KEY.
If you want to pass a secret as a file, do something like this.
kubectl create secret generic my-secret-files --from-file=my-secret-file-1.stuff --from-file=my-secret-file-2.stuff
Then in your deployment.
volumes:
  - name: my-secret-files
    secret:
      secretName: my-secret-files
Also in your deployment, under your container:
volumeMounts:
  - name: my-secret-files
    mountPath: /config/
Now your init function would be able to see:
/config/my-secret-file-1.stuff
/config/my-secret-file-2.stuff

Load env variables into Helm chart from a ready-made Kubernetes secret

I am currently creating pods on AKS from a .NET Core project. The problem is that I have a secret generated from appsettings.json that I created earlier in the pipeline. During the deployment phase I load this secret inside a volume of the pod itself. What I want to achieve is to read the values from the Kubernetes secret and load them as env variables inside the Helm chart. Any help is appreciated. Thanks :)
Please see how you can use a secret as an environment variable.
As a single variable
containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
Or the whole secret
containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
      - secretRef:
          name: mysecret
Your secrets should not be in your appsettings.json because they will end up in your source control repository.
Reading secrets from Kubernetes into a Helm chart is something you should never attempt to do.
Ideally your secrets sit in a secure secret store (a vault) that either has an API your Kubernetes-hosted app(s) can call into, or has an integration with Kubernetes that mounts your secrets as a volume in your pods (the volume is in-memory, read-only storage).
This way your secrets are only kept in the vault, which ensures they are encrypted both at rest and in transit.
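As an illustration of the second option, here is a minimal sketch assuming the HashiCorp Vault Agent injector is installed in the cluster; the role name, secret path, and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"                                          # placeholder Vault role
    vault.hashicorp.com/agent-inject-secret-appsettings: "secret/data/myapp"   # placeholder secret path
spec:
  containers:
    - name: myapp
      image: myregistry/myapp:latest   # placeholder image
The injector then renders the secret to /vault/secrets/appsettings inside the pod, backed by an in-memory volume, so nothing is written to disk or baked into the image.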

Kubernetes can not mount a volume to a folder

I am following these docs on how to set up a sidecar proxy to my Cloud SQL database. It refers to a manifest on GitHub that, since I find it all over the place in GitHub repos etc., seems to work for 'everyone', but I run into trouble. The proxy container cannot mount to /secrets/cloudsql, it seems, as it cannot start successfully. When I run kubectl logs [mypod] cloudsql-proxy:
invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
So the secret seems to be the problem.
Relevant part of the manifest:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=pqbq-224713:europe-west4:osm=tcp:5432",
            "-credential_file=/secrets/cloudsql/mysecret.json"]
  securityContext:
    runAsUser: 2
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
volumes:
  - name: cloudsql-instance-credential
    secret:
      secretName: mysecret
To test/debug the secret I mounted the volume in another container that does start, but there the path and file /secrets/cloudsql/mysecret.json do not exist either. However, when I mount the secret to an already EXISTING folder, I find in that folder not the mysecret.json file (as I expected...) but, in my case, the two secrets it contains, so I find /existingfolder/password and /existingfolder/username (apparently this is how it works!? When I cat these secrets they show the proper strings, so they seem fine).
So it looks like the path cannot be created by the system. Is this a permission issue? I tried simply mounting in the proxy container to the root ('/'), so no folder, but that gives an error saying it is not allowed. As the image gcr.io/cloudsql-docker/gce-proxy:1.11 is from Google and I cannot get it running, I cannot see what folders it has.
My questions:
Is the mountPath created from the manifest, or does it need to already exist in the container?
How can I get this working?
I solved it. I was using the same secret for the cloudsql-proxy as the one used by the app (env), but it needs to be a key you generate from a service account, turned into a secret of its own. Then it works. This tutorial helped me through.
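For reference, a minimal sketch of what that credentials secret could look like, so the key name matches the -credential_file path in the manifest above (the value is a placeholder for the base64-encoded service-account key JSON; kubectl create secret generic mysecret --from-file=mysecret.json=key.json produces the same result):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  mysecret.json: <base64-encoded service-account key JSON>   # placeholder value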

Mounting client.crt, client.key, ca.crt with a service-account or otherwise?

Has anyone used service accounts to mount SSL certificates to access the AWS cluster from within a running job before? How do we do this? I created the job, and this is from the output of the failing container, which is causing the Pod to be in an error state.
Error in configuration:
* unable to read client-cert /client.crt for test-user due to open /client.crt: no such file or directory
* unable to read client-key /client.key for test-user due to open /client.key: no such file or directory
* unable to read certificate-authority /ca.crt for test-cluster due to open /ca.crt: no such file or director
The solution is to create a Secret containing the certs, and then getting the job to reference it.
Step 1. Create secret:
kubectl create secret generic job-certs --from-file=client.crt --from-file=client.key --from-file=ca.crt
Step 2. Reference the secret in the job's manifest. You have to insert the volumes and volumeMounts into the job spec.
spec:
  volumes:
    - name: ssl
      secret:
        secretName: job-certs
  containers:
    - ...
      volumeMounts:
        - mountPath: "/etc/ssl"
          name: "ssl"

How to pass a configuration file through YAML on Kubernetes to create a new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. like using the ADD command in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the Docker ADD command). This has the advantage that it works in a way you are already familiar with, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod (see the sketch after this list).
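A rough sketch of that last option, with illustrative names throughout (stringData is used here so the API server does the base64 encoding for you):
apiVersion: v1
kind: Secret
metadata:
  name: nginx-config            # hypothetical secret name
stringData:
  default.conf: |
    # illustrative nginx site config
    server {
      listen 80;
      location / {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d   # the mounted secret key appears here as default.conf
  volumes:
    - name: config
      secret:
        secretName: nginx-config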
I believe you can also download the config during container initialization.
See the example below: you could download your config instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}