Load env variables into Helm chart from a ready-made Kubernetes secret

I am currently creating pods on AKS from a .NET Core project. The problem is that I have a secret, generated from appsettings.json, that I created earlier in the pipeline. During the deployment phase I load this secret into a volume of the pod itself. What I want to achieve is to read the values from the Kubernetes secret and load them as env variables inside the Helm chart. Any help is appreciated. Thanks :)

Here is how you can use a secret as an environment variable.
As a single variable:
containers:
- name: mycontainer
  image: redis
  env:
  - name: SECRET_USERNAME
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: username
Or the whole secret at once:
containers:
- name: test-container
  image: k8s.gcr.io/busybox
  command: [ "/bin/sh", "-c", "env" ]
  envFrom:
  - secretRef:
      name: mysecret
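If you are rendering this from a Helm chart, the same env block goes into the deployment template, with the secret name pulled from your values. A minimal sketch, assuming a values.yaml entry named existingSecret (both names are hypothetical, not from the question):

# values.yaml
existingSecret: mysecret

# templates/deployment.yaml (fragment)
env:
- name: SECRET_USERNAME
  valueFrom:
    secretKeyRef:
      name: {{ .Values.existingSecret }}
      key: username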

Your secrets should not be in your appsettings.json, because they will end up in your source control repository.
Reading secrets from Kubernetes into a Helm chart is something you should never attempt to do.
Ideally your secrets sit in a secure secret store (a vault) that either has an API your Kubernetes-hosted app(s) can call into, or has an integration with Kubernetes that mounts your secrets as a volume in your pods (the volume being in-memory, read-only storage).
This way your secrets are kept only in the vault, which ensures they are encrypted both at rest and in transit.
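On AKS specifically, that second option usually means the Secrets Store CSI driver with the Azure Key Vault provider. A rough sketch of the wiring, assuming a Key Vault named my-vault holding a secret db-password (vault name, secret name, and tenant ID are all placeholders):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: my-vault       # placeholder vault name
    objects: |
      array:
        - |
          objectName: db-password
          objectType: secret
    tenantId: <your-tenant-id>

The pod then mounts it read-only through the CSI driver:

volumes:
- name: secrets-store
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: azure-kv-secrets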

Related

Update config map object from service app

I have my microservice and a ConfigMap deployed in the cluster.
I need to update the value of a variable defined in the ConfigMap object on the fly. I can restart the microservice programmatically (i.e. re-create the service pod). The main thing is to keep the ConfigMap object in the cluster and only update the value in it.
My current idea is to define the env variables in an external file:
key1: foo
key2: bar
and mount that file from the ConfigMap in the pod manifest:
spec:
  containers:
  - image: ...
    volumeMounts:
    - mountPath: /clusters-config
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: my-env-file
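For reference, the ConfigMap behind this mount might look like the sketch below; only the name my-env-file comes from the manifest above, and the file-key name clusters.env is made up:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-file
data:
  clusters.env: |
    key1: foo
    key2: bar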
If I use this approach, what are the main pitfalls/cons I should consider?
Is there a better option/solution that does not use a volume-mounted ConfigMap but keeps the variables inside the ConfigMap manifest?

Kubernetes - Create custom secret holding SSL certificates

I have a problem. In my Kubernetes cluster I am running a GitLab image for my own project. This image requires a .crt and a .key as certificates for HTTPS usage. I have set up an Ingress resource with a letsencrypt issuer, which successfully obtains the certificates. But to use them, they need to be named my.dns.com.crt and my.dns.com.key. So I manually ran the following 3 commands:
kubectl get secret project-gitlab-tls -n project-utility \
  -o jsonpath='{.data.tls\.crt}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.crt
kubectl get secret project-gitlab-tls -n project-utility \
  -o jsonpath='{.data.tls\.key}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.key
kubectl create secret generic gitlab-registry-certs \
  --from-file=gitlab.project.com.crt=/mnt/data/project/gitlab/certs/tls.crt \
  --from-file=gitlab.project.com.key=/mnt/data/project/gitlab/certs/tls.key \
  --namespace project-utility
The first 2 commands write the decoded crt/key content to files, so that the third command can use those files to create a custom mapping to the specific DNS names. Then in the GitLab deployment I mount this gitlab-registry-certs secret like this:
volumeMounts:
- mountPath: /etc/gitlab/ssl
  name: registry-certs
volumes:
- name: registry-certs
  secret:
    secretName: gitlab-registry-certs
This all works, but I want this process to be automated, because I am using ArgoCD as my deployment tool. I thought about a Job, but a Job runs an Ubuntu image that is not allowed to make changes to the cluster, so I would need to call a bash script on the external host. How can I achieve this? I can only find material about Jobs that run an image, not about executing host commands. If there is an easier way to use the certificates that I am not seeing, please let me know, because I feel a bit uneasy about this way of using them; GitLab simply requires the naming convention of <DNS>.crt and <DNS>.key, which is why I am doing the remapping.
So the question is: how do I automate this remapping process, so that on cluster generation a Job is executed after the certificates are obtained but before the deployment gets created?
Why are you bothering with this complicated process of creating a new secret? Just rename the files in your volumeMounts section by using a subPath:
containers:
- ...
  volumeMounts:
  - name: registry-certs
    mountPath: /etc/gitlab/ssl/my.dns.com.crt
    subPath: tls.crt
  - name: registry-certs
    mountPath: /etc/gitlab/ssl/my.dns.com.key
    subPath: tls.key
volumes:
- name: registry-certs
  secret:
    secretName: project-gitlab-tls
More info in the documentation.
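Alternatively, the renaming can be done in the volume definition itself: the items field of a secret volume maps a secret key to an arbitrary file path inside the mount. Using the same secret and file names as above:

volumes:
- name: registry-certs
  secret:
    secretName: project-gitlab-tls
    items:
    - key: tls.crt
      path: my.dns.com.crt
    - key: tls.key
      path: my.dns.com.key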

Is a secret mounted as a file editable from application code in a Kubernetes deployment?

I am mounting DB secrets as a file in my Kubernetes container. The DB secrets get updated after the password expiry time, and I am using a polling mechanism to check whether they have been reset to the updated value. Is it possible to change the mounted secret file?
Is a secret mounted as a file editable from application code in Kubernetes?
The file that gets loaded into the container is mounted read-only, so it can't be edited from inside the container. But the content can be changed either by updating the Secret itself, or by copying the file to a different location within the container and editing it there.
I'm not sure how you did it; posting the YAML of your pod configuration would help.
For example, if you use hostPath to mount a file inside the container, every time you change the source file you see the changes inside the container:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: busybox
    name: test-container
    command: ["/bin/sh", "-c", "sleep 36000"]
    volumeMounts:
    - mountPath: /etc/db_pass
      name: password-volume
  volumes:
  - name: password-volume
    hostPath:
      path: /var/lib/original_password
      type: File
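With a Secret-backed volume (instead of hostPath), the kubelet also refreshes the mounted files some time after the Secret object is updated, as long as the mount does not use subPath. A minimal sketch, with placeholder names (db-client, db-pass):

apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 36000"]
    volumeMounts:
    - mountPath: /etc/db_pass   # files here track updates to the Secret
      name: password-volume
      readOnly: true
  volumes:
  - name: password-volume
    secret:
      secretName: db-pass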

Appending key to CATALINA_OPTS using kubernetes

I have some CATALINA_OPTS properties (regarding database port, user and so on) set up in a ConfigMap. These values are then passed to the container via a Pod environment variable.
One of the CATALINA_OPTS properties is the database password, and it is required to move it from the ConfigMap to a Secret.
I can expose the key from the Secret through an environment variable:
apiVersion: v1
kind: Pod
...
containers:
- name: myContainer
  image: myImage
  env:
  - name: CATALINA_OPTS
    valueFrom:
      configMapKeyRef:
        name: catalina_opts
        key: CATALINA_OPTS
  - name: MY_ENV_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-pass
        key: my-pass
The thing is, I need to append this password to CATALINA_OPTS. I tried to do it in the Dockerfile:
RUN export CATALINA_OPTS="$CATALINA_OPTS -Dmy.password=$MY_ENV_PASSWORD"
However, MY_ENV_PASSWORD is not appended to the existing CATALINA_OPTS. When I list my environment variables (I'm checking the log in Jenkins) I cannot see the password.
Am I doing something wrong here? Is there any 'regular' way to do this?
Dockerfile RUN steps run as part of your image build and NOT during image execution. Hence, you cannot rely on RUN export (a build step) to set Kubernetes environment variables for your container (a run step).
Remove the RUN export from your Dockerfile and ensure you are setting CATALINA_OPTS in your catalina_opts ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalina_opts
data:
  SOME_ENV_VAR: INFO
  CATALINA_OPTS: opts... -Dmy.password=$MY_ENV_PASSWORD
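One caveat: Kubernetes itself does not expand shell-style $VAR references inside ConfigMap data; whether the reference above gets resolved depends on the container's startup shell. If you want Kubernetes to do the expansion, dependent environment variables with the $(VAR) syntax are an option. A sketch reusing the names from the question (CATALINA_OPTS_BASE is a made-up intermediate name, and a referenced variable must appear earlier in the env list):

env:
- name: CATALINA_OPTS_BASE
  valueFrom:
    configMapKeyRef:
      name: catalina_opts
      key: CATALINA_OPTS
- name: MY_ENV_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-pass
      key: my-pass
- name: CATALINA_OPTS
  value: "$(CATALINA_OPTS_BASE) -Dmy.password=$(MY_ENV_PASSWORD)"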

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a Docker container with an env variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable in my GKE setup; these are the steps I completed, thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on GitLab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json # default name created by the create secret from-file command
      path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Setup the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
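To verify that the variable is visible inside the running pod, you can exec into it (the pod name here is a placeholder):

kubectl exec my-service-pod -- printenv GOOGLE_APPLICATION_CREDENTIALS
# expected output: /etc/gcp/application-credentials.json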
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure that you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard, or navigate to APIs & Services from the menu.
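If you prefer the CLI, the same can be done with gcloud, assuming the target project is already set as the default:

gcloud services enable vision.googleapis.com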
Would it help to add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables from the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"