Vault secrets into Kubernetes Secrets or environment variables - kubernetes

I'm using an external Vault with Kubernetes and I want all my secrets to end up either in the pod environment or in Kubernetes Secrets.
I tried the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orgchart
  labels:
    app: orgchart
spec:
  selector:
    matchLabels:
      app: orgchart
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "devwebapp"
        vault.hashicorp.com/agent-inject-secret-config: "kv/secret/devwebapp/config"
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "kv/secret/devwebapp/config" -}}
            export user="{{ .Data.username }}"
            export pass="{{ .Data.password }}"
          {{- end }}
      labels:
        app: orgchart
    spec:
      serviceAccountName: devwebapp123
      containers:
        - name: orgchart
          image: jweissig/app:0.0.1
          args: ["sh", "-c", "source /vault/secrets/config"]
but when I exec into the pod and check its environment, there are no secrets in it:
kubectl exec -it orgchart-659b57dc47-2dwdf -c orgchart -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm
HOSTNAME=orgchart-659b57dc47-2dwdf
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
KUBERNETES_SERVICE_HOST=10.233.0.1
HOME=/root
The file at /vault/secrets/config does exist in the pod. So I have two questions: why is this not working, and is there any way to inject the values into Kubernetes Secrets instead?

You should use this syntax instead:
args: ["sh", "-c", "source /vault/secrets/config && <entry-point script>"]
to inject the environment variables into the application environment.
If I found the right Docker image, the entrypoint should be /app/web.
It may also be necessary to override the image's default entrypoint, because in the original manifest only args is set, so the shell command is handed to the image's own entrypoint instead of being run by sh -c. In a Kubernetes container spec the entrypoint is overridden with command:, as shown in the sketch below.
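Putting it together, here is a minimal sketch of the corrected container section, assuming (as above) that /app/web is the image's entrypoint; the exec is an extra touch so the application replaces the shell as PID 1:
containers:
  - name: orgchart
    image: jweissig/app:0.0.1
    # Override the image entrypoint, source the rendered secrets, then start the app
    command: ["sh", "-c"]
    args: ["source /vault/secrets/config && exec /app/web"]
Note that the exported variables only exist in the shell that launches /app/web (and in the app it execs), which is also why kubectl exec ... env, which starts a fresh process, will still not list them.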

Related

Command is executed before env mount

I'm very new to Kubernetes, so sorry if I'm not explaining my problem right.
I'm trying to spin up 3 replicas of a pod that run a php command. After a while the command should crash and restart.
The problem is that it starts with the local .env the first few times; after a few restarts the mounted .env is used. When it fails and restarts, it launches with the wrong local .env again.
I suspect the command is run before the mount. What should I try so that the mount is in place before my entrypoint command starts?
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: project
        app.kubernetes.io/instance: project-release
    spec:
      imagePullSecrets: {{ toYaml .Values.gitlab.secrets | nindent 8 }}
      containers:
        - name: project
          image: {{ .Values.gitlab.image }}
          imagePullPolicy: IfNotPresent
          command: [ "/bin/sh","-c" ]
          args: [ "bin/console php:command:name" ]
          volumeMounts:
            - name: env
              mountPath: /var/www/deploy/env
      volumes:
        - name: env
          secret:
            secretName: project-env

Pass unique configuration to a distroless container

I'm running a StatefulSet where each replica requires its own unique configuration. To achieve that I'm currently using two containers per Pod:
An initContainer prepares the configuration and stores it in a shared volume.
A main container consumes the configuration by reading the contents of the shared volume and passing them to the program as CLI flags.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  serviceName: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      initContainers:
        - name: generate-config
          image: myjqimage:latest
          command: [ "/bin/sh" ]
          args:
            - -c
            - |
              set -eu -o pipefail
              POD_INDEX="${HOSTNAME##*-}"
              # A configuration is stored as a JSON array in a Secret
              # E.g., [{"param1":"string1","param2":"string2"}]
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param1' > /config/param1
              echo "$MY_APP_CONFIG" | jq -rc --arg i "$POD_INDEX" '.[$i|tonumber-1].param2' > /config/param2
          env:
            - name: MY_APP_CONFIG
              valueFrom:
                secretKeyRef:
                  name: my-app
                  key: config
          volumeMounts:
            - name: configs
              mountPath: "/config"
      containers:
        - name: my-app
          image: myapp:latest
          command:
            - /bin/sh
          args:
            - -c
            - |
              /myapp --param1 $(cat /config/param1) --param2 $(cat /config/param2)
          volumeMounts:
            - name: configs
              mountPath: "/config"
      volumes:
        - name: configs
          emptyDir:
            medium: "Memory"
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app
  namespace: default
  labels:
    app.kubernetes.io/name: my-app
type: Opaque
data:
  config: W3sicGFyYW0xIjoic3RyaW5nMSIsInBhcmFtMiI6InN0cmluZzIifV0=
Now I want to switch to distroless for my main container. Distroless images contain only the dependencies required to run the program (glibc in my case) and have no shell, so where previously I could run cat to output the contents of a file, now I'm a bit stuck.
Instead of reading the values from files, I should pass the CLI flags as environment variables, something like this:
containers:
  - name: my-app
    image: myapp:latest
    command: ["/myapp", "--param1", "$(PARAM1)", "--param2", "$(PARAM2)"]
    env:
      - name: PARAM1
        value: somevalue1
      - name: PARAM2
        value: somevalue2
Again, each Pod in the StatefulSet should have a unique configuration, i.e. PARAM1 and PARAM2 should be unique across the Pods of the StatefulSet. How do I achieve that?
Options I considered:
Using Debug Containers, a new feature of K8s. Somehow use them to edit the configuration of a running container at runtime and inject the required variables. But the feature only became beta in 1.23, and I don't want to mutate my StatefulSet at runtime since I'm using a GitOps approach and store the configuration in Git; it would probably cause continuous configuration drift.
Using a Job to mutate the configuration at runtime. Again, this looks very ugly and violates the GitOps principle.
Using shareProcessNamespace. Unsure if it can help, but maybe I can somehow inject the environment variables from within the initContainer.
Limitations:
The application only supports configuration provided through CLI flags: no environment variables, no loading the config from a file.

Allowing K8S daemonset to exist in the global pid namespace

I'm trying to configure a DaemonSet to run in the host PID namespace so it can see the other processes on the host, including the containers' processes.
I couldn't find an option to achieve this.
In general, what I'm looking for is close to the shareProcessNamespace attribute for sidecar containers, only at the host level.
There is an attribute that allows this - hostPID: true
So the yaml file should look something like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: busybox
spec:
  selector:
    matchLabels:
      name: busybox
  template:
    metadata:
      labels:
        name: busybox
    spec:
      hostPID: true
      containers:
        - name: busybox
          image: busybox
          command: [ "sh", "-c", "sleep 1h" ]
More info in:
https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
https://medium.com/@chrispisano/limiting-pod-privileges-hostpid-57ce07b05896
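A quick way to confirm the DaemonSet really shares the host's PID namespace (the pod name below is illustrative):
kubectl exec -it busybox-abcde -- ps
With hostPID: true the output should include host processes and the processes of other containers on that node, not just the pod's own sleep command.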

k8s: configMap does not work in deployment

We ran into an issue recently when using environment variables inside a container.
OS: windows 10 pro
k8s cluster: minikube
k8s version: 1.18.3
1. The way that doesn't work, though it's the preferred way for us
Here is the deployment.yaml using 'envFrom':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: db
          image: "postgres:9.4"
          ports:
            - name: http
              containerPort: 5432
              protocol: TCP
          envFrom:
            - configMapRef:
                name: db-configmap
Here is the db.properties file:
POSTGRES_HOST_AUTH_METHOD=trust
step 1:
kubectl create configmap db-configmap ./db.properties
step 2:
kubectl apply -f ./deployment.yaml
step 3:
kubectl get pod
Running the above command gives the following result:
db-8d7f7bcb9-7l788 0/1 CrashLoopBackOff 1 9s
That indicates the environment variable POSTGRES_HOST_AUTH_METHOD is not injected.
2. The way that works (we can't work with this approach)
Here is the deployment.yaml using 'env':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: db
          image: "postgres:9.4"
          ports:
            - name: http
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
step 1:
kubectl apply -f ./deployment.yaml
step 2:
kubectl get pod
Running the above command gives the following result:
db-fc58f998d-nxgnn 1/1 Running 0 32s
The above indicates the environment variable is injected, so the db starts.
What did I do wrong in the first case?
Thank you in advance for the help.
Update:
Here is the ConfigMap:
kubectl describe configmap db-configmap
Name: db-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db.properties:
----
POSTGRES_HOST_AUTH_METHOD=trust
For creating the ConfigMap for use case 1, please use the command below:
kubectl create configmap db-configmap --from-env-file db.properties
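If you want to double-check the result before applying it, a client-side dry run should show each line of db.properties as its own key under data, which is exactly what envFrom needs:
kubectl create configmap db-configmap --from-env-file=db.properties --dry-run=client -o yaml
In particular, POSTGRES_HOST_AUTH_METHOD should appear as a key with the value trust.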
Are you missing the key? (see key: below, no quotes) And I think you need to provide the name of the environment variable; people usually reuse the key name, but you don't have to. I've used the same value (POSTGRES_HOST_AUTH_METHOD) below as both the environment variable name and the key name in the ConfigMap.
# start env .. where we add environment variables
env:
  # Define the environment variable
  - name: POSTGRES_HOST_AUTH_METHOD
    # value: "UseHardCodedValueToDebugSometimes"
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to the environment variable (above "name:")
        name: db-configmap
        # Specify the key associated with the value
        key: POSTGRES_HOST_AUTH_METHOD
My example (trying to use your values) comes from this generic example:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
pods/pod-single-configmap-env-variable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
PS
You can use describe to take a look at your ConfigMap, after you think :) you have set it up correctly.
kubectl describe configmap db-configmap --namespace=IfNotDefaultNameSpaceHere
See, when you do it the way you described:
deployment# exb db-7785cdd5d8-6cstw
root@db-7785cdd5d8-6cstw:/# env | grep -i TRUST
db.properties=POSTGRES_HOST_AUTH_METHOD=trust
The env var that gets set is not exactly POSTGRES_HOST_AUTH_METHOD; the filename actually ends up as the variable name.
Create the ConfigMap via
kubectl create cm db-configmap --from-env-file db.properties
and it will actually put the env var POSTGRES_HOST_AUTH_METHOD in the pod.
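To make the difference concrete, here are the two creation modes side by side, using the file from the question; the comments describe the data keys you should end up with:
# --from-file keeps the whole file: data gets a single key named "db.properties"
kubectl create configmap db-configmap --from-file=db.properties
# --from-env-file parses KEY=value lines: data gets the key POSTGRES_HOST_AUTH_METHOD
kubectl create configmap db-configmap --from-env-file=db.properties
Only the second form gives envFrom a key it can turn into the environment variable the postgres image expects.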

How to copy a local file into a helm deployment

I'm trying to deploy in Kubernetes several pods using a Mongo image with an initialization script in them. I'm using helm for the deployment. Since I'm starting from the official Mongo docker image, I'm trying to add a script at /docker-entrypoint-initdb.d so it will be executed right at the beginning to initialize some parameters of my Mongo.
What I don't know is how to insert my script, which is, let's say, on my local machine, into /docker-entrypoint-initdb.d using helm.
I'm trying to do something like docker run -v hostfile:mongofile, but I need the equivalent in helm, so this is done in all the pods of the deployment.
You can use a ConfigMap. Let's put an nginx configuration file into a container via a ConfigMap. We have a directory called nginx at the same level as values.yml; inside it is the actual configuration file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-file
  labels:
    app: ...
data:
  nginx.conf: |-
{{ .Files.Get "nginx/nginx.conf" | indent 4 }}
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: SomeDeployment
  ...
spec:
  replicas:
  selector:
    matchLabels:
      app: ...
      release: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-config-file
            items:
              - key: nginx.conf
                path: nginx.conf
      containers:
        - name: ...
          image: ...
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
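Applied to the Mongo question above, a minimal sketch of the same idea (the script name and chart path are illustrative): render the local script into a ConfigMap with .Files.Get and mount it at /docker-entrypoint-initdb.d.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init-script
data:
  init-mongo.js: |-
{{ .Files.Get "scripts/init-mongo.js" | indent 4 }}
---
# in the Deployment's pod template spec
spec:
  volumes:
    - name: mongo-init
      configMap:
        name: mongo-init-script
  containers:
    - name: mongo
      image: mongo
      volumeMounts:
        - name: mongo-init
          mountPath: /docker-entrypoint-initdb.d
Mounting the whole directory works here because the official Mongo image simply executes every .js and .sh file it finds in /docker-entrypoint-initdb.d on first initialization.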
You can also check the initContainers concept at this link:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/