Injecting environment variables to Postgres pod from Hashicorp Vault - postgresql

I'm trying to set the POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB environment variables in a Kubernetes Pod, running the official postgres docker image, with values injected from HashiCorp Vault.
The issue I experience is that the Postgres Pod will not start and provides no logs as to what might have caused it to stop.
I'm trying to source the injected secrets on startup using the args /bin/bash -c source /vault/secrets/backend. Nothing seems to happen once this command is reached. If I add an echo statement in front of the source, its output does show up in the kubectl logs.
Steps taken so far include removing the - args part of the configuration and setting the required POSTGRES_PASSWORD variable directly with a test value. When this is done the pod starts, and I can exec into it and verify that the secrets are indeed injected and that I'm able to source them. Running cat on the file gives me the following output:
export POSTGRES_PASSWORD="jiasjdi9u2easjdu##djasj#!-d2KDKf"
export POSTGRES_USER="postgres"
export POSTGRES_DB="postgres"
To me this indicates that the Vault injection is working as expected and that this part is configured according to my needs.
Edit: commands after the source command are indeed run; tested with an echo command.
My configuration is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-db
  namespace: planet9-demo
  labels:
    app: postgres-db
    environment: development
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres-db
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-backend: secret/data/backend
        vault.hashicorp.com/agent-inject-template-backend: |
          {{ with secret "secret/backend/database" -}}
          export POSTGRES_PASSWORD="{{ .Data.data.adminpassword}}"
          export POSTGRES_USER="{{ .Data.data.postgresadminuser}}"
          export POSTGRES_DB="{{ .Data.data.postgresdatabase}}"
          {{- end }}
        vault.hashicorp.com/role: postgresDB
      labels:
        app: postgres-db
        tier: backend
    spec:
      containers:
        - args:
            - /bin/bash
            - -c
            - source /vault/secrets/backend
          name: postgres-db
          image: postgres:latest
          resources:
            requests:
              cpu: 300m
              memory: 1Gi
            limits:
              cpu: 400m
              memory: 2Gi
          volumeMounts:
            - name: postgres-pvc
              mountPath: /mnt/data
              subPath: postgres-data/planet9-demo
          env:
            - name: PGDATA
              value: /mnt/data
      restartPolicy: Always
      serviceAccount: sa-postgres-db
      serviceAccountName: sa-postgres-db
      volumes:
        - name: postgres-pvc
          persistentVolumeClaim:
            claimName: postgres-pvc
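A note on the args shown above: args only replaces the image's CMD, so the postgres entrypoint ends up exec'ing /bin/bash -c "source /vault/secrets/backend", which finishes immediately and the container exits without ever starting Postgres; that would match the silent stop. A minimal sketch of chaining the two, assuming the official postgres image (whose entrypoint script docker-entrypoint.sh is on the PATH), could look like this:

          # Sketch only: source the Vault-rendered file in the same shell,
          # then hand off to the image's normal entrypoint so Postgres starts.
          command: ["/bin/bash", "-c"]
          args:
            - source /vault/secrets/backend && exec docker-entrypoint.sh postgres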

Related

container level securityContext fsGroup

I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as the user 'node' with uid 1000.
On my first try, I used hostPath as the storage backend. With this, I had to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change the host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
        - name: lh-directus-uploads-volume
          persistentVolumeClaim:
            claimName: lh-directus-uploads-pvc
        - name: lh-directus-dbdata-volume
          persistentVolumeClaim:
            claimName: lh-directus-dbdata-pvc
      containers:
        # Redis Cache
        - name: redis
          image: redis:6
        # Database
        - name: database
          image: postgres:12
          volumeMounts:
            - name: lh-directus-dbdata-volume
              mountPath: /var/lib/postgresql/data
        # Directus
        - name: directus
          image: directus/directus:latest
          securityContext:
            fsGroup: 1000
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I read about initContainers...
But please tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
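As the validation error hints, fsGroup is only a pod-level securityContext (PodSecurityContext) field, not a per-container one. A minimal sketch of moving it up one level, reusing the uid/gid 1000 from the question, would be:

    spec:
      # Sketch: fsGroup sits under spec.template.spec.securityContext,
      # so mounted volumes become group-owned by gid 1000 and group-writable.
      securityContext:
        fsGroup: 1000
      containers:
        - name: directus
          image: directus/directus:latest
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads

With fsGroup set at the pod level, Kubernetes adjusts the group ownership of the mounted volume at mount time (for volume types that support it, which CSI volumes such as Longhorn generally do), so no manual chmod on the host should be needed.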

Pod env var isn't available during /etc/init.d script

I have a solr container that needs to be started with a parameter, either master or slave. I'm trying to put the env var into the container so that the init script can read it, and start as a master or slave.
env:
  - name: ROLE
    value: "master"
The container starts up, and when I shell into the container, I can see the ROLE var has been set but it appears as if the init script didn't pick it up.
Does the env var get set before or after the init scripts run? Either way, how do I get this var injected so that the script has it available? I'd rather not create a ConfigMap or Secret just for this small var.
This container has been converted from EC2 to a container image by Migrate for Anthos and is running in an Anthos cluster.
Dockerfile example, reproducible:
# Please refer to the documentation:
# https://cloud.google.com/migrate/anthos/docs/dockerfile-reference
FROM anthos-migrate.gcr.io/v2k-run-embedded:v1.8.1 as migrate-for-anthos-runtime
# Image containing data captured from the source VM
FROM mi5key/testing:v0.0.4
COPY --chown=root:root env-vars /etc/rc.d/init.d/env-vars
RUN /sbin/chkconfig env-vars on
COPY --from=migrate-for-anthos-runtime / /
ADD blocklist.yaml /.m4a/blocklist.yaml
ADD logs.yaml /code/config/logs/logsArtifact.yaml
# Migrate for Anthos image includes entrypoint
ENTRYPOINT [ "/.v2k.go" ]
deployment_spec.yaml
# Stateless application specification
# The Deployment creates a single replicated Pod, indicated by the 'replicas' field
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: env-vars-test
migrate-for-anthos-optimization: "true"
migrate-for-anthos-version: v1.8.1
name: env-vars-test
spec:
replicas: 1
selector:
matchLabels:
app: env-vars-test
migrate-for-anthos-optimization: "true"
migrate-for-anthos-version: v1.8.1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: env-vars-test
migrate-for-anthos-optimization: "true"
migrate-for-anthos-version: v1.8.1
spec:
containers:
- image: docker.io/mi5key/testing:v0.0.4
imagePullPolicy: IfNotPresent
name: env-vars-test
readinessProbe:
exec:
command:
- /code/ready.sh
resources:
limits:
memory: "1Gi"
cpu: "1"
env:
- name: ROLE
value: "single"
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys/fs/cgroup
name: cgroups
volumes:
- hostPath:
path: /sys/fs/cgroup
type: Directory
name: cgroups
status: {}
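One detail worth noting: env vars from the pod spec are placed on the container's main process (PID 1), but a SysV init script launched by the in-container init system typically gets its own, sanitized environment and does not inherit them. One possible workaround, sketched under the assumption that the script runs as root (or the same user as PID 1), is to pull the value out of /proc/1/environ at the top of the init script:

# Sketch: read ROLE from PID 1's environment inside the init script.
# /proc/1/environ is NUL-separated, so translate the NULs to newlines first.
ROLE="$(tr '\0' '\n' < /proc/1/environ | sed -n 's/^ROLE=//p')"
export ROLE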

GCP Firestore: Server request fails with Missing or insufficient permissions from GKE

I am trying to connect to Firestore from code running in a GKE container. A simple REST GET API is working fine, but when I access Firestore for reads/writes, I get Missing or insufficient permissions.
An unhandled exception was thrown by the application.
Info
2021-06-06 21:21:20.283 EDT
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Missing or insufficient permissions.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1623028880.278990566","description":"Error received from peer ipv4:172.217.193.95:443","file":"/var/local/git/grpc/src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"Missing or insufficient permissions.","grpc_status":7}")
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass0_0`2.<<WithRetry>b__0>d.MoveNext()
Update: I am trying to provide a secret to the pod with service account credentials.
Here is the k8s file which deploys a pod to the cluster with no issues when no secrets are provided; I can do GET operations which don't hit Firestore, and they work fine.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      containers:
        - name: worldmanagement
          image: gcr.io/foodev/foo/master/worldmanagement.21
          resources:
            limits:
              memory: "500Mi"
              cpu: "300m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /api/worldManagement/policies
              port: 80
          ports:
            - name: worldmgmt
              containerPort: 80
Now, if I try to mount the secret, the pod never gets created fully, and it eventually fails.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: firestore-key
      containers:
        - name: worldmanagement
          image: gcr.io/foodev/foo/master/worldmanagement.21
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/key.json
          resources:
            limits:
              memory: "500Mi"
              cpu: "300m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /api/worldManagement/earth
              port: 80
          ports:
            - name: worldmgmt
              containerPort: 80
I tried to deploy the sample application and it works fine.
If I keep only the following in the yaml file, the container gets deployed properly:
- name: google-cloud-key
  secret:
    secretName: firestore-key
But once I add the following to the yaml, it fails:
volumeMounts:
  - name: google-cloud-key
    mountPath: /var/
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/key.json
And I can see in the GCP events that the container is not able to find the google-cloud-key. Any idea how to troubleshoot this issue, i.e. why I am not able to mount the secret? I can bash into the pod if needed.
I am using a multi-stage Dockerfile made of:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS runtime
Thanks
Looks like the key itself might not be correctly visible to the pod. I would start by getting into the pod with kubectl exec --stdin --tty <podname> -- /bin/bash and ensuring that /var/key.json (per your config) is accessible and has the correct credentials.
The following would be a good way to mount the secret:
volumeMounts:
  - name: google-cloud-key
    mountPath: /var/run/secret/cloud.google.com
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/run/secret/cloud.google.com/key.json
The above assumes your secret was created with a command like:
kubectl --namespace <namespace> create secret generic firestore-key --from-file key.json
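If the pod events say the google-cloud-key volume cannot be set up, it is also worth confirming that the firestore-key secret actually exists in the same namespace as the Deployment, for example:
kubectl --namespace <namespace> get secret firestore-key
kubectl --namespace <namespace> describe secret firestore-key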
Also, it is important to check your Workload Identity setup. The Workload Identity | Kubernetes Engine documentation has a good section on this.
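For completeness, if you use Workload Identity instead of a mounted key file, the pod's Kubernetes ServiceAccount gets annotated with the Google service account that has Firestore access. The names below are placeholders, not values from the question:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: worldmanagement-ksa      # placeholder KSA name
  annotations:
    iam.gke.io/gcp-service-account: GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

The Google service account also needs an IAM binding granting roles/iam.workloadIdentityUser to that Kubernetes ServiceAccount, as described in the linked documentation.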

kubernetes assign configmap as environment variables on deployment

I am trying to deploy my image to Azure Kubernetes Service. I use this command:
kubectl apply -f mydeployment.yml
And here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: mycr.azurecr.io/my-api
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: my-existing-config-map
I have the configmap my-existing-config-map created with a bunch of values in it, but the deployment doesn't add these values as environment variables.
The config map was created from a ".env" file this way:
kubectl create configmap my-existing-config-map --from-file=.env
What am I missing here?
If your .env file is in this format:
a=b
c=d
you need to use --from-env-file=.env instead.
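With the names from the question, that means recreating the ConfigMap (kubectl create will not overwrite an existing one), roughly:
kubectl delete configmap my-existing-config-map
kubectl create configmap my-existing-config-map --from-env-file=.env
kubectl get configmap my-existing-config-map -o yaml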
To be more explanatory: using --from-file=aa.xx creates a ConfigMap that looks like this:
aa.xx: |
  file content here....
  ....
  ....
When the ConfigMap is used with envFrom.configMapRef, it just creates one env variable, "aa.xx", containing the whole file content. In the case where the filename starts with '.', like .env, the env variable is not even created, because the name violates UNIX environment variable naming rules.
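By contrast, a ConfigMap created with --from-env-file=.env from the a=b / c=d example above carries one key per line, which is the shape envFrom expects; roughly:

data:
  a: b
  c: d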
As you are using a .env file, the format of the file is important.
Create a config.env file in the following format, which can include comments:
echo -e "var1=val1\n# this is a comment\n\nvar2=val2\n#anothercomment" > config.env
Create the ConfigMap:
kubectl create cm config --from-env-file=config.env
Use the ConfigMap in your pod definition file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      resources: {}
      envFrom:
        - configMapRef:
            name: config
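Once the pod is running, it is easy to confirm the variables landed, for example:
kubectl exec nginx -- env | grep var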

kubernetes timezone in POD with command and argument

I want to change the timezone with a command.
I know about applying a hostPath volume.
Do you know how to apply the command instead?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
It works well within the container.
But I don't know how to apply it with command and arguments in a YAML file.
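For reference, one way to wire that ln into a pod spec is through command/args. This is only a sketch: it assumes the image ships tzdata under /usr/share/zoneinfo and runs as a user allowed to replace /etc/localtime, and "your-original-entrypoint" is a placeholder for whatever the image would normally run, since overriding command replaces it:

      env:
        - name: TZ
          value: Europe/Prague
      command: ["/bin/sh", "-c"]
      args:
        - ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && exec your-original-entrypoint

The hostPath and TZ approaches in the answers below avoid the write-permission requirement entirely.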
You can change the timezone of your pod by using a hostPath volume that points at the specific timezone file you want. Your YAML file will look something like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
    - name: busybox
      image: busybox
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
  volumes:
    - name: tz-config
      hostPath:
        path: /usr/share/zoneinfo/Europe/Prague
        type: File
If you want it across all pods or deployments, you need to add the volume and volumeMounts to every deployment file and change the path value in the hostPath section to the timezone you want to set.
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: demo
          image: gcr.io/project/image:master
          imagePullPolicy: Always
          env:
            - name: TZ
              value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
In a Deployment, you can do it by creating a volumeMount at /etc/localtime and setting its value. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb
          ports:
            - containerPort: 3306
              name: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Europe/Madrid
In order to add "hostPath" in the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail with:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
Option 1: Add allowedHostPaths: {} next to volumes.
Option 2: Add a TZ environment variable, for example TZ=Asia/Jerusalem.
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash").
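For Option 1, allowedHostPaths is a PodSecurityPolicy field rather than something in the Deployment itself. A rough sketch (the policy name is a placeholder) that limits host mounts to the zoneinfo directory could look like:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-zoneinfo           # placeholder name
spec:
  # Only allow hostPath mounts from the timezone database, read-only.
  allowedHostPaths:
    - pathPrefix: /usr/share/zoneinfo
      readOnly: true
  volumes: ["hostPath", "configMap", "secret", "emptyDir", "persistentVolumeClaim"]
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny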
For me, setting up volumes and volumeMounts didn't help. Setting the TZ environment variable alone works in my case.