Kubernetes deployment file: inject environment variables with a pre-start script

I have an Elixir app that connects to Postgres through the Cloud SQL proxy.
Here is my deployment.yaml, which I deploy on Kubernetes and which works well;
the Postgres connection user name and password are picked up by the image from the environment variables in the YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: my-app
          image: my-image:1.0.1
          volumeMounts:
            - name: secrets-volume
              mountPath: /secrets
              readOnly: true
            - name: config-volume
              mountPath: /beamconfig
          ports:
            - containerPort: 80
          args:
            - foreground
          env:
            - name: POSTGRES_HOSTNAME
              value: localhost
            - name: POSTGRES_USERNAME
              value: postgres
            - name: POSTGRES_PASSWORD
              value: "123456"
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=my-project:region:my-postgres-instance=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
      # volumes
      volumes:
        - name: secrets-volume
          secret:
            secretName: gcloud-json
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
Now, due to security requirements, I'd like to store the sensitive environment variables encrypted and have a script decrypt them.
My YAML file would then look like this:
env:
  - name: POSTGRES_HOSTNAME
    value: localhost
  - name: ENCRYPTED_POSTGRES_USERNAME
    value: hgkdhrkhgrk
  - name: ENCRYPTED_POSTGRES_PASSWORD
    value: fkjeshfke
Then I would have a script that runs over all environment variables with the ENCRYPTED_ prefix, decrypts them, and injects the decrypted values under the same variable names without the ENCRYPTED_ prefix.
Is there a way to do that?
The environment variables should be injected before the image starts running.
Another requirement is that the pod running the image must be the one decrypting the variables, since it is the only one with permission to do so (it uses Workload Identity).
Something like:
- command:
    - sh
    - /decrypt_and_inject_environments.sh
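One pattern that fits these constraints (a sketch, not a verified setup) is to do the decryption inside the app container itself: wrap the original entrypoint in a small shell command that loops over every ENCRYPTED_* variable, decrypts it with whatever tool your Workload Identity setup authorizes (decrypt_value below is a placeholder, and /app/bin/my_app is an assumed release path), exports the result under the un-prefixed name, and then execs the real command:

containers:
  - name: my-app
    image: my-image:1.0.1
    command: ["/bin/sh", "-c"]
    args:
      - |
        # Sketch: decrypt ENCRYPTED_* variables in-process, then start the app.
        # decrypt_value is a placeholder for your real decryption call
        # (e.g. a KMS client authorized via Workload Identity).
        for name in $(env | grep '^ENCRYPTED_' | cut -d= -f1); do
          plain="$(decrypt_value "$(printenv "$name")")"
          export "${name#ENCRYPTED_}=$plain"
        done
        exec /app/bin/my_app foreground

Because the variables are exported only inside that shell before the exec, the decrypted values never appear in the Pod spec, and the decryption runs in the same pod that holds the Workload Identity permissions.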

Related

ConfigMap volume is not mounting as volume along with secret

I have references to both Secrets and ConfigMaps in my Deployment YAML file. The Secret volume is mounting as expected, but not the ConfigMap.
I created a ConfigMap with multiple files, and when I do a kubectl get configmap ... it shows all of the expected values. Also, when I mount just the ConfigMap, the volume mounts fine, but not when it is alongside a Secret.
I have tried different scenarios, from having both in the same directory to separating them, but nothing seems to work.
Here is my YAML. Am I referencing the ConfigMap correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1 # tells deployment to run 1 pod
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: XXX/yyy/image-50
          ports:
            - containerPort: 9009
              protocol: TCP
          volumeMounts:
            - mountPath: /shared
              name: configs-volume
              readOnly: true
          volumeMounts:
            - mountPath: data/secrets
              name: atp-secret
              readOnly: true
      volumes:
        - name: configs-volume
          configMap:
            name: prompts-config
            key: application.properties
      volumes:
        - name: atp-secret
          secret:
            defaultMode: 420
            secretName: dev-secret
      restartPolicy: Always
      imagePullSecrets:
        - name: ocirsecrets
You have two separate lists of volumes: and also two separate lists of volumeMounts:. When Kubernetes tries to find the list of volumes: in the Pod spec, it finds the last matching one in each set.
volumes:
  - name: configs-volume
volumes:              # this completely replaces the list of volumes
  - name: atp-secret
In both cases you need a single volumes: or volumeMounts: key, and then multiple list items underneath those keys.
volumes:
  - name: configs-volume
    configMap: { ... }
  - name: atp-secret      # do not repeat the volumes: key
    secret: { ... }
containers:
  - name: hello
    volumeMounts:
      - name: configs-volume
        mountPath: /shared
      - name: atp-secret  # do not repeat the volumeMounts: key
        mountPath: /data/secrets
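Applied to the Deployment in the question, the pod spec would then look roughly like this (a sketch using the names and paths from the question; note that a configMap volume has no top-level key: field, so the whole ConfigMap is mounted here):

      containers:
        - name: hello
          image: XXX/yyy/image-50
          ports:
            - containerPort: 9009
              protocol: TCP
          volumeMounts:                 # a single volumeMounts: key with two items
            - name: configs-volume
              mountPath: /shared
              readOnly: true
            - name: atp-secret
              mountPath: /data/secrets
              readOnly: true
      volumes:                          # a single volumes: key with two items
        - name: configs-volume
          configMap:
            name: prompts-config
        - name: atp-secret
          secret:
            defaultMode: 420
            secretName: dev-secret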

Deployment throwing error for init container only when I add a second regular container to my deployment

Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.
This requires two containers (one for SonarQube and the other for the Cloud SQL proxy in order to connect to the DB).
The SonarQube container, however, also requires an init container to apply some special memory requirements.
When I create the deployment with just the SonarQube image and the init container it works fine, but this won't be of any use as I need the Cloud SQL proxy container to connect to my external DB. When I add that container, though, the deployment errors with the following:
deirdrerodgers@cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid:spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder if the issue is that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources: {}
          command: ["sh",
                    "-e",
                    "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      volumes:
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
    spec:
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
                    "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
                    "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
Your YAML file is incorrect: you have two spec: blocks where there should be only one, so you need to combine them. Under the single spec: block there should be the initContainers block, then containers, and finally volumes. See the corrected YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources: {}
          command: ["sh",
                    "-e",
                    "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
                    "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
                    "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh

Grafana is generating links with Base URL : http://localhost:3000 instead of using my url

I deployed Grafana 7 with Kubernetes; here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      initContainers:
        - name: init-chown-data
          image: grafana/grafana:7.0.3
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          command: ["chown", "-R", "472:472", "/var/lib/grafana"]
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
      containers:
        - image: grafana/grafana:7.0.3
          name: grafana-core
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 472
          # env:
          envFrom:
            - secretRef:
                name: grafana-env
          env:
            # The following env variables set up basic auth with the default admin user and admin password.
            - name: GF_INSTALL_PLUGINS
              value: grafana-clock-panel,grafana-simple-json-datasource,camptocamp-prometheus-alertmanager-datasource
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_SECURITY_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-username
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-password
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
            initialDelaySeconds: 30
            timeoutSeconds: 1
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: grafana-persistent-storage
          persistentVolumeClaim:
            claimName: grafana-storage
        - name: grafana-datasources
          configMap:
            name: grafana-datasources
      nodeSelector:
        kops.k8s.io/instancegroup: monitoring-nodes
It is working well, but each time it generates a URL, it uses the base URL http://localhost:3000 instead of https://grafana.company.com.
Where can I configure that? I couldn't find an env var that handles it.
Configure the root_url option of the [server] section in your Grafana config file, or the env variable GF_SERVER_ROOT_URL, to https://grafana.company.com/.
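In the Deployment above that would mean adding one more entry to the container's env: list, for example (URL taken from the question):

env:
  - name: GF_SERVER_ROOT_URL
    value: "https://grafana.company.com/"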
I have found it can be done using the env variable inside the Grafana pod. This setup is a tricky one: misuse of the URL format in GF_SERVER_ROOT_URL may cause problems, e.g. your.url with no quotes, "your.url" without https:// or http://, and even "http://your.url" with no trailing /.
grafana:
  env:
    GF_SERVER_ROOT_URL: "http://your.url/"
  notifiers:
    notifiers.yaml:
      notifiers:
        - name: telegram
          type: telegram
          uid: telegram
          is_default: true
          settings:
            bottoken: "yourbottoken"
            chatid: "-yourchatid"
and then use uid: "telegram" in the provisioned dashboards

Kubernetes postStart seems to wreck shop for everything in deployment

We have the following deployment yaml:
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  namespace: {{DEP_ENVIRONMENT}}
  labels:
    app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
spec:
  replicas: {{NUM_REPLICAS}}
  selector:
    matchLabels:
      app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  template:
    metadata:
      labels:
        app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
    spec:
      # [START volumes]
      volumes:
        - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
          secret:
            secretName: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
      # [END volumes]
      containers:
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=<PROJECT_ID>:{{CLOUD_DB_CONN_INSTANCE}}=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
        - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
          image: {{IMAGE_NAME}}
          ports:
            - containerPort: 80
          env:
            - name: CLOUD_DB_HOST
              value: 127.0.0.1
            - name: DEV_CLOUD_DB_USER
              valueFrom:
                secretKeyRef:
                  name: {{CLOUD_DB_DB_CREDENTIALS}}
                  key: username
            - name: DEV_CLOUD_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{CLOUD_DB_DB_CREDENTIALS}}
                  key: password
          # [END cloudsql_secrets]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "supervisord"]
The last lifecycle block is new and is causing the database connection to be refused. This config works fine without the lifecycle block. I'm sure there is something stupid here that I am missing, but for the life of me I cannot figure out what it is.
Note: we are only trying to start Supervisor like this as a workaround for huge issues when attempting to start it normally.
Lifecycle hooks are intended to be short foreground commands. You cannot start a background daemon from them, that has to be the main command for the container.
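So instead of the postStart hook, Supervisor has to run in the foreground as the container's main process. A minimal sketch, assuming supervisord is on the image's PATH and its config lives at /etc/supervisor/supervisord.conf (both assumptions about the image):

        - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
          image: {{IMAGE_NAME}}
          # supervisord as the main process; -n (--nodaemon) keeps it in the
          # foreground so the container does not exit immediately
          command: ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]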

Kubernetes/Openshift: how to use envFile to read from a filesystem file

I've configured two containers in a pod.
The first of them creates a file like:
SPRING_DATASOURCE_USERNAME: username
SPRING_DATASOURCE_PASSWORD: password
I want the second container to read from this location in order to initialize its env variables.
I'm trying to use envFrom but I can't quite figure out how to use it.
This is my spec:
metadata:
  annotations:
    configmap.fabric8.io/update-on-change: ${project.artifactId}
  labels:
    name: wsec
  name: wsec
spec:
  replicas: 1
  selector:
    name: wsec
    version: ${project.version}
    provider: fabric8
  template:
    metadata:
      labels:
        name: wsec
        version: ${project.version}
        provider: fabric8
    spec:
      containers:
        - name: sidekick
          image: quay.io/ukhomeofficedigital/vault-sidekick:latest
          args:
            - -cn=secret:openshift/postgresql:env=USERNAME
          env:
            - name: VAULT_ADDR
              value: "https://vault.vault-sidekick.svc:8200"
            - name: VAULT_TOKEN
              value: "34f8e679-3fbd-77b4-5de9-68b99217cc02"
          volumeMounts:
            - name: sidekick-backend-volume
              mountPath: /etc/secrets
              readOnly: false
        - name: wsec
          image: ${docker.image}
          env:
            - name: SPRING_APPLICATION_JSON
              valueFrom:
                configMapKeyRef:
                  name: wsec-configmap
                  key: SPRING_APPLICATION_JSON
          envFrom:
            ???
      volumes:
        - name: sidekick-backend-volume
          emptyDir: {}
It looks like you are packaging the environment variables with the first container and reading them in the second container.
If that's the case, you can use initContainers.
Create a volume mapped to an emptyDir. Mount it on the init container (propertiesContainer) and the main container (springBootAppContainer) as a volumeMount. This directory is then visible to both containers.
image: properties/container/path
command: [bash, -c]
args: ["cp -r /location/in/properties/container /propsdir"]
volumeMounts:
  - name: propsDir
    mountPath: /propsdir
This will put the properties in /propsdir. When the main container starts, it can read the properties from /propsdir.
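Note that envFrom itself can only reference a ConfigMap or a Secret, not a file on disk, which is why the shared volume is needed; the main container then has to turn the file into environment variables itself, for example by sourcing it before starting the app. A minimal sketch, assuming the init container writes the file as KEY=value lines (the file name and the Java start command are placeholders):

      containers:
        - name: wsec
          image: ${docker.image}
          command: ["sh", "-c"]
          args:
            - |
              set -a                       # auto-export every variable assigned below
              . /propsdir/application.env  # hypothetical file produced by the init container
              set +a
              exec java -jar /app.jar      # placeholder start command
          volumeMounts:
            - name: propsDir
              mountPath: /propsdir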