Pod env var isn't available during /etc/init.d script - kubernetes

I have a Solr container that needs to be started with a parameter, either master or slave. I'm trying to put an env var into the container so that the init script can read it and start Solr as a master or a slave.
env:
- name: ROLE
  value: "master"
The container starts up, and when I shell into it I can see that the ROLE var has been set, but it appears the init script didn't pick it up.
Does the env var get set before or after the init scripts run? If it's set after, how do I get this var injected so that the script has it available? I'd rather not create a ConfigMap or Secret just for this one small var.
This container was converted from an EC2 instance to a container image by Migrate for Anthos and is running in an Anthos cluster.
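For reference, whether an /etc/init.d script sees pod-level env vars depends on how the init system is launched inside the container: the vars are attached to the entrypoint process (PID 1), and an init framework may start services with a scrubbed environment. A minimal, hypothetical sketch of how a script could fall back to reading PID 1's environment (this is not the actual env-vars script from the image; only the variable name ROLE is taken from above):
#!/bin/sh
# Hypothetical init-script snippet: if ROLE was not inherited by the init
# process, read it from the entrypoint's environment instead.
if [ -z "$ROLE" ]; then
    # /proc/1/environ holds NUL-separated KEY=value pairs for PID 1
    ROLE=$(tr '\0' '\n' < /proc/1/environ | sed -n 's/^ROLE=//p')
fi
echo "Starting solr as: ${ROLE:-master}"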
Dockerfile example, reproducible
# Please refer to the documentation:
# https://cloud.google.com/migrate/anthos/docs/dockerfile-reference
FROM anthos-migrate.gcr.io/v2k-run-embedded:v1.8.1 as migrate-for-anthos-runtime
# Image containing data captured from the source VM
FROM mi5key/testing:v0.0.4
COPY --chown=root:root env-vars /etc/rc.d/init.d/env-vars
RUN /sbin/chkconfig env-vars on
COPY --from=migrate-for-anthos-runtime / /
ADD blocklist.yaml /.m4a/blocklist.yaml
ADD logs.yaml /code/config/logs/logsArtifact.yaml
# Migrate for Anthos image includes entrypoint
ENTRYPOINT [ "/.v2k.go" ]
deployment_spec.yaml
# Stateless application specification
# The Deployment creates a single replicated Pod, indicated by the 'replicas' field
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: env-vars-test
    migrate-for-anthos-optimization: "true"
    migrate-for-anthos-version: v1.8.1
  name: env-vars-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-vars-test
      migrate-for-anthos-optimization: "true"
      migrate-for-anthos-version: v1.8.1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: env-vars-test
        migrate-for-anthos-optimization: "true"
        migrate-for-anthos-version: v1.8.1
    spec:
      containers:
      - image: docker.io/mi5key/testing:v0.0.4
        imagePullPolicy: IfNotPresent
        name: env-vars-test
        readinessProbe:
          exec:
            command:
            - /code/ready.sh
        resources:
          limits:
            memory: "1Gi"
            cpu: "1"
        env:
        - name: ROLE
          value: "single"
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /sys/fs/cgroup
          name: cgroups
      volumes:
      - hostPath:
          path: /sys/fs/cgroup
          type: Directory
        name: cgroups
status: {}

Related

container level securityContext fsGroup

I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try, I used hostPath as the storage backend. With this, I need to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change a host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
      - name: lh-directus-uploads-volume
        persistentVolumeClaim:
          claimName: lh-directus-uploads-pvc
      - name: lh-directus-dbdata-volume
        persistentVolumeClaim:
          claimName: lh-directus-dbdata-pvc
      containers:
      # Redis Cache
      - name: redis
        image: redis:6
      # Database
      - name: database
        image: postgres:12
        volumeMounts:
        - name: lh-directus-dbdata-volume
          mountPath: /var/lib/postgresql/data
      # Directus
      - name: directus
        image: directus/directus:latest
        securityContext:
          fsGroup: 1000
        volumeMounts:
        - name: lh-directus-uploads-volume
          mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I read about initContainers ....
But kindly tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
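For what it's worth, the validation error is pointing at the fact that fsGroup is a field of the pod-level securityContext (PodSecurityContext), not of the container-level SecurityContext. A minimal sketch of moving it up one level, reusing the names from the manifest above:
spec:
  securityContext:
    fsGroup: 1000   # pod-level: mounted volumes get this group ownership
  containers:
  - name: directus
    image: directus/directus:latest
    volumeMounts:
    - name: lh-directus-uploads-volume
      mountPath: /directus/uploads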

Injecting environment variables to Postgres pod from Hashicorp Vault

I'm trying to set the POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB environment variables in a Kubernetes Pod, running the official postgres docker image, with values injected from Hashicorp Vault.
The issue I experience is that the Postgres Pod will not start and provides no logs as to what might have caused it to stop.
I'm trying to source the injected secrets on startup using args: /bin/bash -c source /vault/secrets/backend. Nothing seems to happen once this command is reached. If I add an echo statement in front of source, it is displayed in the kubectl logs.
Steps taken so far include removing the args part of the configuration and setting the required POSTGRES_PASSWORD variable directly with a test value. When I do that, the pod starts and I can exec into it and verify that the secrets are indeed injected and that I'm able to source them. Running cat on the injected file gives me the following output:
export POSTGRES_PASSWORD="jiasjdi9u2easjdu##djasj#!-d2KDKf"
export POSTGRES_USER="postgres"
export POSTGRES_DB="postgres"
To me this indicates that the Vault injection is working as expected and that this part is configured according to my needs.
*edit: commands after the sourcing are indeed run. Tested with an echo command.
My configuration is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-db
  namespace: planet9-demo
  labels:
    app: postgres-db
    environment: development
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres-db
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-backend: secret/data/backend
        vault.hashicorp.com/agent-inject-template-backend: |
          {{ with secret "secret/backend/database" -}}
          export POSTGRES_PASSWORD="{{ .Data.data.adminpassword}}"
          export POSTGRES_USER="{{ .Data.data.postgresadminuser}}"
          export POSTGRES_DB="{{ .Data.data.postgresdatabase}}"
          {{- end }}
        vault.hashicorp.com/role: postgresDB
      labels:
        app: postgres-db
        tier: backend
    spec:
      containers:
      - args:
        - /bin/bash
        - -c
        - source /vault/secrets/backend
        name: postgres-db
        image: postgres:latest
        resources:
          requests:
            cpu: 300m
            memory: 1Gi
          limits:
            cpu: 400m
            memory: 2Gi
        volumeMounts:
        - name: postgres-pvc
          mountPath: /mnt/data
          subPath: postgres-data/planet9-demo
        env:
        - name: PGDATA
          value: /mnt/data
      restartPolicy: Always
      serviceAccount: sa-postgres-db
      serviceAccountName: sa-postgres-db
      volumes:
      - name: postgres-pvc
        persistentVolumeClaim:
          claimName: postgres-pvc
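One detail worth noting about the args above: they run only that shell, which exits after sourcing, so the exported variables never reach a running postgres server process. A rough sketch of sourcing the Vault file and then handing off to the stock entrypoint, assuming the official postgres image (whose entrypoint is docker-entrypoint.sh with default command postgres):
      containers:
      - name: postgres-db
        image: postgres:latest
        args:
        - /bin/bash
        - -c
        # source the injected file, then exec the image's normal entrypoint
        - source /vault/secrets/backend && exec docker-entrypoint.sh postgres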

GCP Firestore: Server request fails with Missing or insufficient permissions from GKE

I am trying to connect to Firestore from code running in a GKE container. A simple REST GET API works fine, but when I access Firestore for a read/write, I get Missing or insufficient permissions.
An unhandled exception was thrown by the application.
Info
2021-06-06 21:21:20.283 EDT
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Missing or insufficient permissions.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1623028880.278990566","description":"Error received from peer ipv4:172.217.193.95:443","file":"/var/local/git/grpc/src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"Missing or insufficient permissions.","grpc_status":7}")
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass0_0`2.<<WithRetry>b__0>d.MoveNext()
Update: I am trying to provide a secret with service account credentials to the pod.
Here is the k8s file which deploys a pod to the cluster with no issues when no secrets are provided; I can do GET operations which don't hit Firestore, and they work fine.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      containers:
      - name: worldmanagement
        image: gcr.io/foodev/foo/master/worldmanagement.21
        resources:
          limits:
            memory: "500Mi"
            cpu: "300m"
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /api/worldManagement/policies
            port: 80
        ports:
        - name: worldmgmt
          containerPort: 80
Now, if I try to mount the secret, the pod never gets created fully, and it eventually fails:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: firestore-key
      containers:
      - name: worldmanagement
        image: gcr.io/foodev/foo/master/worldmanagement.21
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/key.json
        resources:
          limits:
            memory: "500Mi"
            cpu: "300m"
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /api/worldManagement/earth
            port: 80
        ports:
        - name: worldmgmt
          containerPort: 80
I tried to deploy the sample application and it works fine.
If I keep only the following in the YAML file, the container gets deployed properly:
- name: google-cloud-key
  secret:
    secretName: firestore-key
But once I add the following to the YAML, it fails:
volumeMounts:
- name: google-cloud-key
  mountPath: /var/
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/key.json
And I can see in the GCP events that the container is not able to find the google-cloud-key. Any idea how to troubleshoot this issue, i.e. why I am not able to mount the secret? I can bash into the pod if needed.
I am using a multi-stage Dockerfile made of:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS runtime
Thanks
Looks like the key itself might not be correctly visible to the pod. I would start by getting into the pod with kubectl exec --stdin --tty <podname> -- /bin/bash and ensuring that /var/key.json (per your config) is accessible and has the correct credentials.
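A few more generic commands that often reveal why a mount fails (nothing cluster-specific is assumed here; <podname> is a placeholder):
kubectl describe pod <podname>              # FailedMount events usually name the missing secret
kubectl get events --sort-by=.lastTimestamp
kubectl get secret firestore-key -o yaml    # confirm the secret exists and contains key.json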
The following would be a good way to mount the secret:
volumeMounts:
- name: google-cloud-key
  mountPath: /var/run/secret/cloud.google.com
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/run/secret/cloud.google.com/key.json
The above assumes your secret was created with a command like:
kubectl --namespace <namespace> create secret generic firestore-key --from-file key.json
Also, it is important to check your Workload Identity setup. The Workload Identity page in the Kubernetes Engine documentation has a good section on this.
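If Workload Identity is enabled on the cluster, the usual wiring is an IAM binding plus an annotation on the Kubernetes ServiceAccount; a rough sketch with placeholder names (my-ksa, my-gsa, my-project, my-namespace):
# Allow the Kubernetes ServiceAccount to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"
# Annotate the Kubernetes ServiceAccount with the Google service account
kubectl annotate serviceaccount my-ksa \
  --namespace my-namespace \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com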

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      containers:
      - name: k8s-webapp-test
        image: dockerstore/k8s-webapp-test:1.0.4
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/secrets"
          readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the mount as readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which maps the secret and then copies it to the desired location where you might be able to modify it.
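A rough sketch of that init container approach, using the names from the Deployment above plus an illustrative emptyDir (busybox and the Linux-style paths are just for brevity; on the Windows nodes in this manifest a Windows-based image and copy command would be needed):
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: string-data-secret
  - name: writable-secrets
    emptyDir: {}          # shared writable scratch space
  initContainers:
  - name: copy-secrets
    image: busybox
    # copy the read-only secret files into the writable emptyDir
    command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
    volumeMounts:
    - name: secret-volume
      mountPath: /secrets-ro
      readOnly: true
    - name: writable-secrets
      mountPath: /secrets
  containers:
  - name: k8s-webapp-test
    image: dockerstore/k8s-webapp-test:1.0.4
    volumeMounts:
    - name: writable-secrets
      mountPath: /secrets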
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created.
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/sh"
      - "-c"
      - >
        cp -r /secrets ~/secrets;
You can create secrets from within a Pod but it seems you need to utilize the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
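A rough sketch of what such a call can look like from inside a pod, using the mounted ServiceAccount token (this assumes the ServiceAccount has RBAC permission to create secrets, which is not granted by default; the secret name and contents are placeholders):
# Read the pod's ServiceAccount token and namespace
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Create a Secret via the API server's in-cluster address
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "https://kubernetes.default.svc/api/v1/namespaces/$NS/secrets" \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"generated-secret"},"stringData":{"config.yaml":"example: value"}}'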

kubernetes timezone in POD with command and argument

I want to change the timezone with a command.
I know about applying a hostPath volume.
Do you know how to apply the command?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
It works well within the container.
But I don't know how to apply it with command and arguments in a yaml file.
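(For reference, one way to express that command directly in a pod spec, assuming tzdata is present in the image and the container is allowed to write /etc/localtime; the image name, timezone, and original entrypoint below are placeholders:)
containers:
- name: app
  image: your-image            # placeholder image
  env:
  - name: TZ
    value: Asia/Seoul          # placeholder timezone
  command: ["/bin/sh", "-c"]
  args:
  # relink /etc/localtime, then hand off to the image's normal process
  - ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && exec /your/original/entrypoint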
You can change the timezone of your pod by mounting the specific timezone config with a hostPath volume. Your yaml file will look something like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
If you want it across all pods or deployments, you need to add the volume and volumeMounts to all your deployment files and change the path value in the hostPath section to the timezone you want to set.
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: demo
        image: gcr.io/project/image:master
        imagePullPolicy: Always
        env:
        - name: TZ
          value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
In a deployment, you can do it by creating a volumeMount on /etc/localtime and setting its value. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        ports:
        - containerPort: 3306
          name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
In order to add "hostPath" in the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail on:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
Option 1: Add allowedHostPaths: {} next to volumes.
Option 2: Add a TZ environment variable. For example: TZ = Asia/Jerusalem
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash").
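For option 1, the allowedHostPaths field does not go in the Deployment itself; it belongs to a cluster-level policy such as a PodSecurityPolicy (OpenShift has its own SecurityContextConstraints equivalent). A rough sketch of a PSP with placeholder names:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-zoneinfo-hostpath   # placeholder name
spec:
  privileged: false
  volumes:
  - hostPath                      # permit hostPath volumes...
  allowedHostPaths:
  - pathPrefix: /usr/share/zoneinfo   # ...but only under this prefix
    readOnly: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny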
For me, setting up volumes and volumeMounts didn't help. Setting the TZ environment variable alone worked in my case.