configMap volumes are not allowed to be used - kubernetes

I am using OKD4, and I am trying to mount /etc/php.ini in my pods using a ConfigMap.
In order to do so, I am creating the following K8S objects in my project.
ConfigMap (created before the Deployment):
- apiVersion: v1
  kind: ConfigMap
  data:
    php.ini: |-
      [PHP]
      ;;;;;;;;;;;;;;;;;;;
      ; About php.ini   ;
      ;;;;;;;;;;;;;;;;;;;
  metadata:
    name: php-ini
Deployment object:
- kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: privatebin
    labels:
      app: privatebin
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: privatebin
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: privatebin
          deploymentconfig: privatebin
      spec:
        containers:
          - name: privatebin
            image: <my container registry>/privatebin:${IMAGE_TAG}
            volumeMounts:
              - name: config-volume
                mountPath: php.ini
            livenessProbe:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - "[ -f /run/nginx.pid ] && ps -C nginx >/dev/null 2>&1 && ps -C php-fpm >/dev/null 2>&1"
              initialDelaySeconds: 10
              periodSeconds: 5
            readinessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 5
            ports:
              - containerPort: 8080
                protocol: TCP
            resources:
              limits:
                cpu: "250m" # parameterize in tekton pipeline
                memory: "368Mi" # parameterize in tekton pipeline, maybe using template
              requests:
                cpu: "100m" # parameterize in tekton pipeline, maybe using template
                memory: "256Mi" # parameterize in tekton pipeline, maybe using template
            securityContext:
              runAsUser: 1000
              fsGroup: 1000
              fsGroupChangePolicy: "OnRootMismatch"
            imagePullPolicy: Always
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        volumes:
          - name: config-volume
            configMap:
              name: php-ini
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%
For some reason my pods never appear, and the Deployment reports the following errors:
- lastTransitionTime: "2021-10-08T12:53:01Z"
  lastUpdateTime: "2021-10-08T12:53:01Z"
  message: 'pods "privatebin-55678c66c5-" is forbidden: unable to validate against
    any security context constraint: [spec.containers[0].securityContext.runAsUser:
    Invalid value: 1000: must be in the ranges: [1002460000, 1002469999] spec.volumes[0]:
    Invalid value: "configMap": configMap volumes are not allowed to be used]'
The ReplicaSet also times out with a similar error:
status:
  conditions:
  - lastTransitionTime: "2021-10-08T12:53:01Z"
    message: 'pods "privatebin-55678c66c5-" is forbidden: unable to validate against
      any security context constraint: [spec.containers[0].securityContext.runAsUser:
      Invalid value: 1000: must be in the ranges: [1002460000, 1002469999] spec.volumes[0]:
      Invalid value: "configMap": configMap volumes are not allowed to be used]'
Why can't I mount the ConfigMap? Is it because of the securityContext in the Deployment?
Thanks in advance.

(The error has nothing to do with ConfigMaps. However, once the error is resolved you may still need to tweak the volume mount slightly so the file lands in the directory you want it to.)
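For example, a common pattern (a sketch, assuming the ConfigMap key is php.ini as defined above) is to mount just that key with subPath so it lands at /etc/php.ini:

volumeMounts:
  - name: config-volume
    mountPath: /etc/php.ini   # full path of the file inside the container
    subPath: php.ini          # the ConfigMap key to project as that file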
OKD is OpenShift, so it's using SCC (not PSP).
By default you have access to the "restricted" SCC in your namespace. The UID range quoted in the error comes from the namespace annotations; oc get namespace FOO -o yaml will show them.
To fix, either:
- change your runAsUser to match the namespace annotation, or (better) just set "runAsNonRoot: true", which forces the pod not to run as root and picks up the first UID in that annotation range (see the sketch below). You may need to update the container image to rely on group membership, not a specific UID, for file access permissions; or
- allow your service accounts to use the "nonroot" SCC so the pod can run as any UID, meeting your expectation of uid=1000.
I would suggest the first option as the preferable one.
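For illustration, a minimal sketch of the first option (field placement assumed; the rest of the container spec stays unchanged):

securityContext:
  runAsNonRoot: true
  # runAsUser intentionally omitted; OpenShift assigns a UID from the
  # namespace's openshift.io/sa.scc.uid-range annotation

And a sketch of the second option, granting the "nonroot" SCC to the service account the pod runs as (the service account name here is an assumption; adjust to yours):

oc adm policy add-scc-to-user nonroot -z default -n <your-namespace>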

Your cluster is installed with a pod security policy that rejects your spec. You can list the PSPs with kubectl get psp, then check the settings with kubectl describe psp <name>. Look at the volumes and runAsUser settings.
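For example (a sketch; the policy name depends on your cluster):

kubectl get psp
kubectl describe psp <name>   # look for the allowed volume types and the runAsUser rule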

Related

GCP Firestore: Server request fails with Missing or insufficient permissions from GKE

I am trying to connect to Firestore from code running in a GKE container. A simple REST GET API works fine, but when I read from or write to Firestore, I get Missing or insufficient permissions.
An unhandled exception was thrown by the application.
Info
2021-06-06 21:21:20.283 EDT
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Missing or insufficient permissions.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1623028880.278990566","description":"Error received from peer ipv4:172.217.193.95:443","file":"/var/local/git/grpc/src/core/lib/surface/call.cc","file_line":1068,"grpc_message":"Missing or insufficient permissions.","grpc_status":7}")
at Google.Api.Gax.Grpc.ApiCallRetryExtensions.<>c__DisplayClass0_0`2.<<WithRetry>b__0>d.MoveNext()
Update: I am trying to provide the pod with a secret containing service account credentials.
Here is the k8s file. It deploys a pod to the cluster with no issues when no secret is provided, and GET operations that don't hit Firestore work fine.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      containers:
        - name: worldmanagement
          image: gcr.io/foodev/foo/master/worldmanagement.21
          resources:
            limits:
              memory: "500Mi"
              cpu: "300m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /api/worldManagement/policies
              port: 80
          ports:
            - name: worldmgmt
              containerPort: 80
Now, if I try to mount the secret, the pod never gets created fully and eventually fails:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: foo-worldmanagement-production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
      role: worldmanagement
      env: production
  template:
    metadata:
      name: worldmanagement
      labels:
        app: foo
        role: worldmanagement
        env: production
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: firestore-key
      containers:
        - name: worldmanagement
          image: gcr.io/foodev/foo/master/worldmanagement.21
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/key.json
          resources:
            limits:
              memory: "500Mi"
              cpu: "300m"
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /api/worldManagement/earth
              port: 80
          ports:
            - name: worldmgmt
              containerPort: 80
I tried to deploy the sample application and it works fine.
If I keep only the following in the YAML file, the container gets deployed properly:
- name: google-cloud-key
  secret:
    secretName: firestore-key
But once I add the following to the YAML, it fails:
volumeMounts:
  - name: google-cloud-key
    mountPath: /var/
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/key.json
And I can see in the GCP events that the container is not able to find google-cloud-key. Any idea how to troubleshoot why I am not able to mount the secret? I can bash into the pod if needed.
I am using a multi-stage Dockerfile based on:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS runtime
Thanks
Looks like the key itself might not be correctly visible to the pod. I would start by getting into the pod with kubectl exec --stdin --tty <podname> -- /bin/bash and ensuring that /var/key.json (per your config) is accessible and has the correct credentials.
The following would be a good way to mount the secret:
volumeMounts:
  - name: google-cloud-key
    mountPath: /var/run/secret/cloud.google.com
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/run/secret/cloud.google.com/key.json
The above assumes your secret was created with a command like:
kubectl --namespace <namespace> create secret generic firestore-key --from-file key.json
Also it is important to check your Workload Identity setup. The Workload Identity | Kubernetes Engine Documentation has a good section on this.
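For completeness, the usual Workload Identity wiring looks roughly like this (the service account, namespace and project names are placeholders, not values from the question):

# allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# annotate the Kubernetes service account with the Google service account
kubectl annotate serviceaccount my-ksa --namespace my-namespace \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com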

Istio Mixer container logs causing high disk space usage

I have an Istio-enabled EKS Cluster, and my nodes are constantly running out of disk space.
Calculating the overall disk usage led me to the istio-mixer container, which has a log file using more than 50GB of disk space in only 12 days of uptime:
[root#ip-some-ip containers]# pwd
/var/lib/docker/containers
[root#ip-some-ip containers]# du -schx .[!.]* * | sort -h | tail -n 10
66M 8bf5e8ee5a03096c589ad8f53b9e1a3d3088ca67b0064f3796e406f00336b532
73M 657eca261461d10c5b1b81ab3078d2058b931a357395903808b0145b617c1662
101M bb338296ff06ef42ae6177c8a88e63438c26c398a457dc3f5301ffcb4ef2682b
127M 21f2da86055ad76882730abf65d4465386bb85598f797f451e7ad66726243613
134M 9c2be24e8b9345659b6f208c9f2d4650bb1ece11e0c4b0793aa01fdfebadb44e
383M 5d5fdbe6813ddc3ff2f6eb96f62f8317bd73e24730e2f44ebc537367d9987142
419M 475f8dfc74c3df2bc95c47df56e37d1dfb9181fae9aa783dafabba8283023115
592M 9193c50e586e0c7ecaeb87cecd8be13714a5d6ccd6ea63557c034ef56b07772f
52G 9c6b3e4f26603471d0aa9b6a61b3da5a69001e6b9be34432ffa62d577738c149
54G total
[root#ip-192-168-228-194 containers]# du -hs 9c6b3e4*/*.log
52G 9c6b3e4f26603471d0aa9b6a61b3da5a69001e6b9be34432ffa62d577738c149-json.log
[root#ip-ip-some-ip containers]# docker ps -a | grep 9c6b3e4f2660
9c6b3e4f2660 d559bdcd7a88 "/usr/local/bin/mi..." 12 days ago Up 12 days k8s_mixer_istio-telemetry-6b5579595f-fvm5x_istio-system_6324c262-f3b5-11e8-b615-0eccb0bb4724_0
My questions are:
Is this amount of log output expected?
Can the mixer log level be decreased? How? Does changing it affect my telemetry metrics?
Is there a way to configure a log "retention period"?
Additional info:
Istio v1.0.2 (deployed with the official Helm charts; no custom configs)
k8s v1.10.11-eks
The cluster has approximately 20 pods running in Istio-enabled namespaces
The default logging level in Mixer is info, and the logs you provided confirm that you are using this setting. Therefore, a lot of redundant information is gathered in the logs, and it is possible to decrease the logging level for some sources.
You can change it in two ways:
Option 1: on a running pod, without a restart.
In your logs you can find the following line:
2018-12-12T17:54:55.461261Z info ControlZ available at 192.168.87.249:9876
It means that the mixer container exposes the Istio ControlZ web interface on port 9876. To access it from a computer with kubectl installed, run the following command:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l istio=mixer,istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') 9876:9876 &
After that, open http://localhost:9876/scopez/ in your browser; you will see the ControlZ dashboard, where you can change log levels.
Option 2: add the --log_output_level flag to the mixer container in the istio-telemetry deployment.
Here is the description for the flag from the mixer's documentation:
--log_output_level string
Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level>,... where scope can be one of [adapters, api, attributes, default, grpcAdapter, loadshedding] and level can be one of [debug, info, warn, error, none] (default "default:info")
Note that to pass --log_output_level attributes:warn,api:error in a YAML file you need to use one of the following forms:
a single value: - --log_output_level=attributes:warn,api:error, or
two values: - --log_output_level and - attributes:warn,api:error on separate lines.
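For illustration, the two forms side by side inside the mixer container's args list (use only one of them at a time):

args:
  # form 1: flag and value joined with '='
  - --log_output_level=attributes:warn,api:error
  # form 2: flag and value as separate list items
  # - --log_output_level
  # - attributes:warn,api:error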
A full example of the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
  labels:
    chart: mixer-1.0.4
    istio: mixer
    release: istio
  name: istio-telemetry
  namespace: istio-system
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: telemetry
      istio: mixer
      istio-mixer-type: telemetry
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
        sidecar.istio.io/inject: "false"
      creationTimestamp: null
      labels:
        app: telemetry
        istio: mixer
        istio-mixer-type: telemetry
    spec:
      containers:
        - args: #Flags for the Mixer process
            - --address #Flag on two different lines
            - unix:///sock/mixer.socket
            - --configStoreURL=k8s:// #Flag with '='
            - --configDefaultNamespace=istio-system
            - --trace_zipkin_url=http://zipkin:9411/api/v1/spans
            - --log_output_level=attributes:warn,api:error # <------ THIS LINE IS WHAT YOU ARE LOOKING FOR
          env:
            - name: GODEBUG
              value: gctrace=2
          image: docker.io/istio/mixer:1.0.4
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /version
              port: 9093
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 1
          name: mixer
          ports:
            - containerPort: 9093
              protocol: TCP
            - containerPort: 42422
              protocol: TCP
          resources:
            requests:
              cpu: 10m
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /sock
              name: uds-socket
        - args:
            - proxy
            - --serviceCluster
            - istio-telemetry
            - --templateFile
            - /etc/istio/proxy/envoy_telemetry.yaml.tmpl
            - --controlPlaneAuthPolicy
            - MUTUAL_TLS
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: INSTANCE_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          image: docker.io/istio/proxyv2:1.0.4
          imagePullPolicy: IfNotPresent
          name: istio-proxy
          ports:
            - containerPort: 15090
              name: http-envoy-prom
              protocol: TCP
          resources:
            requests:
              cpu: 10m
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/certs
              name: istio-certs
              readOnly: true
            - mountPath: /sock
              name: uds-socket
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: istio-mixer-service-account
      serviceAccountName: istio-mixer-service-account
      terminationGracePeriodSeconds: 30
      volumes:
        - name: istio-certs
          secret:
            defaultMode: 420
            optional: true
            secretName: istio.istio-mixer-service-account
        - emptyDir: {}
          name: uds-socket
Additionally, you can configure log rotation for the mixer process using the following flags:
--log_rotate string The path for the optional rotating log file
--log_rotate_max_age int The maximum age in days of a log file beyond which the file is rotated (0 indicates no limit) (default 30)
--log_rotate_max_backups int The maximum number of log file backups to keep before older files are deleted (0 indicates no limit) (default 1000)
--log_rotate_max_size int The maximum size in megabytes of a log file beyond which the file is rotated (default 104857600)
However, I was not able to generate a large enough volume of such logs to test how this works.
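A sketch of how these flags could be combined on the mixer container (the file path and limits below are illustrative assumptions, not recommended values):

args:
  - --log_rotate=/var/log/mixer/mixer.log   # write to a rotating file instead of only stderr
  - --log_rotate_max_size=100               # rotate after ~100 MB
  - --log_rotate_max_backups=5              # keep at most 5 rotated files
  - --log_rotate_max_age=7                  # drop rotated files older than 7 days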
Links:
Unfortunately, the official documentation is not good, but maybe it helps somehow.
And as a bonus, here is the list of all mixer server flags.
This is how I solved the problem and some useful information for new Istio versions.
Istio v1.0.2:
The huge amount of logs was being generated by the Stdio adapter:
The stdio adapter enables Istio to output logs and metrics to the
local machine. Logs and metrics can be directed to Mixer’s standard
output stream, standard error stream, or to any locally reachable
file.
In Istio v1.0.2 this adapter was enabled by default, streaming the logs to Mixer container stderr. To temporarily solve this, I deleted the following rules:
kubectl delete rule stdio --namespace=istio-system
kubectl delete rule stdio-tcp --namespace=istio-system
Deleting these rules does not affect the Prometheus metrics (which are handled by another adapter).
Istio v1.1.0+:
In this version, Istio introduced the mixer.adapters.stdio.enabled Helm installation option, disabling the stdio adapter by default and including the following comment:
# stdio is a debug adapter in istio-telemetry, it is not recommended
for production use.
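For reference, a sketch of how that option can be set when installing or upgrading via the Helm chart (the chart path is assumed from the Istio release layout):

helm upgrade istio install/kubernetes/helm/istio \
  --namespace istio-system \
  --set mixer.adapters.stdio.enabled=false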
The changes were made in the following PRs:
Add adapter tuning install options to helm (#10525)
Turn policy off by default (#12114)

Intermittent failure creating container on Kubernetes - failing to mount default token

For the past couple of days we have been experiencing an intermittent deployment failure when deploying (via Helm) to Kubernetes v1.11.2.
When it fails, kubectl describe <deployment> usually reports that the container failed to create:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1s default-scheduler Successfully assigned default/pod-fc5c8d4b8-99npr to fh1-node04
Normal Pulling 0s kubelet, fh1-node04 pulling image "docker-registry.internal/pod:0e5a0cb1c0e32b6d0e603333ebb81ade3427ccdd"
Error from server (BadRequest): container "pod" in pod "pod-fc5c8d4b8-99npr" is waiting to start: ContainerCreating
and the only issue we can find in the kubelet logs is:
58468 kubelet_pods.go:146] Mount cannot be satisfied for container "pod", because the volume is missing or the volume mounter is nil: {Name:default-token-q8k7w ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}
58468 kuberuntime_manager.go:733] container start failed: CreateContainerConfigError: cannot find volume "default-token-q8k7w" to mount container start failed: CreateContainerConfigError: cannot find volume "default-token-q8k7w" to mount into container "pod"
It's intermittent, failing in roughly one in every 20 deployments. Re-running the deployment works as expected.
The cluster and node health all look fine at the time of the deployment, so we are at a loss as to where to go from here. Looking for advice on where to start next on diagnosing the issue.
EDIT: As requested, the deployment file is generated via a Helm template and the output is shown below. For further information, the same Helm template is used for a lot of our services, but only this particular service has this intermittent issue:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: pod
  labels:
    app: pod
    chart: pod-0.1.0
    release: pod
    heritage: Tiller
    environment: integration
  annotations:
    kubernetes.io/change-cause: https://github.com/path_to_release
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: pod
      release: pod
      environment: integration
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: pod
        release: pod
        environment: integration
    spec:
      containers:
        - name: pod
          image: "docker-registry.internal/pod:0e5a0cb1c0e32b6d0e603333ebb81ade3427ccdd"
          env:
            - name: VAULT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: "pod-integration"
                  key: username
            - name: VAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "pod-integration"
                  key: password
          imagePullPolicy: IfNotPresent
          command: ['mix', 'phx.server']
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          envFrom:
            - configMapRef:
                name: pod
          livenessProbe:
            httpGet:
              path: /api/health
              port: http
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /api/health
              port: http
            initialDelaySeconds: 10
          resources:
            limits:
              cpu: 750m
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 150Mi

Failed to attach volume ... already being used by

I am running Kubernetes in a GKE cluster using version 1.6.6 and another cluster with 1.6.4. Both are experiencing issues with failing over GCE compute disks.
I have been simulating failures using kill 1 inside the container or killing the GCE node directly. Sometimes I get lucky and the pod will get created on the same node again. But most of the time this isn't the case.
Looking at the event log, it shows the mount error three times and then gives up. Without human intervention it never corrects itself; I am forced to kill the pod repeatedly until it works. During maintenance this is a giant pain.
How do I get Kubernetes to fail over with volumes properly? Is there a way to tell the deployment to try a new node on failure? Is there a way to remove the 3-retry limit?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dev-postgres
  namespace: jolene
spec:
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        app: dev-postgres
        namespace: jolene
    spec:
      containers:
        - image: postgres:9.6-alpine
          imagePullPolicy: IfNotPresent
          name: dev-postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
          env:
            [ ** Removed, irrelevant environment variables ** ]
          ports:
            - containerPort: 5432
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready
            initialDelaySeconds: 30
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            initialDelaySeconds: 5
            timeoutSeconds: 3
            periodSeconds: 5
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: dev-jolene-postgres
I have tried this with and without PersistentVolume / PersistentVolumeClaim.
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: dev-jolene-postgres
spec:
capacity:
storage: "1Gi"
accessModes:
- "ReadWriteOnce"
claimRef:
namespace: jolene
name: dev-jolene-postgres
gcePersistentDisk:
fsType: "ext4"
pdName: "dev-jolene-postgres"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dev-jolene-postgres
namespace: jolene
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
By default, every node is schedulable, so there is no need to mention it explicitly in the deployment. The feature for configuring retry limits is still in progress and can be tracked here: https://github.com/kubernetes/kubernetes/issues/16652

How to run command after initialization

I would like to run a specific command after the deployment has initialized successfully.
This is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: {{my-service-image}}
          env:
            - name: NODE_ENV
              value: "docker-dev"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 3000
However, I would like to run a command for DB migration after (not before) the deployment has initialized successfully and the pods are running.
I can do it manually for every pod (with kubectl exec), but this is not very scalable.
I resolved it using a postStart lifecycle hook:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: {{my-service-image}}
          env:
            - name: NODE_ENV
              value: "docker-dev"
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 3000
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", {{cmd}}]
You can use Helm to deploy a set of Kubernetes resources and then use a Helm hook, e.g. post-install or post-upgrade, to run a Job in a separate Docker container. Set your Job to invoke the DB migration. A Job runs one or more Pods to completion, so it fits here quite well.
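A minimal sketch of such a hook Job (the name, image and migration command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: {{my-service-image}}
          command: ["/bin/sh", "-c", "npm run migrate"]   # placeholder migration command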
I chose to use a readinessProbe
My application requires configuration after the process has completely started.
The postStart command was running before the app was ready.
readinessProbe:
  exec:
    command: [healthcheck]
  initialDelaySeconds: 30
  periodSeconds: 2
  timeoutSeconds: 1
  successThreshold: 3
  failureThreshold: 10