k8s Permission Denied issue - kubernetes

I get this error when deploying a Kubernetes Deployment. I tried running as the root user via the security context, but it didn't help. Any idea how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.
The error I get is:
30: line 1: /scripts/wrapper.sh: Permission denied
stream closed
The deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler-grok-exporter
  labels:
    app: cluster-autoscaler-grok-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler-grok-exporter
      sidecar: cluster-autoscaler-grok-exporter-sidecar
  template:
    metadata:
      labels:
        app: cluster-autoscaler-grok-exporter
        sidecar: cluster-autoscaler-grok-exporter-sidecar
    spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 2000
      serviceAccountName: flux
      imagePullSecrets:
        - name: id-docker
      containers:
        - name: get-data
          # 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12
          image: dtzar/helm-kubectl:3.5.0
          command: ["sh", "-c", "/scripts/wrapper.sh"]
          args:
            - cluster-autoscaler
            - "90"
            # - cluster-autoscaler
            - "30"
            - /scripts/get_data.sh
            - /logs/data.log
          volumeMounts:
            - name: logs
              mountPath: /logs/
            - name: scripts-volume-get-data
              mountPath: /scripts/get_data.sh
              subPath: get_data.sh
            - name: scripts-wrapper
              mountPath: /scripts/wrapper.sh
              subPath: wrapper.sh
        - name: export-data
          image: ippendigital/grok-exporter:1.0.0.RC3
          imagePullPolicy: Always
          ports:
            - containerPort: 9148
              protocol: TCP
          volumeMounts:
            - name: grok-config-volume
              mountPath: /grok/config.yml
              subPath: config.yml
            - name: logs
              mountPath: /logs
      volumes:
        - name: grok-config-volume
          configMap:
            name: grok-exporter-config
        - name: scripts-volume-get-data
          configMap:
            name: get-data-script
            defaultMode: 0777
            defaultMode: 0700
        - name: scripts-wrapper
          configMap:
            name: wrapper-config
            defaultMode: 0777
            defaultMode: 0700
        - name: logs
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cluster-autoscaler-grok-exporter-sidecar
  labels:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
  type: ClusterIP
  ports:
    - name: metrics
      protocol: TCP
      targetPort: 9144
      port: 9148
  selector:
    sidecar: cluster-autoscaler-grok-exporter-sidecar
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: cluster-autoscaler-grok-exporter
    app.kubernetes.io/part-of: grok-exporter
  name: cluster-autoscaler-grok-exporter
spec:
  endpoints:
    - port: metrics
  selector:
    matchLabels:
      sidecar: cluster-autoscaler-grok-exporter-sidecar

From what I can see, your script does not have execute permissions.
Remove this line from your ConfigMap volume definitions:
defaultMode: 0700
Keep only:
defaultMode: 0777
Also, the leading / is missing from your script path:
- /bin/sh scripts/get_data.sh
So change it to:
- /bin/sh /scripts/get_data.sh
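For reference, the fixed volumes section would look roughly like this (same ConfigMap names as in the question, with the duplicate defaultMode line dropped):
volumes:
  - name: grok-config-volume
    configMap:
      name: grok-exporter-config
  - name: scripts-volume-get-data
    configMap:
      name: get-data-script
      defaultMode: 0777
  - name: scripts-wrapper
    configMap:
      name: wrapper-config
      defaultMode: 0777
  - name: logs
    emptyDir: {}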

Related

$(POD_NAME) in subPath of Statefulset + Kustomize not expanding

I have a StatefulSet with a volume that uses subPath: $(POD_NAME). I've also tried $HOSTNAME, which doesn't work either. How does one set the subPath of a volumeMount to the name of the pod or to $HOSTNAME?
Here's what I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: pltfrmd
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
        - command:
            # ["/bin/sh", "-ec", "while :; do echo '.'; sleep 6 ; done"]
            - /bin/sh
            - -c
            - /opt/RavenDB/Server/Raven.Server --log-to-console --config-path /configuration/settings.json
          image: ravendb/ravendb:latest
          imagePullPolicy: Always
          name: ravendb
          env:
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: RAVEN_Logs_Mode
              value: Information
          ports:
            - containerPort: 8080
              name: http-api
              protocol: TCP
            - containerPort: 38888
              name: tcp-server
              protocol: TCP
            - containerPort: 161
              name: snmp
              protocol: TCP
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /data
              name: data
              subPath: $(POD_NAME)
            - mountPath: /configuration
              name: configuration
              subPath: ravendb
            - mountPath: /certificates
              name: certificates
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
        - name: certificates
          secret:
            secretName: ravendb-certificate
        - name: configuration
          persistentVolumeClaim:
            claimName: configuration
        - name: data
          persistentVolumeClaim:
            claimName: ravendb
And the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: pltfrmd
  name: ravendb
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /volumes/ravendb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: pltfrmd
  name: ravendb
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$HOSTNAME used to work, but doesn't anymore for some reason. Wondering if it's a bug in the host path storage provider?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx
              mountPath: /usr/share/nginx/html
              subPath: $(POD_NAME)
      volumes:
        - name: nginx
          configMap:
            name: nginx
Ok, so after great experimentation I found a way that still works:
Step one, map an environment variable:
env:
  - name: POD_HOST_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
This creates $(POD_HOST_NAME), populated from the field metadata.name.
Then in your mount you do this:
volumeMounts:
  - mountPath: /data
    name: data
    subPathExpr: $(POD_HOST_NAME)
It's important to use subPathExpr, as subPath (which worked before) no longer expands variables. subPathExpr then uses the environment variable you created and expands it properly.
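Putting the two snippets together, a minimal container spec along the lines of the StatefulSet above might look like this (names taken from the question):
containers:
  - name: ravendb
    image: ravendb/ravendb:latest
    env:
      # Expose the pod's own name so the mount below can reference it
      - name: POD_HOST_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /data
        name: data
        # Expands per pod, e.g. ravendb-0, ravendb-1, ...
        subPathExpr: $(POD_HOST_NAME)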

Kubernetes pod volume [gcePersistentDisk] zone specification

I have set up an nfs-server in our cluster which references a compute disk created by Terraform. Here is the manifest that creates the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: nfs-pvc
      volumes:
        - name: nfs-pvc
          gcePersistentDisk:
            pdName: testing-airflow-disk
            fsType: ext4
The error I'm encountering is that when a pod is created by this deployment, it sometimes looks for the gcePersistentDisk in a different zone [europe-west1-c, europe-west1-b]. The disk resides in europe-west1-b, so I would like to specify the disk's zone, but I can't find in the docs where that can be done in the deployment YAML.
The solution I found, based on Neskews's answer, was to set a node affinity condition on the deployment so that it only uses nodes whose zone label matches the zone the disk resides in. The code for this looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
  namespace: storage
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: gcr.io/google_containers/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: nfs-pvc
      volumes:
        - name: nfs-pvc
          gcePersistentDisk:
            pdName: testing-airflow-disk
            fsType: ext4
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - europe-west1-b
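Note that, depending on the Kubernetes version, nodes may carry the older failure-domain.beta.kubernetes.io/zone label instead of topology.kubernetes.io/zone; running kubectl get nodes --show-labels first should confirm which zone label key your nodes actually use.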

Permission denied on mounted volume even after using initContainers?

I'm running the Theia code editor on my EKS cluster. The image's default user is theia, to which I grant read and write permissions on /home/project. However, when I mount /home/project on my EFS volume and try to read or write in /home/project, I get permission denied. I tried using an initContainer, but the problem persists:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atouati
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atouati
  template:
    metadata:
      labels:
        app: atouati
    spec:
      initContainers:
        - name: take-data-dir-ownership
          image: alpine:3
          command:
            - chown
            - -R
            - 1001:1001
            - /home/project:cached
          volumeMounts:
            - name: project-volume
              mountPath: /home/project:cached
      containers:
        - name: theia
          image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest'
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: project-volume
              mountPath: "/home/project:cached"
      volumes:
        - name: project-volume
          persistentVolumeClaim:
            claimName: local-storage-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: atouati
spec:
  type: ClusterIP
  selector:
    app: atouati
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
When I run ls -l on /home/project:
drwxr-xr-x 2 theia theia 6 Aug 21 17:33 project
On the EFS directory:
drwxr-xr-x 4 root root 6144 Aug 21 17:32
You can instead set the securityContext in your pod spec to run the Pods as uid/gid 1001.
For example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atouati
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atouati
  template:
    metadata:
      labels:
        app: atouati
    spec:
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
      containers:
        - name: theia
          image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest'
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: project-volume
              mountPath: "/home/project:cached"
      volumes:
        - name: project-volume
          persistentVolumeClaim:
            claimName: local-storage-pvc
Have you kubectl exec'd into the container to confirm that this is the uid/gid you need, based on the apparent ownership?
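If you would rather keep the initContainer approach, note that the :cached suffix in the question's mountPath is Docker Compose syntax and is not valid in a Kubernetes mountPath. A sketch of the corrected initContainer (assuming 1001 is indeed the uid/gid the theia user maps to) would be:
initContainers:
  - name: take-data-dir-ownership
    image: alpine:3
    # Recursively hand the mounted directory to uid/gid 1001 before the main container starts
    command: ["chown", "-R", "1001:1001", "/home/project"]
    volumeMounts:
      - name: project-volume
        mountPath: /home/project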

How to mount a configMap as a volume mount in a Stateful Set

I don't see an option to mount a ConfigMap as a volume in a StatefulSet. As per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulset-v1-apps, only PVCs can be associated with a StatefulSet, but a PVC has no option for ConfigMaps.
Here is a minimal example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  serviceName: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:stable-alpine
          volumeMounts:
            - mountPath: /config
              name: example-config
      volumes:
        - name: example-config
          configMap:
            name: example-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  a: "1"
  b: "2"
In the container, you can find the files a and b under /config, with the contents 1 and 2, respectively.
Some explanation:
You do not need a PVC to mount a ConfigMap as a volume in your pods. PersistentVolumeClaims are claims on persistent storage that you can read from and write to; a typical use is a database such as Postgres.
ConfigMaps, on the other hand, are read-only key-value structures stored inside Kubernetes (in its etcd store) and meant to hold your application's configuration. Their values can be exposed as environment variables or mounted as files, either individually or all at once.
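For instance, a sketch that exposes key a as an environment variable and mounts only key b as a file, reusing example-configmap from above (the variable name EXAMPLE_A and the file name b.txt are just illustrative):
containers:
  - name: example
    image: nginx:stable-alpine
    env:
      # Single key exposed as an environment variable
      - name: EXAMPLE_A
        valueFrom:
          configMapKeyRef:
            name: example-configmap
            key: a
    volumeMounts:
      - mountPath: /config
        name: example-config
volumes:
  - name: example-config
    configMap:
      name: example-configmap
      # Mount only key "b", as the file /config/b.txt
      items:
        - key: b
          path: b.txt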
I have done it in this way.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-configmap
  namespace: default
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  labels:
    component: rabbitmq
spec:
  serviceName: "rabbitmq"
  replicas: 1
  selector:
    matchLabels:
      component: rabbitmq
  template:
    metadata:
      labels:
        component: rabbitmq
    spec:
      initContainers:
        - name: "rabbitmq-config"
          image: busybox:1.32.0
          volumeMounts:
            - name: rabbitmq-config
              mountPath: /tmp/rabbitmq
            - name: rabbitmq-config-rw
              mountPath: /etc/rabbitmq
          command:
            - sh
            - -c
            - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
              cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
      volumes:
        - name: rabbitmq-config
          configMap:
            name: rabbitmq-configmap
            optional: false
            items:
              - key: enabled_plugins
                path: "enabled_plugins"
        - name: rabbitmq-config-rw
          emptyDir: {}
      containers:
        - name: rabbitmq
          image: rabbitmq:3.8.5-management
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: "username"
            - name: RABBITMQ_DEFAULT_PASS
              value: "password"
            - name: RABBITMQ_DEFAULT_VHOST
              value: "vhost"
          ports:
            - containerPort: 15672
              name: ui
            - containerPort: 5672
              name: api
          volumeMounts:
            - name: rabbitmq-data-pvc
              mountPath: /var/lib/rabbitmq/mnesia
  volumeClaimTemplates:
    - metadata:
        name: rabbitmq-data-pvc
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    component: rabbitmq
  ports:
    - protocol: TCP
      port: 15672
      targetPort: 15672
      name: ui
    - protocol: TCP
      port: 5672
      targetPort: 5672
      name: api
  type: ClusterIP
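The design choice worth noting here: ConfigMap volume mounts are read-only, so the ConfigMap is mounted under /tmp/rabbitmq and the initContainer copies its contents into a writable emptyDir mounted at /etc/rabbitmq. That extra copy step is only needed when the application has to be able to modify its own config files at runtime.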

Postgres user not created Kubernetes

I have the following YAML file to create a Postgres server instance:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
        - name: spring-demo-db
          image: postgres:10.4
          ports:
            - name: spring-demo-db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_PASSWORD
              value: "springdemo"
            - name: POSTGRES_USER
              value: "springdemo"
            - name: POSTGRES_DB
              value: "springdemo"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      volumes:
        - name: "postgres-storage"
          persistentVolumeClaim:
            claimName: spring-demo-pv-claim
      restartPolicy: Always
But when I ssh into the container, the user springdemo has not been created. I have been struggling with this all day. What could be the problem?
Can anyone help me?
You didn't mention what command you're running and what error you're getting, so I'm guessing here, but try this:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spring-demo-db
  labels:
    app: spring-demo-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-demo-db
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: spring-demo-db
    spec:
      containers:
        - name: spring-demo-db
          image: postgres:10.4
          ports:
            - name: spring-demo-db
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_USER
              value: "springdemo"
            - name: POSTGRES_DB
              value: "springdemo"
            - name: POSTGRES_PASSWORD
              value: "springdemo"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      volumes:
        - name: "postgres-storage"
          persistentVolumeClaim:
            claimName: spring-demo-pv-claim
      restartPolicy: Always
But if it doesn't work, just use the Helm chart, because, among other issues, you are passing the password in an insecure way, which is a bad idea.
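On the password point specifically, a sketch of what referencing a Secret instead of a literal value could look like (the Secret name spring-demo-db-secret and its key password are made up for this example):
apiVersion: v1
kind: Secret
metadata:
  name: spring-demo-db-secret   # hypothetical Secret holding the DB password
type: Opaque
stringData:
  password: springdemo
---
# In the container's env, reference the Secret instead of a plain value:
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: spring-demo-db-secret
        key: password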