I want to use command: with wget to download a file, put it in a volume, and gunzip it since it is in gz format. But somehow the container fails as soon as I run kubectl apply -f; the Pod status displays Error. What could I be doing wrong?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: docker.source.co.za/azp/example-app:1
        imagePullPolicy: Always
        command:
        - wget
        - "-O"
        - http://confluence.source.co.za/download/attachments/627674073/refpolicies.tar.gz
        volumeMounts:
        - name: example-app
          mountPath: /config/
          readOnly: true
      volumes:
      - name: example-app
        emptyDir: {}
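Two things stand out in that command, purely as a hedged reading of the manifest: wget -O expects an output path before the URL (as written, the URL is taken as the output file name and nothing is left to download), the volume is mounted readOnly: true so nothing could be written to /config/ anyway, and once wget exits the container exits with it. A minimal sketch of the intended pattern, assuming the image ships a shell, wget and tar (tar -xzf covers the gunzip step for a .tar.gz):

      containers:
      - name: example-app
        image: docker.source.co.za/azp/example-app:1
        imagePullPolicy: Always
        command:
        - sh
        - -c
        - |
          # download into the emptyDir volume, then unpack it there
          wget -O /config/refpolicies.tar.gz http://confluence.source.co.za/download/attachments/627674073/refpolicies.tar.gz
          tar -xzf /config/refpolicies.tar.gz -C /config/
          # keep the container alive so the Deployment does not crash-loop
          tail -f /dev/null
        volumeMounts:
        - name: example-app
          mountPath: /config/   # readOnly dropped so the download can be written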
The Pod is in Running state, but logging into the container and running capsh --print gives this error:
sh: capsh: not found
Running the same image as a Docker container with --cap-add SYS_ADMIN or --privileged gives the desired output.
What changes in the deployment, or what extra permissions, are needed for it to work inside a Kubernetes container?
Deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: alpine:3.17
        command:
        - sh
        - -c
        - while true; do echo Hello World; sleep 10; done
        env:
        - name: NFS_EXPORT_0
          value: /var/opt/backup
        - name: NFS_LOG_LEVEL
          value: DEBUG
        volumeMounts:
        - name: backup
          mountPath: /var/opt/backup
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
      volumes:
      - name: backup
        persistentVolumeClaim:
          claimName: sample-pvc
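A side note on the error itself: sh: capsh: not found means the capsh binary is simply not present in the container's filesystem; adding SYS_ADMIN (or --privileged) grants capabilities but does not install tools. Two hedged checks, with <sample-pod> standing in for the actual pod name (on Alpine, capsh normally comes from the libcap package, though the package name can vary by release):

# inspect the capability bitmasks of PID 1 without installing anything
kubectl -n sample exec <sample-pod> -- grep Cap /proc/1/status
# or install capsh inside the Alpine-based container and print the decoded sets
kubectl -n sample exec <sample-pod> -- sh -c 'apk add --no-cache libcap && capsh --print'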
I'm running the Theia code editor on my EKS cluster. The image's default user is theia, which I grant read and write permissions on /home/project. However, when I mount that /home/project volume on my EFS and try to read or write under /home/project, I get permission denied. I tried using an initContainer, but the problem persists:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atouati
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atouati
  template:
    metadata:
      labels:
        app: atouati
    spec:
      initContainers:
      - name: take-data-dir-ownership
        image: alpine:3
        command:
        - chown
        - -R
        - 1001:1001
        - /home/project:cached
        volumeMounts:
        - name: project-volume
          mountPath: /home/project:cached
      containers:
      - name: theia
        image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest'
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: project-volume
          mountPath: "/home/project:cached"
      volumes:
      - name: project-volume
        persistentVolumeClaim:
          claimName: local-storage-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: atouati
spec:
  type: ClusterIP
  selector:
    app: atouati
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
When I do ls -l on /home/project:
drwxr-xr-x 2 theia theia 6 Aug 21 17:33 project
On the EFS directory:
drwxr-xr-x 4 root root 6144 Aug 21 17:32
You can instead set the securityContext in your pod spec to run the Pods as uid/gid 1001.
For example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atouati
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atouati
  template:
    metadata:
      labels:
        app: atouati
    spec:
      securityContext:
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
      containers:
      - name: theia
        image: 'xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/theia-code-editor:latest'
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: project-volume
          mountPath: "/home/project:cached"
      volumes:
      - name: project-volume
        persistentVolumeClaim:
          claimName: local-storage-pvc
Have you kubectl exec'd into the container to confirm that this is the uid/gid you need to use, based on the apparent ownership?
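For example, something along these lines (a sketch; it assumes a recent kubectl that resolves deploy/<name> to one of its pods, and that the image has a shell):

# show the uid/gid the container runs as and the numeric ownership of the mount
kubectl exec -it deploy/atouati -- sh -c 'id && ls -ln /home/project'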
I'm moving from Kubernetes v1.17.4 to v1.18.3
The command kubectl -n nameSpace cp file pod-xxxx-yyyy:file does not write the file to the pod, and no error messages are generated.
Copying from a pod works just fine.
YAML for creating the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acadmin
  namespace: iiabe3h
  labels:
    app: acadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: acadmin
  template:
    metadata:
      labels:
        app: acadmin
    spec:
      schedulerName: stork
      terminationGracePeriodSeconds: 1800
      securityContext:
        supplementalGroups: [12312, 65432, 3399, 1000001]
        fsGroup: 1000
      nodeSelector:
        node-role.kubernetes.io/worker: "true"
      volumes:
      - name: acadmin-logs
        persistentVolumeClaim:
          claimName: acadmin-logs
      - name: acadmin-config
        persistentVolumeClaim:
          claimName: acadmin-config
      containers:
      - name: acadmin
        image: acadmin:latest
        env:
        - name: spring.profiles.active
          value: "e3h"
        - name: AccessControlFilter.props
          value: "/opt/jboss/wildfly/standalone/configuration/AccessControlFilter.props"
        volumeMounts:
        - name: acadmin-logs
          mountPath: /opt/jboss/wildfly/standalone/log
        - name: acadmin-config
          mountPath: /opt/jboss/wildfly/standalone/configuration
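One thing worth ruling out: kubectl cp into a pod works by piping a tar stream through kubectl exec and extracting it inside the container, so it silently depends on a tar binary being present and on the destination path being writable by the container's user. A hedged check, reusing the pod name placeholder from above:

# confirm tar is available inside the container
kubectl -n iiabe3h exec pod-xxxx-yyyy -- tar --version
# confirm the target path is writable by the user the container runs as
kubectl -n iiabe3h exec pod-xxxx-yyyy -- sh -c 'id && touch /opt/jboss/wildfly/standalone/configuration/cp-probe && ls -l /opt/jboss/wildfly/standalone/configuration/cp-probe'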
I am trying to create a StatefulSet. I want to create a file on the attached volume, so I am using the command touch /data/test.txt, but it seems the container crashes because of that. Why would it do that? If I don't use the command, everything works fine. What are the properties of the /data directory mounted to the volume, e.g. its read/write permissions?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /data
        args:
        - /bin/sh
        - -c
        - touch /data/test.txt
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
That happens because the default ENTRYPOINT of k8s.gcr.io/nginx-slim:0.8 is most likely something like starting nginx, and setting args alone only changes the arguments passed to that entrypoint.
So, if you want to override what the image runs, you need to set command as well:
command: ["/bin/sh","-c"]
args:
- |
  touch /data/test.txt
You can also run kubectl describe or kubectl logs to see what is wrong with your pod/deployment.
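If the goal is to create the file and still have nginx serve traffic, here is a sketch of just the container section, assuming the nginx binary in that image is on the PATH and accepts the usual foreground flag:

      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        command: ["/bin/sh", "-c"]
        args:
        - |
          # create the test file on the mounted volume first
          touch /data/test.txt
          # then run nginx in the foreground so the container keeps running
          exec nginx -g 'daemon off;'
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /data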
I am trying to deploy a simple nginx in Kubernetes using host volumes. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
    volumes:
    - name: hostvol
      hostPath:
        path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
Look at the WordPress example from the documentation to see how it's done.
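As a general tip, you can catch this kind of misplacement before creating anything; a sketch, noting that the dry-run flag spelling differs slightly across kubectl versions:

# show where volumes belongs in the pod spec
kubectl explain deployment.spec.template.spec.volumes
# validate the manifest without actually creating it
kubectl create -f webserver.yaml --dry-run=client -o yaml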