I can't understand why the pod manifest below stops working when I remove spec.containers.command: the pod fails as soon as the command is removed.
I took this example from the official documentation:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
Because the busybox image doesn't run any long-lived process by default. Containers are designed to run a single application and shut down when that application exits; if the image doesn't start anything that keeps running, the container exits immediately. In Kubernetes, spec.containers.command overrides the image's default command. You can try changing the image to, for example, image: nginx and removing spec.containers.command, and the pod will keep running, because that image starts an Nginx server by default.
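To see the difference, here is a minimal sketch (not from the original question; the pod name is mine, and I dropped the securityContext for brevity since the stock nginx image expects to start as root) of a pod that runs without any command because the image's default command is long-lived:

apiVersion: v1
kind: Pod
metadata:
  name: default-command-demo
spec:
  containers:
  - name: web
    # the nginx image's default command starts the web server, so no 'command' is needed
    image: nginx

With busybox and no command, the default sh exits immediately and, under the default restartPolicy: Always, the pod typically ends up in CrashLoopBackOff.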
I would like to test a security vulnerability (attack scenario) in a K8s cluster. As a privileged K8s user, I want to mount the ~/.kube directory inside a pod in order to change/read the K8s configuration and CA info, or perhaps any root directory on the master node. The pod runs with no error, and I can tell that the directory has been mounted into the pod, but I cannot read the mounted directory.
Here is the deployment file:
apiVersion: v1
kind: Pod
metadata:
  name: attack-pod
  namespace: target-ns
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1001
    fsGroup: 0
    fsGroupChangePolicy: "OnRootMismatch"
  tolerations:
  - key: "is_control"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
  nodeName: master-node-1
  imagePullSecrets:
  - name: regcred
  containers:
  - name: attack-container
    image: bash
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - mountPath: /home/admin-user/.kube
      name: mount-root-into-mnt
    securityContext:
      allowPrivilegeEscalation: true
  volumes:
  - name: mount-root-into-mnt
    hostPath:
      path: /home/admin-user/.kube
  serviceAccountName: service-account
But when I exec into the pod with kubectl -n target-ns exec -it attack-pod -- bash and try to list the files inside /home/admin-user/.kube, I get this error:
ls: can't open '.': Permission denied!
The directory was mounted successfully, though, and I added the correct permissions in securityContext.
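A diagnostic sketch (not part of the original question) that usually narrows this down: compare the UID/GID the container actually runs as with the numeric ownership and mode of the hostPath directory on the node. Note that fsGroup is not applied to hostPath volumes, so the files keep whatever ownership they have on the host, and a typical ~/.kube is mode 700 and owned by its user:

# identity the container runs as (should report uid=1000 gid=1001 here)
kubectl -n target-ns exec -it attack-pod -- id
# numeric ownership and mode of the mounted directory, as seen from the pod
kubectl -n target-ns exec -it attack-pod -- ls -lnd /home/admin-user/.kube
# same check on master-node-1 itself, for comparison
ls -lnd /home/admin-user/.kube

If the UIDs do not match and the mode is 700, the process in the pod cannot traverse the directory unless it runs as root or as that UID.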
Create a pod that runs two containers, and ensure that the pod has a shared volume that both containers can use to communicate with each other: write an HTML file in one container and try accessing it from the other container.
Can anyone tell me how to do this?
Example pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
The above example uses an emptyDir volume, so the data is lost if the Pod is deleted or rescheduled.
If you need the data to survive, I would suggest using a PVC instead of emptyDir.
I would recommend NFS-backed storage if you can use it.
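Once the pod is running, a quick way to verify the shared volume works (a sketch; the manifest file name is my assumption):

kubectl apply -f two-containers.yaml
# read the file written by the debian container through the nginx container's mount
kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html
# expected output: Hello from the debian container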
I'm working on a project that requires writing data to a cifs storage. I am using fstab/cifs flex volume. https://github.com/fstab/cifs
When I run the following YAML, I get the error:
sh: 1: cannot create /app/xxx/test.txt : Permission denied
I have confirmed the mount works by doing an ls of the directory, and it shows everything on the CIFS storage. So why can't I create a file?
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: xxx
spec:
  securityContext:
    fsGroup: 2000
  containers:
  - name: test
    image: "docker-repo.XXX.test:5000/xxx/test:latest"
    command: ['sh', '-c', 'echo test > /app/xxx/test.txt']
    securityContext:
      runAsUser: 1001
      runAsGroup: 2000
    volumeMounts:
    - name: exsnap
      mountPath: /app/xxx
  volumes:
  - name: exsnap
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "xxx-secret"
      options:
        networkPath: "//xxx.xxx.xxx.xxx/DATA/"
        mountOptions: "vers=2.0,dir_mode=0766,file_mode=0666,noperm"
Below is the deployment YAML. After deployment, I can access the pod,
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason to me?
Thanks and Rgds
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume is mounted at "/work-dir" only inside the init container; that path exists only there. When the init container completes, its filesystem is gone, and the "/work-dir" directory in that container goes with it. The application (nginx) container mounts the same volume too, albeit at a different location, and that shared volume is the mechanism by which the two containers share content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
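You can confirm this on the running pod: the file fetched by the init container is visible through the app container's mount path, even though /work-dir itself does not exist there (a quick check, assuming the pod above is named init-demo):

# shows index.html, written by the init container via its own mount path
kubectl exec init-demo -- ls /usr/share/nginx/html
# fails with "No such file or directory": /work-dir only existed in the init container
kubectl exec init-demo -- ls /work-dir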
A volume mount backed by a PVC lets you share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you might think you could just mount the path /, which would share all folders underneath it; however, a mountPath of / does not work.
So, how do you solve your problem? You could have another container mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with the mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
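After applying this, both containers in the Deployment's pod see the same PVC contents; for example (a sketch, looking up the generated pod name first):

POD=$(kubectl get pods -l mojix.service=shared-fs -o jsonpath='{.items[0].metadata.name}')
# write a file through the nginx container's mount...
kubectl exec "$POD" -c nginx-c -- touch /var/www/static/hello.txt
# ...and it is visible under /work-dir/ in the alpine container
kubectl exec "$POD" -c alpine-c -- ls /work-dir/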
I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a K8s Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment), and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
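One diagnostic worth running first (a sketch, not part of the original question): hostPath is resolved on the node where the Job's pod is scheduled, so if the cluster runs in a VM or on a remote node, /tmp there is not the /tmp on the machine where kubectl runs. You can check which node ran the pod and whether the file exists there:

# the NODE column shows where the Job's pod actually ran
kubectl -n kmlflow get pods -l job-name=kio -o wide
# then, on that node (e.g. via ssh, or 'minikube ssh' if applicable):
ls -l /tmp/hello.txt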
It looks like the existing data can't be accessed once the volume is mounted.
You will need to use an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # the shell redirection needs a shell; a bare "echo ... > file" in exec form would not redirect
    command: ['sh', '-c', "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}   # emptyDir used here; a hostPath volume would additionally require a 'path' field
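If the init container ran successfully, the pre-populated file should be visible from the app container (a quick check, assuming the hypothetical my-app image includes cat):

kubectl exec my-app -- cat /data/config
# expected output: {'address':'10.0.1.192:2379/db'}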
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519