I have a Pod which contains two containers - nginx and busybox. Now I want to access the files within the nginx container from the busybox container, say by doing 'ls /etc/nginx' in the busybox container to list the files there. Is there any configuration in a k8s pod that allows me to achieve this?
Below is my current pod yaml file.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginxbusybox
  name: nginxbusybox
spec:
  shareProcessNamespace: true
  containers:
  - image: nginx
    name: nginx
  - image: busybox
    name: busybox
    command:
    - sleep
    - '86400'
P.S. This is for debugging a container which doesn't have the common Linux tools inside.
OK, I've found a way to achieve this. By using kubectl debug and checking /proc/1/root/, I could see the filesystem of the nginx container. Below is what this looks like.
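Roughly, the steps look like this (a sketch, using the pod name from the manifest above; output omitted):
# attach an ephemeral debug container that targets the nginx container
kubectl debug -it nginxbusybox --image=busybox --target=nginx -- sh
# inside that shell, the nginx container's filesystem is reachable through its PID 1
ls /proc/1/root/etc/nginx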
I have my application running in K8s pods. My application writes logs to a particular path, for which I already have a volume mounted on the pod. My requirement is to schedule a cronjob which will trigger once a week, read the logs from that pod volume, generate a report based on my script (which basically filters the logs for some keywords), and send the report via mail.
Unfortunately I am not sure how to proceed on this, as I couldn't find any doc or blog which talks about integrating a cronjob with an existing pod or volume.
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: sidecar-container
    image: busybox
    command: ["sh","-c","while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "discovery-cronjob"
  labels:
    app.kubernetes.io/name: discovery
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: log-report
            image: busybox
            command: ['/bin/sh']
            args: ['-c', 'cat /var/log/nginx/access.log > nginx.log']
            volumeMounts:
            - mountPath: /log
              name: shared-logs
          restartPolicy: Never
I see two things here that you need to know:
Unfortunately, it is not possible to schedule a cronjob on an existing pod. Pods are ephemeral and a Job needs to run to completion, so it would be impossible to tell whether the job completed or not. This is by design.
Also, in order to be able to see the files from one pod in another, you must use a PVC. The logs created by your app have to be persisted if your job wants to access them. Here you can find some examples of how to create ReadWriteMany PersistentVolumeClaims on your Kubernetes cluster:
Kubernetes allows us to provision our PersistentVolumes dynamically using PersistentVolumeClaims. Pods treat these claims as volumes. The access mode of the PVC determines how many nodes can establish a connection to it. We can refer to the resource provider's docs for their supported access modes.
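A minimal sketch of that setup, assuming the cluster has an RWX-capable StorageClass (the class name nfs-client and the claim name shared-logs-pvc are assumptions): replace the emptyDir in the webserver pod with this claim and mount the same claim in the CronJob's job template.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs-pvc          # assumed name
spec:
  accessModes:
  - ReadWriteMany                # so both the webserver pod and the job can mount it
  storageClassName: nfs-client   # assumption: an RWX-capable class exists in the cluster
  resources:
    requests:
      storage: 1Gi
Then, in both the webserver pod and the CronJob's pod template:
  volumes:
  - name: shared-logs
    persistentVolumeClaim:
      claimName: shared-logs-pvc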
I can run the Docker container for WSO2 EI with the following command.
docker run -it -p 8280:8280 -p 8243:8243 -p 9443:9443 -v wso2ei:/home/wso2carbon --name integrator wso2/wso2ei-integrator
I'm trying to create the pod definition file for the same. I don't know how to do port mapping and volume mapping in a pod definition file. The following is the file I have created so far. How can I complete the rest?
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
  - name: integrator
    image: wso2/wso2ei-integrator
Here is YAML content which might work:
apiVersion: v1
kind: Pod
metadata:
  name: ei-pod
  labels:
    type: ei
    version: 6.6.0
spec:
  containers:
  - name: integrator
    image: wso2/wso2ei-integrator
    ports:
    - containerPort: 8280
    - containerPort: 8243
    - containerPort: 9443
    volumeMounts:
    - mountPath: /home/wso2carbon
      name: wso2ei
  volumes:
  - name: wso2ei
    hostPath:
      # directory location on host
      path: /home/wso2carbon
While the above YAML content is just a basic example, it's not recommended for production usage, for two reasons:
Use a Deployment, StatefulSet, or DaemonSet instead of creating Pods directly.
A hostPath volume is not shareable between nodes, so use an external volume such as NFS or block storage and mount it into the pod. Also look at dynamic volume provisioning using a StorageClass.
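As for the -p port mappings in the docker run command: in Kubernetes those are usually handled by a Service in front of the Pod rather than in the Pod spec itself. A minimal sketch, where the Service name and type are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: ei-service       # assumed name
spec:
  type: NodePort         # or LoadBalancer, depending on the cluster
  selector:
    type: ei             # matches the label on ei-pod
  ports:
  - name: http
    port: 8280
    targetPort: 8280
  - name: https
    port: 8243
    targetPort: 8243
  - name: mgmt
    port: 9443
    targetPort: 9443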
Is there a way to share the filesystem of two containers in a multi-container pod, without using shared volumes?
I have the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pod
  name: pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod
  template:
    metadata:
      labels:
        app: pod
    spec:
      containers:
      - image: nginx:latest
        name: nginx
      - image: jenkins
        name: jenkins
I want to access the /var/jenkins_home path, which is available in the jenkins container, from the nginx container.
This is just for experimental purposes; I am trying to learn ways to share filesystems (and things in general) across containers in a pod.
You can't share files between containers without some sort of shared volume.
Part of the goal of a containerized system is that the container filesystems are isolated from each other. There are a huge number of practical problems with sharing files specifically (what if the containers are on different nodes? what if you have three replicas each of Jenkins and Nginx? what if they're all trying to write the same files?) and in general it's better to just avoid sharing files altogether if that's a possibility.
In the specific example you've shown, the lifecycle of a Jenkins CI system and an Nginx server will just be fundamentally different; whenever Jenkins builds something you don't want to restart it to also restart the Web server, and you could very easily want to scale up the Web tier without adding Jenkins workers. A better approach here would be to have Jenkins generate custom Docker images, push them to a registry, and then use the Kubernetes API to create a separate Nginx Deployment.
In most cases (especially because of the scaling considerations) you should avoid multi-container pods altogether.
(A more specific example of a case where this setup does make sense is if you're storing credentials somewhere like a Hashicorp Vault server. You would need an init container to connect to Vault, retrieve the credentials, and deposit them in an emptyDir volume, and then the main container can start up having gotten those credentials. As far as the main server container is concerned it's the only important part of this pod, and logically the pod is nothing more than the server container with some auxiliary stuff.)
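A rough sketch of that init-container pattern; the image, Vault address, secret path, and the omitted authentication are all assumptions for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault-init           # hypothetical example
spec:
  volumes:
  - name: creds
    emptyDir: {}
  initContainers:
  - name: fetch-creds
    image: hashicorp/vault            # assumption: an image with the vault CLI; auth (token/k8s auth) omitted
    env:
    - name: VAULT_ADDR
      value: "https://vault.example.com:8200"   # assumed address
    command:
    - sh
    - -c
    - vault kv get -field=password secret/app > /creds/password
    volumeMounts:
    - name: creds
      mountPath: /creds
  containers:
  - name: server
    image: nginx                      # stand-in for the real server image; reads /creds at startup
    volumeMounts:
    - name: creds
      mountPath: /creds
      readOnly: true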
The sample below shows how to share a volume between containers:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
I want all processes within the pod to see the same network and process table as the host, as well as share IPC with the host processes.
I know it is possible with Docker by using the following command.
docker run -it --privileged --ipc=host --net=host --pid=host \
-v /:/host -v /run:/run -v /etc/localtime:/etc/localtime \
--name privcontainer centos7 /bin/bash
On the other hand, is there any way to run super-privileged containers using Kubernetes?
If possible, I would like to know how to write the pod YAML file.
There is a privileged flag on the SecurityContext of the container spec.
Check out the documentation for more details.
I could only find an example from the v1.4 docs:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world-container
    # The container definition
    # ...
    securityContext:
      privileged: true    # Here is what you are looking for
      seLinuxOptions:
        level: "s0:c123,c456"
Even more info here.
I'm sure you're aware, but as a general word of caution, the privileged flag will remove all container security settings and open up the cluster to potential security vulnerabilities.
To disable the namespacing of a container's PIDs, and thus allow the container to view all processes on a host, you need to specify hostPID: true in the pod spec.
You might find this manifest useful if you want to inspect a Kubernetes host from within a pod:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug
spec:
  selector:
    matchLabels:
      app: debug
  template:
    metadata:
      labels:
        app: debug
      name: debug
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      hostNetwork: true
      hostPID: true
      containers:
      - name: linux
        image: alpine
        args:
        - sleep
        - "3600"
        securityContext:
          privileged: true
          runAsGroup: 0
          runAsUser: 0
        volumeMounts:
        - mountPath: /mnt/host
          name: host
      volumes:
      - hostPath:
          path: /
          type: ""
        name: host
This will instantiate a "debug" pod on each node of your cluster (including control-plane nodes if they are visible to you). This pod will have access to all PIDs from the host, will see all its networks, and the node filesystem will be browsable at /mnt/host.
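To poke around from one of these pods (a quick example; the label selector matches the manifest above):
# open a shell in the debug pod scheduled on some node
kubectl exec -it $(kubectl get pod -l app=debug -o name | head -n 1) -- sh
# host processes are visible thanks to hostPID: true
ps aux
# the node's filesystem is mounted under /mnt/host
ls /mnt/host/etc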
I have two Kubernetes deployments, say backend and frontend. The frontend deployment depends on the backend deployment: the pods for frontend should be created only after the backend pods are ready. How can I specify this in the deployment YAML?
The solution you are looking for is an init container. A Pod can have one or more init containers, and they run one after another before the main Pod containers are started. Please be aware that each init container runs until completion.
So you can use init containers to check the availability of your back-end applications. Here is an example:
apiVersion: v1
kind: Pod
metadata:
  name: front-end
  labels:
    app: front-end
spec:
  containers:
  - name: front-end
    image: node:boron
  initContainers:
  - name: init-backend
    image: busybox
    command: ['sh', '-c', 'until <put check condition for your back-end>; do echo waiting for back-end; sleep 2; done;']
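For example, if the back-end is exposed through a Service named backend (the Service name, port, and health path here are assumptions), the check condition could be a DNS lookup or an HTTP probe:
command: ['sh', '-c', 'until nslookup backend; do echo waiting for back-end; sleep 2; done;']
# or, with an HTTP health check:
command: ['sh', '-c', 'until wget -q -O /dev/null http://backend:8080/healthz; do echo waiting for back-end; sleep 2; done;']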
For more information you can go through the documentation.