Do pods share the filesystem, similar to how they share the same network namespace? - kubernetes

I have created a pod with two containers. I know that different containers in a pod share the same network namespace (i.e., the same IP and port space) and can also share storage volumes between them (for example, mounted ConfigMaps). My question is: do the containers in a pod also share the same filesystem? For instance, in my case I have one container 'C1' that generates a dynamic file every 10 minutes at /var/targets.yml, and I want the other container 'C2' to read this file and perform its own independent action.
Is there a way to do this, maybe some workaround via ConfigMaps? Or do I have to access these files via networking, since each container has its own IP (but this may not be a good idea when it comes to pod restarts)? Any suggestions or references, please?

You can use an emptyDir for this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: generating-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: gcr.io/google_containers/test-webserver
    name: consuming-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
But be aware that the data is not persistent: an emptyDir volume survives container restarts, but its contents are lost when the pod is removed from the node.
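Adapted to the scenario in the question, a minimal sketch might look like the following (the image names and the /shared mount path are assumptions; mounting the emptyDir at a dedicated directory such as /shared, rather than directly at /var, avoids shadowing the rest of /var in both containers):
apiVersion: v1
kind: Pod
metadata:
  name: targets-pod
spec:
  containers:
  - name: c1
    image: my-registry/generator      # placeholder: writes /shared/targets.yml every 10 minutes
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  - name: c2
    image: my-registry/consumer       # placeholder: reads /shared/targets.yml
    volumeMounts:
    - mountPath: /shared
      name: shared-data
      readOnly: true
  volumes:
  - name: shared-data
    emptyDir: {}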

Related

How to create a pod with a default uid:gid plus the 4 or 5 additional group IDs (gids) needed to access NFS shares?

I'm trying to containerize a workflow that touches NFS shares.
For a successful run, the user needs its default uid:gid and also 4 or 5 additional group IDs.
The group IDs are effectively random, and ideally I would like to avoid hard-coding a range of gids in the YAML file.
Is there an efficient way to get this done? Would anyone be able to show an example in YAML or point me to reference documents, please? Thanks
The setting is called supplementalGroups. Take a look at the example:
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - name: ...
    image: ...
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  securityContext:
    supplementalGroups:
    - 5555
    - 6666
    - 12345
  volumes:
  - name: nfs
    nfs:
      server: <nfs_server_ip_or_host>
      path: /opt/nfs
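To verify that the supplemental groups were applied, you can run id inside the running container (nfs-pod below is a placeholder for the pod's name); the configured gids should show up in the groups list:
kubectl exec nfs-pod -- id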

Copy file inside Kubernetes pod from another container

I need to copy a file into my pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl', 'cp', './test.json', 'init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. kubectl cp copies a file from the machine where kubectl runs (where the file exists) to the pod's machine. In your manifest you are trying to run that copy from inside the pod, where ./test.json does not exist, so there is nothing to copy.
What you can do instead is build your own Docker image for the init container and copy the file you want to store into it before building the image. The init container can then copy that file into a shared volume, which is where you want the file stored.
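A minimal sketch of that approach (the image name my-registry/test-json-image is a placeholder for an image you would build yourself with test.json baked in at /test.json, e.g. via a COPY instruction in its Dockerfile):
apiVersion: v1
kind: Pod
metadata:
  name: copy
spec:
  initContainers:
  - name: copy-file
    image: my-registry/test-json-image   # placeholder: built with test.json at /test.json
    command: ['sh', '-c', 'cp /test.json /data/test.json']
    volumeMounts:
    - name: my-storage
      mountPath: /data
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: /data                    # the app container sees /data/test.json
  volumes:
  - name: my-storage
    emptyDir: {}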
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources that could be added to show how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to fit your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod you will need the required permissions to access the Kubernetes API. You can grant them by using a serviceAccount with appropriate permissions (a minimal RBAC sketch is included after these points). More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
Your bitnami/kubectl container will run into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that the container reports status Completed and, because of the pod's restart policy, it gets restarted, resulting in the before-mentioned CrashLoopBackOff. To avoid that you would need to use an initContainer.
You can read more about what is happening in your setup in this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
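Regarding the first point, a minimal sketch of such a ServiceAccount with read-only access to pods, assuming the default namespace (all names are placeholders, and the resources/verbs depend on what you actually need kubectl to do); the pod would then reference it via spec.serviceAccountName:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader                  # placeholder name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-role             # placeholder name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding          # placeholder name
subjects:
- kind: ServiceAccount
  name: pod-reader
  namespace: default                # assumed namespace
roleRef:
  kind: Role
  name: pod-reader-role
  apiGroup: rbac.authorization.k8s.io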
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.

Can we create a POD from two existing YAMLs, each having its own container?

My project has 2 YAMLs, which create 2 pods.
Can we create a single pod with 2 containers from these YAMLs, without merging the YAMLs?
Thanks
Yes, you can run multiple containers inside a single pod: in a single YAML manifest you can add both container specs and run it.
However, without merging the YAMLs you cannot run multiple containers inside one pod.
A single-file example:
apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done
For more details you can also refer to the official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
If you don't want to merge the container definitions into the same file, in the same containers block, then no, you can't.

GKE node with modprobe

Is there a way to load a kernel module ("modprobe nfsd" in my case) automatically after starting/upgrading nodes in GKE? We are running an NFS server pod on our Kubernetes cluster and it dies after every GKE upgrade.
I tried both the COS and Ubuntu node images; neither of them seems to have nfsd loaded by default.
I also tried something like this, but it does not seem to do what it is supposed to do:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: nfsd-modprobe
  labels:
    app: nfsd-modprobe
spec:
  template:
    metadata:
      labels:
        app: nfsd-modprobe
    spec:
      hostPID: true
      containers:
      - name: nfsd-modprobe
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            modprobe nfs
            modprobe nfsd
            while true; do sleep 1; done
I faced the same issue. The existing answer is correct; I want to expand on it with a working example of an NFS server pod within a Kubernetes cluster that has the capabilities and libraries needed to load the required modules.
It has two important parts:
privileged mode
the host's /lib/modules directory mounted into the container so it can be used
nfs-server.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
spec:
  containers:
  - name: nfs-server-container
    image: erichough/nfs-server
    securityContext:
      privileged: true
    env:
    - name: NFS_EXPORT_0
      value: "/test *(rw,no_subtree_check,insecure,fsid=0)"
    volumeMounts:
    - mountPath: /lib/modules   # mounting modules into the container
      name: lib-modules
      readOnly: true            # make sure it's read-only
    - mountPath: /test
      name: export-dir
  volumes:
  - name: lib-modules
    hostPath:                   # using hostPath to get modules from the host
      path: /lib/modules
      type: Directory
  - name: export-dir
    emptyDir: {}
A reference which helped as well: Automatically load required kernel modules.
By default, you cannot load modules from inside a container, because excluding kernel components is one of the main reasons containers are lightweight and portable. You need to load the module on the host OS in order to make it available inside the container. This means you could simply launch a script that enables the kernel modules you want after each GKE upgrade.
However, there is a somewhat hacky way to load kernel modules from inside a Docker container. It boils down to launching your container with escalated privileges and with access to certain host directories. You should try that if you really want to load your kernel modules while inside a container.
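Putting both answers together, a minimal sketch of the question's DaemonSet with the host's /lib/modules mounted into the privileged container (apps/v1 with an explicit selector is used instead of the deprecated extensions/v1beta1; whether this alone is sufficient can depend on the node image):
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: nfsd-modprobe
spec:
  selector:
    matchLabels:
      app: nfsd-modprobe
  template:
    metadata:
      labels:
        app: nfsd-modprobe
    spec:
      hostPID: true
      containers:
      - name: nfsd-modprobe
        image: gcr.io/google-containers/startup-script:v1
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            modprobe nfs
            modprobe nfsd
            while true; do sleep 1; done
        volumeMounts:
        - name: lib-modules           # the missing piece: the host's modules inside the container
          mountPath: /lib/modules
          readOnly: true
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules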

Pre-populating Local SSD disk in GCP Kubernetes for readonly multipods usage

What is the best way to preload large files onto a local PersistentVolume SSD before it gets used by Kubernetes pods?
The goal is to have multiple pods (multiple instances of the same pod, or different ones) share the same local SSD drive in read-only mode. The drive would need to be initialized somehow with a large dataset.
The Google Local SSD docs describe running the local volume static provisioner, but that approach only creates a PersistentVolume and does not initialize it.
Basically, you can add an init container to your pod that initializes the SSD (adds the data, etc.):
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  initContainers:
  - name: "init"
    image: "ubuntu:14.04"
    command: ["/bin/init_my_ssd.sh"]   # your initialization script that populates the SSD
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  containers:
  - name: "shell"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo 'hello world' > /test-ssd/test.txt && sleep 1 && cat /test-ssd/test.txt"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
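Once the data is in place, the consuming pods can mount the same local SSD read-only; a minimal sketch (the pod name, reader image and command are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd-reader"
spec:
  containers:
  - name: "reader"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["cat /test-ssd/test.txt && sleep 3600"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
      readOnly: true              # consumers only read the pre-populated data
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"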