How to share netns between different containers in k8s?

I want to share the netns between two containers which belong to the same pod.
I know all containers in the same pod share the network namespace by default. I also tried mounting the same host path at /var/run/netns in both containers. I can create a netns in the first container, but when I access that netns from the second container, it says "Invalid argument". Running "file /var/run/netns/n1" reports a normal empty file. What can I do? Is it possible?

The manifest below works in my case. You can verify it with the following steps:
Step 1: kubectl apply -f (the manifest below)
Step 2: kubectl exec -ti test-pd -c test1 -- /bin/bash
Step 3: go to the /test-pd directory, create a .txt file using the touch command, and exit.
Step 4: kubectl exec -ti test-pd -c test2 -- sh
Step 5: go to the /test-pd directory and check for the file created in step 3. I found it there.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test1
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1
    name: test2
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: DirectoryOrCreate


Kubernetes initContainers to copy file and execute [duplicate]

I have a situation where we will have a list of IP addresses (coming from a ConfigMap). We need to validate these IP addresses (i.e. check whether they are accessible from this machine) and then return the first accessible one, so that the application can use that IP address for further actions.
I got to know that we can use initContainers for this. But my question is: how can we run a shell script in the initContainer to identify the accessible IP address and set it in an environment variable, so that the application can process it further?
InitContainers can communicate with other, normal containers through volumes.
You can use the emptyDir volume type, which is a directory that allows the pod to store data for the duration of its life cycle.
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  volumes:
  - name: addresses
    emptyDir: {}
  initContainers:
  - name: ip-selector
    image: your-image
    volumeMounts:
    - name: addresses
      mountPath: /path/to/ip/addresses
  containers:
  - name: ip-handler
    image: your-image
    volumeMounts:
    - name: addresses
      mountPath: /path/to/ip/addresses/handler
      readOnly: true # optional
Your initContainer can now save a .env file with the addresses under the /path/to/ip/addresses path, and your normal container can then read this file from the /path/to/ip/addresses/handler path, as in the sketch below.
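As a rough sketch of that exchange (the ping probe, the candidate IP list, the ip.env file name, and my-app are assumptions for illustration, not part of the original answer):

# in the ip-selector init container: probe the candidates and write the first reachable one
for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
  ping -c 1 -W 1 "$ip" > /dev/null 2>&1 && { echo "SELECTED_IP=$ip" > /path/to/ip/addresses/ip.env; break; }
done

# in the ip-handler container: load the value before starting the application (my-app is hypothetical)
. /path/to/ip/addresses/handler/ip.env && exec my-app --target "$SELECTED_IP"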
Option 1:
Once you get the IP inside the init container, you can create a Secret with that value and use it.
initContainers:
- name: secret
  image: gcr.io/cloud-builders/kubectl:latest
  command:
  - sh
  - -c
  - kubectl create secret generic mysecret ... -o yaml | kubectl apply -f -
containers:
- name: test-container
  image: image-uri:latest
  envFrom:
  - secretRef:
      name: mysecret
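For illustration, the elided create command might look roughly like this; the key name SELECTED_IP, the /tmp/ip file, and the --dry-run idempotency trick are assumptions of this sketch, and the init container's service account needs permission to create Secrets:

# SELECTED_IP and /tmp/ip are hypothetical; adapt to wherever your probe writes the address
kubectl create secret generic mysecret \
  --from-literal=SELECTED_IP="$(cat /tmp/ip)" \
  --dry-run=client -o yaml | kubectl apply -f -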
Option 2:
You can also use the shared-volume approach to write the IP into a file, and in the main container run a command that loads the file contents into the environment. That way, when your main container starts, it starts with the IP set as an env variable:
command: [sh, -c, "source /tmp/env && service command"]
Example
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [sh, -c, "source /tmp/env && service command"]
    volumeMounts:
    - name: workdir
      mountPath: /tmp
  initContainers:
  - name: ip-check
    image: busybox:1.28
    command:
    # placeholder: command or script that checks the IPs and writes the result to /tmp/env
    # (see the expanded sketch below)
    - sh
    - -c
    - "<ip-check script> > /tmp/env"
    volumeMounts:
    - name: workdir
      mountPath: /tmp
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
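A hedged example of how that placeholder command could be embedded directly in the manifest as a multi-line script (the candidate IPs and the ping probe are illustrative assumptions):

  initContainers:
  - name: ip-check
    image: busybox:1.28
    command:
    - sh
    - -c
    # probe each candidate and write the first reachable one as an env file
    - |
      for ip in 10.0.0.1 10.0.0.2; do
        ping -c 1 -W 1 "$ip" > /dev/null 2>&1 && { echo "export IP=$ip" > /tmp/env; break; }
      done
    volumeMounts:
    - name: workdir
      mountPath: /tmp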

OpenShift-Job to copy data from sftp to persistent volume

I would like to deploy a job which copies multiple files from sftp to a persistent volume and then completes.
My current version of this job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
      - name: init-pv
        image: w0wka91/ubuntu-sshpass
        command: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
        volumeMounts:
        - mountPath: /mnt/myvolume
          name: myvolume
        envFrom:
        - secretRef:
            name: ftp-secrets
      restartPolicy: Never
      volumes:
      - name: myvolume
        persistentVolumeClaim:
          claimName: myvolume
  backoffLimit: 3
When I deploy the job, the pod starts but it always fails to create the container:
sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume: no such file or directory
It seems like the command gets executed before the volume is mounted, but I couldn't find any documentation about it.
When I debug the pod and execute the command manually, it all works fine, so the command is definitely working.
Any ideas how to overcome this issue?
The volume mount is incorrect, change it to:
volumeMounts:
- mountPath: /mnt/myvolume
  name: myvolume
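For context, here is a sketch of the whole Job with that mount in place. Note that the command is also wrapped in a shell here so that $PASSWORD and the wildcard are expanded; that shell wrapping is an assumption of this sketch, not part of the original answer:

apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: init-pv
        image: w0wka91/ubuntu-sshpass
        # run through a shell so the env var and the glob are expanded
        command: ["/bin/sh", "-c"]
        args: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
        envFrom:
        - secretRef:
            name: ftp-secrets
        volumeMounts:
        - mountPath: /mnt/myvolume
          name: myvolume
      volumes:
      - name: myvolume
        persistentVolumeClaim:
          claimName: myvolume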

two container pod creation in kubernetes

Create a pod that runs two containers, and ensure the pod has a shared volume that both containers can use to communicate with each other. Write an HTML file in one container and try accessing it from the other container.
Can anyone tell me how to do it?
Example pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Official document : https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
The example above uses an emptyDir, so if your pod restarts or is recreated you will lose the data.
If you have any requirement to keep the data, I would suggest using a PVC instead of the emptyDir, and I would recommend NFS-backed storage if you can use it; see the sketch below.
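A minimal sketch of the same pod backed by a PVC instead of the emptyDir (the PVC name, access mode, and size here are illustrative assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-pvc
spec:
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    command: ["/bin/sh", "-c", "echo Hello from the debian container > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-data-pvc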

Read-only file system error in Kubernetes

I am getting an error while adding NFS to the Kubernetes cluster. I was able to mount the NFS share, but I am not able to create a file or directory in the mount location.
This is my yaml file
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
  - name: nfs-volume
    nfs:
      server: 10.01.26.81
      path: /nfs_data/nfs_share_home_test/testuser
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: nfs-volume
      mountPath: /home/kube/testuser
Then I ran the following commands to create the pod and open a shell in it:
kubectl apply -f session.yaml
kubectl exec -it pod-using-nfs sh
After I exec into the pod:
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
touch: hello: Read-only file system
Expected output is
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
/home/kube/testuser# ls
hello
Do I need to add a securityContext to the yaml to fix this?
Any help would be appreciated!

can i use a configmap created from an init container in the pod

I am trying to "pass" a value from the init container to a regular container. Since values in a ConfigMap are shared across the namespace, I figured I could use one for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)
You can create an emptyDir volume and mount it into both containers. Unlike a persistent volume, emptyDir has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
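As a rough illustration of how the main container might then pick the value up at startup (the artifact.env file name and the /installer-test entrypoint are assumptions of this sketch, not the original answer):

      containers:
      - name: installer-test
        image: installer-test:latest
        # load the file written by the init container, then start the actual application
        command: ['/bin/sh', '-c', '. /tmp/artifact/artifact.env && exec /installer-test']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact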
If, for various reasons, you don't want to use a shared volume and you want to create a ConfigMap or a Secret, here is a solution.
First you need to use a docker image which contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (a docker image containing kubectl, maintained by Google).
Then this (init)container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects the token of the service account named "default" into containers, but I prefer to make this explicit, so add this line:
...
spec:
  # already true by default, but I prefer to make it explicit
  automountServiceAccountToken: true
  initContainers:
  - name: artifactory-snapshot
And add "edit" role to "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:myapp --namespace=default
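If you prefer something narrower than the built-in edit ClusterRole, a sketch of a dedicated Role limited to ConfigMaps could look like this (the role and binding names are illustrative assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-writer
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configmap-writer-rb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-writer
subjects:
- kind: ServiceAccount
  name: default
  namespace: default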
Then the complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      # Already true by default, but I prefer to make it explicit.
      automountServiceAccountToken: true
      restartPolicy: Never
      initContainers:
      - name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
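Once the Job has run, a quick way to check that the init container actually created the ConfigMap (just a verification step, not part of the original answer):

kubectl get configmap test-config -o yaml
kubectl logs job/installer-test -c artifactory-snapshot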
First of all, kubectl is a binary. It was downloaded onto your machine before you could use the command, but inside your pod the kubectl binary doesn't exist. So you can't use the kubectl command from a busybox image.
Furthermore, kubectl uses credentials that are saved on your machine (probably under ~/.kube). So if you try to use kubectl from inside an image, it will fail because of missing credentials.
For your scenario, I suggest the same as @ccshih: use volume sharing.
Here is the official doc about volume sharing between an init container and a container.
The yaml used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and later that file is available inside the main container. Try the tutorial yourself for a better understanding; one quick way to verify the result is sketched below.
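A quick verification once the pod is running (assuming the manifest above is saved as init-demo.yaml):

kubectl apply -f init-demo.yaml
kubectl exec init-demo -c nginx -- cat /usr/share/nginx/html/index.html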