For logs, I mount a volume from the host onto the pod. This is defined in the deployment YAML.
But if my 2 pods run on the same host, there will be a conflict, as both pods will produce log files with the same name.
Can I use some dynamic variable in the deployment file so that the mount on the host is created with a different name for each pod?
You can use subPathExpr to achieve uniqueness in the absolute path; this is one of the use cases of this feature. As of now it is alpha in Kubernetes 1.14.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
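If you deploy this through a Deployment (as in the question), the same subPathExpr pattern goes into the pod template. A minimal sketch, assuming a hypothetical Deployment named app and reusing the image and paths from the example above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "while true; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: logs
          mountPath: /logs
          subPathExpr: $(POD_NAME)   # each replica writes under /var/log/pods/<pod-name>
      volumes:
      - name: logs
        hostPath:
          path: /var/log/pods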
Alternatively, look at pod affinity/anti-affinity so that replicas are not scheduled on the same node. That way each replica of a specific deployment gets deployed on a separate node, and you will not have to worry about the same folder being used by multiple pods. A sketch follows.
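A minimal sketch of such a rule inside a Deployment's pod template, assuming the pods carry a hypothetical label app: app:
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: app                          # must match the pod template's labels
            topologyKey: kubernetes.io/hostname   # no two matching pods on the same node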
I had to spend hours on this; your solution worked like a charm!
I had tried the following, and none of them worked despite being given in multiple documents:
subPathExpr: "$POD_NAME"
subPathExpr: $POD_NAME
subPathExpr: ${POD_NAME}
Finally, this worked: subPathExpr: $(POD_NAME)
We are trying to get the logs of pods after multiple restarts, but we don't want to use any external solution like EFK.
I tried the below config but it's not working. Does the below command run on the pod, or will it run at node level?
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "kubectl logs appworks-0 > /container-stoped.txt"]
Regarding "does the below cmd run on the pod or will it run at node level": it will run at the Pod level, not at the Node level.
You can use a hostPath volume in the Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: alpine
    name: test-container
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /
      type: Directory
hostPath will create a directory directly at the node level and save the logs there. If you don't want this solution you can still add your lifecycle-hook approach, but when you can write application logs directly to the host there is no need for the extra lifecycle hook, as sketched below.
Note: be aware that if your node goes down, you will lose the logs stored in hostPath or emptyDir volumes.
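For example, the container command could append its output straight to the mounted host path instead of relying on a hook. This is only a sketch; the echo loop and my-app.log are placeholders for your real application and log file:
containers:
- image: alpine
  name: test-container
  # Placeholder workload; replace the echo loop with your application's start command.
  command: ["sh", "-c", "while true; do echo \"$(date) still running\"; sleep 10; done >> /host/var/log/my-app.log"]
  volumeMounts:
  - mountPath: /host
    name: test-volume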
I am running a Kubernetes cluster using minikube.
I want to mount a folder from my PC into minikube.
How can I do that?
I see there is hostPath, but that uses the node inside minikube.
Just like in docker-compose we can mount a host folder into the container; is there any such provision here?
@Santhosh, if I understand your question correctly, do you want to mount a path from within a container to a PV of type hostPath on your host? Have you tried this:
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "k8s.gcr.io/busybox"
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
I need to copy a file into my pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volumeMount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl','cp','./test.json','init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Your local machine is the one where the file exists, and kubectl cp copies it from there to another machine (the pod). That is what you are trying to do here, copying a file from your machine to the pod's machine, but running that command from inside the pod cannot work, because the file does not exist there.
Here is one thing you can do: build your own Docker image for the init container and copy the file you want to store into it before building the image. The init container can then copy that file into the shared volume where you want it. A sketch of this idea follows.
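A minimal sketch of that idea; the image name my-registry/test-json-init is a placeholder, while the volume name my-storage is taken from your manifest:
# Dockerfile for the init image: bake the file in at build time
FROM busybox
COPY test.json /test.json

# Pod spec fragment: the init container copies the baked-in file into the shared volume
initContainers:
- name: copy-file
  image: my-registry/test-json-init   # placeholder image built from the Dockerfile above
  command: ["sh", "-c", "cp /test.json /data/test.json"]
  volumeMounts:
  - name: my-storage
    mountPath: /data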
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources that could be added to show you how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to your specific needs; one possible variant is sketched below.
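For instance, a variant that downloads your test.json instead of index.html (the URL is a placeholder, and my-storage is the volume name from your manifest):
initContainers:
- name: fetch-config
  image: busybox
  # Placeholder URL; replace with wherever test.json can actually be downloaded from.
  command: ["wget", "-O", "/data/test.json", "http://example.com/test.json"]
  volumeMounts:
  - name: my-storage
    mountPath: /data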
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod you will need the required permissions to access the Kubernetes API. You can do that by using a ServiceAccount with appropriate permissions (a sketch follows the links below). More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
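A minimal sketch of such a ServiceAccount with read access to pod logs; the name log-reader and the default namespace are assumptions:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: log-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: log-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: log-reader
The Pod then references it with serviceAccountName: log-reader in its spec.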
Your bitnami/kubectl container will run into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that the container reports status Completed and is restarted (the default restartPolicy is Always), resulting in the aforementioned CrashLoopBackOff. To avoid that you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.
I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer if I didn't have to add the special case into the script running inside the container, due to compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: INSTANCE_ID
          value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, and it can be applied to your case as well. It is all about the content of the initContainer spec.
You can use an initContainer to get the hash from the container's hostname and put it into a file on a shared volume that you mount into the container.
In this example the initContainer puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs (a variant that keeps only the random suffix is sketched after the example):
Create the init.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
  - name: init-test
    image: ubuntu
    args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: init-init
    image: busybox
    command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
Create the pod using the following command:
kubectl create -f init.yaml
Check that Pod initialization is done and the Pod is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
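If you only need the random suffix (nffj1) rather than the full Pod name, one possible tweak, assuming your controller is named multiverse as in the question, is to strip the prefix in the init container:
initContainers:
- name: init-init
  image: busybox
  # Strip the "multiverse-" prefix so only the random suffix (e.g. nffj1) is kept.
  command: ["sh", "-c", "h=$(hostname); echo -n INSTANCE_ID=${h#multiverse-} > /data/config"]
  volumeMounts:
  - name: config-data
    mountPath: /data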
I have created a pod with two containers. I know that different containers in a pod share the same network namespace (i.e., the same IP and port space) and can also share a storage volume between them via ConfigMaps. My question is: do the containers in a pod also share the same filesystem? For instance, in my case I have one container 'C1' that generates a dynamic file every 10 min in /var/targets.yml, and I want the other container 'C2' to read this file and perform its own independent action.
Is there a way to do this, maybe some workaround via ConfigMaps? Or do I have to access this file via networking, since each container has its own IP? (But this may not be a good idea when it comes to pod restarts.) Any suggestions or references, please?
You can use an emptyDir for this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: generating-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: gcr.io/google_containers/test-webserver
    name: consuming-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
But be aware that the data isn't persistent: the emptyDir volume is deleted when the Pod is removed from its node (it does survive individual container restarts).
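For example, assuming the images ship a shell, you could verify the sharing like this (targets.yml is just an illustration):
kubectl exec test-pd -c generating-container -- sh -c 'echo data > /cache/targets.yml'
kubectl exec test-pd -c consuming-container -- cat /cache/targets.yml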