I want to pass the physical (node) IP to a dubbo pod through YAML, but the parameter is a fixed value. For example:
dubbo.yaml
spec:
  replicas: 2
  ...
  env:
  - name: PhysicalIP
    value: 192.168.1.1
In the pod, before starting dubbo, I can replace the container IP in /etc/hosts, for example:
echo "replace /etc/hosts"
cp /etc/hosts /etc/hosts.tmp
sed -i "s/.*$(hostname)/${PhysicalIP} $(hostname)/" /etc/hosts.tmp
cat /etc/hosts.tmp > /etc/hosts
The problem: when pods are deployed to host 192.168.1.1 and host 192.168.1.2, the pod on host 192.168.1.2 still gets ${PhysicalIP} = 192.168.1.1. I want ${PhysicalIP} to be 192.168.1.2 on host 192.168.1.2. Is there any way to do this?
You should be able to get information about the pod through Environment Variables or using a DownwardAPIVolumeFile.
Using environment variables:
You should add to your yaml something like this:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
As far as I know, the node name is currently the closest thing to what you need that you can get from inside the container.
Using a DownwardAPIVolumeFile:
You should add to your yaml something like this:
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "nodename"
      fieldRef:
        fieldPath: spec.nodeName
This way the node name will be available inside the container, in a file called nodename under the volume's mount path.
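For completeness, a minimal volume mount for this; the mount path /etc/podinfo is an assumption, pick whatever suits you:
volumeMounts:
- name: podinfo
  mountPath: /etc/podinfo
  readOnly: true
The node name is then readable from /etc/podinfo/nodename.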
The issue #24657 and the pull #42717 on the Kubernetes GitHub are related to this.
As you can see there, access to the node IP through the downward API should be available soon (probably via status.hostIP).
Related
I have a simple StatefulSet with two containers. I just want to share a path between them via an emptyDir volume:
volumes:
- name: shared-folder
  emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
  - sleep
  - "3600"
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
- name: shared-folder
  mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts /cache/$(HOSTNAME) literally. I have also tried getting the pod name and setting it as an env variable, but that doesn't resolve either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use an env variable in the mount path, you can use subPath with expanded environment variables, i.e. subPathExpr (k8s v1.17+).
In your case it would look like following:
containers:
- env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /cache
    name: shared-folder
    subPathExpr: $(MY_POD_NAME)
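With this, each container mounts the $(MY_POD_NAME) subdirectory of the shared volume at /cache, so both containers in a pod share files under a per-pod folder without the variable appearing literally in the path.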
I tested this, and with plain Kubernetes (k8s < 1.17) env variables alone it isn't possible to achieve what you want; basically, the variable only becomes available after the pod gets deployed, and you're referencing it before that happens.
You can use Helm to define your mountPath and StatefulSet with the same value in the values.yaml file, then use that single value for both the mountPath field and the StatefulSet name. You can see more about this here.
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file, in which the referenced environment variables are replaced with actual values before the file is submitted.
As this is a known "quirk" of Kubernetes, there already exist tools to work around this problem. Helm is one of those tools, and it is very pleasant to use.
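For illustration, a minimal sketch of that Helm approach; the value name cachePath and the file layout are assumptions, not part of the original answer:
# values.yaml
cachePath: /cache/pod-0
# templates/statefulset.yaml (fragment)
volumeMounts:
- name: shared-folder
  mountPath: {{ .Values.cachePath }}
The value can be overridden per release, e.g. helm install web . --set cachePath=/cache/web-0. Note that Helm renders the template once at deploy time, so the value is fixed for the whole release.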
I have an NFS physical volume that my pods can all access via a PVC; files are kept after pods are destroyed.
I want each pod to be able to put its files under a unique subdirectory.
Is there any way to dynamically use, say, metadata.uid or metadata.name in the mountPath for the container? I.e., conceptually this:
volumeMounts:
- name: persistent-nfs-storage
  mountPath: /metadata.name/files
I think I can see how to handle first making the directory, by using an init container and putting the value into the environment using the downward API. But I don't see any way to utilize it in a PVC mountPath.
Thanks for any help.
I don't know if it is possible to use the pod name in a volume mountPath. But if the intention is to write files into a separate folder (named after the pod) of the same PVC, there are workarounds.
One way to achieve it is by getting the base path and pod name from env variables, appending them, and then writing to that directory.
In detail:
volumeMounts:
- name: persistent-nfs-storage
  mountPath: /nfs/directory
ENVs:
env:
- name: NFS_DIR
  value: /nfs/directory
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: WRITE_PATH
  value: "$(NFS_DIR)/$(POD_NAME)"
Note that NFS_DIR and POD_NAME must be declared before WRITE_PATH, because $(VAR) references only resolve to variables defined earlier in the list.
In the application, use the $WRITE_PATH directory to write your files. Also, if necessary, create this directory from an init container.
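A minimal sketch of such an init container, reusing the volume and env setup above (the container name and busybox image are illustrative choices):
initContainers:
- name: create-write-dir
  image: busybox
  # $NFS_DIR and $POD_NAME are expanded by the shell at runtime, not by Kubernetes
  command: ['sh', '-c', 'mkdir -p "$NFS_DIR/$POD_NAME"']
  env:
  - name: NFS_DIR
    value: /nfs/directory
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: persistent-nfs-storage
    mountPath: /nfs/directory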
For logs, I mount a volume from the host onto the pod. This is written in the deployment YAML.
But if my 2 pods run on the same host, there will be a conflict, as both pods will produce log files with the same name.
Can I use some dynamic variables in the deployment file so that the mount on the host is created with a different name for different pods?
You can use subPathExpr to achieve uniqueness in the absolute path; this is one of the use cases of this feature. As of now it is alpha in k8s 1.14.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
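If you want to verify the expansion worked (a hypothetical check, not part of the original answer), read the file back through the container and on the node:
kubectl exec pod1 -- cat /logs/hello.txt
cat /var/log/pods/pod1/hello.txt   # on the node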
Look at pod affinity/anti-affinity to avoid scheduling replicas on the same node; that way each replica of a specific deployment gets deployed on a separate node, and you will not have to worry about the same folder being used by multiple pods. A sketch of such a rule follows below.
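A minimal sketch of such an anti-affinity rule (the app: my-app label is an assumption; it must match your pod template's labels):
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname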
I had to spend hours on this; your solution worked like a charm!
I had tried the following; none worked, despite being given in multiple documents:
subPathExpr: "$POD_NAME"
subPathExpr: $POD_NAME
subPathExpr: ${POD_NAME}
Finally, this worked: subPathExpr: $(POD_NAME)
I am running a Go app that creates node-specific Prometheus metrics, and I want to be able to add the node IP as a label.
Is there a way to capture the Node IP from within the Pod?
The accepted answer didn't work for me, it seems fieldPath is required now:
env:
- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Is there a way to capture the Node IP from within the Pod?
Yes, easily, using env: valueFrom: fieldRef: with status.hostIP; the whole(?) list is presented in the EnvVarSource docs, I guess because objectFieldSelector can appear in multiple contexts.
so:
containers:
- env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
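To confirm the variable is populated (a hypothetical check, not from the original answer):
kubectl exec <pod-name> -- printenv NODE_IP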
I am trying to deploy the Kong API Gateway via a template to my OpenShift project. The problem is that Kong seems to be doing some DNS stuff that causes sporadic failures of DNS resolution. The workaround is to use the FQDN (<name>.<project_name>.svc.cluster.local). So, in my template I would like to do:
- env:
  - name: KONG_DATABASE
    value: postgres
  - name: KONG_PG_HOST
    value: "{APP_NAME}.{PROJECT_NAME}.svc.cluster.local"
I am just not sure how to get the current PROJECT_NAME, or if perhaps there is a default set of available parameters...
You could read the namespace (project name) from the Kubernetes downward API into an environment variable and then use that in the value.
See the OpenShift docs here for example.
Update based on Clayton's comment:
Tested and the following snippet from the deployment config works.
- env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: EXAMPLE
    value: example.$(MY_POD_NAMESPACE)
Inside the running container:
sh-4.2$ echo $MY_POD_NAMESPACE
testing
sh-4.2$ echo $EXAMPLE
example.testing
In the environment screen of the UI, it appears as a string value such as example.$(MY_POD_NAMESPACE).