kubernetes: using value of runAsUser in an environment variable using valueFrom?

I have a kubernetes deployment that starts a pod that includes a runAsUser key in its securityContext. I was hoping I could stick this value in the environment of an initContainer using valueFrom, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdeployment
spec:
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: myimage
        securityContext:
          runAsUser: 1000
      initContainers:
      - name: initialize-things
        image: myimage
        env:
        - name: CONTAINER_UID
          valueFrom:
            fieldRef:
              fieldPath: spec.containers[0].securityContext.runAsUser
That doesn't seem to work:
The Deployment "testdeployment" is invalid: spec.template.spec.initContainers[0].env[0].valueFrom.fieldRef.fieldPath: Invalid value: "spec.containers[0].securityContext.runAsUser": error converting fieldPath: field label not supported: spec.containers[0].securityContext.runAsUser
Is there any way to make this work? I'm trying to reduce the number of places I'm hardcoding that UID.

I don't think you can make this work, because the Downward API doesn't support spec.containers[0].securityContext.runAsUser as a field.
By the way, the full path spec.template.spec.containers[0].securityContext.runAsUser would have been more logical in your case, but it won't help either.
As per Capabilities of the Downward API, you are able to use only a few fields.
Information available via fieldRef:
metadata.name
metadata.namespace
metadata.uid
metadata.labels['<KEY>']
metadata.annotations['<KEY>']
In addition, the following information is available through downwardAPI volume fieldRef:
metadata.labels
metadata.annotations
The following information is available through environment variables:
status.podIP
spec.serviceAccountName
spec.nodeName
status.hostIP
You can find a very similar (closed) issue on GitHub: how to get imageID in container

Related

How to get self pod with kubernetes client-go

I have a kubernetes service, written in Go, and am using client-go to access the kubernetes apis.
I need the Pod object for the service's own pod.
The PodInterface allows me to iterate all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.
It appears that by reading /var/run/secrets/kubernetes.io/serviceaccount/namespace and then searching the pods in that namespace for the one matching the hostname, I can determine the "self" pod.
Is this the proper solution?
Expose the POD_NAME and POD_NAMESPACE to your pod as environment variables. Later use those values to get your own pod object.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_NAME MY_POD_NAMESPACE;
        sleep 10;
      done;
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never
Ref: environment-variable-expose-pod-information
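For the client-go side, here is a minimal sketch of using those two variables to fetch the "self" pod object (assuming in-cluster config, a recent client-go, and a service account that is allowed to get pods; the env var names match the manifest above):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Injected via the Downward API fieldRefs shown above.
	name := os.Getenv("MY_POD_NAME")
	namespace := os.Getenv("MY_POD_NAMESPACE")

	// In-cluster config uses the pod's own service account token.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch "self": the pod this code is currently running in.
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, pod.Status.PodIP)
}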

Using the Downward API to access container names in Kubernetes

Is there a programmatic way to get the container name in the pod spec?
I have multiple app containers running in a single POD via Deployment.yaml.
My fluentd instance is running as a sidecar in the deployment. Fluentd needs to collect the logs emitted from these containers. How does it identify a container's name?
The downward API looks promising in this regard. However, the container name is not mentioned in the Capabilities of the Downward API section.
Any workaround solution?
- name: fluentd
  image: fluent/fluentd-kubernetes-daemonset:v1-debian-forward
  env:
  - name: FLUENT_FOWARD_HOST
    value: "10.10.132.59"
  - name: FLUENT_FOWARD_PORT
    value: "24224"
  - name: K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: K8S_POD
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

Using Kubernetes Downward API while using ConfigMap and envFrom configMapRef

I'm using both env and envFrom in my k8s deployment manifest as below.
envFrom:
- configMapRef:
    name: env
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
I know I can inject an env var with the K8s Downward API using env, as below.
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: SOME_ENDPOINT
  value: mysvc.$(POD_NAMESPACE).svc.cluster.local
However, when I try to create the env var using a ConfigMap, as below, I don't get the intended value.
apiVersion: v1
kind: ConfigMap
metadata:
  name: env
data:
  SOME_ENDPOINT: mysvc.$(POD_NAMESPACE).svc.cluster.local
The expected result when I run printenv inside the container is SOME_ENDPOINT set to mysvc.myns.svc.cluster.local, but instead SOME_ENDPOINT contains the literal value mysvc.$(POD_NAMESPACE).svc.cluster.local.
Any solutions for injecting env vars from a ConfigMap together with the Downward API?
Thanks!
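One possible application-side workaround (just a sketch, not from this thread): Kubernetes only expands $(VAR) references that are written directly in the container spec's env values, command, or args, so a value coming from a ConfigMap arrives as the literal string; the application can do the expansion itself at startup, for example:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// expandKubeRefs replaces $(NAME) with the value of the NAME environment
// variable, leaving unknown references untouched.
func expandKubeRefs(s string) string {
	re := regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)
	return re.ReplaceAllStringFunc(s, func(m string) string {
		name := re.FindStringSubmatch(m)[1]
		if v, ok := os.LookupEnv(name); ok {
			return v
		}
		return m
	})
}

func main() {
	// SOME_ENDPOINT arrives from the ConfigMap as the literal
	// "mysvc.$(POD_NAMESPACE).svc.cluster.local".
	endpoint := expandKubeRefs(os.Getenv("SOME_ENDPOINT"))
	fmt.Println(endpoint) // e.g. mysvc.myns.svc.cluster.local
}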

Assigning a unique number in env inside each pod in a kubernetes cluster

I have a Kubernetes cluster in which there will be some pods running. For each pod I want to assign a unique id as an env variable, e.g. pod 1 gets server_id=1, pod 2 gets server_id=2, etc.
Does anyone have an idea how this can be done? I am building my Docker image and deploying to the cluster through GitLab CI.
Adding ENV variables in a Helm or YAML template:
You can add a variable to the YAML file and set it per deployment, if your deployments are different; the pod then reads the value and uses it.
Alternatively, you can use the pod name, which is different for every pod of a deployment; if you are fine with using that, it will be unique:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
        printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
        sleep 10;
      done;
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
  restartPolicy: Never
If you want to set your own variables and values, you have to use different deployments.
If you want a managed sequence, you need to use a StatefulSet.
A StatefulSet gives its pods stable, sequential names, something like:
mypod-0
mypod-1
mypod-2
If that is all you need, you can take the StatefulSet pod name, inject it back into the pod as an environment variable via the Downward API (fieldPath: metadata.name), and have the application use it from there.
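As an illustration of the application side, here is a small sketch that derives the unique number from the StatefulSet pod name (the hostname defaults to the pod name, and the ordinal suffix is guaranteed by the StatefulSet; the names used here are just an example):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// In a StatefulSet, pods are named <statefulset-name>-<ordinal>, e.g. mypod-2,
	// and the hostname defaults to the pod name (MY_POD_NAME via the Downward API works too).
	podName, err := os.Hostname()
	if err != nil {
		panic(err)
	}

	idx := strings.LastIndex(podName, "-")
	if idx < 0 {
		panic("pod name has no ordinal suffix: " + podName)
	}
	serverID, err := strconv.Atoi(podName[idx+1:])
	if err != nil {
		panic(err)
	}
	fmt.Printf("server_id=%d\n", serverID)
}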

Make external API hit from kubernetes

I have a Kubernetes service running, and we have external APIs that depend on this service.
We would like to be notified if there is any service restart. Is there any possibility to hit an API endpoint on every service restart?
Hi and welcome to the community!
There are multiple ways to achieve this. A really simple one (as pointed out by Thomas) is an Init Container. Refer to the Kubernetes docs for more on how to get those running! This init container would do nothing more than send an HTTP request to your external API once the pod is started, and terminate immediately afterwards.
The other way is much more complex and will require you to write some code yourself. What you'd have to do is write your own controller that watches pods through the Kubernetes API and notifies your external service when a pod is rescheduled, killed, dies, etc.
(You could, however, have your external service do exactly that itself by accessing the kube-api directly...)
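Here is a minimal client-go sketch of that controller-style approach, using a plain watch loop rather than a real controller (the namespace, label selector, and notify URL are placeholders, and the pod would need RBAC permission to watch pods):

package main

import (
	"context"
	"fmt"
	"net/http"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch the service's pods (namespace and label selector are placeholders).
	w, err := clientset.CoreV1().Pods("the-project").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "app=some-service",
	})
	if err != nil {
		panic(err)
	}

	for event := range w.ResultChan() {
		pod, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		// Notify the external API when a pod appears or disappears
		// (a production controller would use informers and filter more carefully).
		if event.Type == watch.Added || event.Type == watch.Deleted {
			url := fmt.Sprintf("https://example.com/notify?pod=%s&event=%s", pod.Name, event.Type)
			if resp, err := http.Get(url); err == nil {
				resp.Body.Close()
			}
		}
	}
}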
Expanding on enzian's comment about using initContainers, here is an example using a curl-based initContainer, with metadata exposed as an environment variable and used in the call:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service
  namespace: the-project
  labels:
    app: some-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service
    spec:
      initContainers:
      - name: service-name-init
        image: txn2/curl:v3.0.0
        env:
        - name: SOME_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        command: [
          "/bin/sh",
          "-c",
          "/usr/bin/curl -sX GET example.com/notify/$(SOME_NAME)"
        ]
      containers:
      - name: ok
        image: txn2/ok
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: ok-port
          containerPort: 8080