I have a Kubernetes service running, and we have external APIs that depend on this service.
We would like to be notified whenever the service restarts. Is there a way to hit an API endpoint on every service restart?
Hi and welcome to the community!
There are multiple ways to achieve this. A really simple one (as pointed out by Thomas) is an Init Container. Refer to the Kubernetes docs for more on how to get those running! This init container would do nothing more than send an HTTP request to your external API once the pod is started, and terminate immediately afterwards.
The other way is much more complex and will require you to write some code yourself. What you'd have to do is write your own controller that watches the Pods through the Kubernetes API and notifies your external service when a pod is rescheduled, killed, dies, etc.
(You could, however, have your external service do exactly that by accessing the kube-api directly...)
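If you go the controller route, here is a minimal sketch of the idea using client-go. It assumes in-cluster access, a namespace and label selector matching your service, and a hypothetical https://example.com/notify endpoint on your external API:

package main

import (
    "context"
    "fmt"
    "net/http"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Runs inside the cluster with a service account allowed to watch pods.
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Watch only the pods of the service we care about (namespace and label are assumptions).
    watcher, err := clientset.CoreV1().Pods("the-project").Watch(context.TODO(),
        metav1.ListOptions{LabelSelector: "app=some-service"})
    if err != nil {
        panic(err)
    }

    for event := range watcher.ResultChan() {
        pod, ok := event.Object.(*corev1.Pod)
        if !ok {
            continue
        }
        // Notify the external API about the pod event (the endpoint is hypothetical).
        url := fmt.Sprintf("https://example.com/notify?pod=%s&event=%s", pod.Name, event.Type)
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("notify failed:", err)
            continue
        }
        resp.Body.Close()
    }
}

A production controller would normally use an informer (with resync and reconnect handling) rather than a raw watch, but the principle is the same.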
Expanding on enzian's comment about using initContainers, here is an example using a curl-based initContainer, with Downward API metadata exposed as an environment variable and passed in the call:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service
  namespace: the-project
  labels:
    app: some-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service
    spec:
      initContainers:
      - name: service-name-init
        image: txn2/curl:v3.0.0
        env:
        - name: SOME_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        command: [
          "/bin/sh",
          "-c",
          "/usr/bin/curl -sX GET example.com/notify/$(SOME_NAME)"
        ]
      containers:
      - name: ok
        image: txn2/ok
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: ok-port
          containerPort: 8080
Related
I have a kubernetes service, written in Go, and am using client-go to access the kubernetes apis.
I need the Pod object of the service's own pod.
The PodInterface allows me to iterate all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.
It appears that by reading /var/run/secrets/kubernetes.io/serviceaccount/namespace and searching the pods in that namespace for the one matching the hostname, I can determine the "self" pod.
Is this the proper solution?
Expose the POD_NAME and POD_NAMESPACE to your pod as environment variables. Later use those values to get your own pod object.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c" ]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_NAME MY_POD_NAMESPACE;
        sleep 10;
      done;
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never
Ref: environment-variable-expose-pod-information
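Assuming the MY_POD_NAME and MY_POD_NAMESPACE variables from the manifest above and in-cluster client-go access, a minimal sketch for fetching your own Pod object could look like this:

package main

import (
    "context"
    "fmt"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Injected by the Downward API as shown in the manifest above.
    name := os.Getenv("MY_POD_NAME")
    namespace := os.Getenv("MY_POD_NAMESPACE")

    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Fetch the "self" Pod object directly; no need to iterate over all pods.
    pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("running on node:", pod.Spec.NodeName)
}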
I have a kubernetes deployment that starts a pod that includes a runAsUser key in its securityContext. I was hoping I could stick this value in the environment of an initContainer using valueFrom, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdeployment
spec:
  template:
    spec:
      containers:
      - name: myservice
        image: myimage
        securityContext:
          runAsUser: 1000
      initContainers:
      - name: initialize_things
        image: myimage
        env:
        - name: CONTAINER_UID
          valueFrom:
            fieldRef:
              fieldPath: spec.containers[0].securityContext.runAsUser
That doesn't seem to work:
The Deployment "testdeployment" is invalid: spec.template.spec.initContainers[0].env[0].valueFrom.fieldRef.fieldPath: Invalid value: "spec.containers[0].securityContext.runAsUser": error converting fieldPath: field label not supported: spec.containers[0].securityContext.runAsUser
Is there any way to make this work? I'm trying to reduce the number of places I'm hardcoding that UID.
I think you can't make this work, because the Downward API doesn't support spec.containers[0].securityContext.runAsUser as a field.
By the way, in your case the full path, i.e. spec.template.spec.containers[0].securityContext.runAsUser, would be more logical, but either way it won't help.
As per Capabilities of the Downward API, you are able to use only a few fields.
Information available via fieldRef:
metadata.name
metadata.namespace
metadata.uid
metadata.labels['<KEY>']
metadata.annotations['<KEY>']
In addition, the following information is available through downwardAPI volume fieldRef:
metadata.labels
metadata.annotations
The following information is available through environment variables:
status.podIP
spec.serviceAccountName
spec.nodeName
status.hostIP
You can find a very similar issue closed on GitHub: how to get imageID in container
I tried running the pod (https://www.consul.io/docs/platform/k8s/run.html)
It failed with... containers with unready status: [consul]
kubectl create -f consul-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
  - name: example
    image: "consul:latest"
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command:
    - "/bin/sh"
    - "-ec"
    - |
      export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
      consul kv put hello world
  restartPolicy: Never
I'm trying to set the deployment name as an environment variable using the Downward API, but my container keeps crashing without any logging. I'm using busybox to print the environment variables. I've had success with a Pod but no luck with a Deployment. This is my YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-d
name: test-deploy
spec:
replicas: 1
selector:
matchLabels:
app: test-d
template:
metadata:
labels:
app: test-d
spec:
command:
- sh
- "-c"
- "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
containers:
-
image: busybox
name: test-d-container
env:
-
name: MY_DEPLOY_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
-
name: MY_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-
name: MY_CLUSTER_NAME
value: production
What am I missing?
Update:
It is clear that my indentation was messed up, thank you for pointing that out, but the main part of my question is still unclear to me: how do I get the deployment name from within my container?
You are using the wrong indentation and structure for Deployment objects.
Both the command key and the env key belong under the container entry.
This is the right format:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      containers:
      - image: busybox
        name: test-d-container
        command:
        - sh
        - "-c"
        - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
        env:
        - name: MY_DEPLOY_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_CLUSTER_NAME
          value: production
Remember that you can validate your Kubernetes manifests using this online validator, or locally using kubeval.
Referring to the main part of the question, you can get the object that created the Pod, but most likely that will be the ReplicaSet, not the Deployment.
The Pod name is normally generated by Kubernetes and you don't know it beforehand; that's why there is a mechanism to get the name. But that is not the case for Deployments: you know the name of a Deployment when you create it. I don't think there is a mechanism to get the Deployment name dynamically.
Typically, labels are used in the PodSpec of the Deployment object to add metadata.
You could also try to parse it, since the Pod name (which you have) always has the format deploymentName-podTemplateHash-randomSuffix (the ReplicaSet name being deploymentName-podTemplateHash).
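A minimal Go sketch of that parsing idea (a heuristic, not an API guarantee; it assumes the standard deploymentName-podTemplateHash-podSuffix naming and that the pod name equals the hostname):

package main

import (
    "fmt"
    "os"
    "strings"
)

// deploymentNameFromPod strips the last two dash-separated segments
// (pod-template hash and pod suffix) from a Deployment-managed pod name.
// It is only a heuristic and breaks for pods not created by a Deployment.
func deploymentNameFromPod(podName string) string {
    parts := strings.Split(podName, "-")
    if len(parts) < 3 {
        return podName
    }
    return strings.Join(parts[:len(parts)-2], "-")
}

func main() {
    // The pod name is usually the hostname; it could also come from the Downward API.
    podName, _ := os.Hostname()
    fmt.Println(deploymentNameFromPod(podName))
}

A more robust alternative is to follow the Pod's ownerReferences through the API (Pod → ReplicaSet → Deployment), which is the "object that created the Pod" mentioned above.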
I don't see any direct way to get the Deployment name from within the container. The workaround that I'm using relies on Pod labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  template:
    metadata:
      labels:
        app: app1-deployment
    spec:
      containers:
      - name: container1
        image: nginx
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
...
...
I'm using app as the label name here, but something like deployment-name might be a clearer naming convention for this purpose.
I have a Deployment with three replicas, each started on a different node, behind an Ingress. For testing and troubleshooting, I want to see which pod/node served my request. How is this possible?
The only way I know is to open the logs on all of the pods, make my request, and search for the pod that has my request in its access log. But this is complicated and error prone, especially on production apps receiving requests from other users.
I'm looking for something like a HTTP Response header like this:
X-Kubernetes-Pod: mypod-abcdef-23874
X-Kubernetes-Node: kubw02
AFAIK, there is no feature like that out of the box.
The easiest way I can think of is adding this information as headers yourself, from your API.
You technically have to Expose Pod Information to Containers Through Environment Variables and then read it in your code to add the headers to the response.
It would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c" ]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
        printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
        sleep 10;
      done;
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
  restartPolicy: Never
And in your API, you read that information from the environment and insert it into the response headers.
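For example, a minimal Go handler (assuming the MY_POD_NAME and MY_NODE_NAME variables from the manifest above) could set the headers like this:

package main

import (
    "log"
    "net/http"
    "os"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Values are injected via the Downward API env vars from the manifest above.
        w.Header().Set("X-Kubernetes-Pod", os.Getenv("MY_POD_NAME"))
        w.Header().Set("X-Kubernetes-Node", os.Getenv("MY_NODE_NAME"))
        w.Write([]byte("served by " + os.Getenv("MY_POD_NAME") + "\n"))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}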