I'm trying to set the deployment name as an environment variable using the Downward API, but my container keeps crashing without any logging. I'm using the busybox image to print the environment variables. I've had success with a plain Pod, but no luck with a Deployment. This is my YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      command:
        - sh
        - "-c"
        - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
      containers:
        -
          image: busybox
          name: test-d-container
      env:
        -
          name: MY_DEPLOY_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        -
          name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        -
          name: MY_CLUSTER_NAME
          value: production
What am I missing?
Update:
It is clear that my indentation was messed up, thank you for pointing that out, but the main part of my question is still unanswered: how do I get the deployment name from within my container?
You are using the wrong indentation and structure for Deployment objects.
Both the command key and the env key belong under the container entry, not under the Pod spec.
This is the correct format:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      containers:
        - image: busybox
          name: test-d-container
          command:
            - sh
            - "-c"
            - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
          env:
            -
              name: MY_DEPLOY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            -
              name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            -
              name: MY_CLUSTER_NAME
              value: production
Remember that you can validate your Kubernetes manifests with an online validator, or locally with kubeval.
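For example, a quick local check (assuming kubeval is installed and the manifest is saved as deployment.yaml) would be:
kubeval deployment.yaml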
Referring to the main part of the question, you can get the object that created the Pod, but most likely that will be the ReplicaSet, not the Deployment.
The Pod name is normally generated by Kubernetes and you don't know it beforehand; that's why there is a mechanism to look it up. That is not the case for Deployments: you know a Deployment's name when you create it. I don't think there is a mechanism to get the Deployment name dynamically.
Typically, labels are used in the PodSpec of the Deployment object to add metadata.
You could also try to parse it, since the Pod name (which you do have) always follows the format deploymentName-podTemplateHash-randomSuffix (the first two parts together form the ReplicaSet name).
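For illustration only, here is a minimal sketch of that parsing (my own example, not taken from the thread's manifests): a busybox container strips the trailing pod-template hash and random suffix from its own Pod name. It assumes the Pod really was created by a Deployment and will give the wrong result otherwise.
containers:
  - name: test-d-container
    image: busybox
    command:
      - sh
      - "-c"
      # sed strips the trailing "-<pod-template-hash>-<random-suffix>" from the Pod name
      - "DEPLOY=$(echo $MY_POD_NAME | sed 's/-[^-]*-[^-]*$//'); echo I was created by deployment $DEPLOY; sleep 3600"
    env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name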
I don't see any direct way to get the Deployment name from inside the container. The workaround I'm using relies on Pod labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  template:
    metadata:
      labels:
        app: app1-deployment
    spec:
      containers:
        - name: container1
          image: nginx
          env:
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
...
...
I'm using app as the label name here, but deployment-name would be a clearer naming convention for this.
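For instance, a minimal sketch of that naming convention (my own illustration, not part of the original manifest) would add a dedicated label and reference it the same way:
template:
  metadata:
    labels:
      app: app1-deployment
      deployment-name: app1-deployment
  spec:
    containers:
      - name: container1
        image: nginx
        env:
          - name: DEPLOYMENT_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['deployment-name']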
I am unable to deploy this file using kubectl apply -f.
[screenshot of the Deployment YAML]
I have provided the YAML file required for your deployment. It is important that all the lines are indented correctly. Hyphens (-) indicate a list item. Therefore, it is not required to use them on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used is incorrect: to pull a single key from a ConfigMap, use env with configMapKeyRef inside the container spec, formatted as in the example above (see the DATA env variable).
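If the intent was instead to import every key of a ConfigMap at once, envFrom can be used at the container level. A minimal sketch, assuming the same abc-configmap as above:
containers:
  - name: abc-deployment
    image: anyimage
    envFrom:
      # every key in the ConfigMap becomes an environment variable
      - configMapRef:
          name: abc-configmap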
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.
I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps producing errors that I can't decode. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
    - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
        - image: ghost:4-alpine
          name: ghost
          env:
            - name: database_client
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: client
            - name: database_connection_host
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: host
            - name: database_connection_user
              valueFrom:
                secretKeyRef:tha
            - name: database_connection_password
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcp
            - name: database_connection_database
              valueFrom:
                secretKeyRef:
                  name: eks-keys
                  key: ghostdcd
          ports:
            - containerPort: 2368
              name: ghost
          volumeMounts:
            - name: ghost-persistent-storage
              mountPath: /var/lib/ghost
      volumes:
        - name: ghost-persistent-storage
          persistentVolumeClaim:
            claimName: efs-ghost
I ran this command from the folder containing the file:
kubectl create -f deployment-ghost.yaml --validate=false
service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...
My searches turned up no information on this error, and I can't get the deployment created. Can anyone who understands this help me out?
{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},
Your spec has an error:
...
- name: database_connection_user # <-- The error message points to this env variable
  valueFrom:
    secretKeyRef:
      name: <secret name, eg. eks-keys>
      key: <key in the secret>
...
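Before re-applying, it can also help to confirm that the Secret and the expected keys actually exist (eks-keys is the Secret name used in the question):
kubectl describe secret eks-keys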
I have a StatefulSet with two replicas, and I want to pass the health URL of the other Pod to each one (i.e., in replica 1 I want to know the health URL of replica 2, and vice versa). I did the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dapi
  labels:
    app: dapi
spec:
  serviceName: dapi
  selector:
    matchLabels:
      app: dapi
  template:
    metadata:
      labels:
        app: dapi
    spec:
      containers:
        - name: test-container
          image: busybox:latest
          imagePullPolicy: IfNotPresent
          command: [ "sh", "-c"]
          args:
            - while true; do
                echo -en '\n';
                printenv MY_POD_NAME MASTER_HEALTH_URL;
                sleep 10;
              done;
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MASTER_HEALTH_URL
              value: "http://$(MY_POD_NAME).dapi:8181/health/readiness"
In the above code I am able to pass the Pod name. Since it's a StatefulSet, the Pod names are predictable, so I tried the following, but it's not working:
value: "http://$(for i in 0 1; do echo dapi-$i;done | grep -v $MY_POD_NAME).dapi:8181/health/readiness"
So the above is not working. Is there any way I can achieve this dynamic behavior?
PS: The above is a sample manifest; for the real application I don't have control over entrypoint.sh, so I can't override it.
I am not able to use subPathExpr or subPath in a volume when the kind is Deployment.
I tried using subPath with an environment variable, but the folder was not created with the variable's value; it gets created literally as ${xyz}.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: abc
  template:
    metadata:
      labels:
        app: abc
    spec:
      env:
        - name: NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - mountPath: /opt/logs
          name: abc
          subPath: $(NAME)
      volumes:
        - name: abc
          hostPath:
            path: /opt/abc
            type: Directory
I want to create the volume directory using the Pod hostname, but I'm not able to. For example, if the Pod name is xyzservice-3216544-fv4, I want the volume directory to be /opt/abc/xyzservice-3216544-fv4.
What is your Kubernetes cluster version?
Using subPath with expanded environment variables (subPathExpr) is a new feature, alpha as of v1.14.
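On a cluster where that feature is available, a minimal sketch of the intended usage (my illustration; the busybox image is a placeholder, since the question omits the container) uses subPathExpr instead of subPath:
containers:
  - name: abc
    image: busybox
    env:
      - name: NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: abc
        mountPath: /opt/logs
        # expands to the Pod name, e.g. /opt/abc/xyzservice-3216544-fv4 on the host
        subPathExpr: $(NAME)
volumes:
  - name: abc
    hostPath:
      path: /opt/abc
      type: Directory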
How can I find the name of the Kubernetes Deployment or Job that spawned the current Pod, from within the Pod?
In many cases the hostname of the Pod equals the name of the Pod (you can access it via the HOSTNAME environment variable). However, that's not a reliable way of determining the Pod's identity.
You will want to use the Downward API, which allows you to expose Pod metadata as environment variables and/or as files on a volume.
The name and namespace of a Pod can be exposed as environment variables (fields metadata.name and metadata.namespace), but the information about the creator of a Pod (the kubernetes.io/created-by annotation) can only be exposed as a file.
Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
  labels: {app: busybox}
spec:
  selector: {matchLabels: {app: busybox}}
  template:
    metadata: {labels: {app: busybox}}
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - "sh"
        - "-c"
        - |
          echo "I am $MY_POD_NAME in the namespace $MY_POD_NAMESPACE"
          echo
          grep ".*" /etc/podinfo/*
          while :; do sleep 3600; done
        env:
        - name: MY_POD_NAME
          valueFrom: {fieldRef: {fieldPath: metadata.name}}
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo/
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: "labels"
            fieldRef: {fieldPath: metadata.labels}
          - path: "annotations"
            fieldRef: {fieldPath: metadata.annotations}
To see the output:
$ kubectl logs `kubectl get pod -l app=busybox -o name | cut -d / -f2`
Output:
I am busybox-1704453464-m1b9h in the namespace default
/etc/podinfo/annotations:kubernetes.io/config.seen="2017-02-16T16:46:57.831347234Z"
/etc/podinfo/annotations:kubernetes.io/config.source="api"
/etc/podinfo/annotations:kubernetes.io/created-by="{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"busybox-1704453464\",\"uid\":\"87b86370-f467-11e6-8d47-525400247352\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"191157\"}}\n"
/etc/podinfo/annotations:kubernetes.io/limit-ranger="LimitRanger plugin set: cpu request for container busybox"
/etc/podinfo/labels:app="busybox"
/etc/podinfo/labels:pod-template-hash="1704453464"
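Note that the kubernetes.io/created-by annotation shown above was removed in later Kubernetes releases. From outside the container, the owning ReplicaSet can still be read from the Pod's ownerReferences, for example:
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].name}'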
If you are using the Downward API to get the Deployment name from inside the Pod and you want to avoid the volume-mount approach, there is one opinionated way to expose Deployment info to the Pod as environment variables.
The template labels specified in a Deployment spec are added as Pod labels to each Pod of that Deployment.
For example, the app label below will be added to all Pods of this Deployment:
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
It is a commonly followed convention (again, not necessarily true in your case) to keep the app label value the same as the Deployment name, as shown in the example above. If your Deployments follow this convention (mine did), you can expose this label's value (essentially, the Deployment name) as an environment variable to the Pod using the Downward API.
Continuing the above example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
Again, to be clear, this is not a guaranteed solution to your problem, as it still does not put the actual Deployment name in an env var. It is just an opinionated approach that I found useful and thought would be worth sharing.
In my case there were a lot of Deployments (>20) and I didn't want to manually add the Deployment name as an env variable to each Deployment config. Since my Deployments already followed the above convention, I just copied the bit of YAML specifying the NAMESPACE and DEPLOYMENT_NAME variables into each Deployment config.
References:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api