I have a StatefulSet with two replicas and I want to pass the health URL of the other pod to each pod (i.e. in replica 1 I want to know the health URL of replica 2, and vice versa). I did the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dapi
  labels:
    app: dapi
spec:
  serviceName: dapi
  selector:
    matchLabels:
      app: dapi
  template:
    metadata:
      labels:
        app: dapi
    spec:
      containers:
      - name: test-container
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        command: [ "sh", "-c" ]
        args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_NAME MASTER_HEALTH_URL;
            sleep 10;
          done;
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MASTER_HEALTH_URL
          value: "http://$(MY_POD_NAME).dapi:8181/health/readiness"
In the above manifest I am able to pass the pod name. Since it is a StatefulSet, the pod names are predictable, so I tried the following, but it is not working:
value: "http://$(for i in 0 1; do echo dapi-$i;done | grep -v $MY_POD_NAME).dapi:8181/health/readiness"
So the above is not working. Is there any way I can express this dynamic behaviour?
PS: The above is a sample manifest; for the real application I don't have control over entrypoint.sh, so I can't override it.
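For what it's worth, the $(VAR) syntax inside value: only performs simple substitution of other environment variables; it never runs a shell, which is why the for/grep pipeline above is passed through literally. One possible workaround, sketched here purely as an illustration (the initContainer, volume and file names below are assumptions, not from the original manifest), is to compute the peer's hostname in an initContainer and write the URL to a shared emptyDir, assuming the application can read it from a file:

      # fragment of the StatefulSet pod template spec (illustrative names)
      initContainers:
      - name: peer-url            # hypothetical helper container
        image: busybox:latest
        command: [ "sh", "-c" ]
        args:
        - ORDINAL=${MY_POD_NAME##*-};
          if [ "$ORDINAL" = "0" ]; then PEER=dapi-1; else PEER=dapi-0; fi;
          echo "http://${PEER}.dapi:8181/health/readiness" > /shared/peer-url;
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: shared
          mountPath: /shared
      volumes:
      - name: shared
        emptyDir: {}

The main container would mount the same shared volume and read /shared/peer-url. This only helps if the application can take the URL from a file rather than an environment variable, since an initContainer cannot inject environment variables into another container.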
Related
I have a service running and attached to a pod. In the pod, I need to define an environment variable which has to point back to the service itself. If I run locally, I would set the path to localhost:8080 and it works. How can I set the env variable to point to the service itself?
user#user % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.96.116.26 129.153.28.245 8080:31495/TCP 21h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP,12250/TCP 5d18h
If the configuration is:
spec:
  containers:
    - name: myapp
      image: path/to/imageregistry/image:v1.0.0-amd64
      env:
        - name: BASE_PATH
          value: "129.153.28.245:8080"
The app is working, in the sense that if I open 129.153.28.245:8080/app/pages in the browser it opens the website. If I replace the <EXTERNAL-IP> with the <CLUSTER-IP> it does not load.
How do I retrieve the <EXTERNAL-IP> from the service and put it into an env variable, something like:
env:
  - name: BASE_PATH
    value: "<EXTERNAL-IP-FROM-SERVICE-NAME>:8080"
Or is there another, better approach to do this?
Here's the full Deployment and Service yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: xxx.ocir.io/xxxxxx/myrepo/myimage:v1.0.0-amd64
          env:
            - name: BASE_PATH
              value: "129.153.28.245:8080"
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: ocirsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: myapp
Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or the K8s environment in your application, for example to add Pod information as metadata to logs, such as:
Pod IP
Pod Namespace
Service Account of Pod
But how do you access this information?
All of this Pod information can be made available through the Pod manifest.
There are two ways to expose Pod fields to a running container:
Environment Variables
Volume Files
Environment Variable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      - name: log-sidecar
        image: busybox
        command: [ 'sh', '-c' ]
        args:
        - while true; do
            echo sync app logs;
            printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
            sleep 20;
          done;
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
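For the second option, Volume Files, the same kind of Pod fields can be mounted as files through a downwardAPI volume. A minimal sketch (the pod, volume and mount-path names here are only illustrative, not from the original example):

apiVersion: v1
kind: Pod
metadata:
  name: pod-info-volume
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo    # fields appear as files under this path
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "name"             # /etc/podinfo/name contains the Pod name
        fieldRef:
          fieldPath: metadata.name
      - path: "namespace"        # /etc/podinfo/namespace contains the namespace
        fieldRef:
          fieldPath: metadata.namespace

Note that some fields, such as status.podIP and spec.serviceAccountName, are only available through environment variables, not through downwardAPI volume files.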
Source
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
        # env:
        #   - name: POD_NAME
        #     valueFrom:
        #       fieldRef:
        #         fieldPath: template.metadata.name
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
        # env:
        #   - name: POD_NAME
        #     valueFrom:
        #       fieldRef:
        #         fieldPath: template.metadata.name
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
I want to set the name of the pod as an environment variable, because the containers communicate internally based on the pod name. But when a pod is created by a Deployment, the name cannot be used because a random suffix is appended to the end.
How can I set the pod name?
It's better if you use a StatefulSet instead of a Deployment. A StatefulSet's pod names will be like <statefulsetName>-0, <statefulsetName>-1, and so on. You will also need a headless service (clusterIP: None) to which you can bind your pods; see the docs for more details.
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  labels:
    app: test
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app: test
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-statefulset
  labels:
    app: test
spec:
  replicas: 1
  serviceName: test-svc
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      containers:
      - name: server
        image: test_ml_server:2.3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: hostpath-vol-testserver
          mountPath: /app/test/api
      - name: testdb
        image: test_db:1.4
        ports:
        - name: testdb
          containerPort: 1433
        volumeMounts:
        - name: hostpath-vol-testdb
          mountPath: /var/opt/mssql/data
      volumes:
      - name: hostpath-vol-testserver
        hostPath:
          path: /usr/testhostpath/testserver
      - name: hostpath-vol-testdb
        hostPath:
          path: /usr/testhostpath/testdb
Here, the pod name will be test-statefulset-0.
If you are using kind: Deployment it won't be possible; ideally, in this scenario you should use kind: StatefulSet.
Instead of pod-to-pod communication, you can also use a Kubernetes Service for communication.
Still, a StatefulSet manages the pod names in sequence:
statefulsetname-0
statefulsetname-1
statefulsetname-2
You can't.
It is a property of the Pods of a Deployment that they do not have a stable identity associated with them.
You could have a look at a StatefulSet instead of a Deployment if you want the Pods to have a stable identity.
From the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
So, if you have a StatefulSet object named myapp with two replicas, the pods will be named myapp-0 and myapp-1.
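To build on that, a headless Service in front of the StatefulSet also gives each pod a stable DNS name that peers can use directly. A rough sketch, assuming the StatefulSet is named myapp, its pods carry the label app: myapp, and everything lives in the default namespace (all names are illustrative):

# headless Service selecting the StatefulSet pods
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  clusterIP: None        # headless: no virtual IP, per-pod DNS records instead
  selector:
    app: myapp
  ports:
  - port: 8080

With serviceName: myapp set on the StatefulSet, each replica is then reachable as myapp-0.myapp.default.svc.cluster.local and myapp-1.myapp.default.svc.cluster.local.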
I'm trying to set the deployment name as an environment variable using the downward API, but my container keeps crashing without any logging. I'm using busybox to print the environment variables. I've had success using a Pod but no luck with a Deployment. This is my YAML:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      command:
        - sh
        - "-c"
        - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
      containers:
        -
          image: busybox
          name: test-d-container
          env:
            -
              name: MY_DEPLOY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            -
              name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            -
              name: MY_CLUSTER_NAME
              value: production
What am I missing?
Update:
It is clear that my indentation was messed up, thank you for pointing that out, but the main part of my question is still not clear to me: how do I get the deployment name from within my container?
You are using the wrong indentation and structure for the Deployment object.
Both the command key and the env key belong under the container entry in the containers list.
This is the right format:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-d
  name: test-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-d
  template:
    metadata:
      labels:
        app: test-d
    spec:
      containers:
        - image: busybox
          name: test-d-container
          command:
            - sh
            - "-c"
            - "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
          env:
            - name: MY_DEPLOY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_CLUSTER_NAME
              value: production
Remember that you can validate your Kubernetes manifests using an online validator, or locally using kubeval.
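For instance, a local check could be as simple as this (the manifest filename is just an example):

# validate the manifest against the Kubernetes API schemas
kubeval test-deploy.yaml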
Referring to the main part of the question, you can get the object that created the Pod, but most likely that will be the ReplicaSet, not the Deployment.
The Pod name is normally generated by Kubernetes and you don't know it beforehand; that's why there is a mechanism to get it. But that is not the case for Deployments: you know the name of a Deployment when you create it. I don't think there is a mechanism to get the Deployment name dynamically.
Typically, labels are used in the PodSpec of the Deployment object to add metadata.
You could also try to parse it, since the pod name (which you do have) always follows the format <deploymentName>-<replicaSetHash>-<podSuffix>, so stripping the last two dash-separated segments yields the Deployment name.
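As a small illustration, assuming the pod name is already exposed in a POD_NAME environment variable via the downward API (as in the other snippets here), a POSIX shell one-liner can strip those two segments:

# e.g. POD_NAME=app1-deployment-7d4b9c9f8d-x2x5k  ->  app1-deployment
DEPLOYMENT_NAME=${POD_NAME%-*-*}
echo "$DEPLOYMENT_NAME"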
I don't see any direct way to get the Deployment name from inside the container. The workaround I'm using relies on pod labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
spec:
  template:
    metadata:
      labels:
        app: app1-deployment
    spec:
      containers:
        - name: container1
          image: nginx
          env:
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
...
...
I'm using app as the label name here, but deployment-name might be a better naming convention for this.
I have an Angular app and some Node containers for the backend. In my deployment file, how can I get the backend container's address so that my frontend can connect to it?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: container_imaer_backend
        env:
        - name: IP_BACKEND
          value: here_i_need_my_container_ip_pod
        ports:
        - containerPort: 80
          protocol: TCP
I would recommend using the DNS name instead of the IP; there's more info here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
But basically it's http://<service-name>.<namespace>.svc.cluster.local, so for a Service named frontend in the default namespace it would be http://frontend.default.svc.cluster.local.
It's better this way because the local IP address can change.
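So, assuming the backend pods are exposed through a Service named backend in the default namespace (the Service name is an assumption for illustration, not something given in the question), the env entry could look like this:

env:
- name: IP_BACKEND
  value: "backend.default.svc.cluster.local"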
You could use Pod field values for environment variables (see the downward API docs). That way you can set the Pod IP in an environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
      volumes:
      - name: data
        emptyDir: {}
I'm migrating an application to Docker/Kubernetes. This application has 20+ well-known ports it needs to be accessed on, and it needs to be accessed from outside the Kubernetes cluster. For this, the application writes its publicly accessible IP to a database so the outside service knows how to reach it. The IP is taken from the downward API (status.hostIP).
One solution is defining the well-known ports as (static) nodePorts in the Service, but I don't want this, because it limits the usability of the node: if another service has already claimed one of the well-known ports, the application will not be able to start. Also, because Kubernetes opens these ports on all nodes in the cluster, I can only run one instance of the application per cluster.
Now I want to make the application aware of the port mappings created by the NodePort Service. How can this be done, given that I don't see a hard link between the Service and the StatefulSet objects in Kubernetes?
Here is my (simplified) Kubernetes config:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 6000
    targetPort: 6000
    protocol: TCP
    name: debug-port
  - port: 6789
    targetPort: 6789
    protocol: TCP
    name: traffic-port-1
  selector:
    app: my-app
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-sf
spec:
  serviceName: my-app-svc
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-repo/myapp/my-app:latest
        imagePullPolicy: Always
        env:
        - name: K8S_ServiceAccountName
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: K8S_ServerIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: serverName
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: debug
          containerPort: 6000
        - name: traffic1
          containerPort: 6789
This can be done with an initContainer.
You can define an initContainer that looks up the nodePort and saves it into a directory shared with the main container; the container can then read the nodePort from that directory later. A simple demo looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["sh", "-c", "cat /data/port; while true; do sleep 3600; done"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: tutum/curl
    command: ["sh", "-c", "TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; curl -kD - -H \"Authorization: Bearer $TOKEN\" https://kubernetes.default:443/api/v1/namespaces/test/services/app 2>/dev/null | grep nodePort | awk '{print $2}' > /data/port"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
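Once the Pod is running, you could verify what the initContainer wrote, for example:

# the main container already prints the file at startup; this re-reads it on demand
kubectl exec my-app -- cat /data/port

Keep in mind that on RBAC-enabled clusters the Pod's ServiceAccount will typically need permission to get the Service, otherwise the API call in the initContainer will be rejected.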