I have an existing Kubernetes deployment which is running fine. Now I want to edit it and add some new environment variables which I will use in the pod.
Will editing the deployment delete and create a new pod, or will it update the existing pod?
My requirement is that a new pod should be created whenever I edit/update the deployment.
Kubernetes will always recreate your pods when you change or add env vars.
Let's verify this together by creating a deployment without any env var on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let's check and note these pod names so we can compare later:
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-56db997f77-9mpjx   1/1     Running   0          8s
nginx-deployment-56db997f77-mgdv9   1/1     Running   0          8s
nginx-deployment-56db997f77-zg96f   1/1     Running   0          8s
Now let's edit this deployment and include one env var making the manifest look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: STACK_GREETING
          value: "Hello from the MARS"
        ports:
        - containerPort: 80
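How you apply the edit doesn't matter for the result; for reference, either of these standard commands works (the manifest file name below is just an assumption):
kubectl edit deployment nginx-deployment
# or, if you keep the manifest in a file:
kubectl apply -f nginx-deployment.yaml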
After we finish editing, let's check our pod names and see if they changed:
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b4b68cb55-9ll7p   1/1     Running   0          25s
nginx-deployment-5b4b68cb55-ds9kb   1/1     Running   0          23s
nginx-deployment-5b4b68cb55-wlqgz   1/1     Running   0          21s
As we can see, all pod names changed. Let's check if our env var got applied:
$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
The same behavior occurs if you change the var or even remove it: all pods need to be removed and created again for the change to take effect.
If you would like to create an additional, separate pod, you need to create a new deployment for that; by design, a deployment manages the replicas of the pods that belong to it.
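As a side note (not part of the walk-through above), the same env change can be made imperatively, and the rollout can be watched, with standard kubectl subcommands:
# Set (or update) the env var directly on the deployment; this also triggers a rollout
kubectl set env deployment/nginx-deployment STACK_GREETING="Hello from the MARS"
# Watch the old pods being replaced by new ones
kubectl rollout status deployment/nginx-deployment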
Related
I am containerizing spring-boot applications on kubernetes and I want to have a different application property file for each pod replica, since I want a different config for each replica.
Any help on the above would be appreciated.
They're not really replicas if you want a unique configuration for each pod. I think you may be looking for a StatefulSet. Quoting from the docs:
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
For example, given a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  serviceName: "example"
  replicas: 3
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: nginx
        image: docker.io/nginxinc/nginx-unprivileged:mainline
        ports:
        - containerPort: 80
          name: http
I end up with:
$ kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
example-0   1/1     Running   0          34s
example-1   1/1     Running   0          31s
example-2   1/1     Running   0          28s
In each pod, I can look at the value of $HOSTNAME to find my unique name, and I could use that to extract appropriate configuration from a directory path/structured file/etc.
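A minimal sketch of that idea (the ConfigMap name, its keys, and the application flag are all hypothetical, not part of the answer above): mount a ConfigMap holding one file per ordinal and let the entrypoint pick its own file via $HOSTNAME.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # hypothetical name
data:
  example-0.properties: |
    greeting=hello from pod 0
  example-1.properties: |
    greeting=hello from pod 1
  example-2.properties: |
    greeting=hello from pod 2
In the StatefulSet's pod template you would mount this ConfigMap (say at /config) and start the container with something like sh -c 'exec my-app --config=/config/${HOSTNAME}.properties', so example-0 reads example-0.properties, example-1 reads example-1.properties, and so on.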
Say I have a pod YAML such as:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
And a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Now I first create the Pod:
$ kubectl apply -f pod.yaml
And only then the Deployment:
$ kubectl apply -f deployment.yaml
I thought that, since the pod.yaml metadata includes an app: nginx label matching the Deployment's selector, the Deployment controller would only create 2 nginx:1.17.1 pods, but I see that all 3 are created. Why is that?
In addition to the app: nginx label, the Deployment controller also added the pod-template-hash label to each pod that it created.
If we check the labels on the running pods, we can see the pod-template-hash=5d5dd5dd49 label on the my-deployment pods:
$ kubectl get pods --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
my-deployment-5d5dd5dd49-9tbcx   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-b88f4   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-x7n8q   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
nginx                            1/1     Running   0          62s   app=nginx
According to the official documentation:
The pod-template-hash label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
This is why the Deployment's ReplicaSet did not adopt the standalone pod that carries only the app: nginx label, and still created all 3 replicas of its own.
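You can confirm this by inspecting the ReplicaSet's selector, which should print something like the following (using the hash shown above):
$ kubectl describe rs my-deployment-5d5dd5dd49 | grep -i selector
Selector:     app=nginx,pod-template-hash=5d5dd5dd49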
I am using minikube to learn about Docker, but I have come across a problem.
I am following along with the examples in Kubernetes in Action, and I am trying to expose a pod running an image from my Docker Hub account, but I cannot make this pod visible.
If I run
kubectl get pod
I can see that the pod is present.
NAME    READY   STATUS    RESTARTS   AGE
kubia   1/1     Running   1          6d22h
However, when I run the first step to create a service:
kubectl expose rc kubia --type=LoadBalancer --name kubia-http service "kubia-http" exposed
I am getting these errors returned:
Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
Any ideas why I am getting this error and what I need to do to correct it?
I am using minikube v1.13.1 on macOS Mojave (10.14.6), and I can't upgrade because I am using a company-supplied machine, and all updates are controlled by HQ.
The command used in this book is kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1, which used to create a ReplicationController back when the book was written; however, that generator has since been deprecated.
Nowadays the kubectl run command creates a standalone pod without a ReplicationController, so to expose it you should run:
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
To create replicated pods, it is recommended to use a Deployment. To create one from the CLI you can simply run:
kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
It will create a deployment and one pod, which can then be exposed similarly to the pod above:
kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
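Once the service exists, you can check it and open it from the host with standard commands (an extra step, not part of the original answer):
kubectl get svc kubia-http
minikube service kubia-http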
Replication controllers are an older concept than Services and Deployments in Kubernetes; check out this answer.
A service template looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Then, after saving the service config into a file, you run kubectl apply -f <filename>.
Checkout more at: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
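For the kubia pod from the question specifically, the selector has to match the pod's labels; a minimal sketch, assuming the pod was created with kubectl run kubia ... and therefore carries the run=kubia label:
apiVersion: v1
kind: Service
metadata:
  name: kubia-http
spec:
  type: LoadBalancer
  selector:
    run: kubia        # assumed label added by `kubectl run kubia`
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080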
kubia.yaml
https://kubernetes.io/ko/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
shell
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
kubectl apply -f kubia.yaml
kubectl expose deployment kubia --type=LoadBalancer --port 8080 --name kubia-http
minikube tunnel &
curl 127.0.0.1:8080
If you want to change the number of replicas:
change kubia.yaml (3 -> 5)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 5
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
re-apply
kubectl apply -f kubia.yaml
# as-is
NAME                     READY   STATUS    RESTARTS   AGE
kubia-5f896dc5d5-qp7wl   1/1     Running   0          20s
kubia-5f896dc5d5-rqqm5   1/1     Running   0          20s
kubia-5f896dc5d5-vqgj9   1/1     Running   0          20s

# to-be
NAME                     READY   STATUS              RESTARTS   AGE
kubia-5f896dc5d5-fsd49   0/1     ContainerCreating   0          6s
kubia-5f896dc5d5-qp7wl   1/1     Running             0          3m35s
kubia-5f896dc5d5-rqqm5   1/1     Running             0          3m35s
kubia-5f896dc5d5-vqgj9   1/1     Running             0          3m35s
kubia-5f896dc5d5-x84fr   1/1     Running             0          6s
ref: https://github.com/seunggabi/kubernetes-in-action/wiki/2.-Kubernetes
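As a side note, the same scale change can also be made imperatively instead of editing the manifest (standard kubectl, not from the linked wiki):
kubectl scale deployment kubia --replicas=5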
I want to maintain a different configuration for each pod, so I am planning to fetch properties from Spring Cloud Config based on the pod name.
Ex:
Properties in cloud
PodName1.property1 = "xxx"
PodName2.property1 = "yyy";
The property value will be different for each pod. I am planning to fetch properties from the cloud based on the container name, e.g. Environment.get("current pod name" + " propertyName").
So I want to set a fixed hostname/pod name.
If the above is not possible, is there any alternative?
You can use statefulsets if you want fixed pod names for your application.
e.g.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # this will be used as prefix in pod name
spec:
  serviceName: "nginx"
  replicas: 2 # specify number of pods that should be running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
This template will create 2 nginx pods in the default namespace with the following names:
kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          1m
web-1   1/1     Running   0          1m
A basic example can be found here.
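If the application itself also needs to know its pod name (for the PodNameX.property1 lookup), a common sketch is to inject it via the downward API; add something like the following to the container spec above (the POD_NAME variable name is an assumption):
env:
- name: POD_NAME                # hypothetical variable name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name  # resolves to web-0, web-1, ...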
I created a deployment like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: scs-db-sink
spec:
  selector:
    matchLabels:
      app: scs-db-sink
  replicas: 1
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068
kubectl get pods shows me that the pod is running:
scs-db-sink-74c4b6cd6b-tchm9 1/1 Running 0 16m
Question:
How can I set up the pod name to be scs-db-sink-0 and increment to scs-db-sink-1 when I scale up?
Thanks
A Deployment's pods are named <replicaset-name>-<random-suffix>, where the ReplicaSet name is <deployment-name>-<pod-template-hash>. The ReplicaSet is created automatically by the Deployment, so you can't achieve your expected names with a Deployment.
However, you can use a StatefulSet in this case: a StatefulSet's pods are named <statefulset-name>-<ordinal>, exactly as you describe. Read more about StatefulSets here.
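A minimal sketch of the same workload as a StatefulSet (assuming a matching headless Service named scs-db-sink is created alongside it) would name the pods scs-db-sink-0, scs-db-sink-1, and so on:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs-db-sink
spec:
  serviceName: scs-db-sink      # assumes a headless Service with this name
  replicas: 1
  selector:
    matchLabels:
      app: scs-db-sink
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068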