I have the following situation:
I have multiple deployments running the same image, configured identically except for the environment variables.
Now my question is: is there an easy way to update, for example, the image of all these deployments instead of doing it one by one?
I haven't found a solution, but I think this should be possible using labels?
Or is there a better way to deploy the same deployment multiple times with only different environment variables?
Yes, you can; this is exactly what labels are for. They are good for grouping similar objects.
Here is a minimal reproducible example.
Create 2 deployments with the same label app=nginx:
$ kubectl run --image=nginx --overrides='{ "metadata": {"labels": {"app": "nginx"}}, "spec":{"template":{"spec": {"containers":[{"name":"nginx-container", "image": "nginx"}]}}}}' nginx-1
deployment.apps/nginx-1 created
$ kubectl run --image=nginx --overrides='{ "metadata": {"labels": {"app": "nginx"}}, "spec":{"template":{"spec": {"containers":[{"name":"nginx-container", "image": "nginx"}]}}}}' nginx-2
deployment.apps/nginx-2 created
Here are our deployments:
$ kubectl get deploy -o wide --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR LABELS
nginx-1 1/1 1 1 20s nginx-container nginx run=nginx-1 app=nginx,run=nginx-1
nginx-2 1/1 1 1 16s nginx-container nginx run=nginx-2 app=nginx,run=nginx-2
Then we can use the set image command and filter the desired deployments by the label app=nginx:
$ kubectl set image deployment -l app=nginx nginx-container=nginx:alpine
deployment.extensions/nginx-1 image updated
deployment.extensions/nginx-2 image updated
And see the results:
$ kubectl get deploy -o wide --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR LABELS
nginx-1 1/1 1 1 6m49s nginx-container nginx:alpine run=nginx-1 app=nginx,run=nginx-1
nginx-2 1/1 1 1 6m45s nginx-container nginx:alpine run=nginx-2 app=nginx,run=nginx-2
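If you also want to verify that each rollout completed, you can additionally run kubectl rollout status against each deployment, e.g.:
$ kubectl rollout status deployment/nginx-1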
I have 2 pods running in the default namespace, as shown below:
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments, I do not see any:
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running k8s on Minikube.
I think these are pods which were spawned without a Deployment, StatefulSet or DaemonSet.
You can run a pod like this with a command such as:
kubectl run nginx-test --image=nginx -n default
Pods created via a DaemonSet usually end with -xxxxx.
Pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx.
Pods created via a StatefulSet usually end with -0, -1, etc.
Pods created without an owning resource simply have the exact name you specified, e.g. nginx-test, nginx, etc.
So my guess is that this is a standalone Pod resource (the last option).
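One way to verify this is to check the pod's ownerReferences field, which is empty for a standalone Pod:
kubectl get pod alpaca-prod -o jsonpath='{.metadata.ownerReferences[*].kind}'
If this prints nothing, the pod was created directly; for a pod managed by a Deployment it would print ReplicaSet.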
I have a k8s cluster with 3 nodes.
Using the kubectl command I enter a pod's shell and do some file editing:
kubectl exec --stdin --tty <pod-name> -- /bin/bash
At this point I have one pod with the correct edits and the other 2 replicas with the old file.
My question is:
Is there a kubectl command that, starting from a specific pod, overwrites the current replicas in the cluster to create n identical pods?
I hope this is clear.
Many thanks in advance,
Manuel
You can use a kubectl plugin called kubectl-tmux-exec.
All information on how to install and use this plugin can be found on GitHub: predatorray/kubectl-tmux-exec.
As described in the How to Install Dependencies documentation, the plugin needs the following programs:
gnu-getopt(1)
tmux(1)
I've created a simple example to illustrate how it works.
Suppose I have a web Deployment and want to create a file named sample-file inside all (3) replicas.
$ kubectl get deployment,pods --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/web 3/3 3 3 19m app=web
NAME READY STATUS RESTARTS AGE LABELS
pod/web-96d5df5c8-5gn8x 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-95r4c 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-wc9k5 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
I have the kubectl-tmux_exec plugin installed, so I can use it:
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-tmux_exec
$ kubectl tmux-exec -l app=web bash
After running the above command, tmux will open and we can modify multiple Pods simultaneously.
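If you prefer not to install the plugin, a plain shell loop over the label-selected pods gives a similar (non-interactive) result; a minimal sketch, assuming the file should be created under /tmp:
for pod in $(kubectl get pods -l app=web -o name); do
  kubectl exec "$pod" -- touch /tmp/sample-file
done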
I'm a beginner with K8s and I have a question about labels in Kubernetes. In a YouTube video (in French, here), I've seen the following:
The presenter creates three deployments with these commands, then runs kubectl get deployments followed by kubectl get deployments --show-labels:
kubectl run monnginx --image nginx --labels "env=prod,group=front"
kubectl run monnginx2 --image nginx --labels "env=dev,group=front"
kubectl run monnginx3 --image nginx --labels "env=prod,group=back"
root@kubmaster:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
monnginx 1/1 1 1 46s
monnginx2 1/1 1 1 22s
monnginx3 1/1 1 1 10s
root@kubmaster:~# kubectl get deployments --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
monnginx 1/1 1 1 46s env=prod,group=front
monnginx2 1/1 1 1 22s env=dev,group=front
monnginx3 1/1 1 1 10s env=prod,group=back
Currently, if I try to do the same thing:
root@kubermaster:~# kubectl run mynginx --image nginx --labels "env=prod,group=front"
pod/mynginx created
root@kubermaster:~# kubectl run mynginx2 --image nginx --labels "env=dev,group=front"
pod/mynginx2 created
root@kubermaster:~# kubectl run mynginx3 --image nginx --labels "env=dev,group=back"
pod/mynginx3 created
When I try the command kubectl get deployments --show-labels, the output is:
No resources found in default namespace.
But if I try kubectl get pods --show-labels, the output is:
NAME READY STATUS RESTARTS AGE LABELS
mynginx 1/1 Running 0 2m39s env=prod,group=front
mynginx2 1/1 Running 0 2m32s env=dev,group=front
mynginx3 1/1 Running 0 2m25s env=dev,group=back
If I follow every step of the video, there should be a way to put labels on deployments... But the command kubectl create deployment does not accept the flag --labels:
Error: unknown flag: --labels
Can someone explain why I get this error, and how to put labels on a deployment?
Thanks a lot!
That's because kubectl create deployment doesn't support the --labels flag, but you can use kubectl label to add labels to your deployment afterwards.
Examples:
# Update deployment 'my-deployment' with the label 'unhealthy' and the value 'true'.
$ kubectl label deployment my-deployment unhealthy=true
# Update deployment 'my-deployment' with the label 'status' and the value 'unhealthy', overwriting any existing value.
$ kubectl label --overwrite deployment my-deployment status=unhealthy
It works with other Kubernetes objects too.
Format: kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
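kubectl label also accepts a label selector, so you can label several matching objects in one go; for example (tier=frontend is just an illustrative label):
# Add tier=frontend to every pod that already carries group=front
kubectl label pods -l group=front tier=frontend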
I think the problem is something different.
Until Kubernetes 1.17, the command kubectl run created a Deployment.
Since Kubernetes 1.18, the command kubectl run creates a Pod.
Release Notes of Kubernetes 1.18
kubectl run has removed the previously deprecated generators, along with flags
unrelated to creating pods. kubectl run now only creates pods. See specific
kubectl create subcommands to create objects other than pods. (#87077,
#soltysh) [SIG Architecture, CLI and Testing]
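So, to reproduce what the video shows on a current Kubernetes version, you can combine both answers: create the Deployments with kubectl create deployment and label them afterwards, e.g.:
kubectl create deployment monnginx --image=nginx
kubectl label deployment monnginx env=prod group=front
kubectl get deployments --show-labels will then list env=prod,group=front next to the app=monnginx label that kubectl create deployment adds by default.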
As of 2022, you can use the following imperative command to create a pod with labels:
kubectl run POD_NAME --image IMAGE_NAME -l app=myapp
where app=myapp is the label as a key=value pair (note that labels use key=value syntax, not key:value).
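You can confirm that the label was applied with:
kubectl get pod POD_NAME --show-labels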
I did a small deployment in K8s using a Docker image, but it is not showing under deployments, only under pods.
Reason: nothing seems to get created under deployments in the default namespace.
Please suggest.
The following are the commands I used:
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node 1/1 Running 0 12s
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-node 1/1 Running 0 9m9s
kube-system event-exporter-v0.2.5-599d65f456-4dnqw 2/2 Running 0 23m
kube-system kube-proxy-gke-hello-world-default-pool-c09f603f-3hq6 1/1 Running 0 23m
$ kubectl get deployments
No resources found in default namespace.
$ kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system event-exporter-v0.2.5 1/1 1 1 170m
kube-system fluentd-gcp-scaler 1/1 1 1 170m
kube-system heapster-gke 1/1 1 1 170m
kube-system kube-dns 2/2 2 2 170m
kube-system kube-dns-autoscaler 1/1 1 1 170m
kube-system l7-default-backend 1/1 1 1 170m
kube-system metrics-server-v0.3.1 1/1 1 1 170m
Arghya Sadhu's answer is correct. In the past, the kubectl run command indeed created a Deployment by default. Back then you could use it with so-called generators and specify exactly what kind of resource you wanted to create by providing the --generator flag followed by the corresponding value. Currently the --generator flag is deprecated and has no effect.
Note that you got quite a clear message after running your kubectl run command:
$ kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080 --namespace=default
pod/hello-node created
It clearly says that the Pod hello-node was created. It doesn't mention a Deployment anywhere.
As an alternative to using imperative commands for creating either Deployments or Pods, you can use the declarative approach:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
  namespace: default
  labels:
    app: hello-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node-container
        image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
        ports:
        - containerPort: 8080
The namespace declaration can be omitted in this case, as by default all resources are deployed into the default namespace.
After saving the file, e.g. as deployment.yaml, you just need to run:
kubectl apply -f deployment.yaml
Update:
Expansion of environment variables within a yaml manifest doesn't actually work, so the following line from the above deployment example cannot be used as-is:
image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
The simplest workaround is a fairly simple sed "trick".
First we need to change the project id placeholder in our deployment definition yaml a bit. It may look like this:
image: gcr.io/{{DEVSHELL_PROJECT_ID}}/hello-node:1.0
Then, when applying the deployment definition, instead of a simple kubectl apply -f deployment.yaml, run this one-liner:
sed "s/{{DEVSHELL_PROJECT_ID}}/$DEVSHELL_PROJECT_ID/g" deployment.yaml | kubectl apply -f -
The above command tells sed to search through the deployment.yaml document for the string {{DEVSHELL_PROJECT_ID}} and, each time it occurs, substitute it with the actual value of the $DEVSHELL_PROJECT_ID environment variable.
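Alternatively, if the envsubst utility (shipped with GNU gettext) is available, you can keep the plain $DEVSHELL_PROJECT_ID placeholder in the yaml and achieve the same substitution with:
envsubst < deployment.yaml | kubectl apply -f -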
Check your version of kubectl using kubectl version.
Since kubectl 1.18, kubectl run creates only a pod and nothing else. To create a deployment, use kubectl create deployment or use an older version of kubectl.
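For example, with the image from the question:
kubectl create deployment hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0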
I do not want to decrease the number of pods controlled by a StatefulSet, and I think that decreasing pods is a dangerous operation in a production environment.
So... is there some way? Thanks.
I'm not sure if this is what you are looking for, but you can scale a StatefulSet.
Use kubectl to scale StatefulSets
First, find the StatefulSet you want to scale.
kubectl get statefulsets <stateful-set-name>
Change the number of replicas of your StatefulSet:
kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
To show you an example, I've deployed a StatefulSet called web with 2 pods:
$ kubectl get statefulsets.apps web
NAME READY AGE
web 2/2 60s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 63s
web-1 1/1 Running 0 44s
$ kubectl describe statefulsets.apps web
Name: web
Namespace: default
CreationTimestamp: Wed, 23 Oct 2019 13:46:33 +0200
Selector: app=nginx
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":2,"select...
Replicas: 824643442664 desired | 2 total
Update Strategy: RollingUpdate
Partition: 824643442984
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Now we scale this StatefulSet up to 5 replicas:
$ kubectl scale statefulset web --replicas=5
statefulset.apps/web scaled
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m41s
web-1 1/1 Running 0 3m22s
web-2 1/1 Running 0 59s
web-3 1/1 Running 0 40s
web-4 1/1 Running 0 27s
$ kubectl get statefulsets.apps web
NAME READY AGE
web 5/5 3m56s
You do not get any downtime in the already-running pods.
i think that decreasing pods is a dangerous operation in production env.
I agree with you.
As Crou wrote, it is possible to do this operation with kubectl scale statefulsets <stateful-set-name>, but this is an imperative operation, and it is not recommended to perform imperative operations in a production environment.
In a production environment it is better to use a declarative operation, e.g. keep the number of replicas in a text file (e.g. stateful-set-name.yaml) and deploy it with kubectl apply -f <stateful-set-name>.yaml. Working this way, it is easy to store the yaml files in Git, so you have full control of all changes and can revert/roll back to a previous configuration. When you store the declarative files in a Git repository, you can use a CI/CD solution, e.g. Jenkins or ArgoCD, to 1) validate the operation (e.g. not allow a decrease) and 2) deploy first to a test environment and verify that it works before applying the changes to the production environment.
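A minimal sketch of what such a declarative file could look like (the names and image are only examples):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx
  replicas: 5   # the only line you change in Git before running kubectl apply
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx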
I recommend the book Kubernetes: Up and Running, 2nd edition, which describes this procedure in Chapter 18 (a new chapter).