can I run a job and a deploy in a single config file/action
Where the deploy will wait for the job to finish and check if it's successful so it can continue with the deployment?
Based on the information you provided, I believe you can achieve your goal using a Kubernetes feature called init containers:
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.
I'll create an initContainer with a busybox image that runs a Linux command to wait for the service mydb to be running before proceeding with the deployment.
Steps to Reproduce:
- Create a Deployment with an initContainer that will run the job that needs to complete before the deployment proceeds:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      restartPolicy: Always
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Many kinds of commands can be used in this field; you just have to pick a Docker image that contains the binary you need (including your Sequelize job).
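For instance (a sketch only; the image name and command below are hypothetical placeholders, not taken from your setup), an initContainer that runs the migration itself rather than just waiting on DNS could look like:
  initContainers:
  - name: run-migrations
    # Hypothetical image that bundles your app code and the Sequelize CLI
    image: myrepo/my-app-migrations:latest
    # Hypothetical invocation; adjust to how your project runs its migrations
    command: ['sh', '-c', 'npx sequelize-cli db:migrate']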
Now let's apply the Deployment and check its status:
$ kubectl apply -f my-app.yaml
deployment.apps/my-app created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 4s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 4s
The pods are held in Init:0/1 status, waiting for the init container to complete.
- Now let's create the service that the initContainer is waiting for before it can complete its task:
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
We will apply it and monitor the changes in the pods:
$ kubectl apply -f mydb-svc.yaml
service/mydb created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
my-app-6b4fb4958f-44ds7 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 Init:0/1 0 91s
my-app-6b4fb4958f-s7wmr 0/1 PodInitializing 0 93s
my-app-6b4fb4958f-44ds7 0/1 PodInitializing 0 94s
my-app-6b4fb4958f-s7wmr 1/1 Running 0 94s
my-app-6b4fb4958f-44ds7 1/1 Running 0 95s
^C
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-app-6b4fb4958f-44ds7 1/1 Running 0 99s
pod/my-app-6b4fb4958f-s7wmr 1/1 Running 0 99s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mydb ClusterIP 10.100.106.67 <none> 80/TCP 14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-app 2/2 2 2 99s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-app-6b4fb4958f 2 2 2 99s
If you need help to apply this to your environment let me know.
Although initContainers are a viable option for this, there is another approach if you use Helm to manage and deploy to your cluster.
Helm has chart hooks that allow you to run a Job before the rest of the chart is installed. You mentioned that this is for a database migration before a service deployment. Some example Helm config to get this done could be...
apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
      - name: platform-migration
        ...
This will run the Job to completion before moving on to the installation / upgrade phases in the Helm chart. You can see there is a 'hook-weight' annotation that allows you to order these hooks if you desire.
In my opinion this is a more elegant solution than init containers, and it allows for better control.
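As a sketch only (the image, command, and backoffLimit below are hypothetical placeholders, not values from your project), the rest of the Job spec for a Sequelize migration might look like:
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: platform-migration
        # Hypothetical image that bundles your application code and the Sequelize CLI
        image: myrepo/my-api:latest
        # Hypothetical invocation; adjust to how your project runs its migrations
        command: ['npx', 'sequelize-cli', 'db:migrate']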
Imagine the following deployment definition in kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    env: staging
spec:
  ...
I have two questions in particular:
1). The label env: staging won't be available in the created pods. How can I access this data programmatically in client-go?
2). When a pod is created/updated, how can I find which deployment it belongs to?
1). The label env: staging won't be available in the created pods. How can I access this data programmatically in client-go?
You can get the Deployment using client-go. See the example Create, Update & Delete Deployment for operations on a Deployment.
2). When a pod is created/updated, how can I find which deployment it belongs to?
When a Deployment is created, a ReplicaSet is created that manages the Pods.
See the ownerReferences field of a Pod to see which ReplicaSet manages it. This is described in How a ReplicaSet works.
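From the command line, a quick way to see the same thing is (the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
In client-go, the same information is available on the Pod object's metadata ownerReferences.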
Hope you are enjoying your Kubernetes journey!
In fact, the label won't automatically be available in the created pods, but you can add it yourself in the pod template section of the manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    # Here you have the deployment labels
    app: nginx
spec:
  selector:
    matchLabels:
      # Here you have the selector that tells the deployment
      # (more exactly, the replicasets of the deployment)
      # which pods to track to check that the number of replicas is respected.
      app: nginx
  ...
  template:
    metadata:
      labels:
        # Here you have the POD labels, which need to match the selector.matchLabels section
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ...
You can check the pods' labels by typing:
❯ k get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 7m13s app=nginx,pod-template-hash=6bdc4445fd
You can get the deployment's labels by typing:
❯ k get deploy --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
nginx-deploy 3/3 3 3 7m39s app=nginx
You can add a custom column to your "kubectl get po" command to display the value of the "app" label for each pod:
❯ k get pod -L app
NAME READY STATUS RESTARTS AGE APP
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 8m30s nginx
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 8m30s nginx
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 8m30s nginx
And you can use multiple -L flags:
❯ k get pod -L app -L test
NAME READY STATUS RESTARTS AGE APP TEST
nginx-deploy-6bdc4445fd-5qlhg 1/1 Running 0 9m46s nginx
nginx-deploy-6bdc4445fd-pgkhb 1/1 Running 0 9m46s nginx
nginx-deploy-6bdc4445fd-xdz59 1/1 Running 0 9m46s nginx
In general, pod names begin with the name of their owner (Deployment, ReplicaSet, StatefulSet, Job, etc.).
When you use a Deployment to create a pod, you can be sure that between the Deployment and the pod there is a ReplicaSet (the Deployment only manages the different versions of the ReplicaSet, while the ReplicaSet only ENSURES that the current number of actual replicas matches the number of replicas requested in the manifest, using label selectors!).
So, in fact, you check the ownerReferences field of a pod by typing:
❯ kubectl get po -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME OWNER OWNER_KIND
nginx-deploy-6bdc4445fd-5qlhg nginx-deploy-6bdc4445fd ReplicaSet
nginx-deploy-6bdc4445fd-pgkhb nginx-deploy-6bdc4445fd ReplicaSet
nginx-deploy-6bdc4445fd-xdz59 nginx-deploy-6bdc4445fd ReplicaSet
You can do the same with ReplicaSets to get their Deployment owner:
❯ kubectl get rs -o custom-columns=NAME:'{.metadata.name}',OWNER:'{.metadata.ownerReferences[0].name}',OWNER_KIND:'{.metadata.ownerReferences[0].kind}'
NAME OWNER OWNER_KIND
nginx-deploy-6bdc4445fd nginx-deploy Deployment
That's how you can quickly see with kubectl who owns whom.
Here is a little reading about owners and dependents: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
Hope this has helped you. bguess
How would I display the available schedulers in my cluster, in order to use a non-default one via the schedulerName field?
Any link to a document describing how to "install" and use a custom scheduler is highly appreciated :)
Thx in advance
Schedulers can be found among your kube-system pods. You can then filter the output to your needs with kube-scheduler as the search key:
➜ ~ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-9wfkp 0/1 Completed 15 264d
coredns-6955765f44-jmz9j 1/1 Running 16 264d
etcd-acid-fuji 1/1 Running 17 264d
kube-apiserver-acid-fuji 1/1 Running 6 36d
kube-controller-manager-acid-fuji 1/1 Running 21 264d
kube-proxy-hs2qb 1/1 Running 0 177d
kube-scheduler-acid-fuji 1/1 Running 21 264d
You can retrieve the yaml file with:
➜ ~ kubectl get pods -n kube-system <scheduler pod name> -oyaml
If you bootstrapped your cluster with kubeadm, you may also find the YAML files in /etc/kubernetes/manifests:
➜ manifests sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.17.6
    imagePullPolicy: IfNotPresent
---------
The location for minikube is similar, but you have to log in to the minikube virtual machine first with minikube ssh.
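For example (a sketch, assuming a kubeadm-style minikube setup with the same manifest path as above):
minikube ssh
sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml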
For more reading, please have a look at how to configure multiple schedulers and how to write custom schedulers.
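Once your custom scheduler is running, you point a pod at it with the schedulerName field; a minimal sketch (my-custom-scheduler is a placeholder for whatever name your scheduler registers with):
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  # Placeholder; must match the name your custom scheduler runs under
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx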
You can try this one:
kubectl get pods --all-namespaces | grep scheduler
Nearly 3 years ago, Kubernetes would not carry out a rolling deployment if you had a single replica (Kubernetes deployment does not perform a rolling update when using a single replica).
Is this still the case? Is there any additional configuration required for this to work?
You are no longer required to have a minimum number of replicas to roll out an update using a Kubernetes rolling update.
I tested it on my lab (v1.17.4) and it worked like a charm having only one replica.
You can test it yourself using this Katacoda lab: Interactive Tutorial - Updating Your App
This lab is set up to create a deployment with 3 replicas. Before starting the lab, edit the deployment, change the number of replicas to one, and then follow the lab steps.
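One way to do that (the deployment name is a placeholder for whatever the lab creates):
kubectl scale deployment <deployment-name> --replicas=1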
I created a lab using a different example, similar to your previous scenario. Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.16.1
        ports:
        - containerPort: 80
Deployment is running with one replica only:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 5s
Here I edited my nginx-deployment.yaml, changed the nginx version to nginx:latest, and rolled out my deployment by running replace:
$ kubectl replace -f nginx-deployment.yaml
deployment.apps/nginx-deployment replaced
Another option is to change the nginx version using the kubectl set image command:
kubectl set image deployment/nginx-deployment nginx-container=nginx:latest --record
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 0/1 ContainerCreating 0 3s
nginx-deployment-6c4699c59c-w8clt 1/1 Running 0 48s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 6s
nginx-deployment-6c4699c59c-w8clt 0/1 Terminating 0 51s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-566d9f6dfc-hmlf2 1/1 Running 0 13s
As you can see, everything worked normally with only one replica.
In the latest version of the documentation we can read:
Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
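If you want to make this behaviour explicit for a single-replica Deployment (so the old Pod is only removed once its replacement is ready), you can spell out the strategy yourself; a minimal sketch that could be merged into the Deployment spec above:
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Never take the only replica down before its replacement is ready
      maxUnavailable: 0
      # Allow one extra Pod to be created during the update
      maxSurge: 1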
Is there a way to request the status of a readinessProbe by using a service name linked to a deployment? In an initContainer, for example?
Imagine we have a deployment X using a readinessProbe, and a service linked to it so we can request, for example, http://service-X:8080.
Now we create a deployment Y, and in its initContainer we want to know if deployment X is ready. Is there a way to ask something like deployment-X.ready or service-X.ready?
I know that the correct way to handle dependencies is to let Kubernetes do it for us, but I have a container that doesn't crash and that I have no control over...
You can add an nginx proxy sidecar on deployment Y.
Set the deploymentY.initContainer.readinessProbe to a port on nginx, and have that port proxied to deployment X's readinessProbe.
Instead of a readinessProbe, you can just use an initContainer.
You create a pod/deployment X, create a service X, and add an initContainer that looks for service X.
If it finds it, the pod's main containers will start.
If it doesn't, it will keep looking until service X is created.
Just a simple example: we create an nginx deployment by using kubectl apply -f nginx.yaml.
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then we create a pod with an initContainer:
initContainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup my-nginx; do echo waiting for myapp-pod2; sleep 2; done;']
The initContainer will look for the my-nginx service; until you create it, the pod will stay in Init:0/1 status.
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/1 0 15m
After you add the service, for example by using kubectl expose deployment/my-nginx, the initContainer will find the my-nginx service and the pod will be created.
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 35m
Result:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/myapp-pod to kubeadm2
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container init-myservice
Normal Started 20s kubelet, kubeadm2 Started container init-myservice
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container myapp-container
Normal Started 20s kubelet, kubeadm2 Started container myapp-container
Let me know if that answers your question.
I finally found a solution by following this link:
https://blog.giantswarm.io/wait-for-it-using-readiness-probes-for-service-dependencies-in-kubernetes/
We first need to create a ServiceAccount in Kubernetes that allows listing Endpoints from an initContainer. After this, we ask for the available endpoints; if there is at least one, the dependency is ready (in my case).
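Roughly, the check inside the initContainer can look like this (a sketch only: it assumes an image that ships curl, a ServiceAccount bound to a Role allowed to get Endpoints, and a hypothetical service name service-X):
# Sketch: wait until the Endpoints object for service-X has at least one ready address
SA=/var/run/secrets/kubernetes.io/serviceaccount
NS=$(cat $SA/namespace)
SERVICE=service-X
until curl -sS --cacert $SA/ca.crt \
      -H "Authorization: Bearer $(cat $SA/token)" \
      "https://kubernetes.default.svc/api/v1/namespaces/$NS/endpoints/$SERVICE" \
      | grep -q '"addresses"'; do
  echo "waiting for ready endpoints of $SERVICE"; sleep 2
done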
I'm trying to deploy my NodeJS application to EKS and run 3 pods with exactly the same container.
Here's the error message:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-deployment-7c86bb474c-5txqq 0/1 Pending 0 18s
cm-deployment-7c86bb474c-cd7qs 0/1 ImagePullBackOff 0 18s
cm-deployment-7c86bb474c-qxglx 0/1 ImagePullBackOff 0 18s
public-api-server-79b7f46bf9-wgpk6 0/1 ImagePullBackOff 0 2m30s
$ kubectl describe pod cm-deployment-7c86bb474c-5txqq
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x4 over 2m55s) default-scheduler 0/3 nodes are available: 3 Insufficient pods.
So it says that 0/3 nodes are available. However, if I run kubectl get nodes --watch:
$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-192-168-163-73.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-172-235.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-184-236.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
all 3 nodes are Ready.
Here are my configurations:
aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: [MY custom role ARN]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cm-literal
  template:
    metadata:
      name: cm-literal-pod
      labels:
        app: cm-literal
    spec:
      containers:
      - name: cm
        image: docker.io/cjsjyh/public_test:1
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        #imagePullSecrets:
        #  - name: regcred
        env:
          [my environment variables]
I applied both .yaml files
How can I solve this?
Thank you
My guess, without running the manifests you've got, is that the image tag 1 on your image doesn't exist, so you're getting ImagePullBackOff, which usually means that the container runtime can't find the image to pull.
Looking at the Docker Hub page, there's no 1 tag there, just latest.
So either removing the tag or replacing 1 with latest may resolve your issue.
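A quick way to confirm and fix this (a sketch; it assumes Docker is available locally and that latest is the tag you actually want to run):
# Check whether the tags can be pulled at all
docker pull docker.io/cjsjyh/public_test:1       # expected to fail if the tag doesn't exist
docker pull docker.io/cjsjyh/public_test:latest  # should succeed
# Point the Deployment at the existing tag and watch the rollout
kubectl set image deployment/cm-deployment cm=docker.io/cjsjyh/public_test:latest
kubectl rollout status deployment/cm-deployment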
I experienced this issue with AWS instance types that have low resources.