Kubernetes Zero Downtime deployment not working - Gives 503 Service Temporarily Unavailable

I am trying to achieve a zero-downtime deployment using Kubernetes, but every time I upgrade the deployment with a new image I see 2-3 seconds of downtime. I am testing this with a Hello-World sort of application but still could not achieve it. I am deploying my application using Helm charts.
Following online blogs and resources, I am using a readiness probe and a RollingUpdate strategy in my Deployment.yaml file, but this gives me no success.
I have created a /health endpoint which simply returns a 200 status code as the readiness probe check. I expected that after using readiness probes and the RollingUpdate strategy in Kubernetes I would achieve zero downtime for my service when I upgrade the container image. Requests to my service go through an Amazon ELB.
Deployment.yaml file is as below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wine-deployment
  labels:
    app: wine-store
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: wine-store
  replicas: 2
  template:
    metadata:
      labels:
        app: wine-store
    spec:
      containers:
        - name: {{ .Chart.Name }}
          resources:
            limits:
              cpu: 250m
            requests:
              cpu: 200m
          image: "my-private-image-repository-with-tag-and-version-goes-here-which-i-have-hidden-here"
          imagePullPolicy: Always
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 8089
              name: testing-port
          readinessProbe:
            httpGet:
              path: /health
              port: 8089
            initialDelaySeconds: 3
            periodSeconds: 3
Service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: wine-service
  labels:
    app: wine-store
spec:
  ports:
    - port: 80
      targetPort: 8089
      protocol: TCP
  selector:
    app: wine-store
Ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wine-ingress
  annotations:
    kubernetes.io/ingress.class: public-nginx
spec:
  rules:
    - host: my-service-my-internal-domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: wine-service
              servicePort: 80
I expect the downtime to be zero when I upgrade the image using the helm upgrade command. While the upgrade is in progress, I continuously hit my service using a curl command. This curl command gives me 503 Service Temporarily Unavailable errors for 2-3 seconds and then the service is up again. I expect this downtime not to happen.

This issue is caused by the Service VIP using iptables. You haven't done anything wrong - it's a limitation of current Kubernetes.
When the readiness probe on the new pod passes, the old pod is terminated and kube-proxy rewrites the iptables rules for the service. However, a request can hit the service after the old pod is terminated but before iptables has been updated, resulting in a 503.
A simple workaround is to delay termination by using a preStop lifecycle hook:
lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-c", "sleep 10"]
It's probably not relevant in this case, but implementing graceful termination in your application is a good idea anyway: intercept the TERM signal and wait for your application to finish handling any requests it has already received rather than exiting immediately.
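If the preStop sleep plus in-flight request handling could exceed the default 30-second grace period, you may also need to raise terminationGracePeriodSeconds. A minimal sketch of how the pieces fit into the pod template (the 10/60 values are illustrative, not tuned for your workload):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60   # total time between the start of termination and SIGKILL
      containers:
        - name: {{ .Chart.Name }}
          lifecycle:
            preStop:
              exec:
                command: ["/bin/bash", "-c", "sleep 10"]   # runs before TERM; counts against the grace period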
Alternatively, more replicas, a low maxUnavailable and a high maxSurge will all reduce the probability of requests hitting a terminating pod.
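For example, a sketch of those settings in the deployment spec (the replica count and surge value are illustrative, not tuned for any particular workload):

spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod out of rotation early
      maxSurge: 2         # start extra new pods before old ones are terminated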
For more info:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables
https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
Another answer mistakenly suggests that you need a liveness probe. While it's a good idea to have a liveness probe, it won't affect the issue that you are experiencing. With no liveness probe defined, the default state is Success.
In the context of a rolling deployment, a liveness probe is irrelevant: once the readiness probe on the new pod passes, the old pod is sent the TERM signal and iptables is updated. Now that the old pod is terminating, any liveness probe on it is irrelevant, since its only function is to cause a container to be restarted when the probe fails.
Any liveness probe on the new pod is likewise irrelevant. When the pod is first started it is considered live by default. Only after the initialDelaySeconds of the liveness probe would it start being checked and, if it failed, the container would be restarted.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes

Consider blue-green deployments instead, because even when pods are up it may take time for kube-proxy to start forwarding requests to the new pod IPs.
So set up a new deployment and, once all of its pods are up, switch the service selector to the new pods' labels (see the sketch below).
Follow: https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/
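A minimal sketch of the selector switch, reusing the names from the question and assuming a hypothetical version label that distinguishes the two deployments:

apiVersion: v1
kind: Service
metadata:
  name: wine-service
spec:
  selector:
    app: wine-store
    version: blue        # once the "green" deployment's pods are Ready, change this to: green
  ports:
    - port: 80
      targetPort: 8089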

The problem you describe indicates an issue with the readiness probes. It is important to understand the difference between liveness and readiness probes, and you should implement and configure both.
The liveness probe checks whether the container is started and alive. If this isn't the case, kubernetes will eventually restart the container.
The readiness probe in turn also checks dependencies like database connections or other services your container depends on to fulfill its work. As a developer you have to invest more time into its implementation than for the liveness probe: you have to expose an endpoint that also checks those dependencies when queried.
Your current configuration uses a health endpoint of the kind usually used by liveness probes. It probably doesn't check whether your service is really ready to take traffic.
Kubernetes relies on the readiness probes: during a rolling update, it will keep the old container up and running until the new service declares that it is ready to take traffic. Therefore the readiness probes have to be implemented correctly.
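A sketch of what that split could look like for the container in the question, assuming a hypothetical /ready endpoint that also verifies downstream dependencies (paths and timings are illustrative):

readinessProbe:
  httpGet:
    path: /ready          # hypothetical endpoint that also checks databases and downstream services
    port: 8089
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3     # pod is removed from the Service endpoints after 3 consecutive failures
livenessProbe:
  httpGet:
    path: /health         # lightweight "process is alive" check
    port: 8089
  initialDelaySeconds: 10
  periodSeconds: 10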

Related

Debugging slow Kubernetes deployment

We are using K8S in a managed Azure environment, Minikube in Ubuntu, and a Rancher cluster built on on-prem machines. In general, our deployments take about 30 seconds to pull containers, start up and become ready. However, my latest attempt to create a deployment (on-prem) takes upwards of a minute and sometimes longer. It is a small web service which is very similar to our other deployments. The only (obvious) difference is the use of a startup probe and a liveness probe; although some of our other services do have probes, they are different.
After removing Octopus deploy from the equation by extracting the yaml it was running and using kubectl, as soon as the (single) pod starts, I start reading the logs and as expected, the startup and liveness probes are called very quickly. Startup succeeds and the cluster starts calling the live probe, which also succeeds. However, if I use kubectl describe on the pod, it shows Initialized and PodScheduled as True but ContainersReady (there is one container) and Ready are both false for around a minute. I can't see what would cause this other than probe failures but these are logged as successful.
They eventually start and work OK but I don't know why they take so long.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: 'redirect-files-deployments-28775'
  labels:
    Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
    OtherOctopusLabels
spec:
  replicas: 1
  selector:
    matchLabels:
      Octopus.Kubernetes.DeploymentName: 'redirect-files-deployments-28775'
  template:
    metadata:
      labels:
        Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
        OtherOctopusLabels
    spec:
      containers:
        - name: redirect-files
          image: ourregistry.azurecr.io/microservices.redirectfiles:1.0.34
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          env:
            - removed connection strings etc
          livenessProbe:
            httpGet:
              path: /api/version
              port: 80
              scheme: HTTP
            successThreshold: 1
          startupProbe:
            httpGet:
              path: /healthcheck
              port: 80
              scheme: HTTP
              httpHeaders:
                - name: X-SS-Authorisation
                  value: asdkjlkwe098sad0akkrweklkrew
            initialDelaySeconds: 5
            timeoutSeconds: 5
      imagePullSecrets:
        - name: octopus-feedcred-feeds-azure-container-registry
So the cause was the startup and/or liveness probes. When I removed them, the deployment time went from over a minute to 18 seconds, despite the logs proving that the probes were called successfully very quickly after containers were started.
At least I now have something more concrete to look for.

kubernetes - container busy with request, then request should route to another container

I am very new to k8s and Docker, but I have a task on k8s and am now stuck on a use case:
if a container is busy with requests, then incoming requests should be redirected to another container.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twopoddeploy
  namespace: twopodns
spec:
  selector:
    matchLabels:
      app: twopod
  replicas: 1
  template:
    metadata:
      labels:
        app: twopod
    spec:
      containers:
        - name: secondcontainer
          image: "docker.io/tamilpugal/angmanualbuild:latest"
          env:
            - name: "PORT"
              value: "24244"
        - name: firstcontainer
          image: "docker.io/tamilpugal/angmanualbuild:latest"
          env:
            - name: "PORT"
              value: "24243"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: twopodservice
spec:
  type: NodePort
  selector:
    app: twopod
  ports:
    - nodePort: 31024
      protocol: TCP
      port: 82
      targetPort: 24243
From deployment.yaml, I created a pod with two containers from the same image. The idea and use case is that when firstcontainer is unreachable or busy, secondcontainer should handle the incoming requests. So (only to check our use case) I deleted firstcontainer using docker container rm -f id_of_firstcontainer. Now the site is not reachable until Docker recreates firstcontainer. But I need k8s to redirect the requests to secondcontainer instead of waiting for firstcontainer.
Then I googled for a solution and found Ingress and liveness/readiness probes. But Ingress routes requests based on the path rather than container status, and liveness/readiness probes only recreate the container. Some Stack Overflow questions mention an nginx server, but I had no luck. That is why I created a new question.
So my question is: how do I configure the two containers to reduce the downtime?
What keywords should I search for to try to solve this myself?
Thanks,
Pugal.
A service can load-balance between multiple pods. You should delete the second copy of the container inside the deployment spec, but also change it to have replicas: 2. Now your deployment will launch two identical pods, but the service will match both of them, and requests will go to both.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: onepoddeploy
  namespace: twopodns
spec:
  selector: { ... }
  replicas: 2 # not 1
  template:
    metadata: { ... }
    spec:
      containers:
        - name: firstcontainer
          image: "docker.io/tamilpugal/angmanualbuild:latest"
          env:
            - name: "PORT"
              value: "24243"
      # no secondcontainer
This means if two pods aren't enough to handle your load, you can kubectl scale deployment -n twopodns onepoddeploy --replicas=3 to increase the replica count. If you can tell from CPU utilization or another metric when you're getting to "not enough", you can configure the horizontal pod autoscaler to adjust the replica count for you.
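A minimal HorizontalPodAutoscaler sketch for that deployment, assuming the metrics server is installed (the CPU target and replica bounds are illustrative):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: onepoddeploy-hpa
  namespace: twopodns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: onepoddeploy
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # scale up when average CPU utilization crosses 70%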
(You will usually want only one container per pod. There's no way to share traffic between containers in the way you're suggesting here, and it's helpful to be able to independently scale components. This is doubly true if you're looking at a stateful component like a database and a stateless component like an HTTP service. Typical uses for multiple containers are things like log forwarders and network proxies, that are somewhat secondary to the main operation of the pod, and can scale or be terminated along with the primary container.)
There are two important caveats to running multiple pod replicas behind a service. The service load balancer isn't especially clever, so if one of your replicas winds up working on intensive jobs and the other is more or less idle, they'll still each get about half the requests. Also, if your pods are configured for HTTP health checks (recommended), and a pod is backed up to the point where it can't handle requests, it will also be unable to answer its health checks, and Kubernetes will kill it off.
You can help Kubernetes here by trying hard to answer all HTTP requests "promptly" (aiming for under 1000 ms always is probably a good target). This can mean returning a "not ready yet" response to a request that triggers a large amount of computation. This can also mean rearranging your main request handler so that an HTTP request thread isn't tied up waiting for some task to complete.

Is there a way to do a load balancing between pod in multiple nodes?

I have a kubernetes cluster deployed with rke which is composed of 3 nodes on 3 different servers, and on each server there is 1 pod running yatsukino/healthereum, which is a personal modification of ethereum/client-go:stable.
The problem is that I don't understand how to add an external IP to send requests to the pods.
My pods can be in 3 states:
they are syncing the ethereum blockchain
they restarted because of a sync problem
they are synced and everything is fine
I don't want my load balancer to forward requests to pods in the first two states; only the third state should count as up to date.
I've been searching the kubernetes docs but (maybe because of a misunderstanding) I only find load balancing for pods inside a single node.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: goerli
  name: goerli-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goerli
  template:
    metadata:
      labels:
        app: goerli
    spec:
      containers:
        - image: yatsukino/healthereum
          name: goerli-geth
          args: ["--goerli", "--datadir", "/app", "--ipcpath", "/root/.ethereum/geth.ipc"]
          env:
            - name: LASTBLOCK
              value: "0"
            - name: FAILCOUNTER
              value: "0"
          ports:
            - containerPort: 30303
              name: geth
            - containerPort: 8545
              name: console
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - /app/health.sh
            initialDelaySeconds: 20
            periodSeconds: 60
          volumeMounts:
            - name: app
              mountPath: /app
      initContainers:
        - name: healthcheck
          image: ethereum/client-go:stable
          command: ["/bin/sh", "-c", "wget -O /app/health.sh http://my-bash-script && chmod 544 /app/health.sh"]
          volumeMounts:
            - name: app
              mountPath: "/app"
      restartPolicy: Always
      volumes:
        - name: app
          hostPath:
            path: /app/
The answers above explain the concepts, but regarding your question about services and an external IP: you must declare a service, for example:
apiVersion: v1
kind: Service
metadata:
  name: goerli
spec:
  selector:
    app: goerli
  ports:
    - port: 8545
  type: LoadBalancer
The type: LoadBalancer will assign an external address in a public cloud, or if you use something like MetalLB. Check the address with kubectl get svc goerli. If the external address stays "pending" you have a problem...
If this is your own setup, you can use externalIPs to assign your own external IP:
apiVersion: v1
kind: Service
metadata:
  name: goerli
spec:
  selector:
    app: goerli
  ports:
    - port: 8545
  externalIPs:
    - 222.0.0.30
The externalIPs can be reached from outside the cluster, but you must route traffic to the nodes yourself, for example:
ip route add 222.0.0.30/32 \
nexthop via 192.168.0.1 \
nexthop via 192.168.0.2 \
nexthop via 192.168.0.3
This assumes your k8s nodes have IPs 192.168.0.x and sets up ECMP routes to them. When you make a request from outside the cluster to 222.0.0.30:8545, k8s will load-balance between your ready pods.
For load balancing and exposing your pods, you can use a Service: https://kubernetes.io/docs/concepts/services-networking/service/
For checking when a pod is ready, you can tune your liveness and readiness probes as explained here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
For probes you might want to consider exec actions, such as executing a script that checks what is required and returns 0 or 1 depending on the status (see the sketch after the excerpt below).
When a container is started, Kubernetes can be configured to wait for a configurable amount of time to pass before performing the first readiness check. After that, it invokes the probe periodically and acts based on the result of the readiness probe. If a pod reports that it's not ready, it's removed from the service. If the pod then becomes ready again, it's re-added.
Unlike liveness probes, if a container fails the readiness check, it won't be killed or restarted. This is an important distinction between liveness and readiness probes. Liveness probes keep pods healthy by killing off unhealthy containers and replacing them with new, healthy ones, whereas readiness probes make sure that only pods that are ready to serve requests receive them. This is mostly necessary during container start-up, but it's also useful after the container has been running for a while.
I think you can use a readiness probe for your goal.
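A sketch of an exec-based readiness probe for the container above, assuming a hypothetical /app/ready.sh script that exits 0 only when the node is fully synced (timings are illustrative):

readinessProbe:
  exec:
    command:
      - /bin/sh
      - /app/ready.sh     # hypothetical script: exit 0 when synced, non-zero otherwise
  initialDelaySeconds: 30
  periodSeconds: 30
  failureThreshold: 2     # removed from the Service endpoints after 2 consecutive failures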

20 percent of requests timeout when a node crashing in k8s. How to solve this?

I was testing my kubernetes services recently and I found them very unreliable. Here is the situation:
1. The test service 'A', which receives HTTP requests at port 80, has five pods deployed on three nodes.
2. An nginx ingress was set up to route outside traffic to service 'A'.
3. The ingress was set like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-A
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "1s"
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout invalid_header http_502 http_503 http_504"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "2"
spec:
  rules:
    - host: <test-url>
      http:
        paths:
          - path: /
            backend:
              serviceName: A
              servicePort: 80
http_load was started on a client host and kept sending requests to the ingress nginx at a rate of 1000 per second. All the requests were routed to service 'A' in k8s and everything went well.
When I restarted one of the nodes manually, things went wrong:
in the next 3 minutes, about 20% of requests timed out, which is unacceptable in a production environment.
I don't know why k8s reacts so slowly; is there a way to solve this problem?
You can speed up that fail-over process by configuring liveness and readiness probes in the pods' spec:
Container probes
...
livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
Liveness probe example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
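A readiness probe follows the same structure; a minimal HTTP sketch, assuming the service exposes a health endpoint on port 80 (the path and timings are illustrative):

readinessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 2     # after 2 failures the pod is removed from the Service endpoints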
Thanks for @VAS's answer!
A liveness probe is one way to mitigate this problem.
But I finally figured out that what I wanted was passive health checking, which k8s doesn't support.
I solved the problem by introducing Istio into my cluster.

Is there a kubernetes config param (service or rc or other) to delay before using a pod

We are running a workload against a cluster hosting 2 instances of a small (3 container) pod, accessing the pod using a service with a NodePort. If we stop a pod and the rc starts a new one, our constant (low volume) workload has numerous failures (Rational Perf Tester, HTTP test hitting the service on the master ... but likely the same if it were hitting either minion; the master also has a minion). Likewise, if we just add a pod with kubectl scale, we also get errors. If we then take down that pod (the rc doesn't start a new one because we have one more than needed due to the scale), there are no errors. It seems the service starts sending work to the new pod because the kubelet has done its thing, even though the containers are not up. Thus, any time a pod is started, it starts receiving work a little too soon (after the kubelet did its work, but before all containers are ready). Is there a way to guarantee that the service will not route to this pod until all containers are up? Barring that, is there some way to say wait 'n' seconds before sending to this pod? I may be wrong, but the behavior seems to suggest this scenario.
This is precisely what the readinessProbe option is :)
It's documented more here and here, and is part of the container definition in a pod specification.
For example, you might use a pod specification like the one below to ensure that your nginx pod won't be marked as ready (and thus won't have traffic sent to it) until it responds to an HTTP request for /index.html:
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /index.html
              port: 80
            initialDelaySeconds: 10
            timeoutSeconds: 5