I need to deploy a web application in GKE. The application consists of two pods and needs to scale to ~30 replicas.
Rolling updates take ~30s/pod in our setup.
Old title: How do I enable the Deployments API on a GKE cluster?
I tried to use Deployments, as they allow updating multiple pods in parallel.
But, as nshttpd pointed out in #google-containers on the Kubernetes Slack:
I may be wrong, but GKE clusters don’t have beta features I thought. so if you want Deployments you’ll have to spin up your own cluster.
GKE clusters actually do have beta features. But Deployments were an alpha feature in the 1.1 release (which is the current supported release) and are graduating to beta for the upcoming 1.2 release. Once they are a beta feature, you will be able to use them in GKE.
The rolling update command is really just syntactic sugar around first creating a new replication controller, scaling it up by one, scaling the existing replication controller down by one, and repeating until the old replication controller has size zero. You can do this yourself at a much faster rate if going one pod at a time is too slow. You may also want to file a feature request on GitHub to add a flag to the rolling update command to update multiple pods in parallel.
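For reference, on clusters where Deployments are available today, the degree of parallelism during a rollout is controlled by the rolling update strategy fields rather than a command flag. A minimal sketch, assuming a current apps/v1 cluster; the names, image, and numbers are illustrative, not taken from the question:

```yaml
# Hypothetical Deployment; names, image, and counts are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 30
  selector:
    matchLabels:
      app: web-api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10          # up to 10 extra pods may be created in parallel
      maxUnavailable: 0     # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: gcr.io/my-project/web-api:v2   # assumed image name
```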
Related
I am trying to use VPA for autoscaling my deployed services. Due to resource limitations in my cluster, I set the min_replica option to 1. The workflow of VPA that I have seen so far is that it first deletes the existing pod and then re-creates it. This approach will cause downtime for my services. What I want is for the VPA to first create the new pod and then delete the old pod, much like rolling updates for Deployments. Is there an option or hack to reverse the flow to the desired order in my case?
This can be achieved with a Python script or an IaC pipeline: you can collect the metrics of the Kubernetes cluster and, whenever these metrics exceed a certain threshold, trigger the Python code to create a new pod with the required resources and shut down the old pod. Follow this GitHub link for more info on the Python plugin for Kubernetes.
Ansible can also be used for this operation: trigger your Ansible playbook whenever the metric breaches a certain threshold, and specify the new sizes of the pods that need to be created. Follow the official Ansible documentation for more information, and see the sketch below. However, both of these procedures involve manual analysis for selecting the desired pod size, so if you don't want to use vertical scaling you can go for horizontal scaling instead.
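For illustration only, a minimal Ansible sketch using the kubernetes.core collection; the Deployment name, container name, namespace, and resource values are assumptions, and the metric/threshold check that would trigger the playbook is left out. Because it patches the pod template of a Deployment, the rollout creates the resized pod before the old one is removed, which matches the desired order.

```yaml
# Hypothetical playbook; resource names and sizes are assumptions.
- name: Resize the service when the threshold is breached
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Patch the Deployment with new resource requests
      kubernetes.core.k8s:
        state: patched
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: my-service          # assumed Deployment name
            namespace: default
          spec:
            template:
              spec:
                containers:
                  - name: my-service  # assumed container name
                    resources:
                      requests:
                        cpu: 500m
                        memory: 512Mi
```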
Note: this information is gathered from the official Ansible and GitHub pages, and the URLs are referenced in the post.
I'm experiencing downtimes whenever the GKE cluster gets upgraded during the maintenance window. My services (APIs) become unreachable for like ~5min.
The cluster location type is set to "Zonal", and all my pods have 2 replicas. The only affected pods seem to be the ones using the nginx ingress controller.
Is there anything I can do to prevent this? I read that using Regional clusters should prevent downtimes in the control plane, but I'm not sure if it's related to my case. Any hints would be appreciated!
You mention "downtime", but is this downtime for you using the control plane (i.e. kubectl stops working), or is it downtime in the sense that the end users of the services stop seeing them work?
A GKE upgrade upgrades two parts of the cluster: the control plane or master nodes, and the worker nodes. These are two separate upgrades although they can happen at the same time depending on your configuration of the cluster.
Regional clusters can help with that. They will cost more because you run more nodes, but the upside is that the cluster is more resilient.
Going back to the earlier point about control plane vs node upgrades: the control plane upgrade does NOT affect the end-user/customer perspective. The services will remain running.
The node upgrade WILL affect the customer, so you should consider various techniques to ensure high availability and resiliency for your services.
A common technique is to increase the number of replicas and also to use pod anti-affinity. This ensures the pods are scheduled on different nodes, so when the node upgrade comes around, it doesn't take the entire service out because the cluster scheduled all the replicas on the same node.
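A minimal sketch of what that looks like, assuming a plain Deployment; the name, labels, and image are illustrative:

```yaml
# Hypothetical Deployment; names, labels, and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-api
                topologyKey: kubernetes.io/hostname   # prefer spreading replicas across nodes
      containers:
        - name: my-api
          image: example/my-api:1.0   # assumed image
```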
You mention the nginx ingress controller in your question. If you are using Helm to install it into your cluster, then out of the box it is not set up to use anti-affinity, so it is liable to be taken out of service if all of its replicas get scheduled onto the same node and that node then gets marked for upgrade or similar.
I have a single page app. It is served by (and talks to) an API server running on a Kubernetes deployment with 2 replicas. I have added an X-API-Version header that my API sends on every request, which my client can compare against to figure out if it needs to inform the user that their client code is outdated.
One issue I am facing, however, is that when I deploy, I want to ensure that only one version of the API is ever running. I do not want a situation where a client gets refreshed many times in a loop because it receives different API versions.
I basically want it to go from 2 replicas running version A, to 2 replicas running version A plus 2 running version B, then switch the traffic to version B once health checks pass, and then tear down the old version A pods.
Does Kubernetes support this using the RollingUpdate strategy?
For blue-green deployment in Kubernetes, I would recommend using a third-party solution like Argo Rollouts, NGINX, or Istio. They will let you split the traffic between the versions of your application.
However, Kubernetes is introducing the Gateway API, which has built-in support for traffic splitting.
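As an illustration of the Gateway API approach, an HTTPRoute can weight traffic between two Services; the gateway and service names below are assumptions, and your cluster needs a Gateway API implementation installed:

```yaml
# Hypothetical HTTPRoute; gateway and service names are assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-split
spec:
  parentRefs:
    - name: my-gateway     # assumed Gateway resource
  rules:
    - backendRefs:
        - name: api-v1     # Service for version A
          port: 80
          weight: 90
        - name: api-v2     # Service for version B
          port: 80
          weight: 10
```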
What you are asking for isn't really a blue/green deploy. If you require two or more pods to run during the upgrade for performance reasons, you will get an overlap where some pods respond with version A and some with version B.
You can fine-tune it a little. For instance, you can configure it to start all of the new pods at once, and for each one that turns from Running to Ready, one of the old pods will be removed. If your pods start fast, or at least equally fast, the overlap will be really short.
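A sketch of that tuning, as the relevant fragment of a Deployment spec; the replica count is illustrative:

```yaml
# Fragment of a Deployment spec; replica count is illustrative.
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "100%"      # start all replacement pods at once
      maxUnavailable: 0     # only remove an old pod once a new one is Ready
```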
Or, if you can accept a temporary downtime, there is a deployment strategy (Recreate) that completely decommissions all old pods before rolling out the new ones. Depending on how fast your service starts, this could give a short or long downtime.
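In a Deployment spec that strategy looks like this:

```yaml
# Fragment of a Deployment spec.
spec:
  strategy:
    type: Recreate    # terminate all old pods before creating new ones
```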
Or, if you don't mind a little bit of extra work, you can deploy version B in parallel with version A and add the version to the set of labels.
Then, in your service, you make sure the version label is part of the selector, and once the pods for version B are running you change the service selector from version A to version B; it will instantly start using those pods instead.
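A sketch of that label-switch approach; the service name, label values, and ports are illustrative:

```yaml
# Hypothetical Service; names, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    version: v2       # flip this from v1 to v2 to cut traffic over instantly
  ports:
    - port: 80
      targetPort: 8080
```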
I recently started using Kubernetes. My experience is that yes, K8s behaves this way out of the box. If I have e.g. two pods running and I perform a deployment, K8s will create two fresh pods and then, only once those two fresh pods are healthy, will K8s terminate the original two pods.
I only want to deploy one pod in k8s.
For example, I deploy several pods in one pool with the same code, but I only want to change one pod to do some testing. Can it be done?
What you're describing in your question is actually closest to what we call a Canary Deployment.
In a nutshell, Canary Deployment (also known as Canary Release) is a technique that allows you to reduce the potential risk of introducing a broken new software version into production. It is achieved by rolling out the change to only a small subset of servers (in Kubernetes it may be just one pod) before deploying it to the entire infrastructure and making it available to everybody.
If you decide, for example, to deploy one more pod using the new image version and you already have a working deployment of, say, 3 replicas, only 25% of the traffic will be routed to the new pod. Once you decide the test was successful, you may continue rolling out the update to the other pods.
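A minimal sketch of that setup: a separate canary Deployment runs one pod with the new image, sharing the app label that the Service selects on, so it receives roughly a quarter of the traffic alongside the 3 stable replicas. All names, labels, and the image tag are illustrative:

```yaml
# Hypothetical canary Deployment; the stable Deployment (3 replicas, old image)
# and a Service selecting only "app: myapp" are assumed to already exist.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp        # matched by the Service selector, together with the stable pods
        track: canary
    spec:
      containers:
        - name: myapp
          image: example/myapp:new-version   # assumed new image tag
```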
Here you can find an article describing in detail how you can perform this kind of deployment on Kubernetes.
It's actually a similar approach to the Blue-Green Deployment already mentioned by #Malathi and has a lot in common with it.
Perhaps you meant Blue-Green Deployments.
The common release process involves adding new pods with the latest release and perhaps exposing a certain percentage of the traffic to be routed to the new release pods. If everything goes well, you can remove the old pods with the old release and replace them with new pods running the new release.
This article talks of blue-green deployments with Kubernetes.
It is also possible to use a service mesh like Istio with Kubernetes for advanced blue-green deployments, such as redirecting traffic to a new release based on header values or cookies.
I have two colored tracks where I deployed two different versions of my webapp (nginx + php-fpm). These tracks are exposed by services called live and next.
The classic way would be to deploy the new version of the webapp to next and, after checking it, release it to live by switching their services.
So far so good.
Considering autoscaling with HPA:
Before doing a release I have to prescale next to the number of live pods to prevent too heavy a load after the switch.
The problem here is the nature of the HPA's CPU load measurement. In the worst case the autoscaler will downscale the prescaled track immediately, because the CPU load it calculates comes from next, which is not receiving traffic yet.
Another problem I found is keepalive connections, which make releasing new pods to live very hard without killing the old pods.
How to solve the problem?
We have a few deployment strategies (there are more, but I will point out the most common ones).
1) Rolling Update - We need only one deployment. It will add pods with the new version to the current deployment and terminate old-version pods at the same time. For a while the deployment will contain a mix of the old and new versions.
2) Blue-Green Deployment - It is the safest strategy and is recommended for production workloads. We need two deployments coexisting, i.e. v1 and v2. In most cases the old deployment is drained (all connections/sessions to the old deployment are closed) and all new sessions/connections are redirected to the new deployment. Usually both deployments are kept for a while, as Production and Stage.
3) Canary Deployment - The hardest one. Here you also need at least two deployments running at the same time. Some users will be connected to the old application, others will be redirected to the new one. It can be achieved via load balancing/proxy layer configuration. In this case HPA does not work well, because we are using two deployments at the same time and each deployment would have its own independent autoscaler.
Like #Mamuz pointed out in a comment, a Blue-Green strategy without a switch at the service level sounds much better in this case than a rolling update. Another option which might be useful in this scenario is Blue-Green Deployment with Istio using Traffic Shifting. With this option you can divide the traffic between the two versions in steps, e.g. from 100-0 to 80-20, 60-40, 20-80 and finally 0-100%.
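For illustration, a minimal Istio VirtualService for such traffic shifting; the host name and the live/next subsets (which would be defined in a matching DestinationRule) are assumptions:

```yaml
# Hypothetical VirtualService; host and subset names are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: webapp
spec:
  hosts:
    - webapp                # assumed Service name
  http:
    - route:
        - destination:
            host: webapp
            subset: live    # subset for the current version (from a DestinationRule)
          weight: 80
        - destination:
            host: webapp
            subset: next    # subset for the new version
          weight: 20
```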
Using Istio and HPA step by step is described in this article.
You can read about Traffic Management here.
Example of Istio and K8s here.