How to implement Blue-Green Deployment with HPA?

I have two colored tracks where I deploy two different versions of my webapp (nginx + php-fpm). These tracks are exposed by services called live and next.
The classic way would be to deploy the new version of the webapp to next and, after checking, release it to live by switching their services.
So far so good.
Considering autoscaling with HPA:
Before doing a release I have to prescale next to the number of pods running on live, to prevent too heavy a load right after the switch.
The problem here is the nature of the HPA's CPU load measurement: in the worst case the autoscaler will downscale the prescaled track immediately, because it calculates the CPU load of next, which is not receiving traffic yet.
Another problem I found is keepalive connections, which make releasing new pods to live very hard without killing old pods.
How to solve the problem?

We have a few deployment strategies (there are more, but I will point out the most common).
1) Rolling Update - We need only one deployment. It adds pods with the new version to the current deployment while terminating old-version pods at the same time. For a while the deployment will contain a mix of the old and new versions.
2) Blue-Green Deployment - It is the safest strategy and it is recommended for production workloads. We need two deployments coexisting, i.e. v1 and v2. In most cases the old deployment is drained (all connections/sessions to the old deployment are closed) and all new sessions/connections are redirected to the new deployment. Usually both deployments are kept for a while as Production and Stage (see the sketch after this list).
3) Canary Deployment - The hardest one. Here you also need at least two deployments running at the same time. Some users will be connected to the old application, others will be redirected to the new one. It can be achieved via load balancing/proxy layer configuration. In this case a single HPA cannot cover both versions, because we are using two deployments at the same time and each deployment needs its own independent autoscaler.
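For illustration, a minimal Blue-Green sketch (the names webapp-blue and live and the track label are assumptions, not taken from the question) could pair one Deployment per track with a Service per track; a release would then repoint the live Service's selector once the other track has been verified:

    # One Deployment per track (the green/next track is analogous and omitted for brevity)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp-blue
    spec:
      replicas: 3
      selector:
        matchLabels: {app: webapp, track: blue}
      template:
        metadata:
          labels: {app: webapp, track: blue}
        spec:
          containers:
          - name: webapp
            image: registry.example.com/webapp:v1   # hypothetical image
    ---
    # The live Service points at one track; releasing means changing
    # track: blue to track: green here once the next track is verified.
    apiVersion: v1
    kind: Service
    metadata:
      name: live
    spec:
      selector: {app: webapp, track: blue}
      ports:
      - port: 80
        targetPort: 8080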
As @Mamuz pointed out in a comment, a Blue-Green strategy without a switch on the service level sounds much better in this case than a rolling update.
Another option which might be useful in this scenario is Blue-Green Deployment with Istio using Traffic Shifting. With this option you can divide the traffic between the two versions as required, e.g. 100-0, 80-20, 60-40, 20-80, down to 0-100%.
Using Istio and HPA step by step is described in this article.
You can read about Traffic Management here.
Example of Istio and K8s here.
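As a rough sketch of the traffic-shifting idea (the host name webapp and the subsets v1/v2 are assumptions for illustration, not taken from the linked article), an Istio VirtualService plus DestinationRule splitting traffic 80-20 could look like this; shifting further is just a matter of editing the weights:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: webapp
    spec:
      hosts:
      - webapp
      http:
      - route:
        - destination:
            host: webapp
            subset: v1        # old version
          weight: 80
        - destination:
            host: webapp
            subset: v2        # new version
          weight: 20
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: webapp
    spec:
      host: webapp
      subsets:
      - name: v1
        labels: {version: v1}
      - name: v2
        labels: {version: v2}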

Related

k8s: Is it possible to have two identical deployments but route different traffic to them?

Here is my use case:
I have a microservice which gets sent traffic via an ingress gateway in real time and via a batch process. What I'd like is to be able to conceptually define a single deployment and have it create two sets of pods:
One set for real-time requests
Another for batch.
When a new version of the microservice gets deployed, the k8s deployment is updated and both real time and batch use the new version.
Is this possible in k8s or will I need to create two deployments and manage them separately?
This is a community wiki answer posted for better visibility. Feel free to expand it.
Since we don't have complete information about the architecture used, the following suggestions from the comments may help solve the problem.
1. With Deployments, Services, Selectors
You can have two identical deployments and route different traffic to them.
It can be implemented with Services:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.
This approach has some advantages.
By default, traffic is routed to endpoints randomly if you are using the iptables proxy mode. If you try to send traffic to specific pods covered by the same deployment, you may end up with large differences in CPU and memory usage, leading to resource exhaustion or wasted resources.
It will also be easier to manage service versioning, CPU and memory assignment, and rollouts.
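A minimal sketch of this approach, assuming a made-up mode label and Service names (not from the question): the two Deployments are identical except for the mode label on their pod templates, and each Service selects one of them:

    # Service for real-time traffic -> pods labelled mode: realtime
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice-realtime
    spec:
      selector: {app: myservice, mode: realtime}
      ports:
      - port: 80
        targetPort: 8080
    ---
    # Service for batch traffic -> pods labelled mode: batch
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice-batch
    spec:
      selector: {app: myservice, mode: batch}
      ports:
      - port: 80
        targetPort: 8080

Since the two Deployments share the same image, releasing a new version means bumping the image tag in both, e.g. by generating both manifests from one shared template (Helm/kustomize).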
2. With Istio
From David M. Karr
If a service is defined as a VirtualService, you can route to different DestinationRule objects depending on header values (or other qualifications).
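A minimal sketch of that idea, assuming a DestinationRule already defines realtime and batch subsets and that the batch job sets a hypothetical x-request-source header:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: myservice
    spec:
      hosts:
      - myservice
      http:
      - match:
        - headers:
            x-request-source:     # hypothetical header set by the batch client
              exact: batch
        route:
        - destination:
            host: myservice
            subset: batch
      - route:                    # default route: everything else -> realtime
        - destination:
            host: myservice
            subset: realtime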
Additionally
If you need to deploy a new version of the microservice, you can choose between different strategies, whichever is most suitable for your needs.
Kubernetes deployment strategies:
recreate: terminate the old version and release the new one
ramped: release a new version on a rolling update fashion, one after the other
blue/green: release a new version alongside the old version then switch traffic
canary: release a new version to a subset of users, then proceed to a full rollout
a/b testing: release a new version to a subset of users in a precise way (HTTP headers, cookie, weight, etc.). A/B testing is really a technique for making business decisions based on statistics, but we will briefly describe the process. This doesn't come out of the box with Kubernetes; it implies extra work to set up a more advanced infrastructure (Istio, Linkerd, Traefik, custom nginx/haproxy, etc.).

Does Kubernetes natively support "blue-green"-like deployments?

I have a single page app. It is served by (and talks to) an API server running on a Kubernetes deployment with 2 replicas. I have added a X-API-Version header that my API sends on every request, and my client can compare with, to figure out if it needs to inform the user their client code is outdated.
One issue I am facing, however, is that when I deploy, I want to ensure that only one version of the API is ever running. I do not want a situation where a client can be refreshed many times in a loop as it receives different API versions.
I basically want it to go from 2 replicas running version A, to 2 replicas running version A plus 2 replicas running version B, then switch the traffic to version B once health checks pass, then tear down the old version A replicas.
Does Kubernetes support this using the RollingUpdate strategy?
For blue-green deployment in Kubernetes, I would recommend using a third-party solution like Argo Rollouts, NGINX, Istio, etc. They will let you split the traffic between the versions of your application.
However, Kubernetes is introducing the Gateway API, which has built-in support for traffic splitting.
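For reference, traffic splitting with the Gateway API is expressed as weighted backendRefs on an HTTPRoute; here is a rough sketch with placeholder names (my-gateway, api-blue and api-green are assumptions):

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: api-split
    spec:
      parentRefs:
      - name: my-gateway        # an existing Gateway (placeholder)
      rules:
      - backendRefs:
        - name: api-blue        # Service for the current version
          port: 80
          weight: 100
        - name: api-green       # Service for the new version
          port: 80
          weight: 0             # raise this and lower the other to cut over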
What you are asking isn't really a blue/green deploy. If you require two or more pods to run during the upgrade for performance reasons, you will get an overlap where some pods respond with version A and some with version B.
You can fine-tune it a little: for instance, you can configure it to start all of the new pods at once, and for each one that turns from running to ready one of the old pods will be removed. If your pods start fast, or at least equally fast, the overlap will be really short.
Or, if you can accept a temporary downtime, there is a deployment strategy (Recreate) that completely decommissions all old pods before rolling out the new ones. Depending on how fast your service starts, this could give a short or long downtime.
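A sketch of those two options on a Deployment (the name and image are placeholders): maxSurge/maxUnavailable tune the overlap, while the Recreate strategy is the variant with downtime:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 100%        # start all of the new pods at once
          maxUnavailable: 0     # remove an old pod only once a new one is ready
      selector:
        matchLabels: {app: api}
      template:
        metadata:
          labels: {app: api}
        spec:
          containers:
          - name: api
            image: registry.example.com/api:v2   # hypothetical image
    # For the "accept a temporary downtime" variant, replace the strategy block
    # with: strategy: {type: Recreate}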
Or, if you don't mind a little extra work, you can deploy version B in parallel with version A and add the version to the set of labels.
Then, in your service, you make sure the version label is part of the selector, and once the pods for version B are running you change the service selector from version A to version B and it will instantly start using those pods instead.
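A minimal sketch of that label switch, with made-up names: the Service selects on both the app and version labels, and the cut-over is just editing the version value once the version B pods are Ready:

    apiVersion: v1
    kind: Service
    metadata:
      name: api
    spec:
      selector:
        app: api
        version: v1       # change to v2 once the version B pods are Ready
      ports:
      - port: 80
        targetPort: 8080

New connections then go to the version B pods immediately, and the version A Deployment can be scaled down afterwards.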
I recently started using Kubernetes. My experience is that yes, K8s behaves this way out of the box. If I have e.g. two pods running and I perform a deployment, K8s will create two fresh pods and then, only once those two fresh pods are healthy, will K8s terminate the original two pods.

Can I change only one pod in Kubernetes?

I only want to deploy one pod in k8s.
For example, I deploy several pods in one pool with the same code, but I want to change only one pod to do some testing. Can it be done?
What you're describing in your question is actually the closest to what we call Canary Deployment.
In a nutshell, Canary Deployment (also known as Canary Release) is a technique that allows you to reduce the potential risk of introducing a new, possibly broken software version into production. It is achieved by rolling out the change only to a small subset of servers (in Kubernetes it may be just one pod) before deploying it to the entire infrastructure and making it available to everybody.
If you decide, for example, to deploy one more pod using the new image version and you already have a working deployment consisting of, say, 3 replicas, only 25% of the traffic will be routed to the new pod (see the sketch below). Once you decide the test was successful, you may continue rolling out the update to the other pods.
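A sketch of that 3-plus-1 setup (names, labels and images are illustrative): both Deployments carry the same app label, so the existing Service spreads traffic across all four pods and roughly 25% of it reaches the canary:

    # Stable track: 3 replicas of the current image
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp-stable
    spec:
      replicas: 3
      selector:
        matchLabels: {app: webapp, release: stable}
      template:
        metadata:
          labels: {app: webapp, release: stable}
        spec:
          containers:
          - name: webapp
            image: registry.example.com/webapp:v1   # hypothetical image
    ---
    # Canary: a single pod running the new image
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp-canary
    spec:
      replicas: 1
      selector:
        matchLabels: {app: webapp, release: canary}
      template:
        metadata:
          labels: {app: webapp, release: canary}
        spec:
          containers:
          - name: webapp
            image: registry.example.com/webapp:v2   # hypothetical new image
    ---
    # The Service selects only the shared app label, so it balances across all 4 pods
    apiVersion: v1
    kind: Service
    metadata:
      name: webapp
    spec:
      selector: {app: webapp}
      ports:
      - port: 80
        targetPort: 8080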
Here you can find an article describing in detail how to perform this kind of deployment on Kubernetes.
It's actually a similar approach to the Blue-Green Deployment already mentioned by @Malathi and has a lot in common with it.
Perhaps you meant Blue-Green Deployments.
The common release process involves adding new pods with the latest release and perhaps exposing a certain percentage of the traffic to be routed to the new release pods. If everything goes well, you can remove the old pods with the old release and replace them with new pods running the new release.
This article talks of blue-green deployments with Kubernetes.
It is also possible to use a service mesh like Istio with Kubernetes for advanced blue-green deployments, such as redirecting traffic to a new release based on header values or cookies.

Using Spinnaker's red/black deployment strategy and still having both versions serving traffic

I'm currently setting up a POC Spinnaker pipeline to deploy to a Kubernetes cluster.
Experimenting with Spinnaker's red/black strategy, I've noticed that it does not behave as I expect it to. I expect it to guarantee that only 1 version gets traffic with the following steps:
deploy the black server group (a Kubernetes ReplicaSet) & ensure it's healthy
reroute the traffic of the service to the black server group by updating the load balancer's targets
disable the red server group
But in reality, at least when using it with Kubernetes, step 2 here seems to map to several steps:
add black targets to the load balancer
remove red targets from the load balancer
Therefore, I get 2 versions serving traffic for a minute here.
To my understanding, blue-green can be achieved in Kubernetes by updating the service's (load balancer's) pod selector, so I'm confused as to why Spinnaker's Kubernetes driver does not seem to leverage this.
Can anybody help me see what I'm missing here?
Thanks
Can you verify whether the deployment isn't still in the process of rolling out? It could be that your Spinnaker setup just spins up a new version of the current deployment. If this is the case, your deployment will do a rolling upgrade with the max surge you provided (or the default one), and that's why you have 2 versions running at the same time.
If I'm not mistaken, most of the people that do blue/green deploys have 2 separate networks (for example with flannel) and just spin up a new deployment that gets switched either gradually or instantly via their ingress controllers.

How to speed up rolling updates on GKE

I need to deploy a web application in GKE. The application consists of two pods and needs to scale to ~30 replicas.
Rolling updates take ~30s/pod in our setup.
Old title: How do I enable the deployments API on GKE cluster?
I tried to use Deployments, as they allow updating multiple pods in parallel.
But, as nshttpd pointed out in #google-containers on the Kubernetes Slack:
I may be wrong, but GKE clusters don’t have beta features I thought. so if you want Deployments you’ll have to spin up your own cluster.
GKE clusters actually do have beta features. But Deployments were an alpha feature in the 1.1 release (which is the current supported release) and are graduating to beta for the upcoming 1.2 release. Once they are a beta feature, you will be able to use them in GKE.
The rolling update command is really just syntactic sugar around first creating a new replication controller, scaling it up by one, scaling the existing replication controller down by one, and repeating until the old replication controller has size zero. You can do this yourself at a much faster rate if going one pod at a time is too slow. You may also want to file a feature request on github to add a flag to the rolling update command to update multiple pods in parallel.