Can I change only one pod in Kubernetes?

I want to deploy only one pod in k8s.
For example, I deploy several pods in one pool with the same code, but I only want to change one pod to run a test. Can it be done?

What you're describing in your question is actually closest to what we call a Canary Deployment.
In a nutshell, a Canary Deployment (also known as a Canary Release) is a technique that reduces the potential risk of introducing a faulty new software version into production. It is achieved by rolling out the change to only a small subset of servers (in Kubernetes it may be just one pod) before deploying it to the entire infrastructure and making it available to everybody.
If you decide, for example, to deploy one more pod using the new image version alongside an already working Deployment of, say, 3 replicas, only about 25% of the traffic will be routed to the new pod. Once you decide the test was successful, you may continue rolling out the update to the other pods.
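A minimal sketch of such a setup, with hypothetical names and images: a single Service selects every pod labelled app: myapp, so traffic is split roughly in proportion to the replica counts of the stable and canary Deployments (here 3:1, i.e. about 25% to the canary).

```yaml
# The Service selects every pod carrying app: myapp, regardless of version,
# so traffic is split roughly in proportion to the replica counts below.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
# Stable Deployment: 3 replicas running the current image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
---
# Canary Deployment: 1 replica running the new image,
# i.e. roughly 25% of the traffic (1 of 4 pods).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
        - name: myapp
          image: myapp:2.0
```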
Here you can find an article describing in detail how you can perform this kind of deployment on Kubernetes.
It's actually a similar approach to the Blue-Green Deployment already mentioned by @Malathi and has a lot in common with it.

Perhaps you meant Blue-Green Deployments.
The common release process involves adding new pods with the latest release and perhaps exposing a certain percentage of the traffic to be routed to the new release pods. If everything goes well, you can remove the old pods with the old release and replace them with new pods running the new release.
This article talks about blue-green deployments with Kubernetes.
It is also possible to use a service mesh such as Istio with Kubernetes for advanced blue-green deployments, for example redirecting traffic to a new release based on header values or cookies.
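As a rough illustration of header-based routing (host, subset and header names are made up, and a DestinationRule defining the v1/v2 subsets is assumed to exist), an Istio VirtualService could look like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp.example.com
  http:
    # Requests carrying the canary header go to the new release.
    - match:
        - headers:
            x-canary:
              exact: "true"
      route:
        - destination:
            host: myapp
            subset: v2
    # Everything else keeps hitting the current release.
    - route:
        - destination:
            host: myapp
            subset: v1
```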

Related

Multiple apps in single K8S deployment

I'm exploring K8s possibilities and I wonder whether there is any way to create deployments for two or more apps in a single deployment so it is transactional, i.e. when something goes wrong after deployment, all apps are rolled back. I also want to mention that I'm not talking about a pod with multiple containers, because additional sidecar containers are rather intended for cross-cutting concerns like monitoring or authentication (like Kerberos), and it is not recommended to put different apps in a single pod. Having this in mind, is it possible to have a single deployment that can produce 2+ kinds of pods?
Is it possible to have a single deployment that can produce 2+ kinds of pods?
No. A Deployment creates only one kind of Pod. You can update a Deployment's contents, and it will incrementally replace existing Pods with new ones that match the updated Pod spec.
Nothing stops you from creating multiple Deployments, one for each kind of Pod, and that's probably the approach you're looking for here.
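For illustration (names and images are hypothetical), the two kinds of pods would simply be two Deployments, which can live in a single manifest file and be applied together:

```yaml
# Both Deployments can sit in one file, separated by ---,
# and be applied together with: kubectl apply -f apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: backend:1.0
```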
... when something goes wrong after deployment, all apps are rolled back.
Core Kubernetes doesn't have this capability on its own; indeed, it has somewhat limited capacity to tell that something has gone wrong, other than a container failing its health checks or exiting.
Of the various tools in @SYN's answer I at least have some experience with Helm. It's not quite "transactional" in the sense you might take from a DBMS, but it does have the ability to manage a collection of related resources (a "release" of a "chart"), and it has the ability to roll back an entire version of a release across multiple Deployments if required. See the helm rollback command.
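As a rough sketch (the release name and revision number are made up), rolling a whole release back, across all of the Deployments it contains, looks like this:

```sh
# List the revisions Helm has recorded for this release
helm history my-release

# Roll every resource in the release back to revision 2
helm rollback my-release 2
```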
Helm
As pointed out in comments, one way to go about this would be to use something like Helm.
Helm is a client-side tool (as of v3; previous versions also involved "Tiller", a controller running in your Kubernetes cluster, but that one is deprecated, so let's forget about it).
Helm uses "Charts" (more or less: templates, with default values you can override).
Kustomize
Another solution, similar to Helm, is Kustomize. It works from plain YAML files (not templates), while making it simple to override / customize your objects before applying them to your Kubernetes cluster.
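A minimal sketch (file and object names are hypothetical): a kustomization.yaml pulls in plain manifests and layers a patch on top before they are applied with kubectl apply -k:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Plain manifests to include
resources:
  - deployment.yaml
  - service.yaml

# Customizations applied on top of the plain manifests
patches:
  - path: increase-replicas.yaml
    target:
      kind: Deployment
      name: myapp
```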
ArgoCD
While Kustomize and Helm are both standalone clients, we could also mention solutions such as ArgoCD.
The ArgoCD controller would run inside your Kubernetes cluster, allowing you to create "Application" objects.
Those Applications are processed by ArgoCD, driving deployment of your workloads (common sources for those applications would involve Helm Charts, Git repositories, ...).
The advantage of ArgoCD is that its controller may (depending on your configuration) be responsible for upgrading your applications over time (e.g.: if your source is a git repository, branch XXX, and someone pushes changes into that branch, ArgoCD would apply those pretty much right away).
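A rough sketch of such an Application object (the repository URL, path and namespaces are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    # ArgoCD watches this repository/branch and applies changes as they land
    repoURL: https://example.com/org/myapp-manifests.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated: {}   # keep the cluster in sync with the repository automatically
```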
Operators
Most of those solutions are, however, pretty much unaware of how your application is running. Say you upgrade a deployment driven by Helm, Kustomize or ArgoCD and end up with some database pods stuck in CrashLoopBackOff: your application pods would get updated nevertheless; there's no automatic rollback to a previous working configuration.
Which brings us to another way to ship applications to Kubernetes: operators.
Operators are aware of the state of your workloads and may be able to fix common errors (depending on how they were coded, ... there's no magic).
An operator is an application (it can be written in Go, Java, Python, Ansible playbooks, ... or whatever comes with a library for talking to the Kubernetes cluster API).
An operator is constantly connected to your Kubernetes cluster API. You would usually find some CustomResourceDefinitions specific to your operator, allowing you to describe the deployment of some component in your cluster (e.g.: the Elasticsearch operator introduces an object kind "Elasticsearch", and some "Kibana").
The operator watches for instances of the objects it manages (e.g.: Elasticsearch), eventually creating Deployments/StatefulSets/Services ...
If someone deletes an object that was created by your operator, it would/should be re-created by that operator in a timely manner (mileage may vary, depending on which operator we're talking about ...).
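For example, with the Elastic (ECK) operator installed, describing a whole Elasticsearch cluster comes down to a small custom resource roughly like this (the version and node count are illustrative):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.14.0
  nodeSets:
    - name: default
      count: 3   # the operator creates and maintains the StatefulSet, Services, ...
```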
A perfect example of operators would be something like OpenShift 4 (OKD4): a Kubernetes cluster that comes with dozens of operators (SDN, DNS, machine configurations, ingress controller, Kubernetes API server, etcd database, ...). The whole cluster is an assembly of operators: when upgrading your cluster, each of those manages the upgrade of the corresponding services in an orchestrated way, one after the other, and if anything fails, you're usually still left with enough replicas running to troubleshoot the issue.
Depending on what you're looking for, each option has advantages and drawbacks. Now if you're looking for a "single deployment that can produce 2+ kinds of pods", then ArgoCD or some home-grown operator would qualify.

Does Kubernetes natively support "blue-green"-like deployments?

I have a single-page app. It is served by (and talks to) an API server running on a Kubernetes Deployment with 2 replicas. I have added an X-API-Version header that my API sends on every request, and my client can compare with it to figure out if it needs to inform the user that their client code is outdated.
One issue I am facing, however, is that when I deploy, I want to ensure that only one version of the API is ever running. I do not want a situation where a client can be refreshed many times in a loop because it receives different API versions.
I basically want it to go from 2 replicas running version A, to 2 replicas running version A plus 2 running version B, then switch the traffic to version B once health checks pass, then tear down the old version A pods.
Does Kubernetes support this using the RollingUpdate strategy?
For blue-green deployment in Kubernetes, I would recommend using a third-party solution like Argo Rollouts, NGINX, Istio, etc. They will let you split the traffic between the versions of your application.
However, Kubernetes is introducing the Gateway API, which has built-in support for traffic splitting.
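A rough sketch of such a split with the Gateway API (the gateway, Service names and weights are made up), using an HTTPRoute with weighted backendRefs:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - backendRefs:
        # Flip the weights to cut all traffic over from blue to green
        - name: api-blue
          port: 80
          weight: 0
        - name: api-green
          port: 80
          weight: 100
```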
What you are asking for isn't really a blue/green deploy. If you require two or more pods to keep running during the upgrade for performance reasons, you will get an overlap where some pods respond with version A and some with version B.
You can fine-tune it a little; for instance, you can configure it to start all of the new pods at once, and for each one that turns from Running to Ready, one of the old pods will be removed. If your pods start fast, or at least equally fast, the overlap will be really short.
Or, if you can accept a temporary downtime, there is a deployment strategy (Recreate) that completely decommissions all old pods before rolling out the new ones. Depending on how fast your service starts, this could give a short or long downtime.
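A minimal sketch of the two tunings mentioned above, as fragments of a Deployment spec (the values are illustrative):

```yaml
# Variant 1: surge all new pods at once, remove old ones as new ones become Ready
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%      # start all replacement pods immediately
      maxUnavailable: 0   # never remove an old pod before a new one is Ready
---
# Variant 2: accept downtime, remove all old pods before creating new ones
spec:
  strategy:
    type: Recreate
```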
Or, if you don't mind a little bit of extra work, you deploy version B in parallel with version A and add the version to the set of labels.
Then, in your Service, you make sure the version label is part of the selector, and once the pods for version B are running, you change the Service selector from version A to version B and it will instantly start using those instead.
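A rough sketch of that selector switch (the label keys and values are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    version: "A"   # change this to "B" once the version-B pods are Ready
  ports:
    - port: 80
      targetPort: 8080
```

The switch itself can then be a single edit or patch of the Service, so it takes effect for all new connections at once.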
I recently started using Kubernetes. My experience is that yes, K8s behaves this way out of the box. If I have e.g. two pods running and I perform a deployment, K8s will create two fresh pods and then, only once those two fresh pods are healthy, will K8s terminate the original two pods.

Is it possible to dynamically add a new pod to an existing deployment?

Just to be clear: I'm not asking about scaling up the number of replicas of a pod - I'm asking about adding a new pod which provides completely new functionality.
So I'm wondering: can I call the Kubernetes API to dynamically add a new pod to an existing deployment?
Deployments are meant to be a homogeneous set of replicas of the same pod template, each presumably providing the same functionality. Deployments keep the desired number of replicas running in the event of crashes and other failures, and facilitate rolling updates of the pods when you need to change configuration or the version of the container image, for example. If you want to run a pod that provides different functionality, do so via a different deployment.
Adding a different pod to an existing deployment is not a viable option. If you want to spin up pods in response to API requests to do some work, there are a handful of officially supported client libraries you can use in your API business logic: https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries.
You can inject a container into an existing Pod. Not sure whether it would meet your requirement.
You can refer to how Istio injects a sidecar proxy into an existing Pod manually: Manual injection.
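For reference, manual injection rewrites the manifest before applying it, roughly like this (the file name is hypothetical):

```sh
# Add the Istio sidecar proxy to the pod template, then apply the result
istioctl kube-inject -f deployment.yaml | kubectl apply -f -
```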

How to implement Blue-Green Deployment with HPA?

I have two colored tracks where I deployed two different versions of my webapp (nginx + php-fpm). These tracks are exposed by services called live and next.
The classic way would be to deploy the new version of the webapp to next and, after checking, release it to live by switching their services.
So far so good.
Considering autoscaling with HPA:
Before doing a release I have to prescale next to the number of live pods to prevent too heavy a load after the switch.
The problem here is the nature of the HPA's CPU load measuring. In the worst case, the autoscaler will downscale the prescaled track immediately, because of the CPU load it calculates for next.
Another problem I found is the use of keepalive connections, which makes releasing new pods to live very hard without killing old pods.
How can this problem be solved?
We have a few deployment strategies (there are more, but I will point out the most common ones).
1) Rolling Update - We need only one deployment. It adds pods with the new content to the current deployment and terminates old-version pods at the same time. For a while, the deployment contains a mix of the old and the new version.
2) Blue-Green Deployment - It is the safest strategy and is recommended for production workloads. We need two deployments coexisting, i.e. v1 and v2. In most cases the old deployment is drained (all connections/sessions to the old deployment are closed) and all new sessions/connections are redirected to the new deployment. Usually both deployments are kept for a while, as Production and Stage.
3) Canary Deployment - The hardest one. Here you also need at least two deployments running at the same time. Some users will be connected to the old application, others will be redirected to the new one. It can be achieved via load balancing/proxy layer configuration. In this case a shared HPA is not possible, because we are using two deployments at the same time and each deployment will have its own independent autoscaler (see the sketch after this list).
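A minimal sketch of why autoscaling stays per deployment (names and thresholds are made up): an HPA targets exactly one Deployment via scaleTargetRef, so each track needs its own.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-next
spec:
  scaleTargetRef:          # an HPA scales exactly one Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: webapp-next
  minReplicas: 4           # prescale "next" so it can take over the live load
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```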
As @Mamuz pointed out in a comment, a Blue-Green strategy without a switch on the service level sounds much better in this case than a rolling update.
Another option which might be useful in this scenario is a Blue-Green Deployment with Istio using Traffic Shifting. With this option you could divide the traffic per request, e.g. from 100-0, through 80-20, 60-40, 20-80, to 0-100%.
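A rough sketch of such a weighted split (the host and subset names are made up, and a DestinationRule defining the subsets is assumed):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: webapp
spec:
  hosts:
    - webapp
  http:
    - route:
        - destination:
            host: webapp
            subset: blue
          weight: 80      # shift the weights step by step: 100-0, 80-20, ... 0-100
        - destination:
            host: webapp
            subset: green
          weight: 20
```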
Using Istio and HPA step by step is described in this article. You can read about Traffic Management here. An example of Istio and K8s can be found here.

How to speed up rolling updates on GKE

I need to deploy a web application in gke. The application consists of two pods and needs to scale to ~30 replicas.
Rolling updates take ~30s/pod in our setup.
Old title: How do I enable the Deployments API on a GKE cluster?
I tried to use Deployments as they allow updating multiple pods in parallel.
But, as nshttpd pointed out in #google-containers on the Kubernetes Slack:
I may be wrong, but GKE clusters don’t have beta features I thought. so if you want Deployments you’ll have to spin up your own cluster.
GKE clusters actually do have beta features. But Deployments were an alpha feature in the 1.1 release (which is the current supported release) and are graduating to beta for the upcoming 1.2 release. Once they are a beta feature, you will be able to use them in GKE.
The rolling update command is really just syntactic sugar around first creating a new replication controller, scaling it up by one, scaling the existing replication controller down by one, and repeating until the old replication controller has size zero. You can do this yourself at a much faster rate if going one pod at a time is too slow. You may also want to file a feature request on GitHub to add a flag to the rolling update command to update multiple pods in parallel.
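A rough sketch of doing that scaling by hand in bigger steps (the controller names and step size are made up):

```sh
# Shift capacity from the old replication controller to the new one,
# several pods at a time instead of one by one.
for step in 10 20 30; do
  kubectl scale rc myapp-v2 --replicas=$step
  kubectl scale rc myapp-v1 --replicas=$((30 - step))
done
```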