Kubernetes deployment patterns & REST API versions

I have a REST API with multiple API versions. Each API backend is composed of several microservices. It is fair to assume that only the latest REST API resources/code sees the most churn; the older versions see churn due to feature backports (rarely) or bug fixes (mostly). I'd like to get recommendations on what DevOps pattern would best fit this scenario, assuming we are using Kubernetes to model our service mesh.
Note that our APIs are mostly async and so it is possible to support several API versions all in the same codebase (packaged in a single container).
Given the above, these configurations are all possible:
Service yaml per API version.
Single service yaml with multiple pod templates (one per API version).
Functional Service yamls - one for the front end and one for each of the other microservices (a Pod for the message broker, processing worker, etc.).
Additional points to ponder:
Are deployments of the different API versions recommended to go to separate clusters or to the same cluster? Does that choice impact updates to specific API versions?
I'm looking for any prescribed patterns or suggestions based on your prior experience.

In general, Deployments manage replicas of Pods, and each Pod runs a specific container image. If your API backend consists of multiple microservices, then each microservice is a Deployment. The microservice that handles API requests is exposed with a (client-facing) Service.
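As a rough illustration, a single microservice could be modeled like this (names, image, and ports are hypothetical placeholders, not anything prescribed by the question):

```yaml
# Hypothetical microservice: one Deployment managing the Pods,
# plus a Service exposing them to clients.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-frontend
  template:
    metadata:
      labels:
        app: api-frontend
    spec:
      containers:
        - name: api
          image: registry.example.com/api-frontend:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-frontend
spec:
  selector:
    app: api-frontend
  ports:
    - port: 80
      targetPort: 8080
```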
For the multiple API versions, you could replicate this setup for each version and put an Ingress in front of the Services that routes traffic to one of them based on the requested API version.
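A sketch of such an Ingress, assuming one Service per API version and path-based routing (host and service names are made up):

```yaml
# Routes the /v1 and /v2 prefixes to per-version Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-versions
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-frontend-v1
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-frontend-v2
                port:
                  number: 80
```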
If you put all the API versions in the same container, you may run into inconsistent state during updates: 1) inside a Deployment, both Pod versions exist side by side for a short time (if you use the default rolling update); 2) for a short time you might have updated and not-yet-updated microservices running next to each other.


Multiple apps in single K8S deployment

I'm exploring Kubernetes possibilities and I'm wondering whether there is any way to create deployments for two or more apps in a single deployment so that it is transactional - when something goes wrong after deployment, all apps are rolled back. Also, I'm not talking about a pod with multiple containers, because additional sidecar containers are intended for cross-cutting concerns like monitoring or authentication (e.g. Kerberos), and it is not recommended to put different apps in a single pod. With this in mind, is it possible to have a single deployment that can produce 2+ kinds of pods?
Is it possible to have a single deployment that can produce 2+ kinds of pods?
No. A Deployment creates only one kind of Pod. You can update a Deployment's contents, and it will incrementally replace existing Pods with new ones that match the updated Pod spec.
Nothing stops you from creating multiple Deployments, one for each kind of Pod, and that's probably the approach you're looking for here.
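As a sketch (names and images are hypothetical), the two kinds of Pods are simply two Deployments, which you can keep in one file and apply together:

```yaml
# Two independent Deployments, one per kind of Pod; `kubectl apply -f` creates both,
# but each one rolls out and rolls back on its own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b
spec:
  selector:
    matchLabels:
      app: app-b
  template:
    metadata:
      labels:
        app: app-b
    spec:
      containers:
        - name: app-b
          image: registry.example.com/app-b:1.0.0
```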
... when something goes wrong after deployment, all apps are rolled back.
Core Kubernetes doesn't have this capability on its own; indeed, it has somewhat limited capacity to tell that something has gone wrong, other than a container failing its health checks or exiting.
Of the various tools in #SYN's answer I at least have some experience with Helm. It's not quite "transactional" in the sense you might take from a DBMS, but it does have the ability to manage a collection of related resources (a "release" of a "chart") and it has the ability to roll back an entire version of a release across multiple Deployments if required. See the helm rollback command.
Helm
As pointed out in comments, one way to go about this would be to use something like Helm.
Helm is a client-side tool (as of v3; previous versions also involved "tiller", a controller running in your Kubernetes cluster - let's forget about that one, it's deprecated).
Helm uses "Charts" (more or less: templates, with default values you can override).
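As a rough sketch (chart contents and values are hypothetical), a chart is a directory of templates plus a values.yaml whose defaults you override at install/upgrade time:

```yaml
# values.yaml - default values, overridable with --set or an extra values file
image:
  repository: registry.example.com/api-frontend
  tag: "1.0.0"
replicaCount: 2
```

```yaml
# templates/deployment.yaml - templated fields are filled in from values.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-api
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-api
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```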
Kustomize
Another solution, similar to Helm, is Kustomize. It works from plain manifest files (not templates) while making it simple to override / customize your objects before applying them to your Kubernetes cluster.
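A minimal kustomization.yaml sketch (file names, namespace, and image are hypothetical) that layers overrides on top of plain manifests:

```yaml
# kustomization.yaml - references plain manifests and applies overrides on top;
# apply with `kubectl apply -k .`
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
nameSuffix: -v2          # example override: distinguish a second variant of the same objects
namespace: api-v2        # example override: target a different namespace
images:
  - name: registry.example.com/api-frontend
    newTag: "2.0.0"      # example override: pin a different image tag
```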
ArgoCD
While Kustomize and Helm are both standalone clients, we could also mention solutions such as ArgoCD.
The ArgoCD controller would run inside your Kubernetes cluster, allowing you to create "Application" objects.
Those Applications are processed by ArgoCD, driving deployment of your workloads (common sources for those applications would involve Helm Charts, Git repositories, ...).
The advantage of ArgoCD is that its controller may (depending on your configuration) be responsible for upgrading your applications over time (e.g. if your source is a git repository, branch XXX, and someone pushes changes to that branch, ArgoCD would apply them pretty much right away).
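A minimal Application sketch (repository URL, path, and namespaces are made up):

```yaml
# ArgoCD Application: the controller keeps the destination namespace in sync with Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-frontend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/team/api-frontend.git
    targetRevision: main
    path: deploy            # plain manifests, a Helm chart, or a Kustomize directory
  destination:
    server: https://kubernetes.default.svc
    namespace: api
  syncPolicy:
    automated:
      prune: true           # delete resources that were removed from Git
      selfHeal: true        # revert manual drift in the cluster
```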
Operators
Most of those solutions are, however, pretty much unaware of how your application is running. Say you upgrade a deployment driven by Helm, Kustomize or ArgoCD and end up with some database pods stuck in CrashLoopBackOff: your application pods would get updated nevertheless; there is no automatic rollback to a previous working configuration.
Which brings us to another way to ship applications to Kubernetes: operators.
Operators are aware of the state of your workloads, and may be able to fix common errors (depending on how the operator was coded - there's no magic).
An operator is an application (it can be written in Go, Java, Python, Ansible playbooks, ... or anything that comes with a library for communicating with a Kubernetes cluster API).
An operator is constantly connected to your Kubernetes cluster API. You would usually find some CustomResourceDefinitions specific to your operator, allowing you to describe the deployment of some component in your cluster (e.g. the Elasticsearch operator introduces the object kinds "Elasticsearch" and "Kibana").
The operator watches for instances of the objects it manages (e.g. Elasticsearch), eventually creating Deployments/StatefulSets/Services, ...
If someone deletes an object that was created by your operator, it would/should be re-created by that operator in a timely manner (mileage may vary, depending on which operator we're talking about ...).
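For instance, with the Elastic operator (ECK), you describe a cluster with a custom object roughly like the following (version and sizing are just an example), and the operator creates and reconciles the underlying StatefulSets, Services, Secrets, and so on:

```yaml
# Custom resource understood by the ECK operator.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.13.0
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false
```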
A perfect example of operators would be something like OpenShift 4 (OKD4): a Kubernetes cluster that comes with tens of operators (SDN, DNS, machine configurations, ingress controller, Kubernetes API server, etcd database, ...). The whole cluster is an assembly of operators: when upgrading the cluster, each of them manages the upgrade of its corresponding services in an orchestrated way, one after the other, and if anything fails you are usually still left with enough replicas running to troubleshoot the issue.
Depending on what you're looking for, each option has advantages and drawbacks. Now, if you're looking for a "single deployment that can produce 2+ kinds of pods", then ArgoCD or some home-grown operator would qualify.

k8s: Is it possible to have two identical deployments but route different traffic to them?

Here is my use case:
I have a microservice which gets sent traffic via an ingress gateway in real time and via a batch process. What I'd like to do is conceptually define a deployment and have it create two sets of pods:
One set for real-time requests
Another for batch.
When a new version of the microservice gets deployed, the k8s deployment is updated and both real time and batch use the new version.
Is this possible in k8s or will I need to create two deployments and manage them separately?
This is a community wiki answer posted for better visibility. Feel free to expand it.
Since we don't have complete information about the architecture used, the following suggestions from the comments may help solve the problem.
1. With Deployments, Services, Selectors
You can have two identical deployments and route different traffic to them.
It may be implemented by Services:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.
Such an approach has some advantages.
By default, traffic is routed to endpoints randomly if you are using the iptables proxy mode. If you try to send traffic to specific pods covered by the same deployment, you may end up with large differences in CPU and memory usage, leading to resource exhaustion or wasted resources.
It is also easier to manage service versioning, CPU and memory assignment, and rollouts.
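A sketch of the approach, assuming two Deployments of the same image whose Pods are labeled with a role (all names are hypothetical); the two Services differ only in their selectors:

```yaml
# Both Deployments run the same image and label their Pods app: myapp
# plus role: realtime or role: batch; each Service selects one role.
apiVersion: v1
kind: Service
metadata:
  name: myapp-realtime
spec:
  selector:
    app: myapp
    role: realtime       # only Pods from the real-time Deployment
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-batch
spec:
  selector:
    app: myapp
    role: batch          # only Pods from the batch Deployment
  ports:
    - port: 80
      targetPort: 8080
```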
2. With Istio
From David M. Karr
If a service is defined as a VirtualService, you can route to different DestinationRule objects depending on header values (or other qualifications).
Additionally
If you need to deploy a new version of the microservice, you can choose between different strategies, whichever is most suitable for your needs.
Kubernetes deployment strategies:
recreate: terminate the old version and release the new one
ramped: release a new version on a rolling update fashion, one after the other
blue/green: release a new version alongside the old version then switch traffic
canary: release a new version to a subset of users, then proceed to a full rollout
a/b testing: release a new version to a subset of users in a precise way (HTTP headers, cookie, weight, etc.). A/B testing is really a technique for making business decisions based on statistics, but we will briefly describe the process. This doesn't come out of the box with Kubernetes; it implies extra work to set up a more advanced infrastructure (Istio, Linkerd, Traefik, custom nginx/haproxy, etc.).

Does Kubernetes natively support "blue-green"-like deployments?

I have a single page app. It is served by (and talks to) an API server running on a Kubernetes deployment with 2 replicas. I have added a X-API-Version header that my API sends on every request, and my client can compare with, to figure out if it needs to inform the user their client code is outdated.
One issue I am facing however, is when I deploy, I want to ensure only ever 1 version of the API is running. I do not want a situation where a client can be refreshed many times in a loop, as it receives different API versions.
I basically want it to go from 2 replicas running version A, to 2 replicas running Version A, 2 running version B. Then switch the traffic to version B once health checks pass, then tear down the old version A's.
Does Kubernetes support this using the RollingUpdate strategy?
For blue-green deployments in Kubernetes, I recommend using a third-party solution like Argo Rollouts, NGINX, or Istio. They let you split traffic between the versions of your application.
However, Kubernetes is introducing the Gateway API, which has built-in support for traffic splitting.
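As an illustration, Argo Rollouts replaces the Deployment with a Rollout object that has a built-in blue/green strategy; a rough sketch (names and image are placeholders):

```yaml
# The new ReplicaSet comes up behind the preview Service; the active Service
# is switched over only once the new version is healthy (and promoted).
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:2.0.0
  strategy:
    blueGreen:
      activeService: api-active      # receives live traffic
      previewService: api-preview    # points at the new version before promotion
      autoPromotionEnabled: false    # require manual promotion once checks pass
```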
What you are asking for isn't really a blue/green deploy. If you require two or more pods to run during the upgrade for performance reasons, you will get an overlap where some pods respond with version A and some with version B.
You can fine-tune it a little; for instance, you can configure it to start all of the new pods at once, and for each one that turns from running to ready, one of the old pods is removed. If your pods start fast, or at least equally fast, the overlap will be really short.
Or, if you can accept temporary downtime, there is a deployment strategy (Recreate) that completely decommissions all old pods before rolling out the new ones. Depending on how fast your service starts, this could mean a short or long downtime.
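In plain Deployment terms, the two options just described are excerpts of the spec roughly like this (the figures are only an example):

```yaml
# Option 1: surge all new Pods at once; old Pods are removed only as new ones become ready.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%       # start all replacement Pods immediately
      maxUnavailable: 0    # never take an old Pod down before a new one is ready
```

```yaml
# Option 2: accept downtime; all old Pods are terminated before the new ones start.
spec:
  strategy:
    type: Recreate
```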
Or, if you don't mind a little extra work, you deploy version B in parallel with version A and add the version to the set of labels.
Then, in your Service, you make sure the version label is part of the selector, and once the pods for version B are running, you change the Service selector from version A to version B; it will instantly start using those pods instead.
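A sketch of that manual blue/green switch (names and labels are hypothetical); both Deployments label their Pods with a version, and the Service selector pins one of them:

```yaml
# The Service only selects Pods carrying version: "B"; editing this single field
# switches all traffic from the version A Pods to the version B Pods at once.
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    version: "B"          # was "A" before the cut-over
  ports:
    - port: 80
      targetPort: 8080
```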
I recently started using Kubernetes. My experience is that yes, K8s behaves this way out of the box. If I have e.g. two pods running and I perform a deployment, K8s will create two fresh pods and then, only once those two fresh pods are healthy, will it terminate the original two pods.

Notify containers of updated pods in Kubernetes

I have some servers I want to deploy in Kubernetes. The clients of those servers will also be in Kubernetes. Clients and servers can independently be deployed or scaled.
The clients must know the list of the servers (IPs). I have an HTTP endpoint on the clients to update the list of the servers while the clients are running (hot config reload).
All this is currently running outside of Kubernetes. I want to migrate to GCP.
What's the industry standard regarding pods updates and notifications? I want to get notified when servers are updated to call the endpoints on the clients to update the list of the servers.
I can't use a LoadBalancer since the clients really need to call a specific server (the business logic is in the clients).
Thanks
The standard way to call a group of pods that offers some functionality is a Service. If you don't want automated load-balancing or a single IP address, which regular Services provide, you should look into headless Services. Resolving a headless Service returns a list of DNS A records that point to the pods behind the Service. This list is automatically updated as pods become available/unavailable.
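A headless Service is just a normal Service with clusterIP set to None; a sketch (names and port are hypothetical):

```yaml
# Headless Service: no virtual IP is allocated; a DNS lookup of
# servers.<namespace>.svc.cluster.local returns one A record per ready Pod.
apiVersion: v1
kind: Service
metadata:
  name: servers
spec:
  clusterIP: None          # this is what makes the Service headless
  selector:
    app: server
  ports:
    - port: 8080
```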
While I think modifying an existing script to just pull a list from a headless Service is much simpler, it might be worth mentioning CRDs (Custom Resource Definitions) as well.
You could build a custom controller that listens to service events and then posts the data from that event to an HTTP endpoint of another Service or Ingress. The custom resource would define which service to watch and where to post the results.
Though this is probably a much heavier-weight solution than just having a sidecar / separate container in a pod polling the service for changes (which sounds closer to your existing model).
I upvoted Alassane's answer, as I think it is the correct first path for something like this before building a CRD.

Running Concurrent Major versions of an API with google endpoints in Kubernetes

I'm struggling to find any documentation on configuring the Extensible Service Proxy (ESP) and Google Endpoints for the correct pattern for deploying multiple versions of an API.
Brief overview - I have Docker building two releases of an API; they run in separate containers.
I currently have a kubernetes pod with ESP and APIv1.
Really I want to run a pod with ESP+APIv1 and a pod with ESP+APIv2, but I can't work out how this would work - my external IP and DNS would all point at one pod, and Endpoints doesn't seem to be consulted until the request reaches the ESP service. Is there some mechanism for passing requests to another ESP instance? I'm clearly missing something here.
Or - in order to run multiple versions, should I be running a pod with ESP, APIv1, and APIv2 in it? That doesn't seem ideal from a scalability or management point of view.
Unless APIv1 and APIv2 are disjoint, you can probably implement methods supporting both versions in the same dockerized app. This approach is explained in more detail here.
https://cloud.google.com/endpoints/docs/lifecycle-management
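A hedged sketch of that single-backend shape, based on the general pattern from the Endpoints samples (image tag, flags, service name, and project are placeholders - check the current ESP documentation before relying on them): one Pod runs ESP as a sidecar in front of a single app container that serves both API versions.

```yaml
# ESP proxies requests to one backend that implements both API versions;
# the Endpoints service configuration controls what is exposed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args:
            - --http_port=8081
            - --backend=127.0.0.1:8080
            - --service=my-api.endpoints.my-project.cloud.goog
            - --rollout_strategy=managed
          ports:
            - containerPort: 8081
        - name: api
          image: gcr.io/my-project/my-api:2.0.0   # serves both the v1 and v2 surfaces
          ports:
            - containerPort: 8080
```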