I have some doubts about deploying Spring Cloud Gateway (the successor to Zuul) on Kubernetes with zero downtime. I'm completely new to Kubernetes and a bit lost among quite a lot of concepts.
We would like to use Spring Cloud Gateway to verify the JWT. I've also read that a call should first go through the gateway, then through Ribbon/discovery, and finally reach the REST services.
The application has very strict zero-downtime requirements. My question is: what happens when I need to redeploy the Gateway for some reason? Is it possible to achieve zero downtime if it is the first component in the chain and there is constant traffic and requests flowing through the system?
Is there any other component I should set up in order to achieve this? Users accessing my REST services shouldn't be disconnected abruptly.
Kubernetes Deployments use a rolling update model to achieve zero-downtime deploys: new pods are brought up and allowed to become ready, then added to the rotation, then old pods are shut down, and the process repeats until all pods are replaced.
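A minimal sketch of what that looks like in a Deployment manifest, assuming your gateway exposes a readiness endpoint (the names, image, and probe path here are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway                  # placeholder name for the Spring Cloud Gateway deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never remove an old pod before a replacement is ready
      maxSurge: 1                # bring up one extra pod at a time
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: example/gateway:1.0.0              # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:                           # the pod only receives traffic once this passes
            httpGet:
              path: /actuator/health/readiness      # assumes Spring Boot Actuator is enabled
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
```

The combination of `maxUnavailable: 0` and a readiness probe is what keeps clients from being routed to pods that aren't serving yet.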
Hello fellow developers.
I have the following problem:
I have a solution that relies on a few services, each running in its own container. At the moment I provide the solution by creating a manual deployment for each client, each time, using cloud services. The requirements include scalability, fault tolerance, cost tracking, and access management.
However, instead of creating a deployment manually every time for each client, I would like clients to go through another service or app, so the new requirements are:
Dashboard to issue a deployment.
Dashboard gives overview of health of services.
Dashboard tracks costs and issues API keys automatically.
My first thought is using kubernetes and creating deployments issued by a user through the dashboard.
I have little experience with Kubernetes, so I would appreciate anyone pointing me in a direction rather than necessarily explaining everything.
Thanks all
In the Kubernetes world, the typical/classic pattern is to use a Deployment for stateless applications and a StatefulSet for stateful applications.
I am using a vendor product (Ping Access) which is meant to be a stateless application (it plays the role of a Proxy in front of other Ping products such as Ping Federate).
The GitHub repo for Ping Cloud (where they run these components as containers) shows them running Ping Access (a stateless application) as a StatefulSet.
I am reaching out to their support team to understand why anyone would run a Stateless application as a StatefulSet.
Are there other examples of such usage (as this appears strange/bizarre IMHO)?
I also observed a scenario where a customer is running a stateful app (Ping Federate) as a regular Deployment instead of as a StatefulSet.
The Ping Cloud repository does build and deploy Ping Federate as a StatefulSet.
Honestly, both these usages, running a stateless app as a StatefulSet (Ping Access) and running a stateful app as a deployment (Ping Federate) sound like classic anti-patterns.
Apart from the ability to attach dedicated volumes to StatefulSets, you get the following features, some of which might be useful for stateless applications (see the sketch after this list):
Ordered startup and shutdown of Pods, with Kubernetes handling them one by one in order.
A guarantee that no more than a single instance of a Pod is running at a time, even during unscheduled Pod restarts.
Stable DNS names for Pods.
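As a rough illustration (the names here are made up, not taken from ping-cloud-base), the stable per-pod DNS names come from pairing a StatefulSet with a headless Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pingaccess-engine          # illustrative headless Service name
spec:
  clusterIP: None                  # headless: each pod gets its own stable DNS entry
  selector:
    app: pingaccess-engine
  ports:
    - port: 3000
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pingaccess-engine
spec:
  serviceName: pingaccess-engine   # pods resolve as pingaccess-engine-0.pingaccess-engine, -1, ...
  replicas: 2
  podManagementPolicy: OrderedReady  # the default: ordered startup/shutdown, one pod at a time
  selector:
    matchLabels:
      app: pingaccess-engine
  template:
    metadata:
      labels:
        app: pingaccess-engine
    spec:
      containers:
        - name: engine
          image: example/pingaccess:latest   # placeholder image
          ports:
            - containerPort: 3000
```

Note that this sketch deliberately omits volumeClaimTemplates, which is exactly the part a stateless application wouldn't need.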
I can only speculate why Ping Federate uses a StatefulSet. Possibly it has to do with access limitations of the downstream services it connects to.
The consumption of PingAccess is stateless, but the operation is very much stateful. Namely, the PingAccess admin console maintains a database for configuration, and part of that configuration includes clustered engine mapping and session keys.
Thus, if you were to take away the persistent volume, restarting the admin console would decouple all the engines in the cluster. The engines would then no longer receive configuration, and web session keys would be mismatched.
The ping-cloud-base repo uses a StatefulSet for engines not for persistent volumes but for the StatefulSet naming scheme. I personally disagree with this and recommend using a Deployment for engines. The only downside is that you then have to remove orphaned engines from the admin configuration; "orphaned engines" here means engine config that remains in the admin console DB after the engine deployment is rolled or updated. These can be removed from the admin UI or API, and it is pretty easy to script in a hook.
It would be ideal for an application that is not a datastore to run without a persistent volume, but for the reasons mentioned above, the PingAccess admin console does require persistent storage and acts like a persistent datastore, so I think a StatefulSet is okay.
Finally, the Ping DevOps team focuses its support on their Helm chart (where engines are also Deployments by default). I'd suspect the community and enterprise support is much larger there for folks deploying on their own; ping-cloud-base is a good place to pick up some hooks, though.
I'm currently looking into triggering some 3D rendering from an AppEngine-based service.
The idea is that input data is submitted by an API client to this web service, which then invokes an internal Kubernetes GPU enabled application ("rendering backend") to do the hard work.
GPU-enabled clusters are relatively expensive ($$$), so I really want the cluster to be up and running on demand. I am trying to achieve that by setting the autoscaling minimum to 0 for the rendering backend.
The only pretty way of "triggering" a rendering task on such a cluster I could think of is via Pub/Sub Push. Basically, I need something like Cloud Tasks, but those seem to be aimed at long running tasks executed in AppEngine, not Kubernetes. Plus I like the way Pub/Sub decouples the web service from the rendering backend.
Google's Pub/Sub only allows pushing via HTTPS and only to a validated domain. It appears that Google is forcing me to completely "expose" my internal rendering backend by assigning a domain name to it, which feels ridiculous. I cannot just tell Pub/Sub to invoke http://loadbalancer.IP.address/handle_push.
This is making me doubt my architecture.
How would you go about building something like this on GCP?
From the GKE perspective:
You can have a cluster with a dedicated GPU-based node pool and schedule your pods there using taints and tolerations. Additionally, you can control the number of nodes in that node pool with autoscaling, so that nodes are only brought up when your pods need to be scheduled and run.
Keep in mind that this requires an additional, default, non-GPU node pool where the system pods run.
For triggering: as long as your default pool is running, you'll be able to deploy your application, and autoscaling should kick in automatically. For deploying from an App Engine application, you might want to consider talking to the Kubernetes API directly through a client library.
Finally, given the nature of your current goal (3D rendering), it might be best to use Kubernetes Jobs. With these, you can run a sporadic computational load and allow the node pool to scale back down once it is finished.
Wrapping up: you can have a minimal cluster with a zero-sized GPU node pool that autoscales up when a job tolerating its taint is scheduled there, and automatically scales back down once the workload is finished. These actions can be triggered from GAE using one of the client libraries.
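As a rough sketch (not a drop-in manifest), such a Job could look like the following; the taint key/value, image, and GPU resource name are assumptions that depend on how the node pool was created:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: render-job                     # illustrative name
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      tolerations:                     # allows scheduling onto the tainted GPU node pool
        - key: dedicated               # hypothetical taint key set on the GPU node pool
          operator: Equal
          value: gpu-rendering         # hypothetical taint value
          effect: NoSchedule
      containers:
        - name: renderer
          image: example/renderer:latest   # placeholder rendering image
          resources:
            limits:
              nvidia.com/gpu: 1        # requesting a GPU forces scheduling on a GPU node,
                                       # which makes the autoscaler add one if none exists
```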
Context
I am deploying a set of services that are containerised using Docker into AWS. No matter which deployment solution is chosen (e.g. raw EC2/ECS/Elastic Beanstalk/Fargate) we will face the issue of "service discovery".
To name just a few of the options for service discovery that I've considered:
AWS Route 53 Service Registry
Kubernetes
Hashicorp Consul
Spring Cloud Netflix Eureka
Specifics Of My Stack
I am developing Java Spring Boot applications using Spring Cloud with the target deployment environment being AWS.
Given that my stack is Spring-based, Spring Cloud Eureka made sense to me while developing locally. It was easy to set up a single node, it integrates well with the stack and ecosystem of choice, and it required very little setup.
Locally, we are using docker compose (not swarm) to deploy services - one of the containers deployed is a single node Eureka service discovery server.
However, when we progress outside of local development and into staging or production environment we are considering options like Kubernetes.
My Own Assessment Of Pros/Cons
AWS Route 53 Service Registry
Requires us to couple code specifically to AWS services. Not a problem per se; we are quite tied in anyway on other parts of the stack (SNS/SQS).
Makes running the stack locally slightly more difficult, as it relies on Route 53; I suppose we could open up a certain hosted zone for local development.
AWS native, no managing service registries or extra "moving parts".
Spring Cloud Eureka
Downside is that this requires us to deploy and manage a highly available service registry cluster, and it requires more resources. Another "moving part" to manage.
Advantages are that it fits into our stack well (the Spring ecosystem, Spring Boot, Spring Cloud, Feign, and Zuul work well with it). It can also be run locally trivially.
I presume we need to configure the networks and registry zone to ensure that clients publish their host address rather than the Docker container's internal IP address. E.g. if service A is on host A and wants to talk to service B on host B, service B needs to advertise its EC2 address rather than some internal Docker IP.
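If we stick with Eureka, my rough understanding is that this would be handled per service in application.yml along these lines (property names are from Spring Cloud Netflix; the hostname value and registry address are placeholders that would come from the environment):

```yaml
# application.yml for service-b (sketch)
eureka:
  instance:
    prefer-ip-address: false        # advertise a hostname instead of the container IP
    hostname: ${HOST_PUBLIC_DNS}    # e.g. the EC2 instance's DNS name, injected at deploy time
  client:
    service-url:
      defaultZone: http://eureka-server:8761/eureka/   # placeholder registry address
```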
Questions
If we use Kubernetes for orchestration, are there any disadvantages to using something like Spring Cloud Eureka over the built-in service discovery options described here: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services ?
Given that Kube provides this, it seems suboptimal to then use Eureka (itself deployed via Kube) to perform discovery. I presume Kube can make some optimisations affecting availability and stability that might not be possible with Eureka. For example, Kube knows when it is deploying a new version of a service, whereas Eureka has to rely on heartbeats/health checks, and depending on how those are configured (e.g. frequency) this could result in stale records; I presume Kube doesn't suffer from this for planned service shutdowns/restarts, though I guess it still does for unplanned failures such as a host failure or a network partition.
Does anyone have any advice on this? Do people use platforms like Kubernetes but rely on other mechanisms for service discovery rather than those provided by Kube? Is there a good reason to do one or the other?
Possible Challenges I Anticipate
We could replace Eureka, but relying on Kube to perform discovery means we'd need to run Kube locally to deploy, whereas currently we have a simple, tiny docker-compose file. Also, I'll have to look at how easily Ribbon, Zuul, and Feign play with this.
Currently we have Ribbon configured with a Eureka client so that service A can refer to service B simply as "service-b", for example, and have Ribbon resolve a healthy host via the Eureka client. I guess we can configure Ribbon not to use Eureka and instead use a Kube service name, which will be resolved by Kube DNS at runtime...
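If so, I imagine the Ribbon side would look roughly like this in application.yml (I haven't verified this end to end; the service name, namespace, and port are just placeholders):

```yaml
# application.yml for service-a (sketch): bypass Eureka and let Kubernetes DNS resolve the Service
ribbon:
  eureka:
    enabled: false                # stop Ribbon from looking up servers in Eureka
service-b:                        # Ribbon client name, e.g. as used by @FeignClient("service-b")
  ribbon:
    listOfServers: service-b.default.svc.cluster.local:8080   # Kubernetes Service DNS name
```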
Final Note
Thanks in advance for any contribution or advice. I know this might elicit a primarily opinion focused response. But I am hoping someone can provide objective guidance on when one solution might be preferable to another.
Service discovery is something you get out of the box with Kubernetes, so introducing another external registry into your platform means one more application to maintain and deploy, and one more potential point of failure. So I would stick with the service discovery provided by Kubernetes.
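For illustration, a plain Service is all you need on the Kubernetes side; callers then reach it by its DNS name (the names below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-b        # resolvable in-cluster as service-b, or service-b.<namespace>.svc.cluster.local
spec:
  selector:
    app: service-b       # matches the pods of the service-b Deployment
  ports:
    - port: 80
      targetPort: 8080   # port the container actually listens on
```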
When using an on-premise (self-managed) API gateway like Kong, should it run on a node within the main Kubernetes cluster, or should it run as a separate Kubernetes cluster?
Unless you have an amazing reason to do otherwise: run Kong within the cluster. Pretty much the last thing you'd want is for all API requests to bomb because of a severed connection between cluster-A and cluster-B, not to mention the horrible latency as requests hop from one layer of abstraction to another.
Taking a page from the nginx Ingress controller, you also have the opportunity to use the Endpoint API to bypass the iptables-based Service machinery, saving even more latency and system resources -- a trick that would be almost impossible with a multi-cluster configuration.
It is my recollection that there are even Kong-based Ingress controllers, which could save you even more heartache if their feature set and your needs align.
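If that's the route you take, my guess is that wiring a route through it looks like any other Ingress resource; the ingress class name and host below are assumptions, so check the controller's docs:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routes                  # illustrative name
spec:
  ingressClassName: kong            # assumes the Kong ingress controller registers this class
  rules:
    - host: api.example.com         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b     # placeholder backend Service
                port:
                  number: 80
```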