Kubernetes deployment without a port

I have a service which is long-running (in a while(1) loop) and processes payloads via GCloud Pub/Sub, after which it writes the result to a DB.
The service doesn't need to listen on any port.
What would the declarative YAML config look like for Kind=Deployment?
I understand ClusterIP is the default type, and the docs go on to say that a headless service just has to define spec.clusterIP as None.
(A better practice would probably be to modify the worker to exit after successfully processing a payload, and to change the Kind to Job, but this is in the backlog.)

What you're describing sounds more like a job or a deployment than a service. You can run a deployment (which creates a replicaset, which ensures a certain number of replicas are running) without creating a service.
If your pod isn't exposing any network services for others to consume, there's very little reason to create a service.
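For reference, a minimal sketch of such a Deployment (the name and image are placeholders); since the pod serves no traffic, there is simply no ports section and no accompanying Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubsub-worker
  template:
    metadata:
      labels:
        app: pubsub-worker
    spec:
      containers:
      - name: worker
        image: registry.example.com/pubsub-worker:latest  # placeholder image
        # no ports: the container only pulls from Pub/Sub and writes to the DB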

Related

In kubernetes, is there a way to make statefulset pods linger to finish requests on rolling update?

In Kubernetes, I have a statefulset with a number of replicas.
I've set the updateStrategy to RollingUpdate.
I've set podManagementPolicy to Parallel.
My statefulset instances do not have a persistent volume claim -- I use the statefulset as a way to allocate ordinals 0..(N-1) to pods in a deterministic manner.
The main reason for this is to maintain availability for new requests while rolling out software updates (freshly built containers), while still allowing each container, and other services in the cluster, to "know" its ordinal.
The behavior I want, when doing a rolling update, is for the previous statefulset pods to linger while there are still long-running requests processing on them, but I want new traffic to go to the new pods in the statefulset (mapped by the ordinal) without a temporary outage.
Unfortunately, I don't see a way of doing this -- what am I missing?
Because I don't use volume claims, you might think I could use deployments instead, but I really do need each of the pods to have a deterministic ordinal that:
is unique at the point of dispatching new service requests (incoming HTTP requests, including public ingresses)
is discoverable by the pod itself
is persistent for the duration of the pod lifetime
is contiguous from 0 .. (N-1)
The second-best option I can think of is using something like ZooKeeper or etcd to manage this property separately, using some of the traditional long-poll or leader-election mechanisms. But given that Kubernetes already knows (or can know) all the necessary bits, AND Kubernetes service mapping knows how to steer incoming requests from old instances to new ones, that seems more redundant and complicated than necessary, so I'd like to avoid it.
I assume that you need this for a stateful workload, i.e. a workload that requires writes. Otherwise you can use Deployments with multiple pods online for your shards. A key feature of StatefulSets is that they provide unique, stable network identities for the instances.
The behavior I want, when doing a rolling update, is for the previous statefulset pods to linger while there are still long-running requests processing on them, but I want new traffic to go to the new pods in the statefulset.
This behavior is supported by Kubernetes pods. But you also need to implement support for it in your application.
New traffic will not be sent to your "old" pods.
A SIGTERM signal will be sent to the pod - your application may want to listen for this and take some action.
After a configurable "termination grace period", your pod will get killed.
See Kubernetes best practices: terminating with grace for more info about pod termination.
Be aware that you should connect to Services instead of directly to Pods for this to work; e.g. you need to create a headless Service for the replicas in a StatefulSet.
If your clients are connecting to a specific pod, e.g. pod N, through its headless-service DNS name, that pod will not be available for some time during upgrades. You need to decide whether your clients should retry their connections during this period or connect to another pod if N is not available.
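For reference, a minimal sketch of such a headless Service (all names are placeholders); spec.publishNotReadyAddresses is an optional knob that keeps per-pod DNS records even for pods that are not Ready, which may or may not fit your retry strategy:

apiVersion: v1
kind: Service
metadata:
  name: my-set  # placeholder: must match spec.serviceName in the StatefulSet
spec:
  clusterIP: None  # headless: per-pod DNS records like my-set-0.my-set
  publishNotReadyAddresses: true  # optional: resolve pods even before they are Ready
  selector:
    app: my-set  # placeholder: must match the pod template labels
  ports:
  - port: 80  # assumed application port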
If you are in a case where you need:
stateful workload (e.g. support for write operations)
want high availability for your instances
then you need a form of distributed system that does some form of replication/synchronization, e.g. using Raft or a product that implements it. Such a system is most easily deployed as a StatefulSet.
You may be able to do this using Container Lifecycle Hooks, specifically the preStop hook.
We use this to drain connections from our Varnish service before it terminates.
However, you would need to implement (or find) a script to do the draining.
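To illustrate, a hedged sketch of what that could look like in a pod template (the drain script path and grace period are assumptions, not a drop-in solution):

spec:
  terminationGracePeriodSeconds: 120  # assumed: long enough for in-flight requests to finish
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder
    lifecycle:
      preStop:
        exec:
          # hypothetical script that blocks until long-running requests have drained
          command: ["/bin/sh", "-c", "/scripts/drain-connections.sh"]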

Kubernetes Pod to Pod communication

I have 2 Deployments - A (1 replica) and B (4 replicas).
I have a scheduled job in the Pod of A, and on successful completion it hits an endpoint in one of the Pods from Deployment B, through the Service for Deployment B.
Is there a way I can hit all the endpoints of the 4 Pods from Deployment B on successful completion of the job?
As it stands, only one of the pods is notified. But is this possible, given that I don't want to use pub-sub for this?
Is there a way I can hit all the endpoints of the 4 Pods from Deployment B on successful completion of the job?
But is this possible, given that I don't want to use pub-sub for this?
As you say, a pub-sub solution is best for this problem. But you don't want to use it.
Use stable network identity for Service B
To solve this without pub-sub, you need a stable network-identity for the pods in Deployment B. To get this, you need to change to a StatefulSet for your service B.
StatefulSets are valuable for applications that require one or more of the following.
Stable, unique network identifiers.
When B is deployed as a StatefulSet, your job or other applications can reach the pods of B with a stable network identity that stays the same across every version of service B that you deploy. Remember that you also need to deploy a Headless Service for the pods.
Scatter pattern: You can have an application-aware proxy (e.g. aware of the number of pods of Service B), possibly as a sidecar. Your job sends the request to this proxy, and the proxy then sends a request to all your replicas, as described in Designing Distributed Systems: Patterns and Paradigms.
Pub-Sub or Request-Reply
If using pub-sub, the job only publishes an event. Each pod in B is responsible for subscribing.
In a request-reply solution, the job or a proxy is responsible for watching which pods exist in service B (unless there is a fixed number of pods); in addition, it needs to send a request to every pod, and if a request to any pod fails (which will happen during deployments at some point), it is responsible for retrying the request to those pods.
So, yes, it is a much more complicated problem when done request-reply style.
A Kubernetes Service is an abstraction that provides service discovery and load balancing, so if you are using a Service, your request will be sent to one of the backend pods.
To achieve what you want, I suggest you create 4 different Services, each having only one backend pod (see the sketch below), or use a message queue such as RabbitMQ between service A and service B.
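One way to get a Service per backend pod is to run B as a StatefulSet and select on the pod-name label that Kubernetes adds to StatefulSet pods automatically; a sketch for the first of the four Services (the name and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: b-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: b-0  # label set automatically on StatefulSet pods
  ports:
  - port: 80
    targetPort: 8080  # assumed container port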
You can use a headless service. That way Kubernetes won't allocate a separate IP address and will instead set up a DNS record containing the IP addresses of all the pods. Then in your application you just resolve the record and send a notification to all endpoints. But really, this is an ideal use case for pub-sub or a service discovery system; DNS is too unreliable for this.
Pub-sub is the best option here. I have a similar use case and I am using pub-sub; it has been in production for the last 6 months.

Sharded load balancing for stateful services in Kubernetes

I am currently switching from Service Fabric to Kubernetes and was wondering how to do custom and more complex load balancing.
So far I have read about Kubernetes offering "Services", which do load balancing for the pods hidden behind them, but only in fairly basic forms.
What I want to rewrite right now looks like the following in Service Fabric:
I have this interface:
public interface IEndpointSelector
{
    int HashableIdentifier { get; }
}
A context keeping track of the account in my ASP.NET application, for example, implements this. Then, I wrote some code which currently does service discovery through the Service Fabric cluster API and keeps track of all services, updating them when any instances die or are respawned.
Then, based on the deterministic nature of this identifier (due to the context being cached etc.) and given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.
Now, how would I go about doing this in Kubernetes?
As I already mentioned, I found "Services", but it seems like their load balancing does not support custom logic and is rather only useful when working with stateless instances.
Is there also a way to have service discovery in Kubernetes which I could use here to replace my existing code at some points?
StatefulSet
StatefulSet is a building block for stateful workloads on Kubernetes, with certain guarantees.
Stable and unique network identity
StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage.
As an example, if your StatefulSet has the name sharded-svc
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sharded-svc
spec:
  serviceName: sharded-svc  # references the governing Headless Service
  replicas: 3
And if you have e.g. 3 replicas (as in the spec above), those will be named <name>-<ordinal>, where the ordinal starts at 0 and goes up to replicas - 1.
The name of your pods will be:
sharded-svc-0
sharded-svc-1
sharded-svc-2
and those pods can be reached with a DNS name:
sharded-svc-0.sharded-svc.your-namespace.svc.cluster.local
sharded-svc-1.sharded-svc.your-namespace.svc.cluster.local
sharded-svc-2.sharded-svc.your-namespace.svc.cluster.local
given that your Headless Service is named sharded-svc and you deploy it in namespace your-namespace.
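The Headless Service itself is an ordinary Service with clusterIP set to None; a minimal sketch matching the names above (the selector label and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: sharded-svc
spec:
  clusterIP: None  # headless: per-pod DNS records instead of a single virtual IP
  selector:
    app: sharded-svc  # must match the labels in the StatefulSet pod template
  ports:
  - port: 80  # assumed application port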
Sharding or Partitioning
given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.
What you describe here is that your stateful service is what is called sharded or partitioned. This does not come out of the box with Kubernetes, but you have all the building blocks needed for this kind of service. It may be that a 3rd-party service providing this feature already exists and can be deployed, or it can be developed.
Sharding Proxy
You can create a service sharding-proxy consisting of one or more pods (possibly from a Deployment, since it can be stateless). This app needs to watch the pods/service/endpoints in your sharded-svc to know where it can route traffic. This can be developed using client-go or other alternatives.
This service implements the logic you want in your sharding, e.g. account-nr modulo 3 is routed to the corresponding pod ordinal.
Update: There are 3rd-party proxies with sharding functionality, e.g. Weaver Proxy.
Sharding request based on headers/path/body fields
Recommended reading: Weaver: Sharding with simplicity
Consuming sharded service
To consume your sharded service, the clients send requests to your sharding-proxy, which then applies your routing or sharding logic (e.g. a request with account-nr modulo 3 is routed to the corresponding pod ordinal) and forwards the request to the replica of sharded-svc that matches your logic.
Alternative Solutions
Directory Service: It is probably easier to implement sharding-proxy as a directory service, but it depends on your requirements. The clients can ask your directory service "to what StatefulSet replica should I send account-nr X?" and your service replies with e.g. sharded-svc-2.
Routing logic in client: Probably the easiest solution is to have your routing logic in the client, and let this logic calculate which StatefulSet replica to send the request to.
Services generally run the proxy in kernel space for performance reasons, so writing custom code is difficult. Cilium does allow writing eBPF programs for some network features, but I don't think service routing is one of them. So that pretty much means working with a userspace proxy instead. If your service is HTTP-based, you could look at some of the existing Ingress controllers to see if any are close enough or allow you to write your own custom session-routing logic. Otherwise you would have to write a daemon yourself to handle it.

Specify scheduling order of a Kubernetes DaemonSet

I have Consul running in my cluster and each node runs a consul-agent as a DaemonSet. I also have other DaemonSets that interact with Consul and therefore require a consul-agent to be running in order to communicate with the Consul servers.
My problem is, if my DaemonSet is started before the consul-agent, the application will error as it cannot connect to Consul and subsequently get restarted.
I also notice the same problem with other DaemonSets, e.g. Weave, as it requires kube-proxy and kube-dns. If Weave is started first, it will constantly restart until the kube services are ready.
I know I could add retry logic to my application, but I was wondering if it was possible to specify the order in which DaemonSets are scheduled?
Kubernetes itself does not provide a way to specify dependencies between pods / deployments / services (e.g. "start pod A only if service B is available" or "start pod A after pod B").
The current approach (based on what I found while researching this) seems to be retry logic or an init container. To quote the docs:
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
This means you can either add retry logic to your application (which I would recommend, as it might help you in different situations such as a short service outage), or you can use an init container that polls a health endpoint via the Kubernetes service name until it gets a satisfying response.
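A sketch of the init-container variant for the consul-agent case (the image, service name, and port are assumptions; Consul's /v1/status/leader endpoint serves as the health check):

spec:
  initContainers:
  - name: wait-for-consul
    image: busybox:1.36  # any small image with a shell and wget
    command:
    - sh
    - -c
    # poll the Consul HTTP API via the service name until it answers
    - until wget -qO- http://consul-server.default.svc:8500/v1/status/leader; do echo waiting for consul; sleep 2; done
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder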
Retry logic is preferred over startup dependency ordering, since it handles both the initial bring-up case and recovery from post-start outages.

Kubernetes job that consists of two pods (that must run on different nodes and communicate with each other)

I am trying to create a Kubernetes job that consists of two pods that have to be scheduled on separate nodes in our hybrid cluster. Our requirement is that one of the pods runs on a Windows Server node and the other runs on a Linux node (thus we cannot just run two Docker containers in the same pod, which I know is possible, but would not work in our scenario). The Linux pod (which you can imagine as a client) will communicate over the network with the Windows pod (which you can imagine as a stateful server), exchanging data while the job runs. When the Linux pod terminates, we want to also terminate the Windows pod. However, if one of the pods fails, then we want to fail both pods (as they are designed to be a single job).
Our current design is to write a K8S service that handles the communication between the pods, and then apply the service and the two pods to the cluster to "emulate" a job. However, this is not ideal, since the two pods are not tightly coupled as a single job, and it adds quite a bit of overhead to manage this setup manually (e.g. when the job fails, we probably need to manually kill the service and the deployment of the Windows pod). Plus we would need to deploy a new service for each "job", as we require the Linux pod to always communicate with the same Windows pod for the duration of the job due to underlying state (thus we cannot use a single service for all Windows pods).
Any thoughts on how this could best be achieved on Kubernetes would be much appreciated! Hopefully this scenario is supported natively, and I will not need to resort to the kind of pod-service-pod setup that I described above.
Many thanks
I am trying to distinguish your distaste for creating and wiring the Pods from your distaste at having to do so manually. Because, in theory, a Job that creates Pods is very similar to what you are describing, and would be able to have almost infinite customization for those kinds of rules. With a custom controller like that, one need not create a Service for the client(s) to speak to their server, as the Job could create the server Pod first, obtain its Pod-specific-IP, and feed that to the subsequently created client Pods.
I would expect one could create such a Job controller using only bash and either curl or kubectl: generate the JSON or YAML that describes the situation you wish to have, feed it to the Kubernetes API (since the Job would have a service account - just like any other in-cluster container), and use normal traps to clean up after itself. Without more of the specific edge cases loaded in my head, it's hard to say whether that's a good idea or not, but I believe it's possible.
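Whichever way such a controller is implemented, the pod specs it would generate are plain Kubernetes objects; a hedged sketch of the pair (names, images, the label, and the port are placeholders), pinned to their operating systems with the well-known kubernetes.io/os node label:

apiVersion: v1
kind: Pod
metadata:
  name: job-x-server
  labels:
    job-group: job-x  # hypothetical label tying the pair together for cleanup
spec:
  nodeSelector:
    kubernetes.io/os: windows  # schedule onto a Windows Server node
  restartPolicy: Never
  containers:
  - name: server
    image: registry.example.com/stateful-server:windows  # placeholder
    ports:
    - containerPort: 9000  # assumed server port
---
apiVersion: v1
kind: Pod
metadata:
  name: job-x-client
  labels:
    job-group: job-x
spec:
  nodeSelector:
    kubernetes.io/os: linux
  restartPolicy: Never
  containers:
  - name: client
    image: registry.example.com/client:linux  # placeholder
    env:
    - name: SERVER_ADDR  # hypothetical: the controller would inject the server pod's IP here
      value: "10.0.0.1:9000"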