Kubernetes external service failover howto

I have two external ActiveMQ URLs, say mqA and mqB, which are used by pod my-pod.
Neither mqA nor mqB is in the Kubernetes cluster, and they are not managed by me.
mqA is normally used, so when it's unavailable (0.1% probability) mqB should be used instead.
Currently, if mqA is broken, my-pod crashes and tries to restart.
For me, it's OK to restart the pod or to have some reasonable downtime for the switch.
So basically I'd like to have a kind of failover for an external service.
Is there a common approach for this in Kubernetes? Something like an ExternalName Service would be fine, so that when mqA is not pingable, the externalName switches to mqB.

Unfortunately, Kubernetes does not have a native concept for running active-passive workloads. At this stage you will be better off implementing failover on the client side, or writing some custom operator logic plus a proxy that does the failover for you.
Here is some good information you may find interesting about an active/standby proposal to the Kubernetes devs, with some possible temporary solutions:
https://github.com/amelbakry/kubernetes-active-passive
https://github.com/andreykaipov/active-standby-controller
https://github.com/kubernetes/kubernetes/issues/45300#issuecomment-737658596
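
For the ExternalName idea specifically, here is a minimal sketch. The hostnames mqA.example.com and mqB.example.com are placeholders for the real broker addresses, and the health check that triggers the switch is up to you (for example a CronJob or sidecar that pings mqA):

    apiVersion: v1
    kind: Service
    metadata:
      name: activemq
    spec:
      type: ExternalName
      # my-pod resolves activemq.<namespace>.svc.cluster.local to this hostname
      externalName: mqA.example.com

Whatever performs the health check can then flip the target with kubectl patch service activemq -p '{"spec":{"externalName":"mqB.example.com"}}', and my-pod keeps using the same in-cluster name. Note that an ExternalName Service only returns a CNAME record; it does no health checking of its own.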


Question about publishing a service in Kubernetes

My cluster has one master and two slaves (not on any cloud platform), and I created a deployment with 2 replicas so each slave has one pod; the image I'm running is tensorflow-jupyter. Then I created a NodePort-type service for this deployment, and I thought I could use these two pods separately at the same time, but I was wrong.
Tensorflow-jupyter requires the token it generates to log in. Everything is fine if there is only 1 pod, but if there are 2 or more replicas, I get a server error after login, and it logs me out by itself after I press F5; then I can't use the token to log in anymore. A similar situation happens with WordPress, too.
I think I shouldn't use the NodePort type for this, but I don't know whether another service type can solve the problem. I don't have a load balancer to try, and I don't know how to use ExternalName.
Is there any way to expose a service for a deployment with 2 or more replicas (one pod per slave)? Or can I only create many deployments, each with 1 pod, and then expose the same number of services, one per deployment?
It seems the application you're trying to deploy requires sticky-session support. This is not supported out of the box by a NodePort Service; you have to expose your application via an Ingress resource controlled by an Ingress Controller in order to take advantage of reverse-proxy capabilities (in this case, sticky sessions).
I'm not suggesting the sessionAffinity=ClientIP Service option, since it is only allowed for ClusterIP Service resources, and according to your question the application has to be accessible from outside the cluster.
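
As an illustration, with the ingress-nginx controller you can enable cookie-based session affinity through annotations. This is only a sketch: the host jupyter.example.com, the Service name jupyter-svc, and port 8888 are placeholders for your actual setup:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: jupyter
      annotations:
        # ingress-nginx: pin each client to one backend pod via a cookie
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
    spec:
      rules:
      - host: jupyter.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jupyter-svc
                port:
                  number: 8888

With this in place, the token issue should go away, because every request from a given browser lands on the same Jupyter pod.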

Internal communication between pods in Kubernetes with code

Maybe this question is very wrong, but my research so far hasn't been very helpful.
My plan is to deploy a server app to multiple pods as replicas (same code running in multiple pods), and I want each pod to be able to communicate with the rest of the pods.
More specifically, I need to broadcast a message to all the other pods every x minutes.
I cannot find examples of how to do that with Python code, or anything helpful related to communication between the pods. I can see some instructions for the YAML configurations that should make it possible, but no practical examples, which makes me think that maybe Kubernetes is not the best technology for what I am trying to do (?).
Any advice/suggestion/documentation is more than welcome.
Thank you
Applications are typically deployed to Kubernetes as a Deployment; however, in use cases where you want a stable network identity for your Pods, it is easier to deploy your app as a StatefulSet.
When your app is deployed as a StatefulSet, the pods will be named e.g. appname-0, appname-1, appname-2 if your StatefulSet is named appname and you set replicas: 3.
I cannot find examples of how I could do that with Python code
This is just plain network programming between the pods. You can use any UDP- or TCP-based protocol, e.g. HTTP. With a StatefulSet and its governing headless Service (say both are named appname), each pod gets a stable DNS name of the form <pod>.<service>, so the peers are reachable at e.g. http://appname-0.appname or http://appname-1.appname within the same namespace.
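
A minimal sketch of that setup, assuming the app listens on port 80 (the image name is a placeholder). First, the headless Service that gives the pods their DNS records:

    apiVersion: v1
    kind: Service
    metadata:
      name: appname
    spec:
      clusterIP: None      # headless: each pod gets its own stable DNS record
      selector:
        app: appname
      ports:
      - port: 80

Then the StatefulSet itself, which must reference the Service by name:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: appname
    spec:
      serviceName: appname   # must match the headless Service above
      replicas: 3
      selector:
        matchLabels:
          app: appname
      template:
        metadata:
          labels:
            app: appname
        spec:
          containers:
          - name: app
            image: registry.example.com/appname:latest   # placeholder image
            ports:
            - containerPort: 80

Your broadcast loop then simply iterates over appname-0.appname through appname-2.appname (the replica count can be passed in via an environment variable or read from the Kubernetes API) and sends an HTTP request to each peer, skipping itself.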

Communication between Pods in Kubernetes. Service object or Cluster Networking?

I'm a beginner in Kubernetes and I have the following situation: I have two different Pods, PodA and PodB. First, I want to expose PodA to the outside world, so I create a Service (type NodePort or LoadBalancer) for PodA, which is not difficult for me to understand.
Then I want PodA to communicate with PodB, and after several hours of googling, I found that I also need to create a Service (type ClusterIP, if I want to keep PodB visible only inside the cluster) for PodB; if I do so, PodA and PodB can communicate with each other. But then I also found this article. According to that webpage, communication between pods on the same node is done via cbr0, a network bridge, and communication between pods on different nodes is done via the cluster's route table, and they don't mention the Service object at all (which means we don't need the Service object ???).
In fact, I also read the Kubernetes documentation, and I found this in Cluster Networking:
Cluster Networking
...
2. Pod-to-Pod communications: this is the primary focus of this document.
...
where they also focus on pod-to-pod communication, but there is nothing about the Service object.
So I'm really confused right now, and my question is: could you please explain the connection between the things in that article and the Service object? Is the Service object a high-level abstraction over cbr0 and the route table? And in the end, how can the Pods communicate with each other?
If I misunderstand something, please point it out for me; I'd really appreciate that.
Thank you guys!!!
Motivation behind using a service in a Kubernetes cluster.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them “backends”) provides functionality to other Pods (call them “frontends”) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
That being said, a service is handy when your deployments (podA and podB) are dynamically managed.
Your PodA can always communicate with PodB if it knows the address or the DNS name of PodB. In a cluster environment, there may be multiple replicas of PodB, or an instance of PodB may die and be replaced by another instance with a different address and different name. A Service is an abstraction to deal with this situation. If you use a Service to expose your PodB, then all pods in the cluster can talk to an instance of PodB using that Service, which has a fixed name and fixed address no matter how many instances of PodB exist and what their addresses are.
First, I read your question as dealing with two applications, e.g. ApplicationA and ApplicationB. Don't use the Pod abstraction when you reason about your architecture. On Kubernetes you are dealing with a distributed system, and it is designed so that you should have multiple instances of your application, e.g. for high availability. Each instance of your application is a Pod.
Deploy your applications ApplicationA and ApplicationB as Deployment resources. Then it is easy to do rolling upgrades without downtime, and Kubernetes will restart any instance of your application if it crashes.
For every Deployment (or, for you, application), create one Service resource (e.g. ServiceA and ServiceB). When ApplicationA communicates with another application, use the Service, e.g. ServiceB. The Service will load-balance your requests across the instances of the other application, and you can upgrade your Deployment without downtime.
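
A minimal sketch of such a Service, assuming ApplicationB's pods carry the label app: application-b and listen on port 8080 (both names are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: service-b
    spec:
      selector:
        app: application-b   # must match the pod labels in ApplicationB's Deployment
      ports:
      - port: 80             # port other pods connect to
        targetPort: 8080     # port ApplicationB's container listens on

ApplicationA then just calls http://service-b (or http://service-b.<namespace>.svc.cluster.local from another namespace), and kube-proxy routes each request to one of ApplicationB's current pod IPs.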
1. Cluster networking: as the name suggests, all the pods deployed in the cluster are connected by implementing a Kubernetes network model such as DANM or Flannel.
Check this link to see how to create a cluster network:
Creating cluster network
With a CNI installed (i.e. a cluster network implemented), every pod gets an IP.
2. Service objects created with type ClusterIP point to these pod IPs (via Endpoints) to enable internal communication.
Answering your question: yes, the Service object is a high-level abstraction over cbr0 and the route table.
You can use Service objects for communication between pods.
You can also implement a service mesh such as Envoy or Istio if the network is complex.

Defining Deployment Dependencies

I have an application that has 14 different services. Some of the services are dependent on other services. I am trying to find a good way to deploy them in the right sequence without using thread sleeps.
Is there a way to tell Kubernetes about a service dependency tree, like: don't deploy service B or service C until service A is in a container and its status is Running?
Is there a good way to use kubectl to poll service A, so I can run a while loop until I know it's up and running, and then run the scripts to deploy services B and C?
This is not how Kubernetes works. You can kind of shim it with an initContainer that blocks until dependencies are available (usually via kubectl in a while loop; if you get fancy, you can try using --wait).
But the expectation is that you set up your applications to be "eventually consistent" when it comes to inter-service dependencies. In practical terms, this usually means just crashing if a dependent service isn't available; the pod will simply be restarted until things succeed.
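
A minimal sketch of the initContainer approach, assuming service B must wait for a Service named service-a on port 80 (names and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: service-b
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: service-b
      template:
        metadata:
          labels:
            app: service-b
        spec:
          initContainers:
          - name: wait-for-service-a
            image: busybox:1.36
            # block pod startup until service-a accepts TCP connections
            command: ['sh', '-c', 'until nc -z service-a 80; do echo waiting for service-a; sleep 2; done']
          containers:
          - name: service-b
            image: registry.example.com/service-b:latest   # placeholder image

The main container only starts once the init container exits successfully, so service B never comes up before service A is reachable.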
You can also use a readiness probe that hits a health-check API of the application being deployed, and in that health-check API you can test the availability of the other services' pods by hitting their APIs or Services.
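
Sketched as a probe, assuming a hypothetical /healthz endpoint on port 8080 that only reports healthy once its dependencies answer:

    apiVersion: v1
    kind: Pod
    metadata:
      name: service-b
    spec:
      containers:
      - name: service-b
        image: registry.example.com/service-b:latest   # placeholder image
        readinessProbe:
          httpGet:
            path: /healthz   # hypothetical endpoint that also checks service-a
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

Until the probe succeeds, the pod receives no traffic from its Service, which gives you an ordering effect without scripting the deploy sequence.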

Istio on Kubernetes: pod to service communication doesn't work

I have two deployments (A and B), each one exposing a ClusterIP Service. Before deploying Istio, I was able to communicate from a pod in A to any of B's pods via its Service (e.g. http://B.default.svc.cluster.local/dosomecrazystuff).
After deploying Istio (1.0.5), I'm getting "http://B.default.svc.cluster.local refusing connection" when calling it from a pod in deployment A.
What is the default routing policy in Istio? I don't need any clever load balancing or version-based routing, just straightforward communication from A to B (the same way as I would do it without Istio).
What is the absolute minimal configuration required to make it work?
Well, it seems it was some local issue with my MicroK8s deployment. On EKS and on another MicroK8s install, I'm able to communicate as desired without anything special.
So the answer is: no special configuration is required; it is supposed to work just as is.