Why does the go-micro Kubernetes plugin need to register the pod with a registry?

I have a question about how to use go-micro with Kubernetes. AFAIK, Kubernetes already has kube-dns for service discovery and kube-proxy with the Service abstraction to expose pods.
Is it possible to use go-micro but skip the Kubernetes go-micro plugin, so the service does not register itself with the Kubernetes API server?
I am not sure why that registration is necessary in the first place. The kubelet already does this for us automatically: through the livenessProbe and readinessProbe checks it determines whether a pod is healthy, and only healthy pods are included in the Service's Endpoints.
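For reference, this is roughly the setup I have in mind. It is only a sketch (the image, names, port and health endpoint are placeholders), and it assumes the service lets go-micro's MICRO_REGISTRY environment variable pick the registry instead of hard-wiring the Kubernetes plugin:

apiVersion: v1
kind: Pod
metadata:
  name: booking-context-svc          # placeholder name
  labels:
    app: booking-context-svc
spec:
  containers:
  - name: svc
    image: example/booking-context-svc:latest   # placeholder image
    env:
    - name: MICRO_REGISTRY           # assumption: the service reads go-micro's registry env/flag
      value: mdns                    # i.e. anything other than the kubernetes plugin
    readinessProbe:                  # the kubelet, not the app, decides Endpoints membership
      httpGet:
        path: /healthz               # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5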
I am asking because we also run istio-proxy as a sidecar. We get micro-service errors whenever a pod starts, because istio-proxy is not yet ready to route traffic, including traffic to the Kubernetes API, since it intercepts all egress traffic from our main container (which uses the go-micro Kubernetes plugin).
2018/10/17 04:37:55 Can't create server! reason: Patch
https://10.32.64.1:443/api/v1/namespaces/data-cdp/pods/cdp-booking-context-svc-stable-864645684b-xd2tb:
dial tcp 10.32.64.1:443: connect: connection refused
This puts the main container (the app using the go-micro Kubernetes plugin) into CrashLoopBackOff several times until istio-proxy is ready. It is not a big issue, but it makes me wonder about the motivation behind the registration.

Related

Deploying a stateless Go app with Redis on Kubernetes

I have deployed a stateless Go web app with Redis on Kubernetes. The Redis pod is running fine, but the application pod fails and logs the error dial tcp: i/o timeout.
Please take a look at aks-vm-timeout.
Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server. Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command.
If it isn't, force deletion of the pod and it will restart.
Also make sure the Redis port is open.
More info about troubleshooting: dial-backend-troubleshooting.
EDIT:
Answering your question about tunnelfront:
tunnelfront is an AKS system component, installed on every cluster, that helps to facilitate secure communication between your hosted Kubernetes control plane and your nodes. It's needed for certain operations like kubectl exec, and will be redeployed to your cluster on version upgrades.
As for the VM:
I would SSH into it and start watching the disk I/O latency using bpf/bcc tools and the docker/kubelet logs.

How does the failover mechanism work in kubernetes service?

According to some of the tech blogs (e.g. Understanding kubernetes networking: services), a k8s Service dispatches all requests through iptables rules.
What happens if one of the upstream pods crashes while a request happens to be routed to that pod?
Is there a failover mechanism in a Kubernetes Service?
Will the request be forwarded to the next pod automatically?
How does Kubernetes solve this through iptables?
Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For non-native applications, Kubernetes offers a virtual-IP-based bridge to Services which redirects to the backend Pods.
Here are the details: k8s service & endpoints.
So the answer to your question is the Endpoints object:
kubectl get endpoints,services,pods
Liveness and readiness checks decide whether a pod is able to process requests. The kubelet, together with Docker, has the mechanism to control the life cycle of pods. If a pod is healthy, it is part of the Endpoints object.
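As a rough illustration (the Service name, IPs and port here are made up), an Endpoints object for a Service with three pods, one of which is failing its readiness check, looks like this; kube-proxy only programs iptables rules for the ready addresses:

apiVersion: v1
kind: Endpoints
metadata:
  name: my-service              # matches the Service of the same name
subsets:
- addresses:                    # pods that are passing their readiness checks
  - ip: 10.32.64.10
  - ip: 10.32.64.11
  notReadyAddresses:            # pods that exist but are not ready; they receive no traffic
  - ip: 10.32.64.12
  ports:
  - port: 8080
    protocol: TCP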

Injecting Istio sidecar concurrently with the pod

We are using Istio with Kubernetes and have automatic sidecar injection enabled. The Istio proxy sidecar only becomes ready a few seconds after the pod is created, and this is causing issues at the start of our service: we open a Mongo connection at startup, and since the Istio proxy, which enforces the ServiceEntry needed for that connection, is not up by that time, the connection fails with an error.
Is it possible to ensure that the sidecar gets injected concurrently with the pod?
Not really from the Istio side. However, you can try adding readiness probes with an initialDelaySeconds to the containers in your pods, so they don't receive any traffic until the Envoy proxy is fully operational.
Another option is to add a wrapper around your app in the container so that it waits for the Envoy proxy to be ready before the app itself starts.
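A minimal sketch of the readiness-probe approach, assuming a hypothetical app container that serves a health endpoint on port 8080 (names, path and timings are placeholders):

# fragment of a Pod spec (spec.containers)
containers:
- name: app                          # your application container
  image: example/app:latest          # placeholder image
  readinessProbe:
    httpGet:
      path: /healthz                 # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 10          # rough allowance for the Envoy sidecar to come up
    periodSeconds: 5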

Traefik health checks via kubernetes annotation

I want to set up a Traefik backend health check via a Kubernetes annotation, but it looks like the Kubernetes Ingress provider does not support that functionality, according to the official documentation.
Is there any particular reason why Traefik does not support it for Kubernetes Ingress? I'm wondering because the Mesos provider does support health checks for a backend.
I know that in Kubernetes you can configure readiness/liveness probes for the pods, but I have a leader/follower style service, so Traefik should route traffic only to the leader.
Update:
Only the leader can accept connections from Traefik; a follower will refuse the connection.
I have two readiness checks in mind:
Service is up and running, and ready to be elected as leader (Kubernetes readiness probe)
Service is up and running and promoted to leader (Traefik health check)
Traefik relies on Kubernetes to provide an indication of the health of the underlying pods to ascertain whether they are ready to provide service. Kubernetes exposes two mechanisms in a pod to communicate information to the orchestration layer:
Liveness checks to provide an indication to Kubernetes when the process(es) running in the pod have transitioned to a broken state. A failing liveness check will cause Kubernetes to destroy the pod and recreate it.
Readiness checks to determine when a pod is ready to provide service. A failing readiness check will cause the Endpoint Controller to remove the pod from the list of endpoints of any services it provides. However, it will remain running.
In this instance, you would expose information to Traefik via a readiness check. Configure your pods with a readiness check which fails if they are in a state in which they should not receive any traffic. When the readiness state changes, Kubernetes will update the list of endpoints against any services which route traffic to the pod to add or remove the pod. Traefik will accordingly update its view of the world to add or remove the pod from the list of endpoints backing the Ingress.
There is no reason for this model to be incompatible with your master/follower architecture, provided each pod can ascertain whether it is the master or a follower and provide an appropriate indication in its readiness check. However, without taking special care, there will be races between the master/follower state changing and Kubernetes updating its endpoints, as readiness probes are only made periodically. I recommend assuming this will be the case and building in logic to reject requests received by non-master pods.
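As an illustrative sketch only (the /leader path, port and timings are hypothetical), a readiness probe that succeeds only on the current leader might look like this; followers return a non-2xx status and therefore never appear in the Endpoints list:

# fragment of a Pod spec (spec.containers)
containers:
- name: app                          # container running the leader/follower logic
  image: example/leader-app:latest   # placeholder image
  readinessProbe:
    httpGet:
      path: /leader                  # hypothetical endpoint: 200 only while this pod is the leader
      port: 8080
    periodSeconds: 5                 # leadership changes are only noticed at this interval
    failureThreshold: 1              # drop out of the Endpoints as soon as a probe fails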
As a future consideration to increase robustness, you might split the ingress layer of your service from the business logic implementing the master/follower system, allowing all instances to communicate with Traefik and enqueue work for consideration by whatever is the "master" node at this point.

Does the kube-apiserver expect the presence of kube-proxy?

I've been running my Kubernetes masters separately from my Kubernetes nodes. So I have kube-apiserver, kube-scheduler and kube-controller-manager running on a server without kubelet, kube-proxy or flannel.
So far this has worked perfectly. However, today I attempted to set up the Web UI and access it through the API server. I got the following error when accessing http://kube-master-0:8080/ui:
Error: 'dial tcp 172.16.72.12:9090: getsockopt: connection timed out'
Trying to reach: 'http://172.16.72.12:9090/'
This suggests to me that the API server is trying to connect to the pod IP directly; since we don't have flannel or kube-proxy running on this host, the 172.16.72.12 IP cannot be routed.
Am I expected to run kube-proxy and flannel on my API servers? Is there another way to let the API server proxy the UI?
It's not required, but it will certainly make your life easier.
The reason this isn't working is that kube-proxy isn't directing traffic to the service. Try kube-node:8080/ui (assuming you have exposed the UI with a NodePort configuration).
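For example, a NodePort Service along these lines would expose the UI on every node; the namespace, labels and ports are assumptions (the 9090 target is taken from the error in your question), so adjust them to match your dashboard deployment:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard    # assumed label on the dashboard pods
  ports:
  - port: 80
    targetPort: 9090                 # the port the UI pod listens on, per the error above
    nodePort: 30080                  # then browse to http://<any-node>:30080/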
In theory, the kube-apiserver does not expect the presence of kube-proxy.
This means the kube-apiserver will run correctly, receive requests and handle them (mostly reads from and writes to etcd).
But if you want the whole cluster to work, you will need other components running, for example:
if you want pods or deployments to be scheduled, kube-scheduler should be running
if you want pods and containers to be running on nodes, the kubelet has to be running
if you want replication to be guarded, the controller-manager should be running
As for kube-proxy and flannel, they are critical parts of making networking work. Load balancing, Services, and cross-host pod communication all depend on them.