We are using Istio with Kubernetes and have automatic sidecar injection enabled. The Istio proxy sidecar only becomes ready a few seconds after the pod is created, and this is causing issues with the startup of our service. We open a MongoDB connection when the service starts, and since the Istio proxy is not yet ready at that point (with service entries enforced), the connection fails.
Is it possible to ensure that the sidecar gets injected concurrently with the pod?
Not really from the Istio side. However, you can try adding readiness probes to the containers in your pods, with an initialDelaySeconds, so that they don't receive any traffic until the Envoy proxy is fully operational.
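For illustration, a minimal sketch of that idea (the container name, image, port, and timings are assumptions, not taken from the question):

```yaml
# Hypothetical pod spec: delay readiness so the sidecar has time to come up.
apiVersion: v1
kind: Pod
metadata:
  name: my-service               # assumed name
spec:
  containers:
    - name: my-service           # assumed application container
      image: my-service:latest   # assumed image
      ports:
        - containerPort: 8080
      readinessProbe:
        tcpSocket:
          port: 8080             # assumed application port
        initialDelaySeconds: 10  # give the Envoy sidecar a head start
        periodSeconds: 5
```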
Another option is to add a wrapper around your app in its container so that it waits for the Envoy proxy to be ready before it actually starts.
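One way to sketch such a wrapper is a startup command that polls the Envoy sidecar's readiness endpoint before launching the real process. The port and path below (15021, /healthz/ready) are the defaults in recent Istio releases and may differ in your installation; the image and entrypoint are assumptions:

```yaml
# Hypothetical pod: the app container waits for the Envoy sidecar to report ready before starting.
apiVersion: v1
kind: Pod
metadata:
  name: my-service                 # assumed name
spec:
  containers:
    - name: my-service             # assumed application container (image must include curl)
      image: my-service:latest
      command: ["/bin/sh", "-c"]
      args:
        - |
          # Poll the sidecar's readiness endpoint (adjust port/path for your Istio version)
          until curl -fsS http://localhost:15021/healthz/ready > /dev/null; do
            echo "waiting for istio-proxy..."
            sleep 1
          done
          exec /app/my-service     # assumed real entrypoint
```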
I read in an article that I can access pods via kube-proxy, so what is the role of a Kubernetes Service here? What is the difference between kube-proxy and a Service? And finally,
is kube-proxy part of the Service?
As far as I understand:
A Service is a Kubernetes object that has a stable name and a stable IP and sits in front of a set of pods. All requests to those pods should go through the Service.
Kube-proxy is a networking component running on every cluster node (it is deployed as a DaemonSet). It implements the low-level rules that allow communication to pods from inside as well as outside the Kubernetes cluster. In that sense, we can say that kube-proxy is part of the Service mechanism.
So when a user tries to reach an application deployed on Kubernetes, the request first reaches the Service, which then forwards it to one of the underlying pods. This forwarding is done using the rules that kube-proxy created.
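For illustration, a minimal Service of this kind might look as follows (the name, label, and ports are assumptions):

```yaml
# Hypothetical ClusterIP Service: a stable name/IP in front of a set of pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # assumed Service name
spec:
  type: ClusterIP
  selector:
    app: my-app             # assumed pod label
  ports:
    - port: 80              # port clients use on the Service
      targetPort: 8080      # assumed container port on the pods
```

kube-proxy on each node watches this Service and programs the node-level forwarding rules (for example iptables rules) that actually send traffic to the selected pods.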
For more background, refer to this video: Kube proxy, and this blog: Closer look at Kube proxy.
From my understanding
If you are only accessing the pod ports from inside the cluster, then no Service is involved, as you need Service objects to expose your pods outside of your cluster.
A Service exposes your pods outside of your cluster and provides a stable virtual IP address. A controller keeps track of the pods that are associated with the Service, while kube-proxy is a daemon running on each node that watches the Service resources defined in the cluster and manages the rules for routing requests to a Service's backend pods.
kube-proxy interacts with the Service so that it can change the iptables rules when there are changes to Service objects. Hence they are separate entities.
We could discuss this for a while, but to make a long story short:
1. Requests come to the Service.
2. The Service passes them on to kube-proxy.
3. Kube-proxy decides which Pod the request goes to.
How are requests forwarded from Service to Pod? Kube-proxy forwards the request and is responsible for maintaining a list of Service IPs and corresponding Pod IPs. Check this section for more details...
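To make that last point concrete: kube-proxy watches Service and Endpoints objects, and the Endpoints object is what records the Pod IPs behind a Service. A hypothetical example (all names and IPs are made up):

```yaml
# Hypothetical Endpoints object for the Service "my-app": the Pod IPs kube-proxy forwards to.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-app              # matches the Service name
subsets:
  - addresses:
      - ip: 10.244.1.12     # Pod IP (made up)
      - ip: 10.244.2.7      # Pod IP (made up)
    ports:
      - port: 8080          # targetPort on the pods
```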
I had Istio configured, but without the CNI addon enabled.
At that time, I had an init container with a service account that would call the Kubernetes API (via kubectl) to verify a couple of things.
Since I enabled the CNI addon, this init container fails with the following message:
The connection to the server 10.23.64.1:443 was refused - did you specify the right host or port?
I tried removing all my network policies to see if that was the issue, but got the same result.
I also gave the service account that this pod uses the cluster-admin role, but it didn't do the trick.
I tested with both 1.6 and 1.7 branches of Istio.
What is the issue here? Other pods without this init container work fine.
To have init container network connectivity with Istio CNI enabled, please follow the workaround described in the Istio documentation:
Compatibility with application init containers
The Istio CNI plugin may cause networking connectivity problems for any application initContainers. When using Istio CNI, kubelet starts an injected pod with the following steps:
The Istio CNI plugin sets up traffic redirection to the Istio sidecar proxy within the pod.
All init containers execute and complete successfully.
The Istio sidecar proxy starts in the pod along with the pod’s other containers.
Init containers execute before the sidecar proxy starts, which can result in traffic loss during their execution. Avoid this traffic loss with one or both of the following settings:
Set the traffic.sidecar.istio.io/excludeOutboundIPRanges annotation to disable redirecting traffic to any CIDRs the init containers communicate with.
Set the traffic.sidecar.istio.io/excludeOutboundPorts annotation to disable redirecting traffic to the specific outbound ports the init containers use.
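A minimal sketch of applying these annotations to a pod like the one in the question (the CIDR, port, image, and names are assumptions; the excluded CIDR here covers the API server address from the error message):

```yaml
# Hypothetical pod applying the documented workaround via annotations.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init           # assumed name
  annotations:
    # Don't redirect traffic to these CIDRs (e.g. the kube API server the init container calls).
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.23.64.1/32"
    # Alternatively/additionally, don't redirect these outbound ports.
    traffic.sidecar.istio.io/excludeOutboundPorts: "443"
spec:
  initContainers:
    - name: verify                          # assumed init container
      image: bitnami/kubectl                # assumed image providing kubectl
      command: ["kubectl", "get", "pods"]   # placeholder for the verification logic
  containers:
    - name: app
      image: my-app:latest                  # assumed
```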
Is there a way to do active/passive load balancing between 2 Pods of a microservice? Say I have 2 instances (Pods) of a microservice running, exposed using a Kubernetes Service object. Is there a way to configure the load balancing in such a way that one pod always gets the requests, and when that pod is down, the other pod starts receiving them?
I also have an Ingress object on top of that Service.
This is what the Kubernetes Service object does, and you already mentioned you are using one. Make sure you set up a readiness probe in your pod template so that the system can tell when your app is healthy.
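For illustration, a minimal pod template with a readiness probe might look like this (the names, image, path, and port are assumptions); only pods whose readiness probe passes are kept in the Service's endpoints and receive traffic:

```yaml
# Hypothetical Deployment whose pod template carries a readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice                     # assumed name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: my-microservice:latest     # assumed image
          readinessProbe:
            httpGet:
              path: /healthz                # assumed health endpoint
              port: 8080                    # assumed port
            periodSeconds: 5
            failureThreshold: 2
```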
I have a question regarding how to use go-micro with Kubernetes. AFAIK, Kubernetes already has kube-dns for service discovery and kube-proxy with the Service abstraction to expose the pods.
Is it possible to use go-micro, but skip the go-micro Kubernetes plugin registering itself with the Kubernetes API server?
I am not sure why it is necessary in the first place. The kubelet already does this for us automatically: through the livenessProbe and readinessProbe checks it can determine whether a pod is healthy, and only healthy pods are included in the Service's endpoints.
I am asking because we are also using istio-proxy. We get microservice errors whenever a pod is starting, because istio-proxy is not yet ready to route the traffic (even the traffic to the kube API, since it intercepts the egress traffic from our main container, which uses the go-micro Kubernetes plugin).
2018/10/17 04:37:55 Can't create server! reason: Patch
https://10.32.64.1:443/api/v1/namespaces/data-cdp/pods/cdp-booking-context-svc-stable-864645684b-xd2tb:
dial tcp 10.32.64.1:443: connect: connection refused
This sends the main container (the go-micro Kubernetes plugin app) into CrashLoopBackOff several times, until istio-proxy is ready. It's not a big issue, but it makes me wonder about the motivation behind the registration.
I want to set up Traefik backend health checks via a Kubernetes annotation, but it looks like Kubernetes Ingress does not support that functionality according to the official documentation.
Is there any particular reason why Traefik does not support that functionality for Kubernetes Ingress? I'm wondering because Mesos supports health checks for a backend.
I know that in Kubernetes you can configure readiness/liveness probes for the pods, but I have a leader/follower-style service, so Traefik should route traffic only to the leader.
UPD:
Only the leader can accept connections from Traefik; a follower will refuse the connection.
I have two readiness checks in mind:
Service is up and running, and ready to be elected as a leader (kubernetes readiness probe)
Service is up and running and promoted as a leader (traefik health check)
Traefik relies on Kubernetes to provide an indication of the health of the underlying pods to ascertain whether they are ready to provide service. Kubernetes exposes two mechanisms in a pod to communicate information to the orchestration layer:
Liveness checks to provide an indication to Kubernetes when the process(es) running in the pod have transitioned to a broken state. A failing liveness check will cause Kubernetes to destroy the pod and recreate it.
Readiness checks to determine when a pod is ready to provide service. A failing readiness check will cause the Endpoint Controller to remove the pod from the list of endpoints of any services it provides. However, it will remain running.
In this instance, you would expose information to Traefik via a readiness check. Configure your pods with a readiness check which fails if they are in a state in which they should not receive any traffic. When the readiness state changes, Kubernetes will update the list of endpoints against any services which route traffic to the pod to add or remove the pod. Traefik will accordingly update its view of the world to add or remove the pod from the list of endpoints backing the Ingress.
There is no reason for this model to be incompatible with your master/follower architecture, provided each pod can ascertain whether it is the master or a follower and provide an appropriate indication in its readiness check. However, without taking special care, there will be races between the master/follower state changing and Kubernetes updating its endpoints, as readiness probes are only made periodically. I recommend assuming this will be the case and building in logic to reject requests received by non-master pods.
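As a rough sketch of such a leader-aware readiness check (the /healthz/leader and /healthz endpoints, names, and port are hypothetical; the leader endpoint would return success only on the pod that currently holds leadership):

```yaml
# Hypothetical pod template snippet: readiness succeeds only on the current leader,
# so only the leader stays in the Service endpoints that back the Ingress.
spec:
  containers:
    - name: leader-elected-app              # assumed
      image: leader-elected-app:latest      # assumed
      readinessProbe:
        httpGet:
          path: /healthz/leader             # hypothetical: 200 only when this pod is the leader
          port: 8080                        # assumed port
        periodSeconds: 5                    # leadership changes between probes can still race
        failureThreshold: 1
      livenessProbe:
        httpGet:
          path: /healthz                    # hypothetical basic health endpoint
          port: 8080
        periodSeconds: 10
```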
As a future consideration to increase robustness, you might split the ingress layer of your service from the business logic implementing the master/follower system, allowing all instances to communicate with Traefik and enqueue work for consideration by whatever is the "master" node at this point.