Any idea how to set up gRPC for services exposed from Kubernetes? We have deployed Linkerd in our cluster, and that seems to work well and gives us gRPC within Kubernetes. However, we haven't found a solution that gives us gRPC connectivity from outside the cluster.
My infrastructure is based on Kubernetes (k3s, with Istio ingress). I would like to use Istio to expose an application that is not in my cluster.
outside (internet) --https--> my router --> [cluster] istio --> [not cluster] application (192.168.1.29:8123)
I tried creating an HAProxy container, but it didn't work...
Any ideas?
If you insist on piping your traffic to the non-cluster application through the Kubernetes cluster, there are a couple of ways to handle this. You could use a Kubernetes-native ExternalName Service.
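A minimal sketch of that, assuming the application is reachable by a DNS name (the names here are placeholders; an ExternalName Service cannot point at a bare IP like 192.168.1.29):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-app           # hypothetical name
spec:
  type: ExternalName
  # ExternalName maps to a DNS name, not an IP, so this assumes the
  # application is reachable via a resolvable hostname on your network.
  externalName: app.home.lan   # placeholder hostname
  ports:
    - port: 8123
```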
The Istio way, though, would be to create a ServiceEntry and then use a VirtualService combined with a Gateway to direct traffic to your application outside the cluster.
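A rough sketch of that combination, with a placeholder hostname and TLS secret (adjust the protocol and TLS settings to your setup):

```yaml
# Make the external application known to the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-app
spec:
  hosts:
    - app.example.com          # placeholder hostname
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
    - number: 8123
      name: http-app
      protocol: HTTP
  endpoints:
    - address: 192.168.1.29
---
# Accept HTTPS from the internet at the Istio ingress gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: external-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: app-cert   # placeholder TLS secret
      hosts:
        - app.example.com
---
# Route traffic arriving at the gateway to the external endpoint.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-app
spec:
  hosts:
    - app.example.com
  gateways:
    - external-app-gateway
  http:
    - route:
        - destination:
            host: app.example.com
            port:
              number: 8123
```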
Hello, I'm new to Istio and currently learning about it.
As per my understanding, the Envoy proxy resolves the destination IP address instead of the kube-dns server. Envoy sends traffic directly to a healthy pod based on information received from the control plane.
So... is a Kubernetes Service still required if I'm using Istio?
Correct me if I'm wrong.
Thanks!
From the docs:
In order to direct traffic within your mesh, Istio needs to know where all your endpoints are, and which services they belong to. To populate its own service registry, Istio connects to a service discovery system. For example, if you've installed Istio on a Kubernetes cluster, then Istio automatically detects the services and endpoints in that cluster.
So a Kubernetes Service is needed for Istio to achieve service discovery, i.e. to know the pod IPs. But the Kubernetes Service (layer 4) is not used for load balancing and routing traffic, because the layer-7 Envoy proxy does that in Istio.
From the docs:
A pod must belong to at least one Kubernetes service even if the pod does NOT expose any port. If a pod belongs to multiple Kubernetes services, the services cannot use the same port number for different protocols, for instance HTTP and TCP.
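As a concrete illustration, a minimal Service covering a pod might look like the sketch below; the names and ports are made up. Note that older Istio versions also infer the protocol from a `<protocol>-` prefix on the port name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  selector:
    app: my-app
  ports:
    - name: http-web      # "http-" prefix lets older Istio versions
      port: 80            # infer the protocol from the port name
      targetPort: 8080
    - name: grpc-api
      port: 9090
      targetPort: 9090
```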
My application is deployed in GKE. I'm trying to deploy Istio (1.2.2), and I ran into a problem:
One of the deployments is a pod consisting of two containers: a gRPC service and an Envoy proxy.
We use the Envoy as a workaround to expose an HTTP/2 health check for the Google load balancer, since the gRPC service is exposed to the world and a health check is mandatory.
When Istio injects its Envoy sidecar into this pod, all hell breaks loose:
Requests hit the existing Envoy proxy and not the Istio sidecar.
Google health checks to the backend service fail.
The question arises: should I try to make both proxies work together, or is it better to have only the Istio sidecar in this pod?
It's better to make both proxies work, since that Istio version is unable to distinguish between health checks and actual traffic.
In addition, you can find more information in the official Istio release notes.
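If you do keep both proxies, one possible approach (an assumption on my part, not something the release notes prescribe) is to exclude the health-check port from the Istio sidecar's inbound capture with a pod annotation, so the load balancer keeps talking to your existing Envoy directly. The names, image, and ports below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-service                 # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-service
  template:
    metadata:
      labels:
        app: grpc-service
      annotations:
        # Keep the load balancer's health-check port out of the
        # sidecar's inbound redirect; the port is an assumption.
        traffic.sidecar.istio.io/excludeInboundPorts: "8443"
    spec:
      containers:
        - name: grpc-service
          image: example/grpc-service:latest   # placeholder image
          ports:
            - containerPort: 50051
```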
I am deploying Redis with a Sentinel architecture on Kubernetes.
When the deployments that require Redis are in my cluster, everything works fine.
The problem is that some services of my deployment are located in a different Kubernetes cluster.
When the clients reach the Redis Sentinel (which I exposed via a NodePort that maps internally to 26379), they get a reply with the master IP.
What actually happens is that they get the Redis master's Kubernetes-internal IP and the internal port 6379.
As I said, while working inside Kubernetes this is fine, since the clients can reach that IP, but when the services are external it is not reachable.
I found that there are configuration directives named cluster-announce-ip and cluster-announce-port.
I have set those values to the external IP of the cluster and the external port, hoping that it would solve the problem, but still no change.
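For reference, the relevant part of my config looks roughly like this (the IP and port shown are placeholders, not my real values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config            # hypothetical name
data:
  redis.conf: |
    port 6379
    # External address that should be announced to clients;
    # the IP and port here are placeholders.
    cluster-announce-ip 203.0.113.50
    cluster-announce-port 30379
```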
I am using the official Docker image: redis:4.0.11-alpine.
Any help would be appreciated.
I have a Kubernetes cluster v1.10 on CentOS 7, running on OVH cloud provider's servers.
As far as I know, OVH does not provide a load balancer component directly to Kubernetes,
and I want to buy a load balancer component from OVH (from this link) and connect it to the Kubernetes cluster.
Can I connect the load balancer to Kubernetes?
And is there any tutorial?
Thank You :D
Yes.
You can follow this guide from OVH in terms of setting up your load balancer.
And in terms of Kubernetes, you'd either want to create a Kubernetes Ingress exposed on a NodePort (this is a good tutorial for that), or expose your services directly on a NodePort and point your load balancer's backend to all the nodes in your cluster on that specific NodePort.
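For example, a minimal NodePort Service might look like this (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # point the OVH load balancer at <node-ip>:30080
```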
I would also familiarize yourself with the Services abstraction in Kubernetes.
Yes, you can.
How, on the other hand, is not an obvious one. My suggestion would be to make it part of your Kubernetes infra provisioning with Terraform. Using https://www.terraform.io/docs/providers/ovh/r/iploadbalancing_tcp_farm_server.html you can manage endpoints for your load balancer based on instances/hosts provisioned either manually or with the OpenStack provider. That's how I do it on our OVH Kube cluster.
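A rough sketch of that resource, with placeholder values (see the linked docs for the full argument list):

```hcl
# Register a Kubernetes node as a backend of an existing
# OVH IP Load Balancing TCP farm. All values are placeholders.
resource "ovh_iploadbalancing_tcp_farm_server" "k8s_node" {
  service_name = "ip-203.0.113.10"  # your IPLB service name
  farm_id      = 123456             # id of an existing TCP farm
  address      = "203.0.113.21"     # public IP of a cluster node
  port         = 30080              # the NodePort your Service exposes
  status       = "active"
}
```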