Kubernetes gRPC HTTP/2 error - kubernetes

I deployed a gRPC service on EKS and exposed it using an Ingress. I also deployed a demo HTTPS app, which worked. However, I have a problem with the gRPC app: the service is running, but when I check its logs I see an error, and the gRPC request never even reaches the server. The log is as follows:
level=info msg="grpc: Server.Serve failed to create ServerTransport:
connection error: desc = \"transport: http2Server.HandleStreams
received bogus greeting from client: \\"GET / HTTP/1.1\\r\\nHost:
19\\"\"" system=system
It seems the server expects HTTP/2, but it is receiving HTTP/1.1?
For the Ingress I tried the following annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxxx
alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true'
For service.yml:
annotations:
  service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2", "http": "HTTP2"}'
The service itself seems fine once deployed, but as soon as the Ingress is deployed it keeps producing the error above.

I'm not sure that Ingress supports HTTP/2/gRPC. If you are using GKE, you could try ESP.

I used the Istio service mesh to solve this problem. An Istio VirtualService can route HTTP/1.1, HTTP/2, and gRPC traffic.
By setting the service port name to grpc or prefixing it with grpc-, Istio will configure the service to use the HTTP/2 protocol.
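As a minimal sketch of the port-naming convention (the app name, namespace, and port number below are placeholders, not taken from the original setup):
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-app        # hypothetical name
  namespace: default
spec:
  selector:
    app: my-grpc-app       # hypothetical selector
  ports:
    - name: grpc           # "grpc" or a "grpc-" prefix is what Istio inspects to select HTTP/2
      port: 50051
      targetPort: 50051
      protocol: TCP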

Related

Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy which
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box. e.g. doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from external, without using a LoadBalancer service?
You can use a NodePort service. Each of your k8s workers will start listening on some high port; you can connect to any of the workers and your traffic will be routed to the target service.
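A rough sketch, reusing the names from the question (the nodePort value is an arbitrary example):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
      protocol: TCP
      nodePort: 30080        # optional; must be in the cluster's NodePort range (default 30000-32767)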
The apiserver-proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control-plane things, not data-plane work (traffic routing, running workloads, ...).
A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. Frankly, this is the only 'correct' solution.
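For example, on AWS this is usually a single annotation on the Service (a sketch, assuming the standard AWS load balancer annotations apply to your cluster; the Service name is hypothetical):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-internal   # hypothetical name
  annotations:
    # Assumption: AWS/EKS; this annotation requests an internal load balancer.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
      protocol: TCP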
...not to require an additional public IP
NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.

How to configure Internal Ingress to target a specific service

I'm working on a Helm chart and it should do the following:
Allow external requests over HTTPS only (to a specific service).
Allow internal requests over HTTP (to the same specific service).
I tried creating two Ingresses:
The first Ingress is external and accepts only HTTPS requests (which is working as expected). I used the following annotations:
kubernetes.io/ingress.class: nginx
ingress.citrix.com/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
The second Ingress is internal (with the annotation kubernetes.io/ingress.class: "gce-internal"). It is intended to accept requests only from inside the Minikube cluster, but it is not working.
Running nslookup <host-name> from outside the Minikube cluster returns ** server can't find <host-name>: NXDOMAIN.
Running nslookup <host-name> from inside the Minikube cluster (minikube ssh) returns ;; connection timed out; no servers could be reached.
Running curl -kv "<host-name>" returns curl: (6) Could not resolve host: <host-name>.
Any ideas on how to fix this?
Ideally, for internal communication you should use the service name as the hostname (e.g. <service>.<namespace>.svc.cluster.local).
However, creating an internal ingress is possible; it will give you a single IP address, and you can call the service using that IP.
This assumes you are already running the Nginx ingress controller and the service.
You can edit the ingress controller's Service and add or update this annotation:
annotations:
  cloud.google.com/load-balancer-type: "Internal"
The annotation above will create the LoadBalancer Service with an internal IP for the ingress. You can then create the Ingress and reach it using curl.
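A minimal sketch of where the annotation goes, assuming a stock ingress-nginx controller Service on GKE (the names and labels below are illustrative, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller           # the existing ingress controller Service (name may differ)
  namespace: ingress-nginx
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # GKE: request an internal load balancer
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx  # placeholder: must match your controller pods
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https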
https://stackoverflow.com/a/62559152/5525824
Still on GCP, if you are not getting a static IP address you can use this snippet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.regional-static-ip-name: STATIC_IP_NAME
    kubernetes.io/ingress.class: "gce-internal"
Read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#static_ip_addressing

How do I get client IP addresses from HTTP requests in Kubernetes services (EKS)

We are running our microservices as pods behind an ALB Ingress (ALB load balancer). My problem is that all of the HTTP request logs show the cluster IP address instead of the IPs of the HTTP clients. Is there any way I can make the Kubernetes service pass this information to my app servers so they show the client IP address?
I even tried with Java code using get.remote.address and still got the same result.
I know there is a setting, service.spec.externalTrafficPolicy, but this is only for GCE and not for AWS.
Any help is appreciated!
You can use a Network Load Balancer (NLB) with Kubernetes services. Client traffic first hits kube-proxy on a cluster-assigned nodePort and is passed on to the matching pods in the cluster.
When the spec.externalTrafficPolicy is set to the default value of Cluster, the incoming LoadBalancer traffic may be sent by the kube-proxy to pods on the node, or to pods on other nodes. With this configuration the client IP is sent to the kube-proxy, but when the packet arrives at the end pod, the client IP shows up as the local IP of the kube-proxy.
By changing the spec.externalTrafficPolicy to Local, the kube-proxy will correctly forward the source IP to the end pods, but will only send traffic to pods on the node that the kube-proxy itself is running on. Kube-proxy also opens another port for the NLB health check, so traffic is only directed to nodes that have pods matching the service selector.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
I was able to do this with the help of CloudFront. As our applications have a high rate of data transfer, we used CloudFront in front of the load balancer, and I also enabled the different headers that CloudFront offers.

Kubernetes pods can not make https request after deploying istio service mesh

I am exploring the Istio service mesh on my k8s cluster hosted on EKS (Amazon).
I tried deploying istio-1.2.2 on a new k8s cluster with the demo.yml file used for the bookapp demonstration, and I understand most of the use cases properly.
Then I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster, which has hundreds of microservices running. What I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (https://www.google.com, etc.).
I am getting :
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong
version number
I am, however, able to call external HTTPS endpoints from my testing cluster.
To verify, I checked the egress policy, and it is mode: ALLOW_ANY in both clusters.
I then removed Istio completely from my dev cluster and installed the demo.yml to test, but now this is also not working.
I tried to relate my issue to this, but without success:
https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044
I don't understand what I am missing or what I am doing wrong.
Note: I am referring to this setup: https://istio.io/docs/setup/kubernetes/install/helm/
This is most likely a bug in Istio (see for example istio/istio#14520): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose port name starts with http (not https), it will break all outbound HTTPS connections.
The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: http-ssl # <<<< THIS NAME MATTERS
      port: 443
      targetPort: http
When I've experimented with this, changing that name: to either https or tcp-https seems to work. Those name prefixes are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.
You do need to search your cluster and find every Service that listens to port 443, and make sure the port name doesn't start with http-....
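A corrected version of the sketch above, under the same assumptions, simply renames the port (the Service name here is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: breaks-istio-fixed   # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  selector: ...
  ports:
    - name: tcp-https   # or "https"; either keeps Istio from treating port 443 as plain HTTP
      port: 443
      targetPort: http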

Whitelist an IP to access deployment with Kubernetes ingress Istio

I'm trying to whitelist an IP to access a deployment inside my Kubernetes cluster.
I looked for some documentation online about this, but I only found the
ingress.kubernetes.io/whitelist-source-range
annotation for Ingress, which grants access to a certain IP range. But still, I couldn't manage to isolate the deployment.
Here is the ingress configuration YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-internal
  annotations:
    kubernetes.io/ingress.class: "istio"
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xx.0/24, xx.xxx.xx.0/24"
spec:
  rules:
    - host: white.example.com
      http:
        paths:
          - backend:
              serviceName: white
              servicePort: 80
I can access the deployment from my whitelisted IP, but also from my mobile phone (a different IP that is not whitelisted in the config).
Has anyone run into the same problem using Ingress and Istio?
Any help, hint, docs or alternative configuration will be much appreciated.
Have a look at the annotation overview; it seems that whitelist-source-range is not supported by Istio:
whitelist-source-range: Comma-separate list of IP addresses to enable access to.
nginx, haproxy, trafficserver
I managed to solve the IP whitelisting problem for my Istio-based service (an app that uses the Istio proxy and is exposed through the Istio ingress gateway via a public LB) using NetworkPolicy.
For my case, here is the topology:
Public Load Balancer (in GKE, using client-IP-preserving mode) ==> dedicated Istio Ingress Gateway Controller pods (see my answer here) ==> my pods (istio-proxy sidecar container plus my main container).
So I set up two NetworkPolicies:
A NetworkPolicy that guards incoming connections from the internet to my Istio Ingress Gateway Controller pods. In that policy, I just have to set the spec.podSelector.matchLabels field to the pod labels of the dedicated Istio Ingress Gateway Controller pods (see the sketch after this list).
Another NetworkPolicy that limits incoming connections to my Deployment so that they are accepted only from the Istio Ingress Gateway Controller pods/deployment.
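A rough sketch of the first policy (the namespace, labels, and CIDR are placeholders; this assumes your CNI enforces NetworkPolicy and that the load balancer preserves client IPs):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-whitelisted-to-ingressgateway   # hypothetical name
  namespace: istio-system                     # wherever the dedicated gateway pods run
spec:
  podSelector:
    matchLabels:
      istio: ingressgateway                   # placeholder: label of the dedicated gateway pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: xxx.xx.xx.0/24              # whitelisted source range (placeholder)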