Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy, which "connects a user outside of the cluster to cluster IPs which otherwise might not be reachable".
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box. For example, doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from outside the cluster, without using a LoadBalancer service?

You can use a NodePort service. Each of your k8s worker nodes will start listening on some high port. You can connect to any of the workers and your traffic will be routed to the target service.
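For example, a NodePort variant of the Service from the question might look like this (a sketch; the name and the nodePort value 30080 are placeholders, and explicit nodePort values must fall within the cluster's node port range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
    nodePort: 30080   # the gRPC endpoint becomes reachable at <node-ip>:30080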
The apiserver-proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control plane things, not data plane work (traffic routing, running workloads, ...).
A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. Frankly, this is the only 'correct' solution.
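For example, on GKE an internal load balancer can be requested with an annotation (a sketch; the annotation name differs per provider and version, e.g. AWS uses service.beta.kubernetes.io/aws-load-balancer-internal: "true"):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
  annotations:
    # Ask the cloud provider for an internal (VPC-only) load balancer.
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc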

...not to require an additional public IP
A NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.
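With the port-forward running, the gRPC client from the question can dial the forwarded local port directly. A minimal Go sketch, assuming the server speaks plaintext gRPC (no TLS) and with the generated stub left as a placeholder:
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the local end of "kubectl port-forward ... 8080:8080".
	conn, err := grpc.Dial("localhost:8080",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
	// client := mypb.NewMyServiceClient(conn) // your generated stub goes here
}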

Related

How do I get client IP addresses from HTTP requests in Kubernetes services (EKS)

We are running our microservice as a pod behind an ALB Ingress (ALB load balancer). My problem is that all of the HTTP request logs show the cluster IP address instead of the IPs of the HTTP clients. Is there any other way I can make the Kubernetes service pass this info to my app servers so they can log the client IP address?
I even tried it in Java code using the getRemoteAddr function, and still got the same result.
I know there is the "service.spec.externalTrafficPolicy" field, but this is only for GCE and not for AWS.
Any help is appreciated!
You can use a Network Load Balancer with Kubernetes services. Client traffic first hits the kube-proxy on a cluster-assigned nodePort and is passed on to the matching pods in the cluster.
When the spec.externalTrafficPolicy is set to the default value of Cluster, the incoming LoadBalancer traffic may be sent by the kube-proxy to pods on the node, or to pods on other nodes. With this configuration the client IP is sent to the kube-proxy, but when the packet arrives at the end pod, the client IP shows up as the local IP of the kube-proxy.
By changing the spec.externalTrafficPolicy to Local, the kube-proxy will correctly forward the source IP to the end pods, but will only send traffic to pods on the node that the kube-proxy itself is running on. Kube-proxy also opens another port for the NLB health check, so traffic is only directed to nodes that have pods matching the service selector.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
I was able to do this with the help of CloudFront. As our applications have a high rate of data transfer, we used it in front of the load balancer, and I also enabled the different headers that CloudFront offers.

ActiveMQ Broker url on Kubernetes

I have deployed ActiveMQ on Kubernetes, but how do I configure the broker so that clients can connect to the queue on port 61616? If I use the Pod IP, it will not be a static IP, and every time the Pod is recreated it will get a new IP. Is there any way to get a static IP, or can we set up the broker on port 61616 using an Ingress?
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
For exposing any of your microservices in Kubernetes, either externally or internally, you use a Service.
As David Maze already stated in his comment:
There should be a matching Service, which will have a known DNS name; use that.
You don't need to worry about static IPs. Services also have dynamically assigned IPs, but they provide a reliable way to access your backend Pods via a stable DNS name. Take a look also at this section in the official docs.
In your case it's enough to create a simple ClusterIP Service (which is the default Service type). It may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 61616
    targetPort: 61616
provided your app is listening on TCP port 61616 and you want your Service to expose the same port (port) as your Pods (targetPort).
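With such a Service in place, clients don't need the Pod IP at all: they can point the broker URL at the Service's stable DNS name, for example (assuming the Service above lives in the default namespace):
tcp://my-service.default.svc.cluster.local:61616
or simply tcp://my-service:61616 from Pods in the same namespace.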

Apply annotations for load balancer on GCP for source IP firewall rules on GKE

I want to install jenkins using its official helm chart on GKE.
I want to expose the agent service (port 50000) using LoadBalancer (will be hitting it from some remote agents).
Will this annotation
service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
also help secure a GCP load balancer, or is it only applicable on AWS?
Will the agents initiated internally in GKE still have to pass through the internet to reach the service, or will they be routed internally to the corresponding agent service?
If you are asking about the capability to whitelist source IP ranges at the firewall using the 'loadBalancerSourceRanges' parameter, the service.beta.kubernetes.io/load-balancer-source-ranges annotation is supported and often used on GCP.
Here is an example LoadBalancer Service with defined source ranges:
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
  annotations:
    service.beta.kubernetes.io/load-balancer-source-ranges: "172.0.0.0/8, 10.0.0.0/8"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888
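Equivalently, the same whitelist can be expressed with the loadBalancerSourceRanges field in the Service spec instead of the annotation, for example:
apiVersion: v1
kind: Service
metadata:
  name: example-loadbalancer
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 172.0.0.0/8
  - 10.0.0.0/8
  ports:
  - protocol: TCP
    port: 8888
    targetPort: 8888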
Unlike Network Load Balancing, access to TCP Proxy Load Balancing cannot be controlled by using firewall rules. This is because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud and firewall rules are implemented on instances in the data center.
Useful documentation: gcp-external-load-balancing, load-balancing.

NodePort service is not externally accessible via `port` number

I have following service configuration:
kind: Service
apiVersion: v1
metadata:
  name: web-srv
spec:
  type: NodePort
  selector:
    app: userapp
    tier: web
  ports:
  - protocol: TCP
    port: 8090
    targetPort: 80
    nodePort: 31000
and an nginx container is behind this service. Although I can access the service via the nodePort, the service is not accessible via the port field. I'm able to see the configs with kubectl and the Kubernetes dashboard, but curling that port (e.g. curl http://192.168.0.100:8090) raises a Connection Refused error.
I'm not sure what the problem is here. Do I need to make sure some proxy service is running inside the node or container?
Get the IP of the kubernetes service and then hit 8090; it will work.
nodePort implies that the service is bound to the node at port 31000.
These are the 3 things that will work:
curl <node-ip>:<node-port> # curl <node-ip>:31000
curl <service-ip>:<service-port> # curl <svc-ip>:8090
curl <pod-ip>:<target-port> # curl <pod-ip>:80
So now, let's look at 3 situations:
1. You are inside the kubernetes cluster (you are a pod)
<service-ip> and <pod-ip> and <node-ip> will work.
2. You are on the node
<service-ip> and <pod-ip> and <node-ip> will work.
3. You are outside the cluster
Only <node-ip> will work assuming that <node-ip> is reachable.
The behavior is as expected, since I assume you are trying to access the service from outside the cluster. That means only the nodePort exposes the service to the world outside the cluster. The port field is the port the Service itself listens on (its ClusterIP port), while targetPort is the port exposed by the container inside the pod. This is generally the desired behavior, since such services are typically fronted by a load balancer: the load balancer exposes the port you want for your service (e.g. load-balancer:80) and forwards to the nodePort on all nodes to distribute the load.
If you are accessing the service from inside the cluster, you should be able to reach it via service-name:service-port thanks to the built-in DNS.
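For example, from a Pod in the same namespace as the Service above (web-srv and 8090 are taken from the question; substitute the actual namespace for the fully qualified form):
curl http://web-srv:8090
curl http://web-srv.<namespace>.svc.cluster.local:8090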
More detailed information can be found at the docs.

Assign an External IP to a Node

I'm running a bare metal Kubernetes cluster and trying to use a Load Balancer to expose my services. I know typically that the Load Balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.
How can I manually set my node's ExternalIP address?
This is something that is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It allows for externalIP addresses in a baremetal cluster using either ARP or BGP. It has worked great for us and allows you to simply request a LoadBalancer service like you would in the cloud.
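For example, a minimal layer 2 configuration for the older, ConfigMap-based MetalLB releases looks roughly like this (the address range is a placeholder for free IPs on your network; newer MetalLB versions use IPAddressPool and L2Advertisement custom resources instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Once that is applied, any Service of type LoadBalancer gets an external IP assigned from the pool.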