Exposing a k8s service with TCP - kubernetes

I have an eks cluster, all up and working.
I want to run a service that listens for TCP requests on port 5000.
I'm trying to read up on it, but all the guides I could find use HTTP in their examples.
I think I'm a bit confused with all the different concepts.
If I define:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
and run this (actually using Helm), I can see in the AWS console that a Classic Load Balancer is created,
but I can't ping it.
So basically, how do I create a Network Load Balancer that forwards port 5000 to my service? It seems like this should be simple, but I can't find out how to do it. Eventually I want to have a static IP for the NLB so I can send requests to that port.
Do I need to install nginx (or another ingress controller) to make this work?

First of all, you are on the right track creating a Service of type LoadBalancer. The only thing missing, judging by the Service.yaml contents, is the selector field in the Service object, without which your Service doesn't know which Pods to forward traffic to. Try adding it.
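A minimal sketch of the corrected manifest (the selector label here is an assumption and must match the labels on your Pods; the annotation asks AWS for a Network Load Balancer instead of the default Classic Load Balancer):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Request an NLB from the AWS cloud provider (the default is a Classic LB)
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app   # hypothetical label; must match your Pods' labels
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
Also note that AWS load balancers generally don't answer ICMP, so ping is not a useful connectivity test; test the TCP port itself instead, e.g. nc -vz <lb-dns-name> 5000.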
If you are going to have only a single service exposed publicly, then the LoadBalancer type makes sense. But if you are going to have multiple services, it would be costly to create a LoadBalancer per service. In that case an Nginx Ingress helps, since it requires only one LoadBalancer-type service. Refer to https://medium.com/better-programming/how-to-expose-your-services-with-kubernetes-ingress-7f34eb6c9b5a

Related

Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy which
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box, e.g. doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from external, without using a LoadBalancer service?
You can use a NodePort service. Each of your k8s workers will start listening on some high port. You can connect to any of the workers and your traffic will be routed to the target service.
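A minimal sketch of such a Service for the gRPC port (the Service name and nodePort value are hypothetical; the nodePort must lie in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: grpc
    protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30080          # hypothetical; any free port in the NodePort range
With that in place, grpc.Dial("<worker-ip>:30080", ...) should work from outside the cluster.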
The apiserver proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control-plane things, not data-plane work (traffic routing, running workloads, ...).
A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. This is frankly the only 'correct' solution.
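For example, on AWS the cloud provider turns a LoadBalancer Service into an internal LB via an annotation (annotation names differ per cloud provider, and on older Kubernetes versions the expected value was a CIDR such as 0.0.0.0/0 rather than "true"):
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"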
...not to require an additional public IP
NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at the node's private IP:nodePort. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.

ActiveMQ Broker url on Kubernetes

I have deployed ActiveMQ on Kubernetes, but how do I configure the broker URL so clients can connect to the queue on port 61616? If I use the Pod IP, it will not be static, and every time the Pod is recreated it will get a new IP. Is there any way to get a static IP, or can we set up the broker on port 61616 using an Ingress?
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
For exposing any of your microservices in kubernetes, either externally or internally, you have a Service.
As David Maze already stated in his comment:
There should be a matching Service, which will have a known DNS name; use that. – David Maze
You don't need to worry about a static IP. Services also have dynamically assigned IPs, but they provide a reliable way to access your backend Pods via a stable DNS name. Take a look also at this section in the official docs.
In your case it's enough to create a simple ClusterIP Service (which is the default Service type). It may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 61616
    targetPort: 61616
provided your app is listening on TCP port 61616 and you want your Service to expose the same port (port) as your Pods (targetPort).
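Clients inside the cluster can then put the Service's stable DNS name into the broker URL instead of a Pod IP, e.g. (the default namespace is an assumption here):
tcp://my-service.default.svc.cluster.local:61616
or simply tcp://my-service:61616 from within the same namespace.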

Access IP address of PODs via lookup on Ingress URL

I am learning Kubernetes and have deployed a headless service on Kubernetes (on AWS) which is exposed to the external world via an nginx ingress.
I want nslookup <ingress_url> to directly return IP address of PODs.
How to achieve that?
Inside the cluster:
It's not a good idea to let an <ingress_host> resolve to a Pod IP. It's a common design to serve different kinds of Pods on one single hostname under different paths, but you can only set one IP record (or one group of records, with DNS load balancing) for it.
However, you can do this by adding <ingress_host> <Pod_IP> to /etc/hosts in an init script, since you can get <Pod_IP> by doing nslookup <headless_service>.
hostAliases is another option if you know the Pod IP before applying the deployment.
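A sketch of that option in the Pod spec (or a Deployment's Pod template); the IP and hostname are hypothetical placeholders:
spec:
  hostAliases:
  - ip: "10.1.2.3"                    # the Pod IP you looked up beforehand
    hostnames:
    - "my-ingress-host.example.com"   # the <ingress_host> you want to resolve to it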
From outside:
I don't think it's possible from outside the cluster, because you need to do the DNS lookup to reach the ingress controller first, which means the hostname has to resolve to the IP of the ingress controller.
Lastly, it's a bad idea to point clients directly at Pods via a headless service, because many apps do DNS lookups once and cache the results, which can cause problems since Pod IPs change frequently.
If you declare a “headless” service with selectors, then the internal DNS for the service will be configured to return the IP addresses of its pods directly. This is a somewhat unusual configuration and you should also expect an effect on other, cluster-internal, users of that service.
This is documented here. Example:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
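You can verify the behavior from any Pod in the cluster; unlike a normal Service, the lookup returns the Pod IPs directly rather than a single cluster IP (the default namespace is an assumption here):
nslookup my-service.default.svc.cluster.local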

Is it possible to route traffic to a specific Pod?

Say I am running my app in GKE, and this is a multi-tenant application.
I create multiple Pods that hosts my application.
Now I want:
Customers 1-1000 to use Pod1
Customers 1001-2000 to use Pod2
etc.
If I have a gcloud global IP that points to my cluster, is it possible to route a request based on the incoming ipaddress/domain to the correct Pod that contains the customers data?
You can guarantee session affinity with services, but not as you are describing. So your customers 1-1000 won't use pod-1; they will use all the pods (a Service does simple load balancing), but each customer, when they come back to hit your service, will be redirected to the same pod.
Note: this only holds within the time window specified by the following field (default 10800 seconds):
service.spec.sessionAffinityConfig.clientIP.timeoutSeconds
This would be the yaml file of the service:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you want to specify the timeout as well, this is what needs to be added under spec:
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10
Note that the example above would work when hitting a ClusterIP-type service directly (which is quite uncommon) or with a LoadBalancer-type service, but won't work with an Ingress behind a NodePort-type service. This is because with an Ingress, the requests come from many, randomly chosen source IP addresses.
Not with Pods by themselves, but you should be able to with Services.
Pods are intended to be stateless and indistinguishable from one another.
But you should be able to create a Deployment per customer group, and a Service per Deployment. The nginx Ingress should be able to map incoming requests, by whatever attributes are relevant, to the specific customer-group Services.
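A sketch of that idea using host-based routing (all hostnames and Service names here are hypothetical; path-based rules would work the same way):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-routing
spec:
  ingressClassName: nginx
  rules:
  - host: group1.example.com         # customers 1-1000
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-group1      # Service in front of the group-1 Deployment
            port:
              number: 80
  - host: group2.example.com         # customers 1001-2000
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-group2
            port:
              number: 80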

NodePort service is not externally accessible via `port` number

I have following service configuration:
kind: Service
apiVersion: v1
metadata:
  name: web-srv
spec:
  type: NodePort
  selector:
    app: userapp
    tier: web
  ports:
  - protocol: TCP
    port: 8090
    targetPort: 80
    nodePort: 31000
and an nginx container is behind this service. Although I can access the service via the nodePort, it is not accessible via the port field. I can see the config with kubectl and the Kubernetes dashboard, but curling that port (e.g. curl http://192.168.0.100:8090) results in a Connection Refused error.
I'm not sure what the problem is here. Do I need to make sure some proxy service is running inside the Node or Container?
Get the IP of the kubernetes service and then hit 8090; it will work.
nodePort implies that the service is bound to the node at port 31000.
These are the 3 things that will work:
curl <node-ip>:<node-port> # curl <node-ip>:31000
curl <service-ip>:<service-port> # curl <svc-ip>:8090
curl <pod-ip>:<target-port> # curl <pod-ip>:80
So now, let's look at 3 situations:
1. You are inside the kubernetes cluster (you are a pod)
<service-ip> and <pod-ip> and <node-ip> will work.
2. You are on the node
<service-ip> and <pod-ip> and <node-ip> will work.
3. You are outside the node
Only <node-ip> will work assuming that <node-ip> is reachable.
The behavior is as expected, since I assume you are trying to access the service from outside the cluster. That means only the nodePort exposes the service to the world outside the cluster. The port field is the Service's cluster-internal port, while targetPort is the port exposed by the container inside the pod. This is generally desired behavior, to support clusters of services typically fronted by a load balancer: the load balancer exposes the port you want for your service (e.g. load-balancer:80) and forwards to the nodePort on all nodes to distribute the load.
If you are accessing the service from inside the cluster, you should be able to reach it via service-name:service-port thanks to the built-in DNS.
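For example (the fully qualified form assumes the Service is in the default namespace):
curl http://web-srv:8090                             # from a Pod in the same namespace
curl http://web-srv.default.svc.cluster.local:8090   # fully qualified name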
More detailed information can be found at the docs.