ActiveMQ Broker URL on Kubernetes

I have deployed ActiveMQ on Kubernetes, but how do I configure the broker so clients can connect to the queue on port 61616? If I use the Pod IP, it will not be static, and every time the Pod is recreated it will get a new IP. Is there any way to get a static IP, or can we set up the broker on port 61616 using an Ingress?

For exposing any of your microservices in Kubernetes, either externally or internally, you use a Service.
As David Maze already stated in his comment:
There should be a matching Service, which will have a known DNS name; use that.
You don't need to worry about a static IP. Services are also assigned dynamic IPs, but they provide a reliable way to access your backend Pods via a stable DNS name. Take a look also at this section in the official docs.
In your case it's enough to create a simple ClusterIP Service (which is the default Service type). It may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 61616
      targetPort: 61616
provided your app is listening on TCP port 61616 and you want your Service to expose the same port (port) as the one your Pods listen on (targetPort).
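Your ActiveMQ clients can then use the Service's DNS name instead of a Pod IP. As a rough sketch (assuming the Service above is named my-service and lives in the default namespace), the broker connection URL could look like:
tcp://my-service.default.svc.cluster.local:61616
or, with the ActiveMQ failover transport:
failover:(tcp://my-service.default.svc.cluster.local:61616)
This DNS name stays stable even when the Pods behind the Service are recreated.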

Related

Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: grpc
We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.
This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.
An alternative approach would be to use the apiserver proxy which
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:
http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that, since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box. For example, doing something like this on the client side...
grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")
... won't work.
Is there a way to connect to a gRPC service via the apiserver proxy?
If not, is there a different way to connect to the gRPC service from external, without using a LoadBalancer service?
You can use a NodePort Service. Each of your k8s workers will then start listening on some high port. You can connect to any of the workers, and your traffic will be routed to the target Service.
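A minimal sketch of such a NodePort Service, reusing the selector and the gRPC port from the question (the nodePort value below is an assumption; you can also omit it and let Kubernetes pick one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: grpc
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30080   # assumed value
A gRPC client outside the cluster can then dial <any-worker-node-ip>:30080.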
The apiserver-proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control-plane things, not data-plane things (traffic routing, running workloads, ...).
A LoadBalancer Service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. This is frankly the only 'correct' solution.
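How the internal LB is requested is cloud-provider specific and done via annotations on the Service. As a sketch for AWS (the annotation key shown is the AWS one; other clouds use different keys):
apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice-internal
  annotations:
    # AWS-specific; GCP and Azure use their own annotation keys
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: grpc
      protocol: TCP
      port: 8080
      targetPort: 8080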
...not to require an additional public IP
A NodePort is not bound to a public IP. That is, your worker node can sit in a private network and be reachable at <node-private-IP>:<nodePort>. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.

Exposing a k8s service with TCP

I have an eks cluster, all up and working.
I want to run a service which listens for TCP requests on port 5000.
I'm trying to read about it, but all the guides I could find use HTTP in their examples.
I think I'm a bit confused with all the different concepts.
If I define:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
and run this (actually using helm), I can see in the AWS console that a classic load balancer is created,
but I can't ping it.
So basically, how do I create a network load balancer that forwards port 5000 to my service? It seems like this should be simple, but I can't find out how to do it. Eventually I want to have a static IP for the NLB so I can send requests to that port.
Do I need to install nginx (or another ingress controller) to make this work?
First of all, you are on the correct path by creating a Service of type LoadBalancer. The only thing missing in your Service.yaml is the selector field, without which your Service doesn't know which Pods to forward traffic to. Try adding it.
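A sketch of the corrected Service, with an assumed selector label (app: my-app, it must match your Pods' labels) and the AWS annotation that asks for an NLB instead of a classic ELB (verify the annotation against your cloud-provider/controller version):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # request a Network Load Balancer instead of the default classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
Also note that AWS load balancers typically don't answer ICMP, so testing with a TCP client (e.g. nc <lb-dns-name> 5000) is more telling than ping.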
If you are going to expose only a single service publicly, then the LoadBalancer type makes sense. But if you are going to have multiple services, it would be costly to create multiple LoadBalancer Services. In that case NGINX Ingress helps, since it requires only one LoadBalancer-type Service. Refer to https://medium.com/better-programming/how-to-expose-your-services-with-kubernetes-ingress-7f34eb6c9b5a
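One caveat: an Ingress resource itself only describes HTTP routing. For a raw TCP port like 5000 the NGINX ingress controller can still proxy the traffic, but it is configured through a ConfigMap referenced by the controller's --tcp-services-configmap flag rather than through an Ingress object. A sketch, assuming the controller runs in the ingress-nginx namespace and your Service is default/my-service:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "5000": "default/my-service:5000"
The controller's own LoadBalancer Service then also needs to expose port 5000.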

Session Affinity Settings for multiple Pods exposed by a single service

I have set up MetalLB as LB with the NGINX Ingress controller installed on a K8S cluster.
I have read about session affinity and its significance, but so far I do not have a clear picture.
How can I create a single Service exposing multiple Pods of the same application?
After creating the single Service entry point, how is a specific client IP mapped to a Pod abstracted by the Service?
Is there any blog explaining this concept in terms of how the mapping between client IP and Pod is done in Kubernetes?
But I do not see the client's IP anywhere in the YAML. So how is this Service going to map traffic from the respective clients to its endpoints? That is the question I have.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
The main concept of session affinity is to always redirect traffic from a given client to the same Pod. Please keep in mind that session affinity is a best-effort method, and there are scenarios where it will fail due to Pod restarts or network errors.
There are two main types of session affinity:
1) Based on Client IP
This option works well for scenarios where there is only one client per IP. With this method you don't need an Ingress/proxy between the K8s Service and the client.
The client IP should be static, because each time the client changes its IP it will be redirected to another Pod.
To enable session affinity in Kubernetes, we can add the following to the Service definition:
service.spec.sessionAffinity: ClientIP
Because a proper manifest using this method was already provided above, I will not duplicate it.
2) Based on Cookies
This works when there are multiple clients behind the same IP, because the affinity is stored at the web-browser level. This method requires an Ingress object. Steps to apply this method, with more detailed information, can be found here under the "Session affinity based on Cookie" section (a sketch of the Ingress annotations follows the list):
Create NGINX controller deployment
Create NGINX service
Create Ingress
Redirect your public DNS name to the NGINX service public/external IP.
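As a sketch of step 3, cookie-based affinity with the NGINX ingress controller is enabled through annotations on the Ingress object (the host, service name and cookie name below are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"      # assumed cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"  # two days, in seconds
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80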
About the mapping between client IP and Pod: according to the documentation, kube-proxy is responsible for sessionAffinity. One of kube-proxy's jobs is writing the iptables rules (more details here), and that is how the mapping is done.
Articles which might help with understanding Session Affinity:
https://sookocheff.com/post/kubernetes/building-stateful-services/
https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b
Follow the Service reference for session affinity:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000

Exposing Kafka cluster in Kubernetes using LoadBalancer service

Suppose I have a 3-node Kafka cluster set up. How do I then expose it outside the cloud using a LoadBalancer Service? I have read the reference material but have a few doubts.
Say, for example, below is a Service for one broker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 9092
      name: outside
      targetPort: 9092
  selector:
    app: kafka
    kafka-pod-id: "0"
1) What are port and targetPort?
2) Do I set up a LoadBalancer Service for each of the brokers?
3) Do these multiple brokers get mapped to a single public IP address of the cloud LB?
4) How does a service outside k8s/the cloud access an individual broker? By using public-ip:port, or by using kafka-<pod-id>.kafka.my.company.com:port? Also, which port is used here, port or targetPort?
5) How do I specify this configuration in the Kafka broker's advertised.listeners property, since the port can be different for clients inside the k8s cluster and outside it?
Please help.
Based on the information you provided, I will try to give you some answers and, where possible, some advice.
1) port: is the port number which makes a Service visible to other services running within the same K8s cluster. In other words, if a service wants to invoke another service running within the same Kubernetes cluster, it will be able to do so using the port specified against port in the Service spec file.
targetPort: is the port on the Pod where the service is running. Your application needs to be listening for network requests on this port for the Service to work.
2/3) Each broker should be exposed as a LoadBalancer and also be covered by a headless Service for internal communication (a sketch of such a headless Service follows the example below). There should be one additional LoadBalancer with an external IP for the external connection.
Example of Service
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    pod-name: kafka-0
  type: LoadBalancer
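For the internal communication mentioned in 2/3), a headless Service covering all brokers might look like this (the name kafka-headless and the app: kafka label are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None          # headless: DNS returns the individual broker Pod IPs
  selector:
    app: kafka
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP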
4) You have to use kafka-<pod-id>.kafka.my.company.com:port
5) It should be set to the external address so that clients can connect to it. This article might help with understanding.
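As a rough sketch of what that can look like in the broker configuration (the listener names, the internal port 9093 and the headless-service DNS name are assumptions and must match your actual setup), each broker usually gets an internal and an external listener:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-0-config
data:
  server.properties: |
    # internal listener for in-cluster traffic, external one advertised via the LoadBalancer DNS name
    listeners=INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092
    advertised.listeners=INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9093,EXTERNAL://kafka-0.kafka.my.company.com:9092
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    inter.broker.listener.name=INTERNAL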
A similar case was discussed on GitHub; it might also help you - https://github.com/kow3ns/kubernetes-kafka/issues/3
In addition, you could also think about Ingress - https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/

Access IP address of PODs via lookup on Ingress URL

I am learning Kubernetes and have deployed a headless Service on Kubernetes (on AWS), which is exposed to the external world via an NGINX ingress.
I want nslookup <ingress_url> to directly return the IP addresses of the Pods.
How can I achieve that?
Inside the cluster:
It's not a good idea to have an <ingress_host> resolve to a Pod IP. It's a common design to serve different kinds of Pods on one single hostname under different paths, but you can only set one IP record (or one group of records, with DNS load balancing) for it.
However, you can do this by adding <ingress_host> <Pod_IP> to /etc/hosts in an init script, since you can get <Pod_IP> by doing nslookup <headless_service>.
HostAlias is another option if you know the Pod IP before applying the Deployment.
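A sketch of the HostAlias option in a Pod/Deployment spec (the IP and hostname are placeholders for your known Pod IP and ingress host):
spec:
  hostAliases:
    - ip: "10.1.2.3"                      # assumed, the known Pod IP
      hostnames:
        - "my-ingress-host.example.com"   # assumed ingress hostname
This simply writes the entry into the container's /etc/hosts.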
From outside:
I don't think it's possible from outside the cluster, because you need to do a DNS lookup to get to the ingress controller first, which means the name has to resolve to the IP of the ingress controller.
Lastly, it's a bad idea to point apps directly at Pods behind a headless Service, because many apps do a DNS lookup once and cache the result, which can cause problems since Pod IPs change frequently.
If you declare a “headless” service with selectors, then the internal DNS for the service will be configured to return the IP addresses of its pods directly. This is a somewhat unusual configuration and you should also expect an effect on other, cluster internal, users of that service.
This is documented here. Example:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
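With this in place, an in-cluster lookup such as nslookup my-service.<namespace>.svc.cluster.local (namespace assumed) should return one A record per ready Pod instead of a single cluster IP. Note that this only changes DNS inside the cluster; it does not change what an external nslookup against the ingress URL returns.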