How to access from outside a RabbitMQ Kubernetes cluster using rabbitmqctl? - kubernetes

I have a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shell, so I'm trying to run rabbitmqctl from a local container (using the same image).
These ports are exposed:
- 15672 (exposed as 32672)
- 5671 (exposed as 32671)
- 4369 (exposed as 32369)
- 25672 (exposed as 32256)
The correct cookie is on $HOME/.erlang.cookie on my local container.
How to specify the cluster URL and port to rabbitmqctl, so I can access the RabbitMQ cluster from outside?
Is it necessary to expose other ports?
Is it even possible to do this, since I can't find any reference to this on documentation?

You will want to expose ports 4369 and 25672 using the same port numbers externally, as I can't think of a way to tell the Erlang VM running rabbitmqctl to use a different port for EPMD lookups. You should also expose 35672-35682 using the same port range externally.
Since you're using Kubernetes, I'll assume that you are using long names. Assuming that, within your container, your node name is rabbit@container1.my.org, use this command to access it externally:
rabbitmqctl -l -n rabbit@container1.my.org
Please note that container1.my.org must resolve via DNS to the correct IP address to connect to that container.
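A minimal sketch of exposing those ports with identical internal and external numbers via a NodePort Service, assuming the RabbitMQ Pods carry the label app: rabbitmq (the Service name and label here are hypothetical, and nodePort values below 30000 require the apiserver's --service-node-port-range to be widened first):

```yaml
# Hypothetical NodePort Service exposing EPMD and the inter-node/CLI port
# with identical internal and external port numbers.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-ctl          # hypothetical name
spec:
  type: NodePort
  selector:
    app: rabbitmq             # hypothetical label
  ports:
  - name: epmd
    port: 4369
    nodePort: 4369            # outside the default 30000-32767 range
  - name: clustering
    port: 25672
    nodePort: 25672           # outside the default 30000-32767 range
```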
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.

Related

How does Kubernetes map ports of multi-container Pods?

I'm trying to learn Kubernetes. One thing I don't understand is the following scenario:
Given I have a pod with 2 containers. One container runs an app listening on port 80, the other container is a sidecar which does some polling from a web resource but doesn't listen on any port.
Now when I start a service with TargetPort = 80, how does Kubernetes know which container within the pod exposes this port? Does it inspect all containers to check for exposed ports? Or does it just do a mapping for port 80 on all containers within the pod?
Additionally, is it possible to change the containers exposed port in Kubernetes, so the port the container exposes (=containerPort) maps to a different port within the container?
I mean something similar like the -p argument in Docker.
The Kubernetes overview documentation of Pods notes:
Every container in a Pod shares the network namespace.... Within a Pod, containers share an IP address and port space....
So if you have multiple containers in a Pod, from outside that Pod, they all look "the same", in the same way that you could have multiple server processes running on a single physical machine with a single IP address. You can't run two containers that listen on the same port in the same Pod. The inbound request will reach whichever of the containers happens to be listening on that port (if any).
Is it possible to change the containers exposed port in Kubernetes, so the port the container exposes (=containerPort) maps to a different port within the container?
You can do this with your Service. Remember that you don't generally connect directly to a Pod; instead, you connect to a Service and that forwards the request on to one of the matching Pods. So if your Pod spec says
containers:
- name: sidecar
# ...
- name: server
ports:
- name: http
containerPort: 8080
then the corresponding Service can say
ports:
- port: 80
targetPort: http
and you'll connect to http://service-name.namespace.svc.cluster.local using the default HTTP port 80, even if the container process is actually listening on port 8080 or 3000 or whatever else.
If I understand your question correctly, the explanation you are looking for is in this article:
Containers are often intended to solve a single, narrowly defined problem, such as a microservice, but in the real world, problems require multiple containers for a complete solution. In this article, we’re going to talk about combining multiple containers into a single Kubernetes Pod, and what it means for inter-container communication.
There are several types of communication between containers in a single pod and they are described in the article.
The most important part should be Inter-container network communication.
Look also at this guide about Multi-Container Pods in Kubernetes.
You can also find the tutorial with examples - Extending applications on Kubernetes with multi-container pods.

How do I get the pod to expose the prometheus monitoring port in Flink application mode on Kubernetes?

I'm running flink run-application targetting Kubernetes, using these options:
-Dmetrics.reporter.prom.class=org.apache.flink.metrics.prometheus.PrometheusReporter
-Dmetrics.reporter.prom.port=9249
I specify a container image which has the Prometheus plugin copied into /opt/flink/plugins. From within the job manager container I can download Prometheus metrics on port 9249. However, kubectl describe on the flink pod does not show that the Prometheus port is exposed. The ports line in the kubectl output is:
Ports: 8081/TCP, 6123/TCP, 6124/TCP
Therefore, I expect that nothing outside the container will be able to read the Prometheus metrics.
You are misunderstanding the concept of exposed ports.
When you expose a port in Kubernetes with the ports option (the same applies to Docker and the EXPOSE instruction), nothing is opened on that port to the outside world.
It's basically just a hint for users of the image: "Hey, you want to use this image? OK, you may want to have a look at this port on this container."
So if your port does not appear when you run kubectl describe, that does not mean you can't reach it. You can still map it with a Service targeting this port.
Furthermore, if you really want it to appear in kubectl describe, you just have to add it to your Kubernetes descriptor file:
...
containers:
- ports:
  - name: prom-http
    containerPort: 9249
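With that named port in place, a Service can then target it by name. A sketch, assuming the Flink Pods are labeled app: flink (the Service name and label are hypothetical):

```yaml
# Hypothetical Service routing to the named prom-http containerPort.
apiVersion: v1
kind: Service
metadata:
  name: flink-metrics        # hypothetical name
spec:
  selector:
    app: flink               # hypothetical label
  ports:
  - name: metrics
    port: 9249
    targetPort: prom-http    # resolves to containerPort 9249 by name
```

Prometheus (or anything else in the cluster) could then scrape flink-metrics:9249 regardless of what kubectl describe shows for the Pod.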

Kubernetes: RabbitMQ Client could not connect. (None of the specified endpoints were reachable)

When I use RabbitMQ from localhost, I supply my RabbitMQConnectionString as localhost in my ASP.NET Core WebApi and everything works fine.
But I wanted to use RabbitMQ from a Kubernetes cluster, so I created a new namespace in the cluster for RabbitMQ, then created an app from the Kubernetes Dashboard with the image rabbitmq:management. I specified an external Service with port and target port both set to 15672, and waited for it to be deployed.
I can access the RabbitMQ management portal at the service's external IP, xx.xx.153.133:15672, in the browser, but when I use this IP and port as the RabbitMQConnectionString in my ASP.NET Core WebApi, it gives me the following error (through Seq):
And when I supply the IP only, i.e. xx.xx.153.133, it looks for RabbitMQ on port 5672 instead of 15672 and gives me the following error:
Can someone please guide me on how to proceed and fix this error?
I figured it out: I exposed all 3 ports on Kubernetes (15672, 5672, 25672) and used only the IP as the RabbitMQConnectionString. The client then automatically uses port 5672 to send and receive messages.
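In Service terms, that fix could look like the following sketch (the Service name and label are hypothetical):

```yaml
# Hypothetical Service exposing the management UI, the AMQP port the
# client library defaults to, and the clustering port.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq             # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq            # hypothetical label
  ports:
  - name: management
    port: 15672
  - name: amqp
    port: 5672               # client libraries default to this port
  - name: clustering
    port: 25672
```

The key point is that the .NET client talks AMQP on 5672, not HTTP on 15672, so 15672 alone was never going to work for the connection string.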

I just want to run a simple app in Kubernetes

I have a docker image that serves a simple static web page.
I have a working Kubernetes cluster of 4 nodes (physical servers not in the cloud anywhere).
I want to run that Docker image on 2 of the 4 Kubernetes nodes, have it be accessible to the world outside the cluster, load balanced, and have it moved to another node if one dies.
Do I need to make a pod then a replication controller then a kube proxy something?
Or do I need to just make a replication controller and expose it somehow?
Do I need to make service?
I don't need help with how to make any of those things (that seems well documented), but I can't tell what I need to make.
What you need is to expose your service (which consists of pods that are run/scaled/restarted by your replication controller). Using a Deployment instead of a replication controller has additional benefits (mainly for updating the app).
If you are on bare metal, then you probably want to expose your service via type: NodePort, so every node in your cluster will open a static port that routes traffic to the pods.
You can then either point your load balancer at those nodes on that port, or make a DNS entry with all the Kubernetes nodes.
Docs: http://kubernetes.io/docs/user-guide/quick-start/
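Putting that together, a minimal sketch could be a Deployment with two replicas plus a NodePort Service (the names, labels, and image are hypothetical):

```yaml
# Hypothetical Deployment: 2 replicas, rescheduled automatically if a node dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-web           # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: static-web
  template:
    metadata:
      labels:
        app: static-web
    spec:
      containers:
      - name: web
        image: my-registry/static-web:latest   # hypothetical image
        ports:
        - containerPort: 80
---
# NodePort Service: every node opens a static port routing to the pods.
apiVersion: v1
kind: Service
metadata:
  name: static-web
spec:
  type: NodePort
  selector:
    app: static-web
  ports:
  - port: 80
    targetPort: 80
```

An external load balancer or DNS entry can then point at any node's assigned NodePort.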
You'll need:
1) A load balancer on one of the nodes in your cluster, that is, a reverse-proxy Pod such as nginx that proxies the traffic to an upstream.
This Pod will need to be exposed to the outside using hostPort, like:
ports:
- containerPort: 80
  hostPort: 80
  name: http
- containerPort: 443
  hostPort: 443
  name: https
2) A Service that will use the web server selector as target.
3) Set the Service name (which will resolve to the Service IP) as the upstream in nginx config
4) Deploy your web server Pods, which will have the selector to be targeted by the Service.
You might also want to look at External IP for the Service
http://kubernetes.io/docs/user-guide/services/#external-ips
but I personally never managed to get that working on my bare metal cluster.

How to expose dynamic ports using Kubernetes service on Google Container Engine?

I am trying to connect to a Docker container on Google Container Engine (GKE) from my local machine over the internet using TCP. So far I have used a Kubernetes service, which gives an external IP address, so the local machine can connect to the container on GKE through the service. When we create a service, we can specify only a single port, not a port range. Please see my-ros-service.yaml below. In this case, we can access the container on port 11311 from outside of GCE.
However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application.
So far I have managed to connect to the container by creating many services with different ports while the application is running, but this is not a realistic way to solve the problem.
My question is that:
How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service?
If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container?
Any advice or information you could provide would be greatly appreciated.
Thank you in advance.
my-ros-service.yaml
kind: Service
apiVersion: v1beta1
id: my-ros-service
port: 11311
selector:
  name: my-ros
containerPort: 11311
createExternalLoadBalancer: true
I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.
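For reference, the per-port workaround described above amounts to something like this sketch in the current Service API, with one entry added per port the application happens to pick (the Service name and the dynamic port numbers are hypothetical):

```yaml
# Hypothetical multi-port Service; each dynamically chosen port must be
# added manually, which is exactly the limitation discussed above.
apiVersion: v1
kind: Service
metadata:
  name: my-ros-dynamic       # hypothetical name
spec:
  type: LoadBalancer
  selector:
    name: my-ros
  ports:
  - name: ros-master
    port: 11311
  - name: dynamic-a
    port: 41234              # hypothetical dynamically chosen port
  - name: dynamic-b
    port: 41235              # hypothetical dynamically chosen port
```

This still cannot express "any port in a range", so it remains a workaround rather than a solution.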