In a Kubernetes pod, I have:
- a busybox container running inside a docker:dind (Docker-in-Docker) container
- a fluentd container
I understand that if dind wants to access fluentd, it simply needs to connect to localhost:9880. But what if busybox wants to access fluentd, as depicted in the diagram below? Which address should I use?
These tips may help you:
1. First approach
From inside the docker:latest container, where you were trying to access it originally, it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name dind, therefore curl dind:busybox_port would reach it.
Then, from inside the docker:dind container (busybox), you can connect to fluentd, which will be available on localhost:9880.
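For example, a minimal sketch of both hops (the port 8080 on the dind container is an assumption for illustration; fluentd is assumed to accept HTTP on 9880):
$ # from the docker:latest container, reach dind by its container name
$ curl http://dind:8080
$ # from inside the docker:dind container, fluentd is reachable on localhost
$ curl http://localhost:9880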
2. Second approach
Another approach is to use EXPOSE <port> [<port>/<protocol>...] in the Dockerfile; in this case we assume that busybox and fluentd are in different networks.
You can also specify this within a docker run command, such as:
$ docker run --expose=1234 busybox
But EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen you need to publish the ports.
Publish ports and map them to the host
To publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
$ docker run -p 80:80/tcp -p 80:80/udp busybox
And then connect from busybox to fluentd using localhost:9880
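For instance, if fluentd's http input plugin is listening on 9880 (the test.log tag below is just an assumption for illustration), busybox could send it an event like this:
$ curl -X POST -d 'json={"message":"hello from busybox"}' http://localhost:9880/test.log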
You can find more information here: docker-in-docker.
I hope it helps.
Is it possible to map multiple ports in Kubernetes to the same container port within a deployment? I know this is possible in Docker, for example
docker run -tid -p 3000:3000 -p 3001:3000 nginx
I have not been able to find any documentation for this use case within a Kubernetes deployment. The reason I need this is to bypass mTLS in my liveness probe without changing my backend webapp container, where the service and healthcheck endpoints are hosted on port 3000.
As @zerkms commented, it is not possible to map host ports to container ports in the same way that Docker does.
To fix my mTLS/healthcheck problem, I created a curl container called liveness inside my pod and gave it an exec-type liveness probe that curls localhost on my app's port at the healthcheck endpoint.
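For reference, a minimal sketch of that setup (the container names, curl image, keep-alive command, and /healthz path are assumptions, not my exact manifest):
containers:
- name: webapp
  image: my-webapp          # backend app serving port 3000
  ports:
  - containerPort: 3000
- name: liveness
  image: curlimages/curl    # any small image with curl works
  command: ["sh", "-c", "while true; do sleep 3600; done"]   # keep the sidecar running
  livenessProbe:
    exec:
      # containers in a pod share the network namespace, so localhost:3000
      # reaches the webapp directly, without going through the mTLS path
      command: ["curl", "-f", "http://localhost:3000/healthz"]
    initialDelaySeconds: 10
    periodSeconds: 15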
I'm running flink run-application targeting Kubernetes, using these options:
-Dmetrics.reporter.prom.class=org.apache.flink.metrics.prometheus.PrometheusReporter
-Dmetrics.reporter.prom.port=9249
I specify a container image which has the Prometheus plugin copied into /opt/flink/plugins. From within the job manager container I can download Prometheus metrics on port 9249. However, kubectl describe on the flink pod does not show that the Prometheus port is exposed. The ports line in the kubectl output is:
Ports: 8081/TCP, 6123/TCP, 6124/TCP
Therefore, I expect that nothing outside the container will be able to read the Prometheus metrics.
You are misunderstanding the concept of exposed ports.
When you expose a port in Kubernetes with the ports option (the same applies to Docker and the EXPOSE instruction), nothing is opened on that port to the outside world.
It's basically just a hint for users of that image, to tell them: "Hey, you want to use this image? OK, you may want to have a look at this port on this container."
So if your port does not appear when you run kubectl describe, that does not mean you can't reach it. You can still map it with a Service targeting this port.
Furthermore, if you really want to make it appear in kubectl describe, you just have to add it to your Kubernetes descriptor file:
...
containers:
- ports:
  - name: prom-http
    containerPort: 9249
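And if something outside the pod should scrape those metrics, a Service can target that port. A minimal sketch (the Service name and the app: flink selector label are assumptions; adjust them to your deployment's labels):
apiVersion: v1
kind: Service
metadata:
  name: flink-metrics
spec:
  selector:
    app: flink              # must match the labels on your Flink pods
  ports:
  - name: prom-http
    port: 9249
    targetPort: 9249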
I have a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shell, so I'm trying to run rabbitmqctl from a local container (same image).
These ports are exposed:
- 15672 (exposed as 32672)
- 5671 (exposed as 32671)
- 4369 (exposed as 32369)
- 25672 (exposed as 32256)
The correct cookie is in $HOME/.erlang.cookie in my local container.
How to specify the cluster URL and port to rabbitmqctl, so I can access the RabbitMQ cluster from outside?
Is it necessary to expose other ports?
Is it even possible to do this, since I can't find any reference to this on documentation?
You will want to expose ports 4369 and 25672 using the same port numbers externally, as I can't think of a way to tell the Erlang VM running rabbitmqctl to use a different port for EPMD lookups. You should also expose 35672-35682 using the same port range externally.
Since you're using Kubernetes, I'll assume that you are using long names. If, within your container, your node name is rabbit@container1.my.org, then to access it externally use this command:
rabbitmqctl -l -n rabbit@container1.my.org
Please note that container1.my.org must resolve via DNS to the correct IP address to connect to that container.
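For example, once DNS and the ports line up, a quick sanity check could look like this (the status subcommand is just an illustration):
$ rabbitmqctl -l -n rabbit@container1.my.org status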
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.
I am trying to start a Bro container in a Pod. In Docker I would normally run something like this:
docker run -d --net=host --name bro
Is there something in the container spec that would replicate that functionality?
You can use the hostNetwork option of the API to run a pod on the host's network.
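A minimal sketch of such a pod spec (the pod name, container name, and image are assumptions based on your docker run command):
apiVersion: v1
kind: Pod
metadata:
  name: bro
spec:
  hostNetwork: true         # share the node's network namespace, like --net=host
  containers:
  - name: bro
    image: bro              # replace with your actual Bro image
The same field goes under spec.template.spec if you are using a Deployment instead of a bare Pod.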
At this moment, I couldn't find any other way to establish cross-host communication between two Docker containers than using the Ambassador pattern proposed here.
I am wondering what are the advantages of using this pattern over the simple use of port exposition provided by Docker. Here's an example on how I use the port exposition technique:
Node A
ifconfig eth0 192.168.56.101
docker run -i -t -p 5558 --name NodeA ubuntu /bin/bash
Then the container's local port 5558 maps to the host's physical port 49153.
(5558 --> 49153)
Node B
ifconfig eth0 192.168.56.103
docker run -i -t -p 5558 --name NodeB ubuntu /bin/bash
Then the container's local port 5558 maps to the host's physical port 49156.
(5558 --> 49156)
*The port mapping from the Docker container to the physical port can be forced by using -p 5558:5558
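If you let Docker pick the host port, you can look it up afterwards (the output below is illustrative):
$ docker port NodeA 5558
0.0.0.0:49153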
Cross-Host Container Communication
Then NodeA can communicate with NodeB, container to container, through the following IP address and port:
192.168.56.103:49156
And NodeB can listen on port 5558 from inside the container.
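As a quick illustration, assuming netcat is installed in both containers (it is not part of the base ubuntu image), the exchange would look like this:
# inside NodeB's container: listen on the container port
$ nc -l -p 5558
# inside NodeA's container: connect via NodeB's host IP and mapped port
$ echo hello | nc 192.168.56.103 49156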
Conclusion
This seems like a straightforward way to achieve this kind of communication, although it very much feels like a hack around the concept of containers. My question is: why use one option over the other, and does it actually break the concept of isolation from the host?