Cross-Host Communication: Ambassador pattern versus port exposure - sockets

I couldn't find any way (at the moment) to establish cross-host communication between two Docker containers other than the Ambassador pattern proposed here.
I am wondering what the advantages of that pattern are over the simple port exposure Docker provides. Here's an example of how I use the port exposure technique:
Node A
ifconfig eth0 192.168.56.101
docker run -i -t -p 5558 --name NodeA ubuntu /bin/bash
The container's local port 5558 then maps to physical port 49153 on the host.
(5558 --> 49153)
Node B
ifconfig eth0 192.168.56.103
docker run -i -t -p 5558 --name NodeB ubuntu /bin/bash
The container's local port 5558 then maps to physical port 49156 of the host.
(5558 --> 49156)
*The port mapping from the Docker container to the physical port can be forced by using -p 5558:5558
Cross-Host Container Communication
Then NodeA can communicate with NodeB, container to container, through the following IP address and port:
192.168.56.103:49156
And NodeB can listen on port 5558 from inside the container.
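For a predictable target port, the fixed mapping from the note above can be used on NodeB. A minimal sketch of the idea (here nc is just a stand-in for whatever service actually listens on 5558, and may need to be installed in the ubuntu image first):
# On NodeB (192.168.56.103): pin container port 5558 to host port 5558
docker run -i -t -p 5558:5558 --name NodeB ubuntu /bin/bash
# From inside NodeA's container: reach NodeB via the host IP and the fixed port
nc 192.168.56.103 5558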
Conclusion
This seems like a straightforward way to achieve this kind of communication, although it very much feels like a hack around the concept of containers. My question is: why use one option over the other, and does it actually break the container's isolation from the host?

Related

Double port forwarding kubernetes + docker

Summary:
I have a Docker container running kubectl port-forward, forwarding the port (5432) of a Postgres service running as a k8s Service to a local port (2223).
In the Dockerfile, I have exposed the relevant port, 2223. Then I ran the container, publishing that port (-p 2223:2223).
Now, when I try to access Postgres through psql -h localhost -p 2223, I get the following error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
However, when I docker exec -it into the container and run the same psql command, I am able to connect to Postgres.
Dockerfile CMD:
EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
Docker Run command:
docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
Output of the docker run command:
Forwarding from 127.0.0.1:2223 -> 5432
So the port forwarding is successful, and I am able to connect to the Postgres instance from inside the Docker container. What I am not able to do is connect from outside the container via the exposed and published port.
You are missing the following parameter in your $ kubectl port-forward ...:
--address 0.0.0.0
I've reproduced the setup you were trying to achieve, and this was the reason the connection wasn't possible. I've included more explanation below.
Explanation
$ kubectl port-forward --help
Listen on port 8888 on all addresses, forwarding to 5000 in the pod
kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
Options:
--address=[localhost]: Addresses to listen on (comma separated). Only accepts IP addresses or
localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
and will fail if neither of these addresses are available to bind.
By default, $ kubectl port-forward binds to localhost, i.e. 127.0.0.1. In this setup, that localhost is internal to the container, so it is not reachable from your host even with the --publish (-p) parameter.
To allow connections that do not originate from localhost, you need to pass the aforementioned --address 0.0.0.0. This makes kubectl listen on all IP addresses and respond to the traffic accordingly.
Your Dockerfile CMD should look similar to:
CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
Additional reference:
Kubernetes.io: Docs: Reference: Generated: Kubectl commands

PgAdmin not working with Postgres container

I am connecting to a PostgreSQL Docker service with the following commands:
docker create --name postgres-demo -e POSTGRES_PASSWORD=Welcome -p 5432:5432 postgres:11.5-alpine
docker start postgres-demo
docker exec -it postgres-demo psql -U postgres
I can successfully connect to the PostgreSQL container service.
Now I want to connect pgAdmin 4 to run some queries against the existing data in the Postgres database.
However, I keep getting this error:
The IP address I am using is the one I extracted from docker inspect DOCKERID.
I have restarted the PostgreSQL service on Windows, but nothing changes. What am I doing wrong?
Thanks
In fact, what you get with docker inspect (172.17.0.2) is just the IP of the container. To reach the service in the container, you need to bind a host port to the container's port.
I see you already used -p 5432:5432 to do that, so get the IP of the host using ip a s; if you get e.g. 10.10.0.186, use that host IP to reach the service, with 5432 as the port.
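A quick check from the host could look like this (10.10.0.186 is just the example IP from above; the postgres user and the Welcome password come from the docker create command in the question):
psql -h 10.10.0.186 -p 5432 -U postgres
# in pgAdmin: host 10.10.0.186, port 5432, username postgres, password Welcome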
To publish a port for our container, we’ll use the --publish flag (-p for short) on the docker run command. The format of the --publish command is [host port]:[container port]. So if we wanted to expose port 8000 inside the container to port 3000 outside the container, we would pass 3000:8000 to the --publish flag.
FYI, here is a diagram of the Docker network topology:
You should try to connect to:
host: 0.0.0.0
port: 5432
while your docker container is up and running.

kubectl port-forward to another endpoint

Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward; you can also specify --address, but that strictly requires an IP address.
The older answer is valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port forward to it.
kubectl port-forward test-port-forward 8080:8080
Short answer, No.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
port-forward in Kubernetes works only within the cluster; you can forward traffic that hits the specified local port to a Deployment, a Service, or a Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies what to listen on: 0.0.0.0 means everything, localhost is accepted as a name, and you can also set an IP address on which it will listen.
Documentation is available here, you can also read Use Port Forwarding to Access Applications in a Cluster.
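For example, to make a Service reachable from other machines through the host running kubectl (the service name and ports here are placeholders):
# listen on all interfaces of the machine running kubectl and
# forward its port 8888 to port 5432 of the Service
kubectl port-forward --address 0.0.0.0 service/my-postgres 8888:5432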
One workaround you can use if you have an SSH server somewhere on the Internet is to SSH to your server from your pod, port-forwarding in reverse:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/

Container in dind access another container in the same Kubernetes pod

In a Kubernetes pod, I have:
a busybox container running inside a dind container
a fluentd container
I understand that if dind wants to access fluentd, it simply needs to connect to localhost:9880. But what if busybox wants to access fluentd, as depicted in the diagram below? Which address should I use?
These tips may help you:
1. First approach
From inside the docker:latest container, where you were trying to access it originally, busybox will be available on whatever hostname is set for the docker:dind container. In this case, you used --name dind, therefore curl dind:busybox_port would work.
Then, from inside the docker:dind container (busybox), you can connect to fluentd, which will be available at localhost:9880.
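A quick reachability test from inside busybox might look like this (assuming fluentd's HTTP input is what listens on 9880; busybox's built-in printf and nc are enough for the check):
# send a bare HTTP request to fluentd; any HTTP response proves the port is reachable
printf 'GET / HTTP/1.0\r\n\r\n' | nc localhost 9880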
2. Second approach
Another approach is to use EXPOSE <port> [<port>/<protocol>...], and in this case we assume that busybox and fluentd are on different networks.
You can also specify this within a docker run command, such as:
$ docker run --expose=1234 busybox
But EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen you need to publish the ports.
Publish ports and map them to the host
To publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
$ docker run -p 80:80/tcp -p 80:80/udp busybox
And then connect from busybox to fluentd using localhost:9880
You can find more information here: docker-in-docker.
I hope it helps.

Docker establish tcp communication between host and container

I have a Docker image that implements a TCP node via Twisted, and I would like to establish communication to and from the host.
On the host I start netcat:
nc -l -p 6789
If I run the container with
docker run -it -p 6789:6789 image_name
I get
Bind for 0.0.0.0:6789 failed: port is already allocated
If I try the opposite order, i.e. docker run first and then starting netcat on the host, I get
Error: Couldn't setup listening socket (err=-3)
Is there a way to bind an already-allocated host port to the container?
The problem is that you are using the same host port both to run nc -l -p 6789 and to map the container's port (-p 6789:6789). Try changing one of them, as in the sketch below.
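A minimal sketch of both options (6790 and 7789 are arbitrary free ports):
# Option 1: keep netcat on 6789 and publish the container's port on a different host port
nc -l -p 6789
docker run -it -p 6790:6789 image_name
# Option 2: keep -p 6789:6789 and move the host listener to another port
nc -l -p 7789
docker run -it -p 6789:6789 image_name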