How to make an outgoing TCP connection from a Docker container?

My Go application makes TLS connections via tls.Dial() to exchange data.
It works fine when run from the host.
But the outgoing connection doesn't seem to work when the app is run from a Docker container. The app hangs indefinitely.
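For context, the dialing code is roughly of this shape (a minimal sketch; the address and TLS settings below are placeholders, not the actual code from the app):
package main

import (
    "crypto/tls"
    "fmt"
    "log"
)

func main() {
    // Placeholder address; the real host/port are not shown in the question.
    conn, err := tls.Dial("tcp", "example.com:2500", &tls.Config{})
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    // Exchange data over the TLS connection.
    fmt.Fprintln(conn, "hello")
}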
Note 1: Same behavior when using docker run -p $(docker-machine ip):2500:2500 ...
Note 2: The VM has no extra port-forwarding settings beyond the defaults that come with docker-machine's default VM.
The Docker image is built with this Dockerfile:
FROM golang:latest
RUN mkdir -p "$GOPATH/src/path/to/app"
# Install dependencies
RUN go get github.com/path/to/dep
VOLUME "$GOPATH/src/path/to/app"
EXPOSE 2500
WORKDIR "$GOPATH/src/path/to/app"
CMD ["go", "run", "main.go"]
Host is OS X running docker-machine.
Question
How can I make the outgoing TCP connection work?

You are either using boot2docker or docker-machine (since you are running Docker on OS X). If you are using boot2docker, you have to forward the ports on VirtualBox as well as in Docker; have a look at this blog post:
https://fogstack.wordpress.com/2014/02/09/docker-on-osx-port-forwarding/
If you are using docker-machine, you have to connect to the IP assigned by docker-machine, not localhost; have a look at this post:
https://github.com/docker/machine/issues/710
I see now that you are using docker-machine specifically, so the post about docker-machine should answer your question.
Edit: I misunderstood the question. You are trying to make an outgoing connection on a forwarded port, which isn't necessary: by default Docker can make outgoing connections on any port. Port forwarding is for incoming connections only. Please try again without specifying any ports to forward. My suspicion is that you are trying to make the outgoing connection on the incoming (forwarded) port.
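For example (the image name here is illustrative), no -p flag is needed at all for the container to dial out:
docker run --rm myapp
# outbound connections to any remote host/port still work; -p only affects inbound traffic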

I've just had exactly the same problem: I was unable to connect out at all.
I restarted the container, and suddenly outgoing connections worked fine. It's possible that the container had survived an update of Docker?
Currently using Docker version 18.09.3, build 774a1f4

Related

Run a K3S server in a docker container, and connect a K3S agent in another docker container

I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the simplest version of that 'manually'. I want to create a server with something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent with something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? With nearly everything I try, the agent complains that port 6444 is already in use, even if I disable as much of the server as possible with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik.
Feel free to disable literally everything other than the server and the agent; I'm trying to make this ultra-simple, but I'm just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
Run both containers without network isolation using --net=host; or publish the server's API port to the host with -p and use the host's IP in the agent container; or use docker-compose.
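Another way to let the containers see each other is a shared user-defined network, roughly like this (a sketch; the network and container names are illustrative, not from the question):
docker network create k3s-net
docker run -d --name k3s-server --network k3s-net -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
docker run -d --name k3s-agent --network k3s-net -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://k3s-server:6443 rancher/k3s:latest agent
The agent can then reach the server by its container name (k3s-server) instead of localhost, so it no longer collides with its own ports.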
A working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html

Unable to access application through minikube tunnel

I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I try to check it in the browser it doesn't work. I've also tried Postman and curl; neither works.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea as to why I'm not being able to access my application even though everything seems to be running correctly?
Your service is probably bound to localhost. Minikube starts the cluster in a VM or docker (depending on the driver you are using) that is bound to an external IP, $(minikube ip).
When you run minikube tunnel, you're tunneling from the minikube cluster's external IP to the internal IP of the load balancer; the LoadBalancer service's External IP in Kubernetes goes from "pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the above command you are sending the request to minikube's IP, not localhost. What I do to make this work is an ssh tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the LB listener on port 80 in minikube's cluster to port 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map local port 80 instead, you will need to add sudo.
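With that tunnel up, the application should then answer locally (assuming, as above, that the service listens on port 80 behind the LB):
curl -v http://localhost:8008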
If the version of ssh on your system (the one in your path) is less than 8.0, 'minikube tunnel' will silently fail to create the ssh tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator, and type 'where.exe ssh'. Navigate to that location in windows explorer, and right-click on 'ssh.exe'. Choose Properties->Details to see the version.
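A quicker check from the same prompt (this is standard OpenSSH behavior) is to print the version directly:
ssh -V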
If this is less than version 8.0 you must upgrade that to at least version 8.0 to prevent this silent failure of ssh by 'minikube tunnel'.
After upgrading ssh, ensure that the newer version is the one that will be executed by using the 'where.exe' command again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell, or (better) reboot the system, so that all processes pick up the path changes.
Then try 'minikube tunnel' again. When it is working, you should see an ssh instance in the task manager for each tunnel that minikube creates.
In my case minikube service <serviceName> solved this issue.
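If you just want the resulting URL printed rather than a browser window opened, the --url flag does that:
minikube service <serviceName> --url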
For further details, look in the minikube docs.

Docker Tooling for Eclipse - how to connect to docker daemon running inside VM

I have a Docker daemon/engine running inside a guest (Ubuntu) virtual machine,
and as per the Docker Tooling for Eclipse instructions I have downloaded and set up the plugin in Eclipse Mars on my host Mac OS machine.
How do I connect to Docker running in the guest VM from the IDE on the host machine?
As per the instructions, I need to enter TCP and authentication details, so how do I get these to set up the connection?
I tried the guest OS IP (i.e. tcp://127.0.0.1:2376, the localhost IP from the ifconfig output) but was not able to connect.
Here are the steps I used to get Docker Tooling working in Eclipse Neon on Windows.
1. Open the Docker Quickstart Terminal
2. Execute docker-machine ls
3. Copy the URL (e.g. tcp://192.168.99.100:2376)
4. Click the Add Connection button in the toolbar of the Docker Explorer
5. Provide a Connection name
6. Select TCP Connection
7. Paste the URL copied above into the URI edit box
8. Change tcp to https in the edit box
9. Select Enable authentication
10. Set the path to C:\Users\username\.docker\machine\certs
11. Click Test Connection to verify
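The same details can also be read straight from docker-machine (assuming the machine is named default): DOCKER_HOST gives the URI, and DOCKER_CERT_PATH points to a certificate directory that should also work for the authentication path:
docker-machine env default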
There are two parts to this. First, enabling the TCP socket (which I'll answer). Then, setting TLS authentication on the socket (which I'll link to but won't cover). The first part should get you up and running.
You'll need to edit the DOCKER_OPTS settings in /etc/default/docker in the VM. Edit this file and set DOCKER_OPTS to something like:
DOCKER_OPTS="-H tcp://0.0.0.0:2376 -H unix://"
Then, restart Docker (sudo service docker restart). This should get you a TCP connection that you can put in your Eclipse settings as:
tcp://10.0.2.15:2376
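You can check from the host that the socket is reachable before pointing Eclipse at it (using the same IP as above):
docker -H tcp://10.0.2.15:2376 info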
The second part (which is optional at this point) would be setting up the CA and certificates per https://docs.docker.com/engine/articles/https/. But I'd actually recommend just installing Docker Machine and provisioning your VM that way as it will create the needed certificates for you. Then, if your machine was named dev, you just point the authentication dir to ~/.docker/machine/machines/dev.
If the Docker daemon is running on Windows (i.e. Docker Desktop in the taskbar) rather than inside a VM, just get the URI from its context-menu settings. In the Eclipse Docker Tooling perspective, you can connect to a running Docker daemon simply by providing that URI.

Dokku: Expose two ports from an application

I am trying to deploy a Scala-based application to dokku; the application runs an HTTP server and a customised sshd server.
The problem is that dokku seems to support only one port per application.
I need dokku to expose both of my application's ports to the web.
In Docker this is possible and quite straightforward, but when I implement the same technique in the dokku file, I get an error.
Any suggestions on allowing two ports to be accessible?
Since this is, after all, docker, you can use an ambassador...
You will need a line like:
docker run -t -i --link mysql:mysql --name mysql_ambassador -p 3306:3306 ctlc/ambassador
Replace 3306 with your port and mysql with your container name (from docker ps)
See https://www.ctl.io/developers/blog/post/deploying-multi-server-docker-apps-with-ambassadors
NOTE: Make sure you docker pull svendowideit/ambassador:latest before...

port redirect to docker containers by hostname

I want to set up and serve multiple sites from one server:
1. http://www.example.org => node.js-www (running on port 50000)
2. http://files.example.org => node.js-files (running on port 50001)
Until now I have only found out how to have Docker do port redirection when using static IPs.
Is it actually possible to use Docker for port redirection via hostname?
I use a free Amazon EC2 instance.
Thanks
Bo
EDIT:
I want to have multiple Node.js applications running on the same port, each serving a different hostname.
As far as I'm aware, Docker does not have such functionality built in, nor should it.
To accomplish what you're trying to do you'd probably need some sort of reverse proxy, so node.js or nginx would do. Bouncy might be a good option: https://github.com/substack/bouncy
There is a great docker project on GitHub called nginx-proxy by jwilder.
This allows you to create a Docker container that acts as a reverse proxy by mapping only its ports 80/443 to the host, instead of the other containers'. Then, for every new web container you create, all you have to do is provide an environment variable VIRTUAL_HOST=some.domain.com.
An example:
Create a new nginx-proxy container
docker run -d -p 80:80 --net shared_hosting -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Create a container for each website. For example:
docker run -d -p 80 --net shared_hosting -e VIRTUAL_HOST=hello1.domain.com tutum/hello-world
docker run -d -p 80 --net shared_hosting -e VIRTUAL_HOST=drupal.domain.com drupal
You need to make sure that the hostnames you own are configured in DNS to point to the server that runs the Docker containers. In this example, I will add them to the /etc/hosts file instead:
echo "127.0.0.1 hello1.domain.com drupal.domain.com" >> /etc/hosts
Navigate to http://hello1.domain.com and then to http://drupal.domain.com, and see that they both use port 80 but give you different pages.
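One step is assumed in the commands above: the shared_hosting network has to exist before the containers reference it. It can be created with:
docker network create shared_hosting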
An important note about this setup: as you noticed, I added the --net argument. All containers that are part of the shared hosting (the proxy and the websites) must be on the same virtual network, which can be set with the --net (or --network) argument to docker run. This matters especially when you use docker-compose to create the containers, because docker-compose creates its own virtual network, which can make one container unreachable from another; so make sure the network is explicitly defined in the docker-compose.yml file.
Hope it helps.
I used Varnish in a Docker container as my reverse proxy.
It's on the Docker index:
https://index.docker.io/u/sysdia/docker-varnish/
I know this is an old question, but I ran across it and wanted to point out that there are much cleaner ways to do what was requested. Since you are using AWS, you can have each of your two hostnames pointing at its own load balancer (ELB) in Route53. You could then deploy your container into ECS, for example, listening on both ports. Each of those load balancers can redirect traffic to the appropriate listening port. Now you have accomplished what you want, and if your traffic becomes too heavy or imbalanced, you can easily split the tasks into two different ECS clusters so they can scale independently.