Docker: java.net.ConnectException: Connection refused - Application running on port 8083 is not able to access another application on port 3000

I have to consume an external rest API (using restTemplate.exchange) with Spring Boot. My rest API is running on port 8083 with the URL http://localhost:8083/myrest (Docker command: docker run -p 8083:8083 myrest-app).
The external API is available as a public docker image, and after running the commands below I am able to pull and run it locally.
docker pull dockerExternalId/external-rest-api
docker run -d -p 3000:3000 dockerExternalId/external-rest-api
a) If I enter the external rest API URL, for example http://localhost:3000/externalrestapi/testresource, directly in Chrome, then I get valid JSON data.
b) If I invoke it from my myrest application in Eclipse (Spring Boot application), I still get a valid JSON response. (I am using the Windows platform to test this.)
c) But if I run it on Docker and execute the myrest service (say http://localhost:8083/myrest), then I am facing java.net.ConnectException: Connection refused
More details :
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:3000/externalrestapi/testresource": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
P.S - I am using Docker on Windows.

# The problem
You run with:
docker run -p 8083:8083 myrest-app
But you need to run like:
docker run --network "host" --name "app" myrest-app
So passing the flag --network with the value host will allow your container to access your computer's network.
Please ignore my first approach and instead use the better alternative below, which does not expose the container to the entire host network. It is possible to make the host-network approach work, but it is not a best practice.
# A Better Alternative
Create a network to be used by both containers:
docker network create external-api
Then run both containers with the flag --network external-api.
docker run --network "external-api" --name "app" -p 8083:8083 myrest-app
and
docker run -d --network "external-api" --name "api" -p 3000:3000 dockerExternalId/external-rest-api
The -p flag to publish the ports of the api container is only necessary if you want to access it from your computer's browser; otherwise leave it out, because it isn't needed for two containers to communicate on the external-api network.
TIP: docker pull is not necessary, since docker run will pull the image if it is not found on your computer.
Let me know how it went...
# Call the External API
In both solutions I have added the --name flag so that we can reach the other container on the network.
So to reach the external API from your myrest app, you need to use the URL http://api:3000/externalrestapi/testresource.
Notice how I have replaced localhost with api, which matches the value of the --name flag in the docker run command for your external API.
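A quick way to confirm that the name resolves inside the network before touching any Spring code is to call the external API from inside the running app container. A minimal sketch, assuming a shell and curl are available in the myrest-app image:
docker exec -it app curl http://api:3000/externalrestapi/testresource
If that returns the same JSON you see in the browser, the RestTemplate call only needs its base URL switched from localhost to api.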

From your myrest-app container, if you try to access http://localhost:3000/externalrestapi/testresource, it will try to access port 3000 of the same myrest-app container.
That is because each container behaves like a separate running operating system, with its own network interface, file system, etc.
Docker is all about isolation.
There are three ways you can access an API in another container:
Instead of localhost, provide the IP address of the external host machine (i.e. the IP address of your machine on which docker is running); see the sketch after this list.
Create a docker network and attach these two containers to it. Then you can provide the container_name instead of localhost.
Use --link while starting the container (deprecated).
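A rough sketch of the first option, assuming a Docker Desktop for Windows setup (where the special name host.docker.internal usually resolves to the host machine); the environment variable name is made up for illustration and would have to be read by your Spring configuration:
docker run -p 8083:8083 -e EXTERNAL_API_BASE_URL=http://host.docker.internal:3000 myrest-app
# alternatively, use the machine's own LAN address found via ipconfig, e.g. http://192.168.1.50:3000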

Related

Connect to PostgreSQL from Flask app in another docker container

On a virtual machine I have 2 docker containers running with the names <postgres> and <system> that run on the network with the name <network>. I can't change options in these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But, when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba; the problem is in connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried to specify the <postgres> container IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?
You first need to create a network that the containers will use to communicate with each other. You can do that with:
docker network create <example> #---> you can name it whatever you want
Then you need to connect both containers to the network that you made.
docker run -d --net example --name <postgres_container> <postgres_image>
docker run -d --net example --name <flask_container> <flask_image>
You can read more about the docker network in its documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file to deploy the services. You can add one more top-level section alongside the services, where you define the network that the deployed services should use. That network also needs to be referenced in each service definition; this lets the internal DNS engine that docker-compose creates in the background discover all the services on the network by their service names. A minimal compose sketch follows the links below.
A bridge network may be a good driver to use here.
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
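A minimal docker-compose sketch of that idea, extending the compose file from the question (the network name backend is just a placeholder):
version: '3'
services:
  app:
    build: ./app
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
    ports:
      - "5000:5000"
    networks:
      - backend
networks:
  backend:
    driver: bridge
    # if the database already lives on a pre-existing network, declare that network here
    # with "external: true" (under its real name) instead of letting compose create a new one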

Docker Container - Unable to access on the browser

I am new to docker and I've been trying to run pgAdmin through docker. I ran the following command:
docker run -p 5555:80 --name pgadmin -e PGADMIN_DEFAULT_EMAIL="user@domain.com" -e PGADMIN_DEFAULT_PASSWORD="***" dpage/pgadmin4
The container is currently running, but I'm not able to access it through the browser (localhost:5555). It keeps loading and gives me the error "Secure connection failed". Where am I making a mistake?
PS:Please do let me know if any further information is needed to answer/understand my question.
I can replicate your issue if I use HTTPS with port 80.
At the time of writing this answer, the dpage/pgadmin4 with the latest tag (which is the one you're using) exposes ports 80 and 443 - try using the other, secure one instead.
Your line should be:
docker run -p 5555:443 --name pgadmin -e PGADMIN_DEFAULT_EMAIL="user@domain.com" -e PGADMIN_DEFAULT_PASSWORD="***" dpage/pgadmin4
My guess is you're using something like HTTPS Everywhere and it forces you to use HTTPS on an unsecured port, thus giving you this warning.

REST request from one docker container to another fails

I have two applications, one of which has a RESTful interface that is used by the other. Both are running on the same machine.
Application A runs in a docker container. I am running it using the command line:
docker run -p 40000:8080 --name AppA image1
When I test Application B outside a docker container (in other words, before it is dockerized) Application B successfully executes all RESTful requests and receives responses without problems.
Unfortunately, when I dockerize and run Application B within a container:
docker run -p 8081:8081 --name AppB image2
whenever I attempt to send a RESTful request to Application A, I get the following:
Connect to localhost:40000 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused
Of course, I also tried making Application B connect using my machine's IP address. When I do that, I get the following failure:
Connect to 192.168.1.101:40000 failed: No route to Host
Has anyone seen this kind of behavior before? What causes an application that communicates perfectly well with a dockerized application while running outside a container to fail to communicate with that same dockerized application once it is itself dockerized?
Someone please advise...
Simply link B to A: docker run -p 8081:8081 --link AppA --name AppB image2. Then you can access the REST service using AppA:8080.
The reason is that Docker containers run on their own subnet (normally 172.17.0.0/16) and cannot access the network that your host is on. Also, localhost would be the container itself, not the host.
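With the link in place, a quick sanity check from inside the AppB container; a sketch assuming curl exists in image2 and using a made-up path:
docker exec -it AppB curl http://AppA:8080/some/endpoint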

How do I set up linkage between Docker containers so that restarting won't break it?

I have a few Docker containers running like:
Nginx
Web app 1
Web app 2
PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:
Nginx --- link ---> Web app 1
Nginx --- link ---> Web app 2
Web app 1 --- link ---> PostgreSQL
Web app 2 --- link ---> PostgreSQL
This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them. What I do is remove the web app containers, set up new containers and start them.
For the web app containers, their IP addresses at first would be something like:
172.17.0.2
172.17.0.3
And after I replace them, they will have new IP addresses:
172.17.0.5
172.17.0.6
Now, those exposed environment variables in the Nginx container are still pointing to the old IP addresses. Here comes the problem. How do I replace a container without breaking linkage between containers? The same issue will also happen to PostgreSQL. If I want to upgrade the PostgreSQL image version, I certainly need to remove it and run the new one, but then I need to rebuild the whole container graph, so this is not ideal for real-life server operation.
The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).
We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).
1) Use dynamic DNS
The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.
We started with SkyDock. It works with two docker containers, the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
An evolution of this (which we haven't tried) would be to set up etcd or similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
2) Use the docker bridge IP
When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well-known address.
When replacing a container with a new version, just make the new container publish the same port on the same IP.
This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot listen on port 3306 on the docker0 bridge), etcetera... so our current favorite is option 1.
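A minimal sketch of option 2, assuming the docker0 bridge sits at 172.17.42.1 as mentioned elsewhere on this page (newer installs often use 172.17.0.1) and using a plain postgres image purely as an example:
ip addr show docker0    # confirm the bridge address on the host
docker run -d --name db -p 172.17.42.1:5432:5432 postgres
# replacing the container republishes the same address:port, so nothing downstream changes
docker rm -f db
docker run -d --name db -p 172.17.42.1:5432:5432 postgres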
Links are for a specific container, not based on the name of a container. So the moment you remove a container, the link is disconnected and the new container (even with the same name) will not automatically take its place.
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers.
This feature currently doesn't support aliases.
You can use an ambassador container. But do not link the ambassador container to your client, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the docker host (typically 172.17.42.1). Example:
postgres volume:
$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
postgres-container:
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
ambassador-container for postgres:
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
Now you can start a postgresql client container without linking the ambassador container and access postgresql on the gateway host (typically 172.17.42.1):
$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres:
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
postgres=#
postgres=# select 6*7 as answer;
answer
--------
42
(1 row)
postgres=#
Now you can restart the ambassador container without having to restart the client.
If anyone is still curious, you have to use the host entries in the /etc/hosts file of each docker container and should not depend on the environment variables, as they are not updated automatically.
There will be a hosts file entry for each of the linked containers under its link alias (the environment variables, by contrast, use names of the form LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP, etc.); a short sketch of inspecting those entries follows the quoted notes below.
The following is from the docker docs:
Important notes on Docker environment variables
Unlike host entries in the /etc/hosts file, IP addresses stored in the
environment variables are not automatically updated if the source
container is restarted. We recommend using the host entries in
/etc/hosts to resolve the IP address of linked containers.
These environment variables are only set for the first process in the
container. Some daemons, such as sshd, will scrub them when spawning
shells for connection.
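A minimal sketch of inspecting those entries, reusing the training images that appear in another answer below:
docker run -d --name db training/postgres
docker run -d --name web --link db:db training/webapp
docker exec web cat /etc/hosts    # shows a line mapping the link alias "db" to the db container's current IP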
This was included in the experimental build of docker 3 weeks ago, with the introduction of services: https://github.com/docker/docker/blob/master/experimental/networking.md
You should be able to get a dynamic link in place by running a docker container with the --publish-service <name> argument. This name will be accessible via DNS. This is persistent across container restarts (as long as you restart the container with the same service name, of course).
You may use docker links with names to solve this.
The most basic setup would be to first create a named database container:
$ sudo docker run -d --name db training/postgres
Then create a web container connecting to db:
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
With this, you don't need to manually connect containers with their IP addresses.
With the OpenSVC approach, you can work around this by:
use a service with its own ip address/dns name (the one your end users will connect to)
tell docker to expose ports to this specific ip address ("--ip" docker option)
configure your apps to connect to the service ip address
each time you replace a container, you are sure that it will connect to the correct ip address.
Tutorial here => Docker Multi Containers with OpenSVC
Don't miss the "complex orchestration" part at the end of the tutorial, which can help you start/stop containers in the correct order (1 postgresql subset + 1 webapp subset + 1 nginx subset).
The main drawback is that you expose the webapp and PostgreSQL ports on a public address, when actually only the nginx TCP port needs to be exposed publicly.
You could also try the ambassador method of having an intermediary container just for keeping the link intact (see https://docs.docker.com/articles/ambassador_pattern_linking/ for more info).
You can bind the connection ports of your images to fixed ports on the host and configure the services to use them instead.
This has its drawbacks as well, but it might work in your case.
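A minimal sketch of that idea, reusing the postgres image from the ambassador example above; the fixed host port is the contract the other services rely on:
docker run -d --name postgres -p 5432:5432 paintedfox/postgresql
# web app 1 and 2 connect to <docker-host-ip>:5432; a replacement container
# published on the same fixed port keeps that address valid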
Another alternative is to use the --net container:$CONTAINER_ID option.
Step 1: Create "network" containers
docker run --name db_net ubuntu:14.04 sleep infinity
docker run --name app1_net --link db_net:db ubuntu:14.04 sleep infinity
docker run --name app2_net --link db_net:db ubuntu:14.04 sleep infinity
docker run -p 80 -p 443 --name nginx_net --link app1_net:app1 --link app2_net:app2 ubuntu:14.04 sleep infinity
Step 2: Inject services into "network" containers
docker run --name db --net container:db_net pgsql
docker run --name app1 --net container:app1_net app1
docker run --name app2 --net container:app2_net app2
docker run --name nginx --net container:nginx_net nginx
As long as you do not touch the "network" containers, the IP addresses of your links should not change.
A network-scoped alias is what you need in this case. It's a rather new feature, which can be used to "publish" a container providing a service for the whole network, unlike link aliases, which are accessible only from one container.
It does not add any kind of dependency between containers; they can communicate as long as both are running, regardless of restarts, replacement, and launch order. It uses DNS internally, I believe, instead of /etc/hosts.
Use it like this: docker run --net=some_user_defined_nw --net-alias postgres ... and you can connect to it using that alias from any container on the same network.
It does not work on the default network, unfortunately; you have to create a network with docker network create <network> and then use it with --net=<network> for every container (compose supports it as well).
Besides a container being down and hence unreachable by its alias, multiple containers can also share an alias, in which case it is not guaranteed to resolve to the right one. But in some cases that can help with a seamless upgrade.
It's all not very well documented as of yet; it's hard to figure out just by reading the man page.
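A minimal sketch of the alias in action (the container names and image tag are only examples):
docker network create app_net
docker run -d --name pg1 --net=app_net --net-alias postgres postgres:9.6
# clients on app_net connect to "postgres:5432"; the database can be replaced later
# without touching them, as long as the new container keeps the same alias
docker rm -f pg1
docker run -d --name pg2 --net=app_net --net-alias postgres postgres:9.6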

port redirect to docker containers by hostname

I want to set up serving multiple sites from one server:
1. http://www.example.org => node.js-www (running on port 50000)
2. http://files.example.org => node.js-files (running on port 50001)
Until now I have only found out how to have docker do port redirection when using static IPs.
Is it actually possible to use docker for port redirection via hostname?
I use a free Amazon EC2 instance.
Thanks
Bo
EDIT:
I want to have multiple node applications running on the same port, each serving a different hostname.
As far as I'm aware, docker does not have such functionality built in, nor should it.
To accomplish what you're trying to do you'd probably need some sort of reverse proxy, so node.js or nginx would do. Bouncy might be a good option: https://github.com/substack/bouncy
There is a great docker project on GitHub called nginx-proxy by jwilder.
This lets you create a docker container that acts as a reverse proxy, mapping only its ports 80/443 to the host instead of those of the other containers. Then, for every new web container you create, all you have to do is provide an environment variable VIRTUAL_HOST=some.domain.com.
An example:
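The shared network referenced by the commands below has to exist first; a one-line sketch (the name shared_hosting is taken from the commands in this answer):
docker network create shared_hosting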
Create a new nginx-proxy container
docker run -d -p 80:80 --net shared_hosting -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Create a container for each website. For example:
docker run -d -p 80 --net shared_hosting -e VIRTUAL_HOST=hello1.domain.com tutum/hello-world
docker run -d -p 80 --net shared_hosting -e VIRTUAL_HOST=drupal.domain.com drupal
You need to make sure that the hostnames you own are configured in DNS to point to the server that runs the docker containers. In this example, I will add them to the /etc/hosts file:
echo "127.0.0.1 hello1.domain.com drupal.domain.com" >> /etc/hosts
Navigate to http://hello1.domain.com and then to http://drupal.domain.com, and see that they both use port 80 but serve different pages.
An important note about this setup: as you noticed, I have added the --net argument. This is because all containers you want to be part of the shared hosting (the proxy and the websites) must be on the same virtual network (defined with the --net or --network argument to docker run). This matters especially when you use docker-compose to create containers, because docker-compose creates its own virtual network, which can make one container unreachable from another; so make sure the network is explicitly defined in the docker-compose.yml file.
Hope it helps.
I used Varnish as a docker container that worked as my reverse proxy.
It's on the Docker index:
https://index.docker.io/u/sysdia/docker-varnish/
I know this is an old question, but I ran across it and wanted to point out that there are much cleaner ways to do what was requested. Since you are using AWS, you can have each of your two hostnames point at its own load balancer (ELB) in Route 53. You could then deploy your container into ECS, for example, listening on both ports. Each of those load balancers can redirect traffic to the appropriate listening port. Now you have accomplished what you want, and if your traffic becomes too heavy or imbalanced, you can easily split the tasks into two different ECS clusters so they can scale independently.