Connection to MongoDB on Travis refused

I'm trying to set up Mongo on Travis to be used for my integration tests. My Travis configuration that initializes MongoDB is:
env:
  global:
    - MONGODB_URI: mongodb://fillrx_test:test@127.0.0.1:27017/fillrx_test_db
before_install:
  - sleep 15
  - mongo fillrx_test_db --eval 'db.createUser({user:"fillrx_test", pwd:"test", roles:["readWrite"]});'
  - docker build -t jeremycod/api-test -f ./server/Dockerfile.dev ./server
script:
  - docker run --env MONGODB_URI -e CI=true jeremycod/api-test npm test
However, this fails on Travis with the connection refused error:
1) Create address endpoint /api/v1/address/user/userId
"before all" hook: connectToTestDB for "Create address with correct input":
MongoNetworkError: failed to connect to server [127.0.0.1:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1146:16) {
name: 'MongoNetworkError'
}]
UPDATE:
The solution to this problem is to run the docker container with "--network host", e.g.
- docker run --network host --env MONGODB_URI -e CI=true jeremycod/api-test npm test
This approach binds the docker container directly to the Docker host's network, so from inside the container MongoDB is available at:
mongodb://127.0.0.1:27017/mongo_db_name
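A quick way to sanity-check host networking is to run the ping command from a throwaway container (a sketch that assumes the mongo image's bundled shell; the db name is the one from the config above):
docker run --rm --network host mongo mongo 127.0.0.1:27017/fillrx_test_db --eval 'db.runCommand({ ping: 1 })'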

Related

Hasura container cannot connect to postgres DB?

I am running a Postgres DB in Docker with:
docker run -d -p 5432:5432 --name db --env POSTGRES_PASSWORD=postgres --env POSTGRES_USER=postgres postgres
I'm trying to connect Hasura to it with this command:
docker run --name hasura -p 5002:8082 --env HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgres@localhost:5432 -e HASURA_GRAPHQL_ENABLE_CONSOLE=true hasura/graphql-engine:latest
but I get this response:
{"type":"startup","timestamp":"2021-04-20T05:49:08.548+0000","level":"info","detail":{"kind":"server_configuration","info":{"live_query_options":{"batch_size":100,"refetch_delay":1},"transaction_isolation":"ISOLATION LEVEL READ COMMITTED","plan_cache_options":{"plan_cache_size":4000},"enabled_log_types":["http-log","websocket-log","startup","webhook-log"],"server_host":"HostAny","enable_allowlist":false,"log_level":"info","auth_hook_mode":null,"use_prepared_statements":true,"unauth_role":null,"stringify_numeric_types":false,"enabled_apis":["metadata","graphql","config","pgdump"],"enable_telemetry":true,"enable_console":true,"auth_hook":null,"jwt_secret":null,"cors_config":{"allowed_origins":"*","disabled":false,"ws_read_cookie":null},"console_assets_dir":null,"admin_secret_set":false,"port":8080}}}
{"type":"startup","timestamp":"2021-04-20T05:49:08.548+0000","level":"info","detail":{"kind":"postgres_connection","info":{"retries":1,"database_url":"postgres://postgres:...#localhost:5432"}}}
{"type":"pg-client","timestamp":"2021-04-20T05:49:08.548+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
{"type":"pg-client","timestamp":"2021-04-20T05:49:08.548+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(1)."}}
{"type":"startup","timestamp":"2021-04-20T05:49:08.548+0000","level":"error","detail":{"kind":"catalog_migrate","info":{"internal":"could not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432?\n","path":"$","error":"connection error","code":"postgres-error"}}}
{"internal":"could not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432?\n","path":"$","error":"connection error","code":"postgres-error"}
I am running Docker Desktop v3.3.1 on Windows 10.
Any help on this would be appreciated.
Thanks.
As anemyte mentioned, changing localhost to host.docker.internal worked for me.
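For reference, the working command changes only the host in the connection URL; on Docker Desktop for Windows, host.docker.internal resolves to the host machine from inside a container:
docker run --name hasura -p 5002:8082 --env HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:postgres@host.docker.internal:5432 -e HASURA_GRAPHQL_ENABLE_CONSOLE=true hasura/graphql-engine:latest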

GitLab CI: docker container cannot connect to MongoDB service

In a GitLab CI pipeline I need to connect my own C++ code, running in a docker container, to a MongoDB instance running as a service, but I cannot connect.
This is a minimal gitlab-ci.yml example showing the problem:
stages:
  - connect1
  - connect2
image: docker:latest
variables:
  MONGOIMAGE: "mongo:4.2.3-bionic"
  MONGO_INITDB_ROOT_USERNAME: "root"
  MONGO_INITDB_ROOT_PASSWORD: "geheim"
connect1:
  stage: connect1
  services:
    - name: $MONGOIMAGE
  image: mongo
  script:
    - mongo --host mongo --username $MONGO_INITDB_ROOT_USERNAME --password $MONGO_INITDB_ROOT_PASSWORD --eval "db.help()"
connect2:
  stage: connect2
  services:
    - name: $MONGOIMAGE
  script:
    - docker run --rm mongo mongo --host mongo --username $MONGO_INITDB_ROOT_USERNAME --password $MONGO_INITDB_ROOT_PASSWORD --eval "db.help()"
The error message of connect2 is that the host cannot be found:
connecting to: mongodb://mongo:27017/?compressors=disabled&gssapiServiceName=mongodb
2020-06-04T16:20:47.614+0000 E QUERY [js] Error: couldn't connect to server mongo:27017, connection attempt failed: HostNotFound: Could not find address for mongo:27017: SocketException: Host not found (authoritative) :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2020-06-04T16:20:47.615+0000 F - [main] exception: connect failed
2020-06-04T16:20:47.615+0000 E - [main] exiting with code 1
I already tried with --host localhost but that is not working either. How can I achieve that a container started with docker run, as in connect2, can connect to the MongoDB service?
Finally I found a solution:
connect2:
  stage: connect2
  services:
    - name: $MONGOIMAGE
  script:
    - ping mongo -c 5
    - docker run --rm --net host --add-host mongo:`cat /etc/hosts | grep mongo | awk '{print $1}'` mongo mongo --host mongo --username $MONGO_INITDB_ROOT_USERNAME --password $MONGO_INITDB_ROOT_PASSWORD --eval "db.help()"
Docker is a client/server application. You're running the client, but there is no server, so you need to run a Docker daemon using Docker-in-Docker.
You can see an example GitLab CI config here: https://stackoverflow.com/a/61106578/6214034
And the relevant documentation: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
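For orientation, a minimal Docker-in-Docker job would look roughly like this (a sketch assuming a TLS-disabled dind service; note that containers you then docker run live on the dind daemon's network, so the mongo service alias still has to be made reachable, e.g. with the --add-host trick above):
connect2:
  stage: connect2
  image: docker:latest
  services:
    - docker:dind
    - name: $MONGOIMAGE
  variables:
    DOCKER_HOST: "tcp://docker:2375"
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker info  # confirms the client can reach the dind daemon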

Docker container can't connect circleCI postgres database

I am trying to set up a CircleCI test. I have created a database in CircleCI, and I have a docker container which needs to connect to the database, but it can't. Inside my docker container is a script which, before it does anything else, runs pg_isready; this cannot connect to the database. Here's my CircleCI job definition:
postgres_tests:
  docker:
    - image: circleci/python:3.7
    - image: circleci/postgres:9.6.2-alpine
      environment:
        POSTGRES_USER: postgres
        POSTGRES_DB: my_test
  steps:
    - setup_remote_docker:
        docker_layer_caching: true
    - attach_workspace:
        at: /tmp/workspace
    - run:
        name: Install awscli docker-squash
        working_directory: /
        command: sudo pip3 install awscli docker-squash
    - run: eval `aws ecr get-login --no-include-email --region eu-west-1`
    - checkout
    - run: echo 'export PATH=/usr/lib/postgresql/9.6/bin/:$PATH' >> $BASH_ENV
    - run: sudo apt-get update && sudo apt-get install -y postgresql-client
    - run: psql -h localhost -U postgres --command "ALTER USER postgres WITH PASSWORD 'password';"
    - run:
        name: run_pg_tests
        working_directory: /tmp/workspace
        command: |
          /tmp/workspace/sql/t/run_tests.sh
The run_tests.sh is a script which pulls my docker image from the company repo and then does a docker run on that image.
I have read that other people have issues where the database isn't ready, so to test this I added pg_isready before the docker run. My script looks like this:
DB_HOST=`psql -X -A -h localhost -U postgres -p 5432 -t -c "select inet_server_addr()"`
DB_PORT=5432
DB_NAME=my_test
DB_USER=postgres
DB_PASSWORD=password
pg_isready -h "${DB_HOST}" -p "${DB_PORT}"
#restore database from supplied image
docker run \
-e SAPIENTIA_DB_HOST=$DB_HOST \
-e SAPIENTIA_DB_PORT=$DB_PORT \
-e SAPIENTIA_DB_NAME=$DB_NAME \
-e SAPIENTIA_DB_PASSWORD=$DB_PASSWORD \
-e SAPIENTIA_DB_USER=$DB_USER \
$EMPTY_DB_FULL_PATH \
path_to_file/file
I have also tried setting the DB_HOST variable directly to 'localhost'; the result is exactly the same.
Here's what I get as a result:
127.0.0.1:5432 - accepting connections
127.0.0.1:5432 - no response
I have also tried re-running the test with ssh and connecting myself. Same result: I can connect to the database, but if I then run docker exec and try to connect from inside the docker container, it can't connect.
I'm pretty stumped here, so any help would be useful.
EDIT: I've found this documentation page about your issue:
It is not possible to start a service in remote docker and ping it directly from a primary container or to start a primary container that can ping a service in remote docker. To solve that, you’ll need to interact with a service from remote docker, as well as through the same container
That line is not 100% clear to me, but I understand it to mean that we should manually run the containers we want to talk to from within the same remote Docker environment. Therefore:
- run:
    name: run_pg_tests
    working_directory: /tmp/workspace
    command: |
      docker run -d --name postgres --env POSTGRES_USER=postgres --env POSTGRES_DB=my_test circleci/postgres:9.6.2-alpine
      /tmp/workspace/sql/t/run_tests.sh
Since the postgres container is no longer accessible through the local network, your up check could be docker exec postgres pg_isready
You can then set your DB_HOST to postgres in your run script.
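For instance, a readiness wait before launching the tests could be a separate step (a sketch; the retry count and -U postgres are assumptions about your setup):
- run:
    name: wait_for_postgres
    command: |
      # retry pg_isready inside the manually started container until it responds
      for i in $(seq 1 10); do
        docker exec postgres pg_isready -U postgres && exit 0
        sleep 2
      done
      exit 1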
Original answer:
I'm not well versed in CircleCI configuration, but my guess would be that the Docker container you run manually is not attached to the same network as the containers launched by CircleCI.
From what I see in the documentation, you can specify the hostname of the service container:
The name the container is reachable by. By default, container services are accessible through localhost
So maybe try something like this:
- image: circleci/postgres:9.6.2-alpine
  name: postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_DB: my_test
You can then set your DB_HOST to postgres in your run script.
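That is, the inet_server_addr() lookup at the top of the test script simply becomes:
DB_HOST=postgres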

Connect to Postgresql on Windows host machine with Docker

I'm currently running a REST API in a docker container with the following Dockerfile:
FROM python:2.7
WORKDIR /app
RUN pip install uwsgi
COPY awf/requirements.txt ./awf/requirements.txt
RUN pip install -r awf/requirements.txt
COPY ./ ./
# Call collectstatic (customize the following line with the minimal environment variables needed for manage.py to run):
RUN python manage.py collectstatic --noinput
EXPOSE 8000
ENTRYPOINT [ "uwsgi" ]
CMD [ "--wsgi-file", "awf/wsgi.py", "--ini", "uwsgi.ini" ]
The Python REST API has the following DATABASE config in settings.yaml:
DATABASE: {
  engine: 'django.contrib.gis.db.backends.postgis',
  name: 'name',
  user: 'user',
  password: 'pass',
  host: '192.168.99.100',
  port: '5432',
}
I have set host: '192.168.99.100' because this is the output of docker-machine ip.
When I run the docker container without mapping port 5432, I get the following error:
OperationalError: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting TCP/IP
connections on port 5432?
But then I map ports while running the docker container:
docker run -p 8000:8000 -p 5432:5432 img
And I get the following error:
OperationalError: server closed the connection unexpectedly This
probably means the server terminated abnormally before or while
processing the request.
I don't know if I missed some configuration, but I added the IP address range 172.17.0.0/16 to pg_hba.conf and also configured PostgreSQL to listen for connections on all IPs, following this solution:
Allow docker container to connect to a local/host postgres database
EDIT
pg_hba.conf:
# IPv4 local connections:
host all all 127.0.0.1/32 trust
host all all 192.168.99.0/16 trust
host all all 172.17.0.0/16 trust
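For completeness, "listening on all IPs" normally comes down to a single line in postgresql.conf, followed by a PostgreSQL restart:
listen_addresses = '*'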

I'm not able to connect to mongo from a docker container

I'm new to docker, so I followed a tutorial here (parts 6, 7 and 8) in order to use and learn docker in a project.
The problem is that when I use docker-compose.yml to build the images on my laptop,
my stack_server can connect to mongo.
But if I build the images from Docker Hub and pull and run them separately on my laptop, my stack_server CAN'T connect to mongo.
Here is my docker-compose.yml:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: --smallfiles
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
However, my stack_client can connect to the stack_server.
These are my commands to run my images (my images are public):
docker run -i -t -p 27017:27017 mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
And here is my error log:
bryan#debian-dev7:~$ sudo docker run -i -t -p 8080:8080 mik3fly4steri5k/stack_server
[sudo] password for bryan:
Express server listening on 8080, in development mode
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
at Object._errnoException (util.js:1021:11)
at _exceptionWithHostPort (util.js:1043:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1175:14)
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017' }
Here is my stack_server Dockerfile:
FROM node:latest
# Set in what directory commands will run
WORKDIR /home/app
# Put all our code inside that directory that lives in the container
ADD . /home/app
# Make sure NPM is not cached, remove everything first
RUN rm -rf /home/app/node_modules/npm \
&& rm -rf /home/app/node_modules
# Install dependencies
RUN npm install
# Tell Docker we are going to use this port
EXPOSE 8080
# The command to run our app when the container is run
CMD ["node", "app.js"]
First solution, but deprecated:
docker run -i -t -p 27017:27017 --name mongo mongo
docker run -i -t -p 80:80 mik3fly4steri5k/stack_client
docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
I had to put a name on my mongo container and link it with the --link parameter when running my stack_server. From the docker documentation:
Warning: The --link flag is a deprecated legacy feature of Docker.
It may eventually be removed.
Unless you absolutely need to continue using it,
we recommend that you use user-defined networks to facilitate communication
between two containers instead of using --link.
One feature that user-defined networks do not support that you can do with
--link is sharing environmental variables between containers.
However, you can use other mechanisms such as volumes to share environment
variables between containers in a more controlled way.
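Following that recommendation, a non-deprecated equivalent using a user-defined network would look roughly like this (the network name app-net is an arbitrary choice):
docker network create app-net
docker run -d --network app-net --name mongo -p 27017:27017 mongo
docker run -i -t --network app-net -p 8080:8080 mik3fly4steri5k/stack_server
Containers on the same user-defined network can resolve each other by container name, so the server reaches the database at mongodb://mongo:27017.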
Run your stack_server with the command below:
sudo docker run -i -t -p 8080:8080 --link mongo:mongo mik3fly4steri5k/stack_server
Note: the --link flag is deprecated.
Your container is running. But is it healthy? I recommend that you implement a healthcheck. In docker-compose YAML it would look pretty much like this:
healthcheck:
  test: curl -sS http://127.0.0.1:8080 || echo 1
  interval: 5s
  timeout: 10s
  retries: 3
Docker health checks are a cute little feature that lets you attach a shell command to a container and use it to check whether the container's content is alive.
I hope it can help you.
More info: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
To start MongoDB on a desired port, you would have to do the following:
client:
  build: ./client
  restart: always
  ports:
    - "80:80"
  links:
    - server
mongo:
  image: mongo
  command: mongod --port 27017
  restart: always
  ports:
    - "27017:27017"
server:
  build: ./server
  restart: always
  ports:
    - "8080:8080"
  links:
    - mongo
Let's look at the error:
connection error: { Error: connect ECONNREFUSED 127.0.0.1:27017
The server is trying to connect to localhost instead of mongo. You need to configure the server to connect to mongo:27017.
mongo is an alias created by Docker when linking the containers to each other.
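For example, if the server uses Mongoose (an assumption about the tutorial code; adjust to whatever driver app.js actually uses), the connection line would change along these lines:
// before: mongoose.connect('mongodb://127.0.0.1:27017/stack_db');
mongoose.connect('mongodb://mongo:27017/stack_db'); // 'stack_db' is a placeholder database name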