docker-compose - Application can't communicate with postgres container - postgresql

I have a Scrapy application that I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name       Command                        State    Ports
--------------------------------------------------------------------------
postgres   docker-entrypoint.sh postgres  Up       0.0.0.0:5432->5432/tcp
scrapper   python3                        Exit 0

When you run docker-compose up to start db, that container runs on a project network that Docker Compose creates for you. A container started with a plain docker run ... is not attached to that network, so it cannot reach the instance. But you can specify the network explicitly:
docker run --network $network_name
To list the available Docker networks, you can run:
docker network ls
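For example, assuming the Compose project is named scrapper (consistent with the scrapper_scrapper image name above), the network Compose creates would typically be called scrapper_default, and the crawl command becomes:
docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni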

I think you have to explicitly define a user-defined network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
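For illustration, here is a minimal sketch of the question's compose file with an explicit user-defined bridge network (the name backend is an arbitrary choice):
version: '3'
services:
  scrapper:
    build: .
    depends_on:
      - db
    networks:
      - backend
  db:
    image: postgres
    networks:
      - backend
networks:
  backend:
    driver: bridge
On a user-defined bridge, each container can reach the others by service name, so the hostname db resolves without links:.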

Related

Docker with postgresql in flask web application (part 2)

I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the Flask application, I'm using this to connect SQLAlchemy to PostgreSQL:
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose.
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
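As a sketch of that last approach (the network name shared-net is an assumption):
docker network create shared-net
Then, in docker-compose.yml:
networks:
  default:
    external:
      name: shared-net
After docker-compose up, other containers can join the same network:
docker run -p 5000:5000 --network shared-net employee-manager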
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
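For example, a hypothetical image: name alongside build: (the registry path is a placeholder):
services:
  backend:
    build: .
    image: registry.example.com/employee-manager:latest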

Error: timeout expired on trying to connect in Docker Postgres using pgAdmin

I created a Docker container for a Postgres service, but when I start it and try to connect to the database, I get errors saying the user and database I defined for the Postgres instance don't exist. I already tried changing the docker-compose file to find the problem, but couldn't.
Here are the attachments:
Dockerfile:
FROM wyveo/nginx-php-fpm:latest
RUN chmod -R 775 /usr/share/nginx/
RUN export pwd=pwd
docker-compose.yml:
version: '3'
services:
  laravel-app_prm:
    build: .
    ports:
      - "8099:80"
    volumes:
      - ${pwd}/.docker/nginx/:/usr/share/nginx
  postgres_prm:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=db_usr
      - POSTGRES_PASSWORD=postgres_password
      - POSTGRES_DB=db_prm
    ports:
      - "5432:5440"
    volumes:
      - ${pwd}/.docker/dbdata:/var/lib/postgresql/data/
When I try to connect to the database directly through the container's bash, I get an error that the user and database, both entered exactly as defined in docker-compose.yml, do not exist:
sudo docker exec -it <postgres_container_id> bash
psql -h localhost -U db_usr
... and so on...
And to set up the connection in pgAdmin I got the container IP using:
sudo docker container inspect <postgres_container_id>
and grabbing the value of the IPAddress attribute.
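As a side note, that lookup can be done in one step with docker's --format flag (a sketch; the template assumes the container sits on a single network):
sudo docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <postgres_container_id>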

Can't see my mongo database when using mongo cmd on a Docker container

Similar to Can't connect to MongoDB container from other Docker container - but answers from this post don't work for me.
I am new to Docker. Trying to learn it on a typescript/express/mongo/mongoose api example.
What I am trying to do (and having problems with) is to use the mongo command line on a running mongo container after it has been spun up using docker compose up. Even though my data is nicely persisted on a Docker volume, I don't seem to be able to log into the database from the command line.
This is my docker-compose.yml file:
version: '3.9'
services:
  api:
    container_name: api_ts
    build: .
    restart: unless-stopped
    environment:
      - DB_URL=mongodb://myself:pass123@mongo:27017/
    ports:
      - '3131:3131'
    depends_on:
      - mongo
    links: # (seems to be needed)
      - mongo
  mongo:
    container_name: mongo_container
    image: mongo:latest
    restart: always
    volumes:
      - mongo_dbv:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=myself
      - MONGO_INITDB_ROOT_PASSWORD=pass123
    ports:
      - '27017:27017'
volumes:
  mongo_dbv: {}
This is my Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json .
RUN npm ci
COPY . .
ENV PORT=3131
EXPOSE 3131
COPY .env ./dist
CMD ["npm", "start"]
I am running
docker compose up -d --build
After both services are ready, I do:
docker exec -it mongo_container mongo
show dbs
...and the output of the last cmd is empty
(same occurs when trying to follow the answers in the post mentioned above)
I am sure the database contains data, because I am able to verify it using REST client.
Also (and maybe this is somehow connected), I am a bit puzzled as to why there is no indication, either in docker-compose.yml or in the Dockerfile, of the database name I am using. I would expect it to be part of the show dbs output. Despite that, my api runs just fine.
Listing databases requires authentication:
docker exec -it mongo_container mongo -u myself -p pass123
Now you can list the databases:
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
Note: mongo should show you a warning that the "mongo" shell has been superseded by "mongosh". When you use mongosh, a proper authentication error is shown on the database listing attempt.
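With the newer shell, the equivalent login (reusing the credentials from the compose file above) would be:
docker exec -it mongo_container mongosh -u myself -p pass123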

docker-compose.yml + ssh-server not working

I am trying to start my docker-compose.yml (example below), but whenever I start the containers the sshd service is not working:
# My docker-compose.yml
version: '3'
services:
  server1:
    image: server-dev:v0.8
    hostname: server-dev1
    command: bash -c "/usr/sbin/init"
    ports:
      - "2222:22"
      - 80:80
  server2:
    image: server-dev:v0.8
    hostname: server-dev2
    command: bash -c "/usr/sbin/init"
    depends_on:
      - server1
Any suggestions?
Building an image from your Dockerfile and running it with
docker run -p 2222:22 dschuldt/test
throws:
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key
sshd: no hostkeys available -- exiting.
You can add this line to your Dockerfile before the last CMD command to make it work (by the way, you have two CMD commands... the first one will be overwritten):
RUN /usr/bin/ssh-keygen -A
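For context, a minimal sketch of a Dockerfile built around that line (the base image and package choice are assumptions, since the original Dockerfile is not shown here):
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y openssh-server
# sshd needs its privilege-separation directory at runtime
RUN mkdir -p /run/sshd
# generate any missing host keys (rsa, ecdsa, ed25519)
RUN /usr/bin/ssh-keygen -A
CMD ["/usr/sbin/sshd", "-D"]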
Just another small hint: your image is 739 MB. Maybe you should rethink your use case ;-)
Have a nice evening, regards
dschuldt

docker-compose external_link mongo network not reachable

I am having a strange situation where I cannot connect to my running mongo DB from my docker compose setup. My compose file:
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few pieces of advice on this, but none of them solved my issue. For example, I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add to the compose file:
networks:
  default:
    external:
      name: mongonet
Got nothing there either.
I looked into the /etc/hosts file in my compose container, and it did not list any DNS entry.
If I do a docker inspect, grab the mongo IP, and add it to my compose file, that works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused, as I believed this to be an out-of-the-box kind of thing. Strangely I can't make it work. I have found examples on internal links (vs external_links), but that does not work for me, as I have many services that I would like to run like this and not all of them should run at the same time.
I start my docker compose as this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: How do I successfully link a running mongo container as an external link into my application containers such that they can connect to them?
My docker ps output:
CONTAINER ID   IMAGE   COMMAND                  CREATED             STATUS             PORTS                      NAMES
5cf6e08d6fde   mongo   "docker-entrypoint..."   About an hour ago   Up About an hour   0.0.0.0:27017->27017/tcp   gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them. Links are a legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet
networks:
  mongonet:
    external: true
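You can confirm that both containers ended up on the same network with:
docker network inspect mongonet
The Containers section of the output should list both mongo and the app container.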