I created the following Dockerfile:
FROM postgres
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt
ENTRYPOINT ["docker-entrypoint.sh"]
And docker-compose:
version: '3'
services:
  codes:
    container_name: short_codes
    build:
      context: codes_store
    image: andrey1981spb/short_codes
    ports:
      - 5432:5432
I run docker-compose up successfully, but when I try to enter the container, I receive:
"Container ... is not running"
I suppose I have to add some run command to the Dockerfile, but what is that command?
Your container is probably not running because your ENTRYPOINT line overrides the one the postgres base image already defines, and redeclaring ENTRYPOINT also resets the inherited CMD ("postgres"), so the database server is never started and the container exits immediately. The base image already ships docker-entrypoint.sh, so you can drop that line entirely.
You also don't need to supply a run command: the base image's entrypoint and command run on start-up, and docker-compose up starts your container automatically.
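A minimal corrected Dockerfile would therefore be (a sketch; it simply inherits the base image's entrypoint and command):
FROM postgres
# The base image already declares ENTRYPOINT ["docker-entrypoint.sh"] and
# CMD ["postgres"], so neither needs to be repeated here.
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt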
Related
I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the flask application, I'm using this to connect SQLAlchemy to PostgreSQL
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can run "docker build -t employee-manager ." to build the image. However, when I run "docker run -p 5000:5000 employee-manager" to start the container, I get an error saying:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the network stack of your database container, so the two share the same network and the hostname db resolves.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
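For example, a sketch of that manual-network approach (the network name appnet is arbitrary):
# created once on the host
docker network create appnet
Then, in docker-compose.yml, make it the project's default network:
networks:
  default:
    external:
      name: appnet
And other containers can join the same network:
docker run -p 5000:5000 --network appnet employee-manager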
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
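As a sketch of that image: alongside build: combination (the registry and tag shown are just placeholders):
backend:
  build:
    context: .
    dockerfile: Dockerfile
  image: yourregistry/employee-manager:latest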
I have a scrapy application that I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the command docker run -it scrapper_scrapper scrapy crawl angeloni, I receive this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name       Command                         State    Ports
----------------------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper   python3                         Exit 0
When you run docker-compose up to start db, that container runs on a network that docker-compose creates for the project. A container started with a plain docker run ... is not on that network, so it cannot reach that instance. But you can specify the network explicitly:
docker run --network $network_name
To get the docker networks available, you can run:
docker network ls
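Assuming Compose derived the project name scrapper from the directory name, the network will be called scrapper_default (verify with docker network ls), and you could run:
docker run --network scrapper_default -it scrapper_scrapper scrapy crawl angeloni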
I think you have to explicitly define a user-defined network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
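A sketch of that manual variant (the network name my-net and the password are arbitrary placeholders):
docker network create my-net
docker run --network my-net --name db -e POSTGRES_PASSWORD=secret postgres
docker run --network my-net -it scrapper_scrapper scrapy crawl angeloni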
How does the ENTRYPOINT command work in the docker-compose file given below? I found the file in a replica-set MongoDB docker-compose. "/usr/bin/mongod" is the first element of the ENTRYPOINT. My question is whether /usr/bin/mongod starts the local mongod and runs it as the docker container, or whether it pulls the mongo image from the repository and runs that as the container; and if so, why are we using "/usr/bin/mongod"?
version: "3"
services:
mongo-1:
hostname: mongo-1
container_name: mongo-1
image: mongo:4.0
ports:
- "127.0.0.1:28000:28000"
volumes:
- ./mongo-1/data:/data/db
restart: always
entrypoint: [ "/usr/bin/mongod", "--port", "28000", "--bind_ip_all", "--replSet", "rs1" ]
Using this compose file I am able to connect from the host machine to MongoDB running on port 28000, but when I replace the ENTRYPOINT with CMD in the compose file I can no longer connect to MongoDB from the host machine.
You are using the image mongo:4.0, which has its own ENTRYPOINT and CMD defined. When you redeclare either of them in your docker-compose.yml, you overwrite the original ones.
ENTRYPOINT is the main command, and CMD only supplies default arguments to it. That is why it works when you define ENTRYPOINT with a valid command: if you move the same list to CMD instead, what actually runs is the original ENTRYPOINT ("docker-entrypoint.sh") with your new command passed to it as arguments.
Your entrypoint uses the mongod binary from inside the container, which comes from the specified image.
For reference you can see the Dockerfile of the official image: https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/4.0/Dockerfile
The ENTRYPOINT and CMD are defined at the end.
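For orientation, the tail of that Dockerfile looks like this (abbreviated):
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 27017
CMD ["mongod"]
So overriding only CMD in the compose file effectively runs docker-entrypoint.sh /usr/bin/mongod --port 28000 ..., while overriding ENTRYPOINT runs /usr/bin/mongod --port 28000 ... directly.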
I created a Dockerfile with PostgreSQL using this code:
FROM postgres:9.4
MAINTAINER Fabio Ebner
ENV POSTGRES_PASSWORD="dna44100"
ENV POSTGRES_PORT=5432
EXPOSE ${POSTGRES_PORT}
COPY init.sql /docker-entrypoint-initdb.d/
How can I make the container always save my database data on my host machine? With this code, every time I stop the container my data is lost.
You will need to mount a volume pointing a directory on your host machine to the container's data directory, /var/lib/postgresql/data.
Source: docker mounting volumes on host
You need to mount a volume to the data directory of PostgreSQL.
You can use the following, using the docker-compose file:
version: "3"
services:
test-postgresql:
image: postgres:9.4
container_name: test-postgresql
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: dna44100
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
- ./folder-on-host:/var/lib/postgresql/data
With this docker-compose file you can start the container with docker-compose up and stop it with docker-compose down. The database and its settings are saved in the specified directory (./folder-on-host).
If you want to remove the volume you can use the command: docker-compose down -v
You can also mount the volume with docker run, using the -v or --volume option. Note that docker run requires an absolute host path, so a relative path like ./folder-on-host must be expanded:
docker run -v $(pwd)/folder-on-host:/var/lib/postgresql/data yourimagename
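Alternatively (a sketch, not from the answer above), a named volume lets Docker manage the storage location itself and avoids host-path permission issues; the name pgdata is arbitrary:
version: "3"
services:
  test-postgresql:
    image: postgres:9.4
    environment:
      POSTGRES_PASSWORD: dna44100
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: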
How do I get a service container to exit once the dependent container has finished?
I have a test suite running in the app_unittestbot container that depends_on a PostgreSQL server (postgres:9.5-alpine) running in a separate container. Once the test suite exits, I want to check its return code and halt the database container. With the docker-compose.yml below, the database service container never halts.
docker-compose.yml
version: '2.1'
services:
  app_postgresql95:
    build: ./postgresql95/
    ports:
      - 54321:5432
  app_unittestbot:
    command: /root/wait-for-it.sh app_postgresql95:5432 --timeout=60 -- nose2 tests
    build: ./unittestbot/
    links:
      - app_postgresql95
    volumes:
      - /app/src:/src
    depends_on:
      - 'app_postgresql95'
You can run docker-compose up --abort-on-container-exit to have compose stop all the containers if any one of them exits. That will likely solve your use case.
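Since you also want the test suite's return code, note that docker-compose up --exit-code-from <service> implies --abort-on-container-exit and propagates that service's exit status to the shell:
docker-compose up --exit-code-from app_unittestbot
echo $?   # exit status of the nose2 test run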
For something a little more resilient, I'd probably split this into two compose files so that an abort on postgresql doesn't get accidentally registered as a successful test. Then you'd just run those files in the order you need:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up
docker-compose -f docker-compose.yml down