docker-compose - unable to attach to containers - postgresql

With the docker-compose.yml file below, if I run "docker-compose up" or "docker-compose up -d", both containers end up with status Exited. When I run docker restart <postgres-containerId>, the postgres container comes up and stays running, but when I run docker restart <java8-containerId>, it restarts and then exits again.
Could you please suggest which parameters I need to specify to keep these containers up and running after docker-compose up? Also, how do I attach to the Java container? I tried docker attach <java8-containerId> but was not able to attach.
docker-compose.yml file -
postgres:
  image: postgres:9.4
  ports:
    - "5430:5432"

javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
docker-compose ps results:
Name                 Command                          State    Ports
---------------------------------------------------------------------
compose_javaapp_1    /bin/bash                        Exit 0
compose_postgres_1   /docker-entrypoint.sh postgres   Exit 0

To see available containers:
docker ps -a
To open a container shell:
docker exec -it <container-name> /bin/bash
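The javaapp container exits because /bin/bash terminates as soon as it has no terminal attached. A minimal sketch of the javaapp service that keeps the shell alive (stdin_open and tty correspond to docker run -i and -t, the same flags used in the answers further down):

javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
  stdin_open: true # keep STDIN open (docker run -i)
  tty: true        # allocate a pseudo-TTY (docker run -t)

With these two options the container stays up after docker-compose up -d, and docker attach <java8-containerId> or the docker exec command above will give you a shell.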

Related

can docker compose with various services enable terminal interaction with a specific service?

I know I can use two different terminals. Here is an example:
I have a create-react-app project, and I want to run
sudo docker compose up
and interact with the test service via the terminal; Jest gives me some options, like a to run all tests or p to filter by filename.
docker-compose.yml
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    stdin_open: true # docker run -i
    tty: true # docker run -t
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
When I run
sudo docker compose up
I can't interact with the test service.
When I run
sudo docker compose run --rm test
I can interact with Jest.
Is there any way to use only one terminal and interact directly with the test service?
No, not in the way you probably expect.
docker-compose up is intended for starting the whole project with all its services. Running it in the foreground will output the logs of all started containers.
In no case will docker-compose up connect you directly to one of the containers.
Instead, use docker-compose run to start a one-off container or docker-compose exec to connect to a running service.
So, to start the project and connect to the test container using one terminal you could do
docker compose up -d # start the services in the background
docker compose run --rm test
Knowing this, you can now further optimize your docker-compose.yml for this:
- drop stdin_open and tty, since these will be set automatically when using docker-compose run / docker-compose exec
- use service profiles so the test service is not started by default but only when started explicitly - and interactively - via docker-compose run
- if test needs the dev service to be running, add a depends_on so it will be started automatically whenever test is started
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    depends_on:
      - dev # start `dev` whenever `test` is started
    profiles:
      - cli-only # start `test` only when specified explicitly
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
With this you can simply run
docker compose run --rm test
to start an interactive terminal connected to test. The dev service will be started automatically if it is not already running - so basically the same as above but without prior docker-compose up -d.
On the other hand, running
docker compose up
would now only start the dev service.
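If you ever do want the profiled service included in an up, you can enable its profile explicitly with the --profile flag (or the COMPOSE_PROFILES environment variable):

docker compose --profile cli-only up   # starts dev and also the profiled test service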

Docker compose cannot run command - service not running

I created the docker-compose.yml whose content you can find below. I navigated to the folder where the file resides and ran this command:
docker-compose up -d
This was shown:
Starting postgres ... done
Then I ran this command:
docker-compose ps
Result:
Name       Command                         State    Ports
-----------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Exit 1
Now I wanted to run a command:
docker exec -it postgres psql -h localhost -p 54320 -U robert
This is what I get:
Error response from daemon: Container ae1565a84bcf0b3662b47d4f277efd2830273554b6bcf4437129e33b31c88b35 is not running
Is my container not running? Please help.
docker-compose.yml:
version: "3"
services:
# Create a service named db.
db:
# Use the Docker Image postgres. This will pull the newest release.
image: "postgres"
# Give the container the name my_postgres. You can changes to something else.
container_name: "postgres"
# Setup the username, password, and database name. You can changes these values.
environment:
- POSTGRES_USER=robert
- POSTGRES_PASSWORD=robert
- POSTGRES_DB=mydb
# Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fix your needs.
ports:
- "54320:5432"
# Set a volume some that database is not lost after shutting down the container.
# I used the name postgres-data but you can changed it to something else.
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
Can you attempt to exec this?
docker run -it postgres psql -h localhost -p 54320 -U robert
$ docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Since your container has the status Exit, you can't use docker exec.
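Before anything else, it is worth checking why the container exited; these standard Docker commands show the startup logs and the exit code (postgres is the container_name from the compose file above):

docker logs postgres                                     # show the container's output, including postgres startup errors
docker inspect --format '{{.State.ExitCode}}' postgres   # show the exit code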
Can you use this docker-compose file instead? It uses a named volume rather than the ./volumes/postgres bind mount, which is a common source of permission problems that make the postgres container exit with status 1.
version: "3"
volumes:
  postgres_app: ~
services:
  # Create a service named postgres.
  postgres:
    image: "postgres"
    environment:
      POSTGRES_USER: robert
      POSTGRES_PASSWORD: robert
      POSTGRES_DB: "mydb"
    volumes:
      - "postgres_app:/var/lib/postgresql/data"
    ports:
      - "54320:5432"
    restart: always
And then this command:
docker-compose exec postgres psql -U robert -d mydb
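To verify end to end, a short session sketch (assuming the container now stays up):

docker-compose up -d
docker-compose ps    # State should now read Up instead of Exit 1
docker-compose exec postgres psql -U robert -d mydb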
I hope this helps! I executed this file on my computer.

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds a custom config location to a generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect to the DB remotely using these credentials once I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" in the CMD it has the same effect, because the underlying entrypoint script prepends it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted an actual executable from the CMD -- specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is... turns out we can!
How to get the existing postgres.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
                config_file
------------------------------------------
 /var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
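One way to capture the contents, sketched with docker cp (assuming the container from step 2 is still running under the name postgres):

docker cp postgres:/var/lib/postgresql/data/postgresql.conf ./db/postgresql.conf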
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) force-remove all Docker containers (running or not), and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
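A gentler, project-scoped alternative is Compose's own teardown command, which removes this project's containers and, with -v, its named and anonymous volumes:

docker-compose down -v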
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit the postgres.conf from a known working state.
As per this other discussion, your CMD command only has arguments and is missing a command. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

docker-compose - Application can't communicate with postgres container

I have a Scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name       Command                         State    Ports
--------------------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper   python3                         Exit 0
When you run docker-compose up to start db, that container runs on a network that Compose creates for the project. A container started with a plain docker run ... is therefore not on the same network and cannot resolve the hostname db. But you can attach it explicitly with:
docker run --network $network_name
To list the available Docker networks, run:
docker network ls
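For example, a sketch assuming the project directory is named scrapper (Compose then names the default network scrapper_default):

docker network ls    # confirm the network name, e.g. scrapper_default
docker run --network scrapper_default -it scrapper_scrapper scrapy crawl angeloni

Alternatively, docker-compose run scrapper scrapy crawl angeloni attaches the one-off container to the project network automatically.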
I think you have to explicitly define a user-defined network and put your containers on it:
https://docs.docker.com/network/bridge/
See the section:
User-defined bridges provide automatic DNS resolution between containers.
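In a compose file that could look like this minimal sketch (the network name backend is arbitrary, and the services are reduced to what matters here):

version: '3'
networks:
  backend:
services:
  scrapper:
    build: .
    networks:
      - backend
  db:
    image: postgres
    networks:
      - backend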

docker-compose.yml + ssh-server not working

I am trying to start my docker-compose.yml (example below), but whenever I start the containers, the sshd service is not working:
# My docker-compose.yml
version: '3'
services:
  server1:
    image: server-dev:v0.8
    hostname: server-dev1
    command: bash -c "/usr/sbin/init"
    ports:
      - "2222:22"
      - 80:80
  server2:
    image: server-dev:v0.8
    hostname: server-dev2
    command: bash -c "/usr/sbin/init"
    depends_on:
      - server1
Any suggestions?
Building an image from your Dockerfile and running it with
docker run -p 2222:22 dschuldt/test
throws:
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key
sshd: no hostkeys available -- exiting.
You can add this line to your Dockerfile before the last CMD instruction to make it work (by the way, you have two CMD instructions; the first one will be overridden):
RUN /usr/bin/ssh-keygen -A
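Since the question doesn't show the Dockerfile itself, here is a minimal sketch of where that line goes (the base image and package names are assumptions):

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir -p /run/sshd        # sshd's privilege separation directory
RUN /usr/bin/ssh-keygen -A    # generate all missing host key types
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]  # run sshd in the foreground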
Just another small hint: your image is 739 MB. Maybe you should rethink your use case ;-)
Have a nice evening, regards
dschuldt