can docker compose with various services enable terminal interaction with a specific service? - docker-compose

I know I can use two different terminals. Here is an example:
I have a create-react-app project, and I want to run
sudo docker compose up
And I want to interact with the test service via the terminal; Jest gives me some options, like a to run all tests or p to filter by filename.
docker-compose.yml
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    stdin_open: true # docker run -i
    tty: true        # docker run -t
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
When I run
sudo docker compose up
I can't interact with the test service.
When I run
sudo docker compose run --rm test
I can interact with Jest.
Is there any way to use only one terminal and interact directly with the test service?

Is there any way to use only one terminal and interact directly with the test service?
No, not in the way you probably expect.
docker-compose up is intended for starting the whole project with all its services. Running it in the foreground will output the logs of all started containers.
In no case will docker-compose up connect you directly to one of the containers.
Instead, use docker-compose run to start a one-off container or docker-compose exec to connect to a running service.
So, to start the project and connect to the test container using one terminal you could do
docker compose up -d # start the services in the background
docker compose run --rm test
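As a side note, if the test container keeps running in the background after docker compose up -d (the stdin_open and tty settings are there to allow that), docker compose exec can start an additional interactive process inside it, for example a shell. This is only a sketch, and note that exec starts a new process alongside the main one rather than attaching you to the Jest prompt itself:
docker compose exec test sh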
Knowing this, you can now further optimize your docker-compose.yml for this:
drop stdin_open and tty since these will be automatically set when using docker-compose run / docker-compose exec
use service profiles so the test service is not automatically started by default but only when using docker-compose run to start it explicitly - and interactively
if test needs the dev service to be running add a depends_on so it will be started automatically whenever test is started
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    depends_on:
      - dev # start `dev` whenever `test` is started
    profiles:
      - cli-only # start `test` only when specified explicitly
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
With this you can simply run
docker compose run --rm test
to start an interactive terminal connected to test. The dev service will be started automatically if it is not already running - so basically the same as above but without prior docker-compose up -d.
On the other hand, running
docker compose up
would now only start the dev service.
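If you ever do want to include the profiled service in a regular up, you can enable the profile explicitly with the --profile flag (using the cli-only profile name from the example above):
docker compose --profile cli-only up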

Related

Docker with postgresql in flask web application (part 2)

I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the flask application, I'm using this to connect SQLAlchemy to PostgreSQL
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
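A sketch of that manual-network approach (the network name employee-manager-net is just a placeholder): create the network once with
docker network create employee-manager-net
and then make it the project's default network in docker-compose.yml (supported in recent Compose file versions):
networks:
  default:
    name: employee-manager-net
    external: true
Other containers can then join it with docker run --network employee-manager-net ....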
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully-defined in the docker-compose.yml file, so you can use Compose commands to build and restart it
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
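For illustration, naming the built image alongside build: could look like this (the registry and tag are placeholders, not something from the question):
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    image: registry.example.com/employee-manager:latest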

Start database container with mounted config file in Github actions workflow

I'd like to run some integration tests against a real database, but I'm failing to start an additional container (for the db), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-
I guess I cannot use a service container as the code is not available at the time the service container is started, therefore it won't pick up my configuration.
That's why I tried to run a container within my container using --network host (see below) and the container seems to be running fine, still I'm not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5
- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation about the job context, the id of the container network should be available ({{job.container.network}}), but it is empty if you don't use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions). I'm just mounting the entire directory/repo in the test container. Pulling node:14-slim delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build
- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
It's based on @DannyB's suggestion and his answer here, so all props go to him.
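One follow-up note on the workflow above: docker compose run exits with the return code of the command it runs, so the "Run tests" step fails when yarn test fails. If you also want to remove the containers regardless of the outcome, an extra step along these lines should work (a sketch using the same compose file path):
- name: Tear down
  if: always()
  run: |
    docker compose -f dev/docker-compose.yaml down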

How to run schema scripts after running couchbase via docker compose?

I have a schema script /data/cb-create.sh that I have made available on a container volume. When I run docker-compose up, the server is not yet initialized at the time the command is executed, so those commands fail because the server hasn't launched yet. I do not see a Starting Couchbase Server -- Web UI available at http://<ip>:8091 log line by the time the .sh script runs to initialize the schema. This is my docker-compose file. How can I sequence it properly?
version: '3'
services:
  couchbase:
    image: couchbase:community-6.0.0
    deploy:
      replicas: 1
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
    volumes:
      - ./:/data
    command: /bin/bash -c "/data/cb-create.sh"
    container_name: couchbase
volumes:
  kafka-data:
First: You should choose either an entrypoint or a command statement.
I guess an option is to write a small bash script where you put these commands in order.
Then in the command you specify running that bash script.
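A minimal sketch of such a wrapper script (assuming the official image normally starts via /entrypoint.sh couchbase-server and has curl available; adjust to what the image actually uses):
#!/bin/bash
# start Couchbase Server in the background using the image's normal startup command
/entrypoint.sh couchbase-server &
# wait until the REST port 8091 starts answering before creating the schema
until curl -s http://localhost:8091 > /dev/null; do
  sleep 2
done
/data/cb-create.sh
# keep the container alive on the server process
wait
You would then point command (or entrypoint) at this script instead of at /data/cb-create.sh directly.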

docker-compose - Application can't communicate with postgres container

I have a Scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name       Command                         State    Ports
--------------------------------------------------------------------------
postgres   docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper   python3                         Exit 0
When running docker-compose up to start db, that container runs on a network that docker-compose creates for the project. As such, a container started with docker run ... will not be able to connect to that instance, since it is not on the same network. But you can specify the network with:
docker run --network $network_name
To get the docker networks available, you can run:
docker network ls
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
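Putting that together, a sketch could look like this (the network name scrapper_default is hypothetical; docker-compose usually derives it from the project directory name, so check docker network ls for the real one):
docker network ls
docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni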

docker-compose exit depends_on service after tests

How do I get a service container to exit once the dependent container has finished?
I have a test suite running in the app_unittestbot container that depends_on a PostgreSQL server (postgres:9.5-alpine) running in a separate container. Once the test suite exits, I want to check the return code of the test suite and halt the database container. With the docker-compose.yml below, the db service container never halts.
docker-compose.yml
version: '2.1'
services:
app_postgresql95:
build: ./postgresql95/
ports:
- 54321:5432
app_unittestbot:
command: /root/wait-for-it.sh app_postgresql95:5432 --timeout=60 -- nose2 tests
build: ./unittestbot/
links:
- app_postgresql95
volumes:
- /app/src:/src
depends_on:
- 'app_postgresql95'
You can run docker-compose up --abort-on-container-exit to have compose stop all the containers if any one of them exits. That will likely solve your use case.
For something a little more resilient, I'd probably split this into two compose files so that an abort on postgresql doesn't get accidentally registered as a successful test. Then you'd just run those files in the order you need:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up
docker-compose -f docker-compose.yml down
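If you also need the return code of the test suite (for example in CI), docker-compose up supports --exit-code-from, which implies --abort-on-container-exit and makes the command exit with the status of the named service's container:
docker-compose -f docker-compose.test.yml up --exit-code-from app_unittestbot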