docker-compose down with a non-default yml file name - docker-compose

I have a non-default docker-compose file name (docker-compose-test.yml).
There is only one service defined in it.
I am starting the container using "docker-compose -f docker-compose-test.yml up".
I am trying to stop it using "docker-compose down", but it is not working.
I get the error below:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
I understand that it is looking for default docker compose file name. Is there a way to specify the custom config file name during docker-compose down?

You should run the docker-compose down command with the same -f flag that you used when you started the containers:
docker-compose -f docker-compose-test.yml down

You can create a .env file and add the following:
COMPOSE_FILE=docker-compose-test.yml
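With that in place, plain docker-compose commands resolve the custom file name automatically. A minimal sketch (the .env file sits in the directory where you run the commands):

```
# .env (next to docker-compose-test.yml)
COMPOSE_FILE=docker-compose-test.yml

# then both of these pick up docker-compose-test.yml:
#   docker-compose up -d
#   docker-compose down
```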

Note that the syntax of docker-compose requires -f to come before up/down, and -d to come after:
docker-compose -f docker-compose.prod.yml up -d
If you put -f after up/down it doesn't work, and if you put -d before up/down you get the help output or an error. down of course works without -d:
docker-compose -f docker-compose.prod.yml down

If you use multiple files for docker-compose AND a custom name, you should write like this:
docker-compose -f docker-compose.yml -f docker-compose.override.yml -p custom_name down
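If you prefer the .env approach here too, COMPOSE_FILE also accepts multiple files, separated by ":" on Linux/macOS (";" on Windows, configurable via COMPOSE_PATH_SEPARATOR), and COMPOSE_PROJECT_NAME replaces -p. A sketch:

```
# .env
COMPOSE_FILE=docker-compose.yml:docker-compose.override.yml
COMPOSE_PROJECT_NAME=custom_name
```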

Related

docker-compose up command with non default file name, how does it work?

I have two files, docker-compose-dapr.yml and docker-compose-infra.yml.
Basically I am trying to run this example.
First I ran this command
docker-compose -f docker-compose-infra.yml up -d
It built, and some containers are ready.
Next I simply ran the up command.
docker-compose up -d
Things work as expected.
The question is: which files does the docker-compose up command pick up? Both?
As per my earlier understanding, if the file name is non-default, we need to specify it explicitly using -f. But here I am not specifying any file, so which files is it picking up?

Locust docker container cannot find the locust file

I am trying to run a locustfile with the locustio/locust docker image, and it cannot find the locustfile, even though the file exists in the locust directory.
~ docker run -p 8089:8089 -v $PWD:/locust locustio/locust locust -f /locust/locustfile.py
Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.
(I'm reposting this question as my own, because the original poster deleted it immediately after getting an answer!)
The image's entrypoint already runs locust, so the extra "locust" in your command is parsed as an argument. Remove it, so that the command becomes:
docker run ... locustio/locust -f /locust/locustfile.py

Locust not generating failures.csv and exceptions.csv in distributed mode without UI in docker

When testing an API using locust in distributed mode without the UI in docker, the distribution.csv and requests.csv files are generated, but failures.csv and exceptions.csv are not, even though requests.csv shows failures, as below:
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"POST","/api/something/something",197009,56,470,559,78,156714,1,436.31
Can you please help?
The problem is that the files need to be written to a folder that locust has permission to write to, and that is mounted as a volume on your host. If you add a mounted folder before the file name, it should work. For example:
Dockerfile:
# Set base image
FROM locustio/locust
ADD locustfile.py locustfile.py
Docker build command:
docker build -t mykey/myimage:1.0 .
Docker run command (on Windows; replace %CD% with $PWD on Linux):
docker run --volume "%CD%:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0
The files will now write to the same folder where locustfile.py is located.
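The same setup can be expressed as a compose file, so the volume is declared only once. This is a sketch; the service name and host path are assumptions:

```yaml
# docker-compose.yml (sketch)
version: "3"
services:
  locust:
    image: mykey/myimage:1.0
    volumes:
      - ./:/mnt/locust
    environment:
      LOCUSTFILE_PATH: /mnt/locust/locustfile.py
      TARGET_URL: https://example.com
      LOCUST_OPTS: "--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output"
```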

"docker-compose" command to set path for docker-compose.yml file

Reading the help from docker-compose -h, or this official manual, gives us the option --project-directory PATH:
docker-compose [-f ...] [options] [COMMAND] [ARGS...]
--project-directory PATH Specify an alternate working directory
(default: the path of the compose file)
But I tried the command below and it failed, even though I have ensured that the file ./mySubFolder/docker-compose.yml exists.
docker-compose --project-directory ./mySubFolder up
The error
Can't find a suitable configuration file in this directory or any parent.
Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
What did I do wrong? How do I pass the parameter properly?
Use the -f param instead of --project-directory.
You can use the -f flag to specify a path to a Compose file that is not located in the current directory:
docker-compose -f /path/to/docker-compose.yml up

Copy folder with wildcard from docker container to host

While creating a backup script to dump mongodb inside a container, I need to copy the dump folder out of the container, but docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal :
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file
or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem, and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command: https://github.com/docker/docker/issues/7710
You can create the mongo dump files into a folder inside the container and then copy the folder, as detailed on the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle that is to use docker volumes, so you can directly access the files from the container into your host folder without using any other command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
Today I faced the same problem and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
Hope this helps.
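The tar pipe works because the wildcard is expanded by the shell inside the container and the archive travels over stdout, so docker cp never sees the glob. The same mechanics can be checked locally; the /tmp paths below are assumptions for the demo:

```shell
# Local demo of the pattern (no docker needed; /tmp paths are illustration
# only): the inner shell expands the wildcard, tar streams the matched
# files over stdout, and the outer tar unpacks them.
mkdir -p /tmp/wildcard_demo/src /tmp/wildcard_demo/dest
echo data1 > /tmp/wildcard_demo/src/mongo_dump_1
echo data2 > /tmp/wildcard_demo/src/mongo_dump_2

# Same shape as: docker exec <id> /bin/sh -c 'tar -cf - /path/*' | tar -xvf -
sh -c 'cd /tmp/wildcard_demo/src && tar -cf - mongo_dump_*' | tar -xf - -C /tmp/wildcard_demo/dest

ls /tmp/wildcard_demo/dest   # both mongo_dump_1 and mongo_dump_2 arrive
```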