I am trying to run a locustfile with the locustio/locust Docker image, but it cannot find the locustfile, even though the file exists in the /locust directory.
~ docker run -p 8089:8089 -v $PWD:/locust locustio/locust locust -f /locust/locustfile.py
Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.
(I'm reposting this question as my own, because the original poster deleted it immediately after getting an answer!)
Remove the extra "locust" from your command, so that it becomes:
docker run ... locustio/locust -f /locust/locustfile.py
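With the flags from the question, the full command then becomes (the image's entrypoint already runs locust, so you only pass its arguments):
docker run -p 8089:8089 -v $PWD:/locust locustio/locust -f /locust/locustfile.py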
I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container:
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output is the container id:
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works
docker start my-postgres
With the original PostgreSQL image, docker run already starts the database. Why doesn't it after building my own image?
It turned out that one of the copied .sql files was failing to execute and, as described in this documentation, that makes the entrypoint script exit. Fixing the SQL solved the issue and the container started normally with docker run.
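If you run into this, the error from the failing init script ends up in the container's log output, so you can see which file failed with, for example (my-database being the container name used above):
docker logs my-database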
I've run docker-compose up in detached mode.
I have then run docker-compose exec container_name command
and got nothing, so I then ran docker-compose exec -it container_name bash to get a shell, and got nothing either.
I have tried these commands, but I get no output from the first one and no shell from the second one. Do you know how I can get output or access the shell?
I am on macOS Catalina 10.15.7. I have tried rebooting, but it's the same.
docker-compose build and docker-compose up are running correctly
Note: docker ps -a gives me the same container as docker-compose ps, but without the _1 at the end (react-wagtail-blog_web_cont with docker ps -a and react-wagtail-blog_web_cont_1 with docker-compose ps).
I can access my react-wagtail-blog_web_cont container with docker exec -it react-wagtail-blog_web_cont bash...
Thank you
If you just want to see the normal log output, you can use docker logs <container>, where <container> is the container name or hash. To find that, just run docker ps.
Also, if you want to see the logs in real time as they come, use docker logs -f <container>.
See docs
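Since these containers are managed by docker-compose, you can also follow the logs through Compose itself using the service name from docker-compose.yml (here I'm assuming the service is called web_cont, going by the container name react-wagtail-blog_web_cont_1):
docker-compose logs -f web_cont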
When testing an API using Locust in distributed mode without the UI in Docker, distribution.csv and requests.csv are generated, but failures.csv and exception.csv are not, even though requests.csv shows failures, as given below.
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"POST","/api/something/something",197009,56,470,559,78,156714,1,436.31
Can you please help?
The problem is that the files need to be written to a folder that Locust has permission to write to, and one that is in a volume mounted to your host. If you add a mounted folder before the file name, it should work. For example:
Dockerfile:
# Set base image
FROM locustio/locust
ADD locustfile.py locustfile.py
Docker build command:
docker build -t mykey/myimage:1.0 .
Docker run command (shown for Windows; on Linux, replace %CD% with $PWD):
docker run --volume "%CD%:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0
The files will now be written to the same folder where locustfile.py is located.
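After the run finishes, you can check on the host that the CSV files were created with the prefix given to --csv, for example:
dir output*   (Windows)
ls output*    (Linux)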
There is a Docker container with a Postgres server. Once Postgres is stopped or crashes (it doesn't matter which), I need to check some environment variables and the state of a few files.
By default, the container stops after the application finishes.
I know there is an option to change this default behavior in the Dockerfile, but I can no longer find it.
If somebody knows it, please give me a Dockerfile example like this:
FROM something
RUN something ...
ENTRYPOINT [something]
You can simply run a non-exiting process at the end of the entrypoint to keep the container alive, even if the main process exits.
For example, use:
tail -f 'some log file'
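A rough sketch of how that could look, assuming the official postgres image (keep-alive.sh is just a placeholder name, not a tested setup):
FROM postgres
COPY keep-alive.sh /usr/local/bin/keep-alive.sh
RUN chmod +x /usr/local/bin/keep-alive.sh
ENTRYPOINT ["/usr/local/bin/keep-alive.sh"]
where keep-alive.sh runs the image's original entrypoint and then keeps the container up:
#!/bin/sh
# run the image's normal entrypoint; ignore its exit status
docker-entrypoint.sh postgres || true
# keep the container running so env vars and files can still be inspected
tail -f /dev/null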
There isn't an "option" to keep a container running when the main process has stopped or died. You can run something different in the container while debugging the actual startup scripts. Sometimes you need to override an entrypoint to do this.
docker run -ti $IMAGE /bin/sh
docker run -ti --entrypoint=/bin/sh $IMAGE
If the main process will not stay running when you docker start the existing container, then you won't be able to use that container interactively; otherwise you could:
docker start $CID
docker exec -ti $CID sh
For getting files from an existing container, you can docker cp anything you need from the stopped container.
docker cp $CID:/a/path /some/local/path
You can also docker export a tar archive of the complete container.
docker export $CID -o $CID.tar
tar -tvf $CID.tar | grep afile
The environment Docker injects can be seen with docker inspect, but this won't give you anything the process has added to the environment.
docker inspect $CID --format '{{ json .Config.Env }}'
In general, Docker requires a process to keep running in the foreground. Otherwise, it assumes that the application has stopped and shuts the container down. Below, I outline a few ways that I'm aware of which can prevent a container from stopping:
Use a process manager such as runit or systemd to run a process inside a container:
As an example, here you can find a Redhat article about running systemd within a docker container.
A few possible approaches for debugging purposes:
a) Add an artificial sleep or pause to the entrypoint:
For example, in bash, you can use this to create an infinite pause:
while true; do sleep 1; done
b) For a fast workaround, one can run the tail command in the container:
As an example, with the command below, we start a new container in detached/background mode (-d) and execute the tail -f /dev/null command inside it. As a result, the container will run forever.
docker run -d ubuntu:18.04 tail -f /dev/null
And if the main process has crashed or exited, you can still look up the ENV variables or check files with exec and basic commands like cd and ls. A few relevant commands for that:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' name-of-container
docker exec -it name-of-container bash
I am creating a backup script to dump MongoDB inside a container, and I need to copy the dump folder out of the container. docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal:
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem, and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command: https://github.com/docker/docker/issues/7710
You can create the mongo dump files into a folder inside the container and then copy the folder, as detailed on the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle it is to use Docker volumes, so you can access the files from the container directly in your host folder without any extra command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
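As a rough sketch (the host path and dump name are just placeholders), you could start the container with a bind mount and write the dump straight into it:
docker run -d --name mongodb -v "$PWD/backups:/backups" mongo
docker exec mongodb mongodump --out /backups/mongo_dump
The dump then shows up under ./backups on the host, with no docker cp needed.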
Today I faced the same problem, and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
Hope this helps.