I only need to "unblock" my terminal after docker-compose up... Is there some option or setup for it?
NOTE: my service is using restart: unless-stopped, so I assumed there was no need to use -d on the command line.
...
image: etc/etc
restart: unless-stopped
network_mode: host
...
PS: I need to keep the container running, and I don't want to use & on the command line or have any extra process running. It seems related to "-t"... Is the name of this "blocking mode" "pseudo-TTY"?
All examples say to use the simplest docker-compose up, but that is not enough; it seems you MUST use the option even when the yml already says what you want: sudo docker-compose up --detach.
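For what it's worth, the restart policy only controls what happens after the container's process exits; whether your terminal stays attached is decided on the command line, e.g.:

sudo docker-compose up -d      # same as --detach: start in the background and free the terminal
sudo docker-compose logs -f    # re-attach to the output later if needed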
I have a docker container that is running a mongo database, and then a service that is checking for data stored on it, but first some basic setup has to be done like adding a user and collection. I have a script that does all of that, but as of now I have to run it manually with docker exec -it logging-service_mongo_1 bash docker-entrypoint-initdb.d/test2.sh
Note that the script is mounted as a volume in the container. Is there a way to have the script run once the mongo container is up? I have tried using entrypoint, but had no luck with that. Apologies if this information is lacking; this is my first attempt at using both Docker and MongoDB.
One more thing is that the code I inherited contains this
CMD [ "npm", "run", "start:prod" ]
which I think may have been messing with the entrypoint when I attempted that
If you are using the official MongoDB docker image, take a look at the "Initializing a fresh instance" section of the image documentation:
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.
You can either build a new image based on this one that bakes the script into /docker-entrypoint-initdb.d, or you can mount scripts into that directory using bind mounts (docker run -v $PWD/myscript.sh:/docker-entrypoint-initdb.d/myscript.sh ...)
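As a sketch (assuming the script is the test2.sh from the question and sits next to docker-compose.yml; the service name mongo is also an assumption), the bind-mount variant in compose form would look like:

mongo:
  image: mongo
  volumes:
    # any *.sh / *.js placed here runs only on the first start, i.e. while /data/db is still empty
    - ./test2.sh:/docker-entrypoint-initdb.d/test2.sh:ro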
If you only need to set up the user and password, you can set them while starting the container:
docker run -d --name container_name \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=password \
mongo
If you use docker-compose:

mongodb:
  container_name: mongodb
  image: mongo
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: admin
    MONGO_INITDB_ROOT_PASSWORD: password
  volumes:
    - mongo_data:/data/db   # mongo_data must also be declared under the top-level volumes: key
Regarding the collection, it will be created once you insert data into it.
If you really need to do this after the mongo container starts, I suggest you create another container that runs the setup once it detects that mongo is up, as sketched below.
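A rough sketch of that watcher approach (the host name mongodb matches the compose service above, and the script path is the one from the question): poll until mongod answers a ping, then run the one-off setup.

#!/bin/sh
# wait until mongod responds to a ping, then run the setup script once
until mongo --host mongodb --eval 'db.runCommand({ ping: 1 })' > /dev/null 2>&1; do
  echo "waiting for mongo..."
  sleep 2
done
sh /docker-entrypoint-initdb.d/test2.sh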
I want to be able to run a simple bash script within a container service on the hour using cron. I'm using Alpine Linux via docker-compose with a custom Dockerfile to produce a php-fpm based image, on which I hope to get crond running as well - except I can't.
Executing ps aux | grep cron on the container once built, returns nothing.
From what I understand, the usual Linux startup processes don't exist in Docker containers - fine - so how do I auto-start crond? Its dirs under /etc/periodic/ are created automatically, so I don't understand why the process that consumes those dirs isn't also running.
I tried creating a dedicated service definition within docker-compose.yml, which actually worked but the shell script to be run hourly needs access to a php binary which is running in a different container, so this isn't a viable solution.
If I shell into the container and run rc-service crond start I get this - but it never "finishes":
/var/www/html # rc-service crond start
* WARNING: crond is already starting
#> docker --version
Docker version 19.03.8, build afacb8b7f0
#> docker-compose --version
docker-compose version 1.23.2, build 1110ad01
I need a solution that I can place into my Dockerfile or docker-compose.yml files.
Dockerd is running on Ubuntu Xenial FWIW.
To run a cron job container (Alpine), you need to make sure that the command of your Docker container is
exec crond -f
If you want to add this to a Dockerfile (note that exec is a shell builtin, so it cannot appear in the exec-form CMD):
CMD ["crond", "-f"]
You may also need to update the cron files before running the above command, for example as sketched below.
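On Alpine, one way to register the hourly job is to copy the script into /etc/periodic/hourly in the Dockerfile; the file and directory names below are assumptions, and the script is installed without a .sh extension to be safe with run-parts:

# in the Dockerfile of the php-fpm image
COPY docker/hourly-task /etc/periodic/hourly/hourly-task
RUN chmod +x /etc/periodic/hourly/hourly-task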
Update based on the Dockerfile and compose file
To solve your issue, you need to update your docker-compose file to have two containers: one for cron and one for the web server.
service_php_cron:
  build:
    context: .
    dockerfile: .docker/services/php/Dockerfile.dev
  container_name: base_service_php_cron   # must differ from the web container's name
  command: 'cron_jobs'
  volumes:
    - ./app:/var/www/html/public
  env_file:
    - ./.env
  # Low level container logging
  logging:
    driver: "json-file"
    options:
      max-size: "1m"
      max-file: "5"

service_php:
  build:
    context: .
    dockerfile: .docker/services/php/Dockerfile.dev
  ports:
    - "9000:9000"
  command: 'web_server'
  container_name: base_service_php
  volumes:
    - ./app:/var/www/html/public
  env_file:
    - ./.env
  # Low level container logging
  logging:
    driver: "json-file"
    options:
      max-size: "1m"
      max-file: "5"
You also need to update your Dockerfile to handle multiple commands using a Docker entrypoint.
Add the lines below to your Dockerfile and remove the CMD one:
COPY ./docker-entrypoint.sh /
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
And finally, create the entrypoint script (make sure it has execute permissions):
#!/bin/sh -e
case "$1" in
  web_server)
    YOUR WEB SERVER COMMAND
    ;;
  cron_jobs)
    exec crond -f
    ;;
  *)
    exec "$@"
    ;;
esac
exit 0
you can check this link for more info about entrypoints
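With this entrypoint in place, the command: value from docker-compose ('web_server' or 'cron_jobs') arrives in the script as $1, which is what the case statement switches on. Anything else falls through to exec "$@", so for a quick sanity check you could run, for example:

docker-compose run --rm service_php_cron sh    # drops you into a shell via the fallback branch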
So I'm setting up with Docker swarm.
I am now cool with the docker stack deploy -c docker-compose.yml myapp command which replaces my former docker-compose up.
But one of my services is my DB and I need to run pg_restore inside it.
Previously with compose, I would run:
docker-compose run --rm postgres pg_restore --rest-of-command
How can I do the same with stack deploy?
Unfortunately, the container created with compose is not the same as the one from stack deploy: the first one is called myapp_postgres while the second myapp_postgres.1.zamd6kb6cy4p8mtfha0gn50vh.
I guess I could write something like docker exec 035803286af0, but then I lose all the benefits of the config from docker-compose.yml, which in this case is:
postgres:
  env_file:
    - ./.env
  image: postgres:11.0-alpine
  volumes:
    - "..:/app" # to make the dump accessible to the container
    - "/var/run/postgresql:/var/run/postgresql"
So this solution is not very IaC.
So ain't there a docker service run or something?
Thanks
You can follow the Postgres Docker image docs (Initialization scripts section) and create a *.sh script under /docker-entrypoint-initdb.d which will run pg_restore ... when the Postgres container starts as part of the Docker service.
This isn't a direct answer to your question, but it may achieve your goal of restoring the dump during Postgres initialization.
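A minimal sketch of such a script, assuming the usual POSTGRES_USER / POSTGRES_DB variables of the official image and a dump file name that is only a placeholder (the parent directory is reachable at /app thanks to the ..:/app mount above):

#!/bin/sh
# /docker-entrypoint-initdb.d/10-restore.sh - runs only when the data directory is initialized for the first time
pg_restore --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" --no-owner /app/dump.pgdata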
I want to run MongoDB in a Docker container. I've pulled the image and run it, and it seems to work OK.
But every time I start it the DB is overwritten, so I lose any changes. I want to somehow map the internal container storage onto a folder on my local host.
Should I write a Dockerfile and/or docker-compose.yaml? I suppose this is a simple question, but being new to Docker I can't figure out what to read to get a full understanding.
You do not need to write a Dockerfile and make things complex; just use the official image as shown in the command or compose file below.
You can use either option, docker run or docker-compose, but the path mapping must be correct to keep the data persistent.
Here is the way:
Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir.
Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
mongo docker volume
With docker-compose:

version: "2"
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_DATABASE=pastime
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_password
    volumes:
      - /my/own/datadir:/data/db
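Alternatively, if you would rather let Docker manage where the data lives, a named volume works the same way (a sketch, with mongo_data as an arbitrary volume name):

version: "2"
services:
  mongo:
    image: mongo:latest
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data: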
I want to have a MongoDB service running in Docker in order to serve a Flask app. What I've tried is creating a container using docker-compose.yml:
my_mongo_service:
  image: mongo
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USER}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD}
    - MONGO_INITDB_DATABASE=${MY_DATABASE_NAME}
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
  command: mongod
Imagine we have an .env file like this:
MONGO_ROOT_USER=my_fancy_username
MONGO_ROOT_PASSWORD=my_fancy_password
MY_DATABASE_NAME=my_fancy_database
What I would expect (reading the docs) is that a database matching the MY_DATABASE_NAME value is created, that a user matching MONGO_ROOT_USER is created too, and that I can authenticate with the pair (MONGO_ROOT_USER, MONGO_ROOT_PASSWORD).
OK, I launch my container with docker-compose up and enter it with docker exec -it <container-id> bash. I type mongo in the console, and when I try to authenticate it fails:
> use my_fancy_database
switched to db my_fancy_database
> db.auth('my_fancy_username','my_fancy_password')
Error: Authentication failed.
0
In the log, the error I find is the following:
[...] authentication failed for my_fancy_username on my_fancy_database from client [...] ; UserNotFound: Could not find user my_fancy_username#my_fancy_database
The docker-compose.yml configuration (as posted in the official documentation) is not working. What am I doing wrong?
Thanks in advance.
I don't get it. Are you using environment variables that are not in the environment? It sure looks like it.
If you do echo $MY_DATABASE_NAME in your terminal and see empty output, then there is the answer to your question. You either have to define the variables first with export (or source them from a file), or redefine your docker-compose.yml.
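For the first option, a quick sketch of loading the file into your current shell before running compose (assuming the .env file shown above):

set -a                       # export every variable that gets sourced
. ./.env
set +a
echo "$MY_DATABASE_NAME"     # should now print my_fancy_database
docker-compose up -d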
For the latter, it's best to use the env_file directive:
my_mongo_service:
  image: mongo
  env_file:
    - .env
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
And set your .env like this:
MONGO_INITDB_ROOT_USERNAME=my_fancy_username
MONGO_INITDB_ROOT_PASSWORD=my_fancy_password
MONGO_INITDB_DATABASE=my_fancy_database
Side note: using command: mongod is not necessary; the base image already runs it.
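If you then want to verify the credentials from outside the container, remember that root users created via MONGO_INITDB_ROOT_USERNAME live in the admin database, so authenticate against it (a quick check, not part of the original setup):

docker-compose exec my_mongo_service \
  mongo -u my_fancy_username -p my_fancy_password --authenticationDatabase admin --eval 'db.runCommand({ ping: 1 })'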