Properly escaping quotes when running a command in kubernetes - mongodb

I want to run a mongodb command in Kubernetes deployment.
In my yaml file, I want to run the following:
command: ["mongo --port ${MONGODBCACHE_PORT} --host ${MONGODBCACHE_BIND_IP} \
--eval "rs.initiate('{ _id: \"test\", members: [ { _id: 0, host: \"${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}\" },]}')" && \
./mycommand "]
I checked that the environment variables are present correctly. How do I escape the characters when running this command?

Keep just the shell in command and put the rest in the args field, which is an array. For example:
command: ["/bin/bash", "-c"]
args:
- mongo
- --port
- ${MONGODBCACHE_PORT}
- --host
- ${MONGODBCACHE_BIND_IP}
- --eval
- rs.initiate('{ _id: "test", members: [ { _id: 0, host: "${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}" } ] }') && ./mycommand
Hope this helps.

I got it working with a slightly modified configuration in the manifest:
command: ["/bin/bash", "-c"]
args:
  - /usr/bin/mysql -u root -p$DB_ROOT_PASS -h $DB_HOST -e "CREATE USER IF NOT EXISTS $DB_USER@'%' IDENTIFIED BY '$DB_PASS';"
Although this uses the mysql CLI client, it should work for any other command. The environment variables must exist, of course.
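Combining both snippets, the original mongo command could be written like this (a sketch, not tested; keeping the --eval script in double quotes and the JavaScript strings in single quotes avoids most of the backslash escaping):

command: ["/bin/bash", "-c"]
args:
  - >
    mongo --port ${MONGODBCACHE_PORT} --host ${MONGODBCACHE_BIND_IP}
    --eval "rs.initiate({ _id: 'test', members: [ { _id: 0, host: '${MONGODBCACHE_BIND_IP}:${MONGODBCACHE_BIND_IP}' } ] })" &&
    ./mycommand

The folded scalar (>) joins the lines into one string, so bash -c receives a single argument and the ${...} variables are expanded by the shell inside the container.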

Related

Docker 'backup' container not seeing the database container (postgres)

I have a docker-compose.yml and associated Dockerfiles that give me a simple dev and prod environment for an nginx-uvicorn-django-postgres stack. I want to add an optional 'backup' container that just runs cron to periodically connect to the 'postgres' container.
# backup container - derived from [this blog][1]
ARG DOCKER_REPO
ARG ALPINE_DOCKER_IMAGE # ALPINE
ARG ALPINE_DOCKER_TAG # LATEST
FROM ${DOCKER_REPO}${ALPINE_DOCKER_IMAGE}:${ALPINE_DOCKER_TAG}
ARG DB_PASSWORD
ARG DB_HOST # "db"
ARG DB_PORT # "5432"
ARG DB_NAME # "ken"
ARG DB_USERNAME # "postgres"
ENV PGPASSWORD=${DB_PASSWORD} HOST=${DB_HOST} PORT=${DB_PORT} PSQL_DB_NAME=${DB_NAME} \
USERNAME=${DB_USERNAME}
RUN printenv
RUN mkdir /output && \
mkdir /output/backups && \
mkdir /scripts && \
chmod a+x /scripts
COPY ./scripts/ /scripts/
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/15min/${DB_NAME}_15
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/daily/${DB_NAME}_day
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/weekly/${DB_NAME}_week
COPY ./scripts/in_docker/pg_dump.sh /etc/periodic/monthly/${DB_NAME}_month
RUN apk update && \
apk upgrade && \
apk add --no-cache postgresql-client && \
chmod a+x /etc/periodic/15min/${DB_NAME}_15 && \
chmod a+x /etc/periodic/daily/${DB_NAME}_day && \
chmod a+x /etc/periodic/weekly/${DB_NAME}_week && \
chmod a+x /etc/periodic/monthly/${DB_NAME}_month
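The pg_dump.sh that gets copied into the periodic directories is not shown in the question; a minimal version, assuming it relies only on the ENV values set above, might look roughly like this:

#!/bin/sh
# Dump the database named in PSQL_DB_NAME to a timestamped file under /output/backups.
# PGPASSWORD is already exported by the Dockerfile, so pg_dump should not prompt.
pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME" \
    > "/output/backups/${PSQL_DB_NAME}_$(date +%Y%m%d_%H%M%S).sql"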
The django container is derived from the official Python image and connects (through psycopg2) using values (as ENV variables) for host, dbname, username, password and port. The 'backup' container has these same values, but I get this error from the command line:
> pg_dump --host="$HOST" --port="$PORT" --username="$USERNAME" --dbname="$PSQL_DB_NAME"
pg_dump: error: could not translate host name "db" to address: Name does not resolve
Is Alpine missing something relevant that is present in the official Python image?
Edit:
I am running a system of shell scripts that take care of housekeeping for different configurations, so
> ./ken.sh dev_server
will set up the environment variables and then run docker-compose for the project and the containers.
docker-compose.yml doesn't explicitly create a network.
I don't know what "db" should resolve to beyond just 'db://' - it's what the django container gets, and that container is able to resolve a connection to the 'db' service.
services:
  db:
    image: ${DOCKER_REPO}${DB_DOCKER_IMAGE}:${DB_DOCKER_TAG} # postgres:14
    container_name: ${PROJECT_NAME}_db
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - PGPASSWORD
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    command: ["postgres", "-c", "log_statement=all"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -h db"]
      interval: 2s
      timeout: 5s
      retries: 25
This is the 'dev_server' script run by the parent ken.sh script
function dev_server() {
    trap cleanup EXIT
    wait_and_launch_browser &
    docker-compose -p "${PROJECT_NAME}" up -d --build db nginx web pgadmin backup
    echo "Generate static files and copy them into static and file volumes."
    source ./scripts/generate_static_files.sh
    docker-compose -p "${PROJECT_NAME}" logs -f web nginx backup
}
Update: Worked through "Reasons why docker containers can't talk to each other" and found that all the containers are on a ken_default network, from 170.20.0.2 to 170.20.0.6.
I can docker exec ken_backup backup ken_db -c2, but not from db to backup, because the db container doesn't include ping.
From a shell on backup I cannot ping ken_db - ken_db doesn't resolve, nor does 'db'.
I can't make much of that and I'm not sure what to try next.
You are running the backup container as a separate service.
docker-compose creates a separate default network for each docker-compose.yml file (project).
You need to get the db service and your backup container onto the same Docker network.
See this post
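As a sketch (the network name is illustrative, not from the question), declaring one explicit network and attaching both services to it would look something like this:

services:
  db:
    # ... existing db configuration ...
    networks:
      - backend
  backup:
    # ... existing backup configuration ...
    networks:
      - backend
networks:
  backend:
    driver: bridge

With both services on the same user-defined network, Docker's embedded DNS lets the backup container resolve the service name db.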

Bash script to create a replica set

I am working on a MongoDB replica set in AWS; my goal is to set up the replica set at run time with a bash script.
My bash script looks like this:
mongo mongodb://10.0.1.100 --eval "rs.initiate( { _id : 'rs0', members: [{ _id: 0, host: '10.0.1.100:27017' }]})"
mongo mongodb://10.0.1.100 --eval "rs.add( '10.0.2.100:27017' )"
mongo mongodb://10.0.1.100 --eval "rs.add( '10.0.3.100:27017' )"
mongo mongodb://10.0.1.100 --eval "db.isMaster().primary"
mongo mongodb://10.0.1.100 --eval "rs.slaveOk()"
But when I log in to my instance and run rs.status(), I get the error that no config could be found.
So I tried a different way. I accessed my fresh mongo instance, and through the mongo command line I inserted the config variable like this:
var config={_id:"rs0",members:[{_id:0,host:"10.0.1.100:27017"}, {_id:1,host:"10.0.2.100:27017"}, {_id:2,host:"10.0.3.100:27017"}]};
> rs.initiate(config);
If I run rs.status(), it works.
I would like to run the same commands from a Linux bash script to initiate the config, but I can't find a solution. Any help please?
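One pattern worth trying (a sketch, not from the original thread) is to send the whole config in a single --eval call, or to pipe a here-document into the shell, so the initiate happens exactly as it did interactively:

mongo mongodb://10.0.1.100 --eval "rs.initiate({ _id: 'rs0', members: [ { _id: 0, host: '10.0.1.100:27017' }, { _id: 1, host: '10.0.2.100:27017' }, { _id: 2, host: '10.0.3.100:27017' } ] })"

# or, equivalently, with a here-document:
mongo mongodb://10.0.1.100 <<'EOF'
var config = { _id: "rs0", members: [
    { _id: 0, host: "10.0.1.100:27017" },
    { _id: 1, host: "10.0.2.100:27017" },
    { _id: 2, host: "10.0.3.100:27017" } ] };
rs.initiate(config);
EOF

Both forms send the same config object that worked interactively; the here-document simply avoids nesting quotes.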

Mongo is pending on password despite the --password option

Context
Mongod version v3.2.11
MongoDB shell version: 3.2.11
Debian stretch
ppc64 architecture
Ansible 2.7.6
Issue
I am executing a mongo command through Ansible using the mongo shell.
Here are the lines I use in my task:
- name: Add the shard to the mongos
  shell: /usr/bin/mongo localhost:{{ mongos_port }}/admin -u admin -p {{ mongo_admin_password }} /tmp/shard_init.js
  delegate_to: '{{ item }}'
  with_items: "{{ groups['mongos_servers'] }}"
But the command hangs at:
TASK [Add the shard to the mongos] ************************************************
When I execute the command line on the remote machine, despite the -p option being set, it still asks me to enter the password:
$ /usr/bin/mongo localhost:2700/admin -u admin -p "XXXX" /tmp/shard_init.js
MongoDB shell version: 3.2.11
Enter password:
Is there a way to execute a mongo command through Ansible against a database that requires authentication?
Try the mysql CLI style:
/usr/bin/mongo localhost:2700/admin -u admin -pXXXX /tmp/shard_init.js
where the password is attached directly to the -p option.
As the usage message shows, this isn't the documented way to do it, but in this case the command seems to work.
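Applied back to the original task, that might look like this (a sketch; single-quoting the templated password is an assumption, to keep special characters away from the shell):

- name: Add the shard to the mongos
  shell: /usr/bin/mongo localhost:{{ mongos_port }}/admin -u admin -p'{{ mongo_admin_password }}' /tmp/shard_init.js
  delegate_to: '{{ item }}'
  with_items: "{{ groups['mongos_servers'] }}"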

Mongodump of a Docker MongoDB instance to a single timestamp-named file

I'm doing a backup dump of a MongoDB Docker instance (mongo_db) via docker-compose (thanks to Matt for that snippet so far):
version: "3"
services:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/backup:/backup'
command: |
mongodump --host mongo_db --out /backup/ --db specific
Executing the command
$ docker-compose run mongo_db_backup
gives me all collections of specific db and stores them in /backup/specific.
Is it possible to get just one single (compressed) dump file, named with the current time?
I'm using --out to get the files in the folder. The docs are saying I cannot use --archive together with --out.
Furthermore, I need to use an env variable to set the archive output. Something like this:
mongo_db_backup:
  image: 'mongo:3.4'
  volumes:
    - '/backup:/backup'
  command:
    - sh
    - -c
    - |
      mongodump
      --host mongo_db
      --gzip
      --db specific
      $$(
      if [ $TYPE = "hour" ]
      then echo "--archive=/backup/hour/$$(date +"%H").gz"
      elif [ $TYPE = "day" ]
      then echo --archive=/backup/day/$$(date +"%d").gz
      fi
      )
Executing with $ docker-compose run -e TYPE=day mongo_db_backup
You can change your compose file to the one below:
version: "3"
services:
mongo_db_backup:
image: 'mongo:3.4'
volumes:
- '/backup:/backup'
command: sh -c "mongodump --host mongo_db --gzip --archive=/backup/$$(date +'%Y%m%d_%H%M%S') --db $${DB:=specific}"
Now if you want to change the DB, you can run it like below:
docker-compose run -e DB=abc mongo_db_backup
If you want to use it like docker-compose run mongo_db_backup abc, you would need to create an entrypoint.sh script and handle the arguments in it (see the sketch below), so it is easier to do it using environment variables.
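For completeness, such an entrypoint.sh (a sketch, not part of the original answer; default_db is an illustrative fallback name) could look roughly like this:

#!/bin/sh
# Use the first positional argument as the database name, falling back to a default.
DB="${1:-default_db}"
exec mongodump --host mongo_db --gzip \
    --archive="/backup/$(date +'%Y%m%d_%H%M%S').gz" --db "$DB"

It would have to be copied into the image (or mounted) and wired up via entrypoint: in the compose file, which is exactly the extra plumbing the environment-variable approach avoids.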
Edit-1 - Default behavior on missing environment variable
If you need to change the command based on whether the environment variable is specified or not, you can change the command to the one below:
command: sh -c "mongodump --host mongo_db --gzip --archive=/backup/$$(date +'%Y%m%d_%H%M%S') $$(if [ -z $DB ]; then echo '--db default_db'; else echo --collection $DB; fi)"
Edit-2: Multi-line command in compose with if/else
To solve the issue of using multi-line commands in compose, you need to use a combination of the array form and a multi-line block:
command:
  - sh
  - -c
  - |
    multi line shell script
Below is the command I worked out for your update
command:
  - bash
  - -c
  - |
    TYPE=$${TYPE:=day}
    if [ ! -d /backup/hour ]; then mkdir /backup/hour; fi
    if [ ! -d /backup/day ]; then mkdir /backup/day; fi
    mongodump --host mongo_db --gzip \
      --db test \
      $$( \
        if [ "$$TYPE" == "hour" ]; then \
          echo "--archive=/backup/hour/$$(date +'%H').gz"; \
        elif [ "$$TYPE" == "day" ]; then \
          echo "--archive=/backup/day/$$(date +'%d').gz"; \
        fi \
      )
Since docker-compose processes variables, we need to escape each $ as $$, so $TYPE becomes $$TYPE. Also, mongodump is a single command, so if you split it across multiple lines you need to use \ for line continuation.
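Usage then follows the same pattern as before, selecting the bucket through the TYPE variable (day is the default set in the script above):

docker-compose run mongo_db_backup
docker-compose run -e TYPE=hour mongo_db_backup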

Docker mongodb config file

There is a way to link the /data/db directory of the container to your localhost, but I cannot find anything about the configuration. How do I link /etc/mongo.conf to something on my local file system? Or maybe some other approach is used. Please share your experience.
I'm using the official MongoDB 3.4 Docker image. Since mongod doesn't read a config file by default, this is how I start the mongod service:
docker run -d --name mongodb-test -p 37017:27017 \
-v /home/sa/data/mongod.conf:/etc/mongod.conf \
-v /home/sa/data/db:/data/db mongo --config /etc/mongod.conf
Removing -d will show you the initialization of the container.
Using a docker-compose.yml:
version: '3'
services:
  mongodb_server:
    container_name: mongodb_server
    image: mongo:3.4
    env_file: './dev.env'
    command:
      - '--auth'
      - '-f'
      - '/etc/mongod.conf'
    volumes:
      - '/home/sa/data/mongod.conf:/etc/mongod.conf'
      - '/home/sa/data/db:/data/db'
    ports:
      - '37017:27017'
then
docker-compose up
When you run a docker container using this:
docker run -d -v /var/lib/mongo:/data/db \
-v /home/user/mongo.conf:/etc/mongo.conf -p port:port image_name
/var/lib/mongo is the host's mongo folder.
/data/db is a folder in the docker container.
I merely wanted to know the command used to specify a config for mongo through the docker run command.
First you want to specify the volume flag with -v to map a file or directory from the host to the container. So if you had a config file located at /home/ubuntu/ and wanted to place it within the /etc/ folder of the container, you would specify it with the following:
-v /home/ubuntu/mongod.conf:/etc/mongod.conf
Then specify the command for mongo to read the config file after the image like so:
mongo -f /etc/mongod.conf
If you put it all together, you'll get something like this:
docker run -d --net="host" --name mongo-host -v /home/ubuntu/mongod.conf:/etc/mongod.conf mongo -f /etc/mongod.conf
For some reason I have to use MongoDB version 3.0.1.
Now: 2016-09-13 17:42:06
This is what I found:
#first step: run mongo 3.0.1 without conf
docker run --name testmongo -p 27017:27017 -d mongo:3.0.1
#second step:
docker exec -it testmongo cat /entrypoint.sh
#!/bin/bash
set -e
if [ "${1:0:1}" = '-' ]; then
    set -- mongod "$@"
fi
if [ "$1" = 'mongod' ]; then
    chown -R mongodb /data/db
    numa='numactl --interleave=all'
    if $numa true &> /dev/null; then
        set -- $numa "$@"
    fi
    exec gosu mongodb "$@"
fi
exec "$@"
I found that there are two ways to start a mongod service.
What I tried:
docker run --name mongo -d -v your/host/dir:/container/dir mongo:3.0.1 -f /container/dir/mongod.conf
The last -f is a mongod parameter; you can also use --config instead.
Make sure the path your/host/dir exists and that the file mongod.conf is in it.
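For reference, a minimal mongod.conf in the YAML format these versions understand might look like this (values are illustrative; dbPath matches the /data/db volume used in the examples above):

# minimal illustrative mongod.conf
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 0.0.0.0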