mongo: disconnect after connecting and executing through the shell in docker - mongodb

I want to close the mongo shell after executing the following in a docker command:
#!/bin/bash
docker run -it --link sonams-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
if [ $? -eq 0 ]; then \
echo "connected to mongo successful"; \
else \
echo "mongo connection NOT successful"; \
fi; \
When it connects, it drops into the mongo shell prompt. Is there a way to pass a shell command so that it exits right in, or right after, the docker command?
thanks

Usually (it depends on the base image you're using, of course) you don't need to invoke "sh -c". Also, the -it combination is usually what makes the shell open and wait for input. Try changing your command a little, as below, without -it and sh -c:
docker run --link sonams-mongo:mongo --rm mongo mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"
If that doesn't help, try this:
echo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test" | docker run --link sonams-mongo:mongo --rm mongo mongo

Related

How to combine exec commands in shell script

I am running PostgreSQL on Docker.
In order to access the container I run
winpty docker exec -it postgres_db bash
Now that I am inside the docker container, I run this to connect to the Postgres server as user 'postgres':
psql -U postgres
Then what I want to do is create a database
create database test;
How can I do this sequentially in a script?
I want something like this:
#!/bin/bash
winpty docker run -p 8005:5432 --name postgres_db -e POSTGRES_PASSWORD=password -d postgres
sleep 5
winpty docker exec -it postgres_db bash -c "psql -U postgres" [-c "create database test;"]
create database cannot be executed if I'm not inside "psql -U postgres". Obviously the last part, -c "create database test;", is wrong; it is only there to show what I want to do.
winpty docker exec -it postgres_db bash -c "psql -U postgres -c 'CREATE DATABASE test;'"
Both bash -c and psql -c take a single shell word as their argument. A single- or double-quoted string is one word. You need to nest the quotes, just as the -c commands are nested.
Alternative:
winpty docker exec -it postgres_db bash -c "psql -U postgres -c \"CREATE DATABASE test;\""
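If you need more than one statement, the nested quoting gets unwieldy quickly. One way around it, sketched here with the same container and user as in the question, is to feed psql from a here-document over stdin (-i without -t, so no TTY is allocated and winpty should not be needed):
# Sketch: everything between the SQL markers is sent to psql's stdin.
docker exec -i postgres_db psql -U postgres <<'SQL'
CREATE DATABASE test;
-- further statements can go here
SQL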

How to pass arguments to spark-submit using docker

I have a Docker cluster running on my laptop, with a master and three workers. I can launch the typical wordcount example by entering the master's IP, using a command like this:
bash-4.3# spark/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.scala.WordCount --master spark://spark-master:7077 /opt/spark-apps/learning-spark-mini-example_2.11-0.0.1.jar /opt/spark-data/README.md /opt/spark-data/output-5
I can see how the files have been generated inside output-5
but when I try to launch the process from outside, using the command:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS -e APP_ARGS="/opt/spark-data/README.md /opt/spark-data/output-5" spark-submit:2.4.0
Where
echo $SPARK_APPLICATION_JAR_LOCATION
/opt/spark-apps/learning-spark-mini-example_2.11-0.0.1.jar
echo $SPARK_APPLICATION_MAIN_CLASS
com.oreilly.learningsparkexamples.mini.scala.WordCount
When I open the page of the worker where the task is attempted, I can see that on line 11, the very first one, where the path of the first argument is read, I get an error like this:
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at com.oreilly.learningsparkexamples.mini.scala.WordCount$.main(WordCount.scala:11)
Clearly, position zero is not picking up the path of the first parameter, the input file I want to run the wordcount on.
The question is: why is docker not using the arguments passed through -e APP_ARGS="/opt/spark-data/README.md /opt/spark-data/output-5"?
I already tried to run the job the traditional way, logging in to the driver spark-master and running the spark-submit command, but when I try to run the task with docker, it doesn't work.
It must be trivial, but I still don't have a clue. Can anybody help me?
SOLVED
I have to use a command like this:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS --env SPARK_APPLICATION_ARGS="/opt/spark-data/README.md /opt/spark-data/output-6" spark-submit:2.4.0
Summing up, I had to change -e APP_ARGS to --env SPARK_APPLICATION_ARGS.
Since -e and --env are equivalent in Docker, the fix was the variable name rather than the flag: the image only picks up SPARK_APPLICATION_ARGS.
This is the command that solves my problem:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS --env SPARK_APPLICATION_ARGS="/opt/spark-data/README.md /opt/spark-data/output-6" spark-submit:2.4.0
I have to use --env SPARK_APPLICATION_ARGS="args1 args2 argsN" instead of -e APP_ARGS="args1 args2 argsN".
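The reason the variable name matters is that images like this one usually start from a small entrypoint script that turns the SPARK_APPLICATION_* variables into a spark-submit call; anything the script doesn't read is silently ignored, which is what happened to APP_ARGS. The following is only a rough, hypothetical sketch of such a script, not the actual file from the spark-submit:2.4.0 image (SPARK_MASTER_URL and the /spark path are assumptions):
#!/bin/bash
# Hypothetical entrypoint sketch: only variables read here end up on the
# spark-submit command line.
/spark/bin/spark-submit \
  --class "${SPARK_APPLICATION_MAIN_CLASS}" \
  --master "${SPARK_MASTER_URL:-spark://spark-master:7077}" \
  "${SPARK_APPLICATION_JAR_LOCATION}" \
  ${SPARK_APPLICATION_ARGS}   # left unquoted so "arg1 arg2" splits into separate arguments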

Running a script from a mongodb docker-container

I have a script, restore.sh, that restores the database:
mongorestore --port 27017 --db myapp `pwd`/db-dump/myapp
I want to run this in a short lived docker-container using the image mvertes/alpine-mongo.
To run a short-lived container, the --rm flag is used:
docker run --rm --name mongo -p 27017:27017 \
-v /data/db:/data/db \
mvertes/alpine-mongo
But how do I execute my script in the same command?
Check out the docker run reference:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You can pass in the command you wish to execute. In your case, this could be the restore script. You must consider two things, though.
The script is not part of the container, so you need to mount it into the container.
Specifying a command overwrites the CMD directive in the Dockerfile.
If you look at the Dockerfile, you see this as its last line:
CMD [ "mongod" ]
This means the default command that the container executes is mongod. When you specify a command for docker run, you "replace" this with the command you pass in. In your case: Passing in the restore script will overwrite mongod, which means Mongo never starts and the script will fail.
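Just to illustrate the override (not something you would actually want here), the following sketch replaces the default command entirely; it prints the client version and exits without ever starting mongod:
docker run --rm mvertes/alpine-mongo mongo --version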
You have two options:
Start one container with the database and another one with the restore script.
Try to chain the commands.
Since you want to run this in a short-lived container, option 2 might be better suited for you. Just remember to start mongod with the --fork flag to run it in daemon mode.
$ docker run --rm --name mongo -p 27017:27017 \
-v /data/db:/data/db \
-v "$(pwd)":/mnt/pwd \
mvertes/alpine-mongo "mongod --fork && /mnt/pwd/restore.sh"
Hopefully, this is all it takes to solve your problem.
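One caveat with that last command: unless the image's entrypoint wraps the argument in a shell, the && chaining is not interpreted, so spelling out sh -c makes it explicit; also, mongod normally refuses to --fork without a --logpath. A sketch along those lines, untested:
docker run --rm --name mongo -p 27017:27017 \
  -v /data/db:/data/db \
  -v "$(pwd)":/mnt/pwd \
  mvertes/alpine-mongo \
  sh -c "mongod --fork --logpath /var/log/mongod.log && /mnt/pwd/restore.sh"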

docker-machine ssh command for mongodump

Setup for the problem:
Create a data volume container
$ docker create --name dbdata -v /dbdata mongo /bin/true
Start mongo in a container linked to the data volume container
$ docker run -d --name mongo --volumes-from dbdata mongo
Verify you can connect to mongo using the mongo client
$ docker run -it --link mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
The problem:
The docker-machine ssh command takes a host and a command argument to execute on that host. I'd like to execute the following mongodump command, which works once I ssh into the docker host:
$ docker-machine ssh demo
root@demo:~# docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
2015-09-15T16:34:02.676+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T16:34:02.678+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T16:34:02.678+0000 done dumping test.samples (1 document)
2015-09-15T16:34:02.679+0000 writing test.system.indexes to /backup/test/system.indexes.bson
However, using the docker-machine ssh command to execute the above command in a single step doesn't work for me:
$ docker-machine ssh demo -- docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
SSH cmd error!
command: docker run --rm --link mongo:mongo -v /Users/tony:/backup mongo bash -c mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR
err : exit status 1
output : 2015-09-15T16:53:07.717+0000 Failed: error connecting to db server: no reachable servers
So if the container running the mongodump command can't connect to the mongo container, I figure there's probably an issue with --host $MONGO_PORT_27017_TCP_ADDR (it should be passed as-is into the container, so is premature expansion causing an empty string?), but I'm a bit stumped trying to get it right. Any ideas are appreciated.
Update: I'm one step closer. The following appears to execute the command correctly, although the data isn't written to the system and the session hangs:
$ docker-machine ssh demo -- $(docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR')
2015-09-15T18:02:03.347+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T18:02:03.349+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T18:02:03.349+0000 done dumping test.samples (1 document)
2015-09-15T18:02:03.350+0000 writing test.system.indexes to /backup/test/system.indexes.bson
The question asked for a solution based on docker-machine ssh, but since no one responded, I'll answer the question myself with what is a better solution anyway.
As Nathan LeClaire (@upthecyberpunks) suggested to me over Twitter, the better solution is to avoid the hassle altogether and simply run a container to execute the mongodump command.
$ docker run \
--rm \
--link mongo:mongo \
-v /root:/backup mongo bash \
-c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
Not technically required for the answer, but the resulting test db backup can then be transferred from the docker host machine to your current directory via docker-machine scp:
$ docker-machine scp -r dev:/root/test .
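For anyone who still wants the single docker-machine ssh invocation from the question: the error output above ("command: docker run ... bash -c mongodump --out ...") shows that the inner single quotes were stripped before the command reached the remote host, so bash -c only received the word mongodump and the dump ran without a --host at all. Nesting the quotes so they survive the trip, and escaping the dollar sign so nothing expands the variable before the container's shell does, should avoid that; a sketch, untested:
# The outer double quotes are consumed locally; \$ keeps the variable from being
# expanded by the local shell, and the inner single quotes survive to the remote host.
docker-machine ssh demo "docker run --rm --link mongo:mongo -v /root:/backup mongo bash -c 'mongodump --out /backup --host \$MONGO_PORT_27017_TCP_ADDR'"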
Since I cannot add a comment to the original nice answer, I'll just add a little explanation here: $MONGO_PORT_27017_TCP_ADDR should be the IP of our machine. For example, the virtual machine's IP in my VirtualBox is 100.100.100.10, so the last line should be:
-c 'mongodump --out /backup --host 100.100.100.10' or
-c 'mongodump --out /backup --host 100.100.100.10:27017'.
If the host field is not added, chances are we will encounter an error like:
*** Failed: error connecting to db server: no reachable servers.
And thanks again to the original answer ^_^.
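If the link-generated variable is not available for some reason, the mongo container's address can also be looked up on the docker host first; a sketch, assuming the container is named mongo and sits on the default bridge network:
# Ask the docker daemon for the container's bridge IP, then hand it to mongodump.
MONGO_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mongo)
docker run --rm --link mongo:mongo -v /root:/backup mongo \
  bash -c "mongodump --out /backup --host ${MONGO_IP}"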

Why can't you start postgres in docker using "service postgres start"?

All the tutorials point to running postgres in a format like:
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the entry point started, in the background, and then report success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started and detached from the controlling terminal.
For Docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate the application when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said.
Using CMD in the Dockerfile:
FROM postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID