Setup for the problem:
Create a data volume container
$ docker create --name dbdata -v /dbdata mongo /bin/true
Start mongo in a container linked to the data volume container
$ docker run -d --name mongo --volumes-from dbdata mongo
Verify you can connect to mongo using the mongo client
$ docker run -it --link mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
The problem:
The docker-machine ssh command takes a host and a command argument to execute on the host. I'd like to execute the following mongodump command, which works once I ssh into the docker host:
$ docker-machine ssh demo
root@demo:~# docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
2015-09-15T16:34:02.676+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T16:34:02.678+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T16:34:02.678+0000 done dumping test.samples (1 document)
2015-09-15T16:34:02.679+0000 writing test.system.indexes to /backup/test/system.indexes.bson
However, using the docker-machine ssh command to execute the above command in a single step doesn't work for me:
$ docker-machine ssh demo -- docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
SSH cmd error!
command: docker run --rm --link mongo:mongo -v /Users/tony:/backup mongo bash -c mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR
err : exit status 1
output : 2015-09-15T16:53:07.717+0000 Failed: error connecting to db server: no reachable servers
So if the container running the mongodump command can't connect to the mongo container, I figure there's probably an issue with --host $MONGO_PORT_27017_TCP_ADDR (it should be passed as-is into the container, so is premature expansion producing an empty string?), but I'm a bit stumped trying to get it right. Any ideas are appreciated.
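For illustration, here is a sketch of how the quoting could be arranged so that both variables survive the local shell and are only expanded remotely (untested here; the accepted approach below sidesteps the issue entirely):
$ docker-machine ssh demo "docker run --rm --link mongo:mongo -v \$HOME:/backup mongo bash -c 'mongodump --out /backup --host \$MONGO_PORT_27017_TCP_ADDR'"
The outer double quotes keep the inner single quotes intact on the way to the host, and the escaped \$ defers expansion: \$HOME is expanded by the docker host's shell, and \$MONGO_PORT_27017_TCP_ADDR only inside the container, where --link defines it.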
Update: I'm one step closer. The following appears to execute the command correctly, although the data isn't written to the system and the session hangs:
$ docker-machine ssh demo -- $(docker run --rm --link mongo:mongo -v $HOME:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR')
2015-09-15T18:02:03.347+0000 writing test.samples to /backup/test/samples.bson
2015-09-15T18:02:03.349+0000 writing test.samples metadata to /backup/test/samples.metadata.json
2015-09-15T18:02:03.349+0000 done dumping test.samples (1 document)
2015-09-15T18:02:03.350+0000 writing test.system.indexes to /backup/test/system.indexes.bson
The question asked for a solution based on docker-machine ssh, but since no one responded, I'll answer the question myself with what is a better solution anyway.
As suggested to me by Nathan LeClaire (@upthecyberpunks) over Twitter, the better solution is to avoid the hassle altogether and simply run a container to execute the mongodump command.
$ docker run \
--rm \
--link mongo:mongo \
-v /root:/backup mongo bash \
-c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
Not technically required for the answer, but the resulting test db backup file can then be transferred from the docker host machine to your current directory via docker-machine scp:
$ docker-machine scp -r dev:/root/test .
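A related convenience, assuming docker-machine manages the host: pointing the local Docker client at the machine lets you run the same backup container without ssh at all (a sketch; "demo" is the machine name from the question):
$ eval $(docker-machine env demo)
$ docker run --rm --link mongo:mongo -v /root:/backup mongo bash -c 'mongodump --out /backup --host $MONGO_PORT_27017_TCP_ADDR'
Note that -v /root:/backup refers to the docker host's filesystem here, since the commands run against the remote daemon.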
Since I cannot add a comment to the original nice answer, I'll just add a little explanation here: $MONGO_PORT_27017_TCP_ADDR should be the IP of your machine. For example, my VirtualBox VM's IP is 100.100.100.10, so the last line should be:
-c 'mongodump --out /backup --host 100.100.100.10' or
-c 'mongodump --out /backup --host 100.100.100.10:27017'.
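If the Docker host is managed by docker-machine, one quick way to look up that VM address (an aside; the machine name is a placeholder) is:
$ docker-machine ip <machine-name>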
If the host field is not added, chances are that you will encounter an error like:
*** Failed: error connecting to db server: no reachable servers.
And thanks again to the original answer ^_^.
This is the setup I have.
Created a MongoDB instance with Docker:
sudo docker run -p 27017:27017 -e MONGODB_DATABASE=DEV -e MONGODB_USER=dev -e MONGODB_PASSWORD=dev123 -e MONGODB_ADMIN_PASSWORD=dev123 -e MONGODB_ROLE=readWriteAnyDatabase --name mymongo -v testdb:/var/lib/mongodb/data -d mongo
Entered container using
sudo docker exec -it container-id /bin/bash
Executed command
mongodump -d DEV -u dev -p dev123 ( works perfectly )
Now the ISSUE happens while restoring to a different database:
mongorestore --db test ./dump/DEV -- throws the below error
Failed: test.duke: error reading database: not authorized on test to execute command { listCollections: 1, cursor: { batchSize: 0 } }
Stuck for 3 days now; any help would be appreciated (beginner to both Docker and MongoDB).
If your other mongo database has authentication enabled then you should use:
mongorestore -u <username> -p <password> --authenticationDatabase=<database name> --db=test ./dump/DEV
Other advice would be to create dumps like:
mongodump --port 55555 -d testdb --gzip --archive=testdb.tar
and then restore like:
mongorestore --port 55555 --gzip --archive=testdb.tar
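Since the original problem is restoring into a differently named database, it may also help that mongorestore can rewrite namespaces during restore. A sketch extending the archive commands above (--nsFrom/--nsTo are available in mongorestore 3.4+; the source and target names are assumptions):
mongorestore --port 55555 --gzip --archive=testdb.tar --nsFrom='testdb.*' --nsTo='test.*'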
I use Docker to develop.
docker exec -it <My-Name> mongo
I want to import the data to MongoDB from a JSON file, but it fails.
The command is
mongoimport -d <db-name> -c <c-name> --file xxx.json
What can I do?
With your description, it seems that you have a data file in JSON format on your host machine and you want to import this data into MongoDB, which is running in a container.
You can follow these steps to do so.
#>docker exec -it <container-name> mongo
#>docker cp xxx.json <container-name-or-id>:/tmp/xxx.json
#>docker exec <container-name-or-id> mongoimport -d <db-name> -c <c-name> --file /tmp/xxx.json
In the last step you have to use a file path that is available inside the container.
To debug further, if required, you can log in to the container and run commands the way you would on a Linux machine.
#>docker exec -it <container-name-or-id> sh
sh $>cat /tmp/xxx.json
sh $>mongoimport -d <db-name> -c <c-name> --file /tmp/xxx.json
To run without copying the file:
docker exec -i <container-name-or-id> sh -c 'mongoimport -c <c-name> -d <db-name> --drop' < xxx.json
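One assumption worth checking with any of these approaches: if xxx.json holds a single JSON array rather than one document per line, mongoimport needs the --jsonArray flag, for example:
docker exec -i <container-name-or-id> sh -c 'mongoimport -c <c-name> -d <db-name> --drop --jsonArray' < xxx.json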
Step 1: Navigate to the directory where the JSON file is located from your host terminal.
Step 2: Use the command "docker cp xxx.json mongo:/tmp/xxx.json" to copy the JSON file from the current host directory to the container's "tmp" directory.
Step 3: Navigate to the container command shell by using the command "docker container exec -it mongo bash".
Step 4: Import the collection from the "tmp" folder into the container database by using the command: "mongoimport --uri="<mongodb connection uri>" --collection=<c-name> --file /tmp/xxx.json"
What we have:
MongoDB running in a Docker container.
A JSON file to import from the local machine.
mongoimport.exe on the local machine.
What to do to import this json file as a collection:
mongoimport --uri=<connection-string> --collection=<collection-name> --file=<path-to-file>
Example:
mongoimport --uri="mongodb://localhost:27017/test" --collection=books --file="C:\Data\books.json"
More details regarding mongoimport here: https://www.mongodb.com/docs/database-tools/mongoimport/
I want to close the mongo shell after executing the following in a docker command:
#!/bin/bash
docker run -it --link sonams-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
if [ $? -eq 0 ]; then
  echo "connected to mongo successful"
else
  echo "mongo connection NOT successful"
fi
When it connects, it drops to a shell prompt inside mongo. Is there a way to pass a shell command that exits, either within or right after the docker command?
thanks
Usually (of course it depends on the base image you're using) you wouldn't need to invoke "sh -c". Also, the -it combination is usually what makes the shell open and wait for input. Try to change your command a little bit, like below, without -it and sh -c:
docker run --link sonams-mongo:mongo --rm mongo mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"
if that doesn't help, try this:
echo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test" | docker run --link sonams-mongo:mongo --rm mongo mongo
I tried:
$ alias psql="docker exec -ti pg-hello-phoenix sh -c 'exec psql -h localhost -p 5432 -U postgres'"
$ mix ecto.create
but got:
** (RuntimeError) could not find executable psql in path, please guarantee it is available before running ecto commands
lib/ecto/adapters/postgres.ex:106: Ecto.Adapters.Postgres.run_with_psql/2
lib/ecto/adapters/postgres.ex:83: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:34: anonymous fn/2 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:604: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:604: Enum.each/2
(mix) lib/mix/cli.ex:58: Mix.CLI.run_task/2
(elixir) lib/code.ex:363: Code.require_file/2
I also tried to create a wrapper script at /usr/local/bin/psql:
#!/usr/bin/env bash
docker exec -ti pg-hello-phoenix sh -c "exec psql -h localhost -p 5432 -U postgres $@"
and then:
$ sudo chmod +x /usr/local/bin/psql
check:
$ which psql
/usr/local/bin/psql
$ psql --version
psql (PostgreSQL) 9.5.1
run again:
$ mix ecto.create
** (Mix) The database for HelloPhoenix.Repo couldn't be created, reason given: cannot enable tty mode on non tty input
Container with PostgreSQL is launched:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
013464d7227e postgres "/docker-entrypoint.s" 47 minutes ago Up 47 minutes 5432/tcp pg-hello-phoenix
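As an aside on the "cannot enable tty mode" error above: the -t in -ti is what requests a tty, so a wrapper that keeps only -i can be driven by non-interactive callers such as mix. A minimal sketch of such a wrapper (same container name and connection settings as above, otherwise an assumption):
#!/usr/bin/env bash
# -i keeps stdin open without allocating a tty, so non-interactive
# callers like mix ecto.create can invoke this wrapper.
exec docker exec -i pg-hello-phoenix psql -h localhost -p 5432 -U postgres "$@"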
I was able to do this by going into /config/<env>.exs. In my case it was development, so /config/dev.exs. I left the hostname as localhost but added another setting, port: 32768, because that's the port that Docker exposed.
Make sure to put a space between port: and the number (not a string). Otherwise it won't work.
It worked as usual after that. The natural assumption is that the username/password matches on the container as well.
In my case, I did the following:
sudo docker exec -it postgres-db bash
After I got the interactive shell
psql -h localhost -p 5432 -U postgres
Then I created my db manually:
CREATE DATABASE cars_dev;
Then finally:
mix ecto.migrate
Everything worked fine after that :) hope it helps.
I would like to have a way to enter the PostgreSQL container and get a data dump from it.
Use the following command from a UNIX or a Windows terminal:
docker exec <container_name> pg_dump <schema_name> > backup
The following command will dump only inserts from all tables:
docker exec <container_name> pg_dump --column-inserts --data-only <schema_name> > inserts.sql
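For completeness, the reverse direction works the same way by feeding the dump file to psql over stdin (a sketch; the names are placeholders):
docker exec -i <container_name> psql -U <user_name> -d <database_name> < backup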
I have a container named postgres with a mounted volume -v /backups:/backups.
To back up the gzipped DB my_db I use:
docker exec postgres pg_dump -U postgres -F t my_db | gzip >/backups/my_db-$(date +%Y-%m-%d).tar.gz
Now I have
user#my-server:/backups$ ls
my_db-2016-11-30.tar.gz
Although the mountpoint solution above looked promising, the following is the only solution that worked for me after multiple iterations:
docker run -it -e PGPASSWORD=my_password postgres:alpine pg_dump -h hostname -U my_user my_db > backup.sql
What was unique in my case: I have a password on the database that needs to be passed in; I needed to pass in the tag (alpine); and finally the host's version of the psql tools was different from the Docker version.
This one, using the container name instead of the database schema, works for me:
docker exec {container_name} pg_dump -U {user_name} > {backup_file_name}
In my instance, the database name, user, and password are declared in docker-compose.yaml.
I hope it helps someone.
For those who struggled with permissions, I used the following command with success to perform my dump:
docker exec -i MY_CONTAINER_NAME /bin/bash -c "PGPASSWORD=MY_PASSWORD pg_dump -Fc -h localhost -U postgres MY_DB_NAME" > /home/MY_USER/db-$(date +%d-%m-%y).backup
This will mount the current working directory and include your environment variables:
docker run -it --rm \
--env-file <(env) \
-w /working \
--volume $(pwd):/working \
postgres:latest /usr/bin/pg_dump -Fc -h localhost -U postgres MY_DB_NAME > /working/db-$(date +%d-%m-%y).backup
Another workaround is to start PostgreSQL with a mount point to the location of the dump in Docker,
like docker run -v <location of the files>.
Then perform a docker inspect on the running container:
docker inspect <container_id>
You can find the "Volumes" tag inside and a corresponding location. Go to that location and you will find all the PostgreSQL/MySQL files. It worked for me; let us know if it works for you as well.
Good luck
To run a container that has the Postgres user and password, you need to have the variables preconfigured as container environment variables.
For example:
docker run -it --rm --link <container_name>:<data_container_name> -e POSTGRES_PASSWORD=<password> postgres /usr/bin/pg_dump -h <data_container_name> -d <database_name> -U <postgres_username> > dump.sql