repair command for mongodb in a docker container

My client has MongoDB running in a Docker container. One morning the db crashed and the logs said "unclean shutdown detected". I tried to repair MongoDB but could not, because it is running inside a Docker container: I have to shut down mongod to run the repair command, but as soon as I shut down mongod, Docker throws me out of the container, since mongod was the only process running and stopping it also stops the container. Now I am not able to run any mongod commands, as I am outside the container and the container is stopped along with the mongod instance.
So, can anyone help me out of this? How can I run the repair command from outside the Docker container?
Can I run the repair command without stopping mongod?
Please help!
Thanks
Irshad

The example command below fixed my issue with corrupted WiredTiger (.wt) files.
docker run -it -v <path>:/data/db <image-name> mongod --repair
Just replace <path> and <image-name> with your own values.
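The same idea works if the data lives in a named volume instead of a bind mount; a minimal sketch, assuming a hypothetical volume called mongo_data and the mongo:3.6 image:
# Run a throwaway container against the existing data volume and repair in place
docker run -it --rm -v mongo_data:/data/db mongo:3.6 mongod --repair
Once this one-off container exits, start your normal mongod container against the same volume again.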

I had this same thing when my Mac decided to restart and install an update this morning, grr.
Normally my mongo container is started with a docker-compose up command, which uses a docker-compose.yml file for all the settings, something like...
version: "3"
services:
mongo:
image: mongo:3.6.12
container_name: mongo
ports:
- 27017:27017
volumes:
- ~/mongo/data/db:/data/db
- ~/mongo/db/configdb:/data/configdb
To run a basic docker command with those settings I did...
# Kill any existing mongo containers
docker rm mongo
# Repair the db
docker run -it -v ~/mongo/data/db:/data/db -v ~/mongo/db/configdb:/data/configdb --name mongo -d mongo:3.6.12 mongod --repair
# Has it finished the repair yet?
docker ps -a
After about a minute the container exited, the database was repaired and I was able to run mongo using docker-compose again.
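If you are not sure whether the repair container has finished, a sketch of how you might check before bringing the stack back up (assuming the repair container is still named mongo, as above):
# Check the exit code and the last few log lines of the repair container
docker inspect -f '{{.State.ExitCode}}' mongo
docker logs mongo | tail -n 20
# Remove the repair container so docker-compose can reuse the name
docker rm mongo
docker-compose up -d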

version: '3.9'
services:
  mongo:
    image: mongo
    command: --repair
This is how we pass the --repair in docker-compose.
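A usage note (a sketch, not part of the original answer): since this makes every start of the service run --repair, you would typically run it once and then remove the override again:
# One-off repair run in the foreground; the service exits when the repair finishes
docker-compose up mongo
# Remove the "command: --repair" line from docker-compose.yml again, then:
docker-compose up -d mongo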

Related

No internal docker IP address when I run mongodb image

I'm starting a container from the official mongodb image using the command
sudo docker run -d --name mongodb mongo
Other containers I spin up get an IP address, but not mongo.
When I run sudo docker inspect mongodb, all the fields are blank.
I'm running it on Ubuntu in VirtualBox, and the network interface is set to NAT.
This is the output from the inspect command:
{
  "NetworkSettings": {
    "Bridge": "",
    "SandboxID": "16b808df46537e04ab2bf96e05dc41fd4660a270c927634c2a94a1639d32f693",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
    "SandboxKey": "/var/run/docker/netns/16b808df4653",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "",
    "Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "",
    "IPPrefixLen": 0,
    "IPv6Gateway": "",
    "MacAddress": "",
    "Networks": {
      "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "f66bff6e962312af4d9af54ed9e5ba337d3d9466a5702ae8430660bfda690833",
        "EndpointID": "",
        "Gateway": "",
        "IPAddress": "",
        "IPPrefixLen": 0,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "",
        "DriverOpts": null
      }
    }
  }
}
I solved my problem, though I don't know whether it was a bug or my lack of knowledge.
So far I had been trying with the mongo image (:latest).
I started trying older versions.
Went with mongo:focal - same result, no IP.
However, when I tried mongo:4.4.6-bionic, everything went fine and I have an IP assigned to the mongodb container :)
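For anyone checking the same thing, a quick sketch of how to print just the assigned IP with docker inspect (mongodb here is the container name used above):
# Print the container's IP address on each connected network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mongodb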
Try running it directly on Ubuntu with Docker, but without VirtualBox.
You could try running it like this:
docker run -d -p 27017:27017 --name mongodb dockerfile/mongodb
Alternatively you can try run it with a docker-compose.yml file.
Put this in docker-compose.yml:
version: '3'
services:
  mongodb-service:
    image: mongo
    ports:
      - 27017:27017
    restart: always
Then run:
docker-compose up
You might want to think about defining some volumes for persistent storage of your data too, but that can come later, once you've solved this issue.

Run mongo script in deployed docker swarm container

I have deployed a database using docker swarm
docker stack deploy -c docker-compose.yml app
docker-compose.yml
version: '3.1'
services:
  database:
    image: mongo:latest
I'd like to run a JavaScript file script.js from my host in the deployed database container:
docker exec \
app_database.1.$(docker service ps -f 'name=app_database.1' app_database -q) \
mongo script.js
However, the file script.js does not exist in the container (only on my host). How can I run it without restarting the database service?
A script can be sent on stdin to the mongo client.
Create a script locally:
print('The mongodb version is: '+db.version())
Then run
$ cat script.js | docker exec -i $container mongo --quiet
The mongodb version is: 3.6.3
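An alternative sketch, if you would rather run the file by path than pipe it: copy it into the container first with docker cp (the /tmp path is just an example):
# Copy the script into the container, then execute it with the mongo shell
docker cp script.js $container:/tmp/script.js
docker exec $container mongo --quiet /tmp/script.js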

Can't start mongo with docker command, but can with /bin/bash inside container (with data volume)

This docker-compose.yml:
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/usr/database"
    networks:
      - backend
    volumes:
      - dbdata:/usr/database
volumes:
  dbdata:
results in this error (snipped):
database_1 | 2016-11-28T06:30:29.864+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /usr/database/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
Ditto for just trying to run the command in a container using that image directly:
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
But, if I run /bin/bash when starting the container, and THEN start mongo, we're OK:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
root@8aab722fad89:/# mongod --dbpath=/usr/database
Based on the output, the difference seems to be that in the second scenario, the command is run as root.
So, my questions are:
Why does the /bin/bash method work, when the others do not?
How can I replicate that reason, in the docker-compose?
Note: this is on OSX, since that seems to affect whether you can mount a host directory as a volume for Mongo to use - not that I'm doing that.
To clarify, this image (hub.docker.com/_/mongo) is an official MongoDB docker image from Docker Hub, but NOT an official docker image from MongoDB.
Now to answer your questions,
Why does the /bin/bash method work, when the others do not?
This answer is based on the Dockerfile for mongo:3.2. First, to point out that your volume mount option -v /usr/database is essentially creating a directory in the container owned by root.
Your command below failed with permission denied because the docker image runs the command as the user mongodb (see this dockerfile line), while the directory /usr/database is owned by root.
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
Whereas if you run /bin/bash as below and then manually start mongod:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
you are logged in as root and executing mongod as root, so it has permission to create the database files in /usr/database/.
Also, if you execute the line below, it works because you're pointing to the directory /data/db, whose permissions have been corrected for the user mongodb (see this dockerfile line):
$ docker run -v db:/data/db mongo:3.2
How can I replicate that reason, in the docker-compose?
The easiest solution is to use command: "mongod --dbpath=/data/db", because the ownership of that directory has already been corrected in the Dockerfile.
If you are intending to use a host volume, you would probably have to add a mongodb user on your OSX host and change the appropriate directory permissions. Modifying the ownership of a volume mount is outside the scope of docker-compose.
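A minimal compose sketch of the first suggestion above (mongod pointed at /data/db, reusing the named volume dbdata from the question):
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/data/db"
    volumes:
      - dbdata:/data/db
volumes:
  dbdata: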

Expose mongo port in another container

I have this (custom) container which runs a Java program that requires mongo locally. Now, with Docker, I would like to set mongo up in its own container. So I guess that in order to expose port 27017 locally in this Java container I need to set up an SSH tunnel, right? If there is an easier way, please let me know.
So, there is this official mongo image, but I get the impression SSH is not installed or running. What would be the best approach to do this?
UPDATE: I've rephrased the question to focus more on port-forwarding here.
You have to make your containers run on the same network. No need to SSH into your mongo or app container.
https://docs.docker.com/engine/userguide/networking/
First define a network:
docker network create --driver bridge isolated_nw
Then start your containers using that newly created network:
docker run -p 27017:27017 --network=isolated_nw -itd --name=mongo-cont mongo
docker run --network=isolated_nw -itd --name=app your_image
The mongo image includes EXPOSE 27017, so from your app container you should be able to reach the mongo container using its name, mongo-cont.
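A quick sketch of how you might verify that from the app container (assuming a shell and getent are available in your_image; the app would then connect using the container name as the host):
# Check that the mongo container's name resolves from inside the app container
docker exec -it app getent hosts mongo-cont
# The app's connection string would then look something like mongodb://mongo-cont:27017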
You can build your custom image on top of mongodb official image, which gives you the flexibility to install additional required packages.
FROM mongo:latest
RUN apt-get update && apt-get install -y ssh
Also try using docker-compose to build and link your containers together; it will ease the process greatly.
version: '2'
services:
  mongo:
    image: mongo:latest
    ports:
      - "27017"
  custom_project:
    build:
      context: .                      # Parent directory of the Dockerfile
      dockerfile: Dockerfile-Custom   # Name of the Dockerfile
    command: /root/docker-entrypoint.sh
This is the Dockerfile used for the official mongodb image.
You are trying to SSH into your container to gain access to it, but that isn't how you connect. Docker provides functionality to securely connect via the following methods.
Connect into a running container - Docs:
docker exec -it <container name> bash
root@665b4a1e17b6:/#
Start a container from an image, and connect to it - Docs:
docker run -it <image name> bash
root@665b4a1e17b6:/#
Note: If it is an Alpine based image, it may not have Bash installed. In that case using sh instead of bash in your commands should work. Mongo's Dockerfile looks to use debian:jessie which will have bash support.

how to remove a non-existent docker image

I am trying to run the repair command for mongod, but the daemon gave a conflict error, so I removed the container and ran the repair command again. The daemon gave the same conflict error, so this time I removed the container using the container id displayed in the error, and the daemon says "no such id".
So, can anyone let me know how I can remove this container so that I can successfully run the repair command?
I am displaying my docker commands below for reference.
Below is my docker ps result:
root@ip-172-31-6-252:~# docker ps
CONTAINER ID   IMAGE                         COMMAND                 CREATED        STATUS        PORTS                                                       NAMES
1f7bdd83dac0   mongo:latest                  "/entrypoint.sh mong    23 hours ago   Up 23 hours   27017/tcp                                                   cpx.db
11e2123f7e2a   centralpx/cpx.server:latest   "/run.sh"               2 weeks ago    Up 2 weeks    0.0.0.0:80->80/tcp                                          cpx.server.live
4008c7772f63   centralpx/cpx-ftp             "/bin/sh -c '/usr/sb    7 months ago   Up 4 months   0.0.0.0:21->21/tcp, 0.0.0.0:30000-30009->30000-30009/tcp   cpx.ftp
Below is my docker images result:
root@ip-172-31-6-252:~# docker images
REPOSITORY             TAG      IMAGE ID       CREATED        VIRTUAL SIZE
mongo                  latest   21e69f355287   8 days ago     366.4 MB
centralpx/cpx.server   latest   894a3c5fce73   2 weeks ago    429 MB
centralpx/cpx-ftp      latest   e35ba5efa239   9 months ago   425.5 MB
Now I attach to the cpx.db container and run the shutdown command (I need to shut down mongod before running the repair command):
root@ip-172-31-6-252:~# docker exec -it cpx.db /bin/bash
root@1f7bdd83dac0:/# mongod --shutdown
killing process with pid: 1
FATA[0026] Error response from daemon: Container 1f7bdd83dac037293d5086e86a3df7117b4b6eb2a3478d65848643eff9c4d568 is not running: Exited (0) Less than a second ago
root@ip-172-31-6-252:~#
Now below is my repair command:
root@ip-172-31-6-252:~# sudo docker run -it -p 28001:27017 --name cpx.db mongo:latest mongod --dbpath /data/db --repair
FATA[0000] Error response from daemon: Conflict. The name "cpx.db" is already in use by container 8b2a8c98971c. You have to delete (or rename) that container to be able to reuse that name.
The above command gives a conflict error, so we removed the container "cpx.db"; below are the docker commands:
root@ip-172-31-6-252:~# docker ps -a
CONTAINER ID   IMAGE                         COMMAND                 CREATED        STATUS                     PORTS                                                       NAMES
1f7bdd83dac0   mongo:latest                  "/entrypoint.sh mong    23 hours ago   Exited (0) 3 minutes ago                                                               cpx.db
11e2123f7e2a   centralpx/cpx.server:latest   "/run.sh"               2 weeks ago    Up 2 weeks                 0.0.0.0:80->80/tcp                                          cpx.server.live
4008c7772f63   centralpx/cpx-ftp             "/bin/sh -c '/usr/sb    7 months ago   Up 4 months                0.0.0.0:21->21/tcp, 0.0.0.0:30000-30009->30000-30009/tcp   cpx.ftp
root@ip-172-31-6-252:~# docker rm cpx.db
cpx.db
root@ip-172-31-6-252:~# docker ps -a
CONTAINER ID   IMAGE                         COMMAND                 CREATED        STATUS        PORTS                                                       NAMES
11e2123f7e2a   centralpx/cpx.server:latest   "/run.sh"               2 weeks ago    Up 2 weeks    0.0.0.0:80->80/tcp                                          cpx.server.live
4008c7772f63   centralpx/cpx-ftp             "/bin/sh -c '/usr/sb    7 months ago   Up 4 months   0.0.0.0:21->21/tcp, 0.0.0.0:30000-30009->30000-30009/tcp   cpx.ftp
Then we ran the repair command again, as the conflicting container had been removed; below is our command and output:
root@ip-172-31-6-252:~# sudo docker run -it -p 28001:27017 --name cpx.db mongo:latest mongod --dbpath /data/db --repair
FATA[0000] Error response from daemon: Conflict. The name "cpx.db" is already in use by container 8b2a8c98971c. You have to delete (or rename) that container to be able to reuse that name.
The daemon again gave a conflict error.
We again removed the container with the id shown in the conflict message, as below, and it says no such id:
root@ip-172-31-6-252:~# docker rm 8b2a8c98971c
Error response from daemon: no such id: 8b2a8c98971c
FATA[0000] Error: failed to remove one or more containers
So, can anyone help us remove this container which doesn't exist, or help us get rid of this issue?
As you pointed out the problem in my command (instantiating a new container instead of reusing the stopped one), I tried to correct my mistake and ran this command:
sudo docker mongod --dbpath /data/db --repair
This command gives the error:
docker: 'mongod' is not a docker command. See 'docker --help'
And if I remove docker from the command and run this:
sudo mongod --dbpath /data/db --repair
then it gives the error:
sudo: mongod: command not found
Can I ask whether my command below is wrong because I am using the stopped container?
root@ip-172-31-6-252:~# sudo docker run -it -p 28001:27017 --name cpx.db mongo:latest mongod --dbpath /data/db --repair
And if I removed the container and am running this command, why is it still not executing and giving a conflict error?
EDIT:
Based on your recent reply and explanation I have updated my command as below; just check it and let me know if there is any correction from your side:
sudo docker run -it -p 28000:27017 --name cpx.db1 -v /home/ubuntu/data/cpx.db:/data/db -d mongo:latest mongod --dbpath /data/db --repair
My new process will be as below:
1) Go to the running container and stop mongodb; this will automatically stop the running container.
2) Run the updated repair command as below:
sudo docker run -it -p 28000:27017 --name cpx.db1 -v /home/ubuntu/data/cpx.db:/data/db -d mongo:latest mongod --dbpath /data/db --repair
This command will create a new container named "cpx.db1", mount the db via the volume, and run the repair command.
3) I'll remove this new container "cpx.db1", as I want to use the old one.
Let me know if I am wrong.
Thanks a lot in advance.
EDIT:
I ran the command and I think it worked, as it didn't give any error, but it executed very fast, so I am confused. I am stating my commands and output below for your reference.
I entered the db container:
docker exec -it cpx.db /bin/bash
I ran the shutdown command for mongodb:
mongod --shutdown
This was the output (as it was the only process running in the container, killing it put me out of the container):
killing process with pid: 1
FATA[0015] Error response from daemon: Container bd910137a3957c79b304dbbbd221317c909e6779de01ed6f780857e3914c577c is not running: Exited (0) Less than a second ago
Then I ran the repair command as below:
sudo docker run -it -p 28000:27017 --name cpx.db1 -v /home/ubuntu/data/cpx.db:/data/db -d mongo:latest mongod --dbpath /data/db --repair
This was the output:
d6b61222c7145f178e95974c87f95cb06fc8aa5c0c1adc929050ca172ab5f73f
Then I started the old container:
docker start cpx.db
And the db started.
There was no error, but I am confused about whether the repair command ran successfully or not. Can you check my edited post and let me know your views?
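A minimal sketch of how to check that afterwards: because of -d the repair ran detached, so the logs and exit code of the container whose ID was printed above (shortened to 12 characters) show whether it completed:
docker logs d6b61222c714 | tail -n 20
docker inspect -f '{{.State.ExitCode}}' d6b61222c714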
Your current problem:
sudo docker run -it -p 28001:27017 --name cpx.db mongo:latest mongod --dbpath /data/db --repair
Here is the problem.
You're not actually reusing the same container but instantiating a new one. Hence the conflict, since you're trying to give a new container a name that is already taken.
The container you previously stopped is still stopped, and you don't use it with this command line.
This command line also sums up your problem: you don't understand the difference between images and containers. You're trying to manipulate an image where you should actually be manipulating a container (the one you stopped).
What you could do:
As far as I know, it is not possible to restart a container with a different process than the one initially used as its entrypoint (if I'm wrong, I hope someone else will write an answer explaining how to do so).
But first, be aware that storing a whole database inside a container is bad design, since you can't access it easily and you lose it when the container is removed.
How you started your container isn't clear to me, but if you didn't do so already, you should store your database in a mounted volume.
This way your database is persistent; you can remove or stop your container (or not) and still have access to your database (even from the host, for instance).
If you use volumes, you can stop your container, execute your repair operation from your host (or from another container if you need the same environment), and then simply restarting your first container should be enough.
To use volumes:
Here is a short example of volume usage:
docker run -ti -P -v /host/path:/container/path image sh
This way you run a shell, and the /host/path directory from your host is mounted at the /container/path location inside your container: a change made in one (from the host or from the container) appears in the other.
There is more information in the link I gave.
Containers vs. images, an important nuance to understand:
You seem to be mistaking images for containers (containers are what run processes), and you really need to understand the difference.
With docker, an image is what you instantiate your containers from.
You can have several containers instantiated from the same image. If an image were a baking tin, then a container would be a cake.
You probably don't have to remove an image but a running container (assuming you have a conflict because you try to run a container using a port already used by another running container).
How to remove a container:
To list the containers:
docker ps -a (-a permits listing stopped containers as well).
Once you get the container's id, you can pass it to the docker stop command and then remove the stopped container with the docker rm command (you can use the container's name as well), as sketched below.
Removing containers (docker rm) and removing images (docker rmi) are two whole different things.
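A minimal sketch, using the container name from your own output above:
# Stop the running container, then remove it (by name or by id)
docker stop cpx.db
docker rm cpx.db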
EDIT:
1.
sudo docker mongod --dbpath /data/db --repair
mongod is not a docker command (run, build, ps, etc., on the other hand, are).
2.
sudo mongod --dbpath /data/db --repair
Here, you don't even use docker. You simply try to run mongod directly on your host.
3.
sudo docker run -it -p 28001:27017 --name cpx.db mongo:latest mongod --dbpath /data/db --repair
From the manual:
docker run :
Run a command in a new container
So you're not reusing the stopped container, just instantiating a new one.
Keep in mind that REMOVING and STOPPING containers aren't the same.
Assuming you already used volumes to store your database files on your host in the first place, just:
1. Stop the container.
2. Launch a new container with a different name to carry out the repair operation (still using the volumes); see the sketch after this list.
3. This container will exit at the end of the process; you may remove it.
4. Restart the container from (1), or even launch a new one (in that case you'll need to delete the old one if you want to reuse the same name; it's probably better to do so if you want a fresh one).
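A minimal sketch of that sequence, reusing the names and host path from your edit above (the --rm flag covers step 3):
# 1. Stop the running container
docker stop cpx.db
# 2./3. One-off repair container against the same data directory, removed on exit
docker run --rm --name cpx.db1 -v /home/ubuntu/data/cpx.db:/data/db mongo:latest mongod --dbpath /data/db --repair
# 4. Restart the original container
docker start cpx.db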
Also keep in mind that if you didn't use volumes in the first place, the database content is probably not directly accessible from your host filesystem (it's contained in your container, so it's not persistent).
If so, the simplest way forward is probably to create a new database stored on your host, so that you have persistent and easily accessible data.
If you do so, use volumes to access this database (the one on your host) from within your container.