How to recover a standalone MongoDB after an unexpected shutdown on k8s

After an unexpected power failure, the MongoDB service I had deployed on Kubernetes would not restart. The MongoDB log showed that its data files were damaged and it could not start.
Unfortunately, I did not save the exact error message.

Here is my fix:
First
Change the k8s deployment.yaml config.
Because we want to repair MongoDB's data files, the first step is to get the mongo pod running without mongod, so that we can run commands inside the pod.
Now change the startup command of the container so that it just sleeps:
containers:
  - name: mongodb
    image: mongo:latest
    command: ["sleep"]
    args: ["infinity"]
    imagePullPolicy: IfNotPresent
    ports:
    # .......
After editing, apply it with kubectl apply -f deployment.yaml.
If I guessed correctly, the mongo pod should now be up and running, doing nothing but sleeping.
Second
Use the mongod command to repair the data files.
kubectl exec -it <YOURPODNAME> -- mongod --dbpath <YOURDBPATH> --repair --directoryperdb
I had to run it with --directoryperdb because of how my data is laid out; if the command errors for you, try removing that flag.
If I guessed correctly, everything is fine so far.
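One caveat worth adding (my own assumption about the official mongo image, not something from my original notes): kubectl exec typically runs as root, so the repaired files can end up owned by root, while the image's entrypoint later starts mongod as the mongodb user. If the pod fails with permission errors after the repair, restoring ownership before switching back may help:

kubectl exec -it <YOURPODNAME> -- chown -R mongodb:mongodb <YOURDBPATH>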
Third
Restore the k8s deployment.yaml to the way it was, i.e. remove the temporary sleep command so the container starts mongod again.
Now reapply it with kubectl apply -f deployment.yaml.
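For reference, a minimal sketch of what the restored container spec might look like, assuming you originally relied on the image's default entrypoint (adjust to whatever your spec actually contained before the change):

containers:
  - name: mongodb
    image: mongo:latest
    # the temporary command/args override is removed, so the image's default entrypoint starts mongod again
    imagePullPolicy: IfNotPresent
    ports:
    # .......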
----------
That is my whole repair process. It worked for me, and I'm just writing it down here; you can refer to it to fix your own MongoDB. Thank you.

Related

How to run schema scripts after running couchbase via docker compose?

I have a schema script /data/cb-create.sh that I have made available on a container volume. When I run docker-compose up, my server is not yet initialized at the time the command is executed, so those commands fail because the server hasn't launched yet. I do not see a Starting Couchbase Server -- Web UI available at http://<ip>:8091 log line by the time the .sh script runs to initialize the schema. This is my docker-compose file. How can I sequence it properly?
version: '3'
services:
  couchbase:
    image: couchbase:community-6.0.0
    deploy:
      replicas: 1
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
    volumes:
      - ./:/data
    command: /bin/bash -c "/data/cb-create.sh"
    container_name: couchbase
volumes:
  kafka-data:
First: you should choose either an entrypoint or a command statement, not both.
One option is to write a small bash script that starts the server, waits for it to come up, and then runs your schema commands in order.
Then in the command you specify running that bash script; a sketch follows below.
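A minimal sketch of such a wrapper, under a couple of assumptions I have not verified against couchbase:community-6.0.0: that the image's normal startup can be launched with /entrypoint.sh couchbase-server, and that curl is available in the image to poll the Web UI on port 8091 as a readiness signal.

#!/bin/bash
# start Couchbase the way the image normally would, in the background
/entrypoint.sh couchbase-server &

# wait until the Web UI answers on port 8091 before touching the schema
until curl -sf http://localhost:8091 > /dev/null; do
  echo "waiting for Couchbase to start..."
  sleep 2
done

# server is reachable; run the schema/initialization script
/data/cb-create.sh

# keep the container attached to the server process
wait

You would then point command at this wrapper instead of calling /data/cb-create.sh directly.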

Write command args in kubernetes deployment

Can anyone help with this, please?
I have a mongo pod with its service assigned. I need to execute some commands when starting the container in the pod.
I found small examples like this:
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
But I want to execute these commands while starting the pod:
use ParkInDB
db.roles.insertMany( [ {name :"ROLE_USER"}, {name:"ROLE_MODERATOR"}, {name:"ROLE_ADMIN"} ])
You need to choose one of these solutions:
1- use an init container in the deployment to change and execute some command or file
2- use command and args in the deployment yaml
For init containers, visit this page and use it.
For command and args, use this model in your deployment yaml file:
- image:
  name:
  command: ["/bin/sh"]
  args: ["-c", "PUT_YOUR_COMMAND_HERE"]
If you are looking to run a command right after the container starts or right before it stops, you can use container lifecycle hooks.
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
Alternatively, you can add the commands to a shell script file and adapt the MongoDB image as per your requirement:
command: ["/bin/sh", "-c", "/usr/src/script.sh"]
You can also edit the yaml with:
args:
  - '-c'
  - |
    ls
    rm -rf sql_scripts
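A hedged sketch of the lifecycle-hook variant mentioned above, using a postStart exec hook; note there is no guarantee the hook runs only after mongod is ready to accept connections, so a retry loop or an init container may still be needed:

containers:
  - name: mongodb
    image: mongo:latest
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "PUT_YOUR_COMMAND_HERE"]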
When you use the official Mongo image, you can specify scripts to run on container startup. The answer accepted here provides some information on how this works.
Kubernetes
When it comes to Kubernetes, there is some pre-work you need to do.
What you can do is write a script like my-script.sh that creates a userDB and inserts an item into the users collection:
mongo userDB --eval 'db.users.insertOne({username: "admin", password: "12345"})'
and then write a Dockerfile based on the official mongo image, to copy your script into the folder where custom scripts are run on database initialization.
FROM mongo:latest
COPY my-script.sh /docker-entrypoint-initdb.d/
CMD ["mongod"]
Within the same directory containing your script and dockerfile, build the docker image with
docker build -t dockerhub-username/custom-mongo .
Push the image to docker hub or any repository of your choice, and use it in your deployment yaml.
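For completeness, pushing to Docker Hub would look like this (dockerhub-username stands in for your own account, as above):

docker push dockerhub-username/custom-mongo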
deployment.yaml
...
spec:
  containers:
    - name: mongodb-standalone
      image: dockerhub-username/custom-mongo
      ports:
        - containerPort: 27017
Verify by going to your pod and checking the logs. You will be able to see that mongo has initialized the db that you specified in your script in /docker-entrypoint-initdb.d/.
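Checking could look like this, with the pod name as a placeholder:

kubectl logs <your-mongodb-pod>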

How to kill a specific docker container using docker-compose?

How can I kill a specific docker container using docker-compose? I have tried the below but it didn't work; I'd appreciate your help.
root@docker:/opt/dockercompose# docker container ls
CONTAINER ID   IMAGE   COMMAND                  CREATED             STATUS         PORTS                  NAMES
792663a9f2de   nginx   "nginx -g 'daemon of…"   About an hour ago   Up 6 minutes   0.0.0.0:8005->80/tcp   dockercompose_webapp2_1
1f94ff0e70cf   nginx   "nginx -g 'daemon of…"   About an hour ago   Up 6 minutes   0.0.0.0:8000->80/tcp   dockercompose_webapp1_1
root@docker:/opt/dockercompose# docker-compose kill 792663a9f2de
ERROR: No such service: 792663a9f2de
root@docker:/opt/dockercompose#
All of the docker-compose commands take the service names as specified in the docker-compose.yml file. The docker ps output you show could be created from a docker-compose.yml file like:
version: '3.8'
services:
  webapp1:
    image: nginx
    ports: ['8000:80']
  webapp2:
    image: nginx
    ports: ['8005:80']
If you want to kill off a specific Compose-managed container, you can docker-compose kill webapp2; it will find it in the docker-compose.yml and match it up with some hidden container metadata.
For most practical things, if you're in a Compose-managed environment, you can use exclusively docker-compose commands: docker-compose ps to list the containers, docker-compose logs to see a container's output, and so on. All of these again take the Compose service name, not the Docker container name or ID.
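A quick sketch of the difference, reusing the names from the output above:

# Compose commands take the service name from docker-compose.yml
docker-compose kill webapp2
# plain docker commands take the container name or ID instead
docker kill dockercompose_webapp2_1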
The command below works:
root@docker:/opt/dockercompose# docker-compose -f docker-compose.yaml -p 792663a9f2de kill

Avoid persisting mongo data in docker-compose

I'm using docker compose to run tests for my application. The configuration looks like:
version: '2'
services:
  web:
    build: .
    image: myapp:web
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
  mongo:
    image: mongo:3.2.6
Right now, when I run docker-compose up, a volume is created automatically (by docker-compose or the mongo image?) that maps the Mongo storage data to a path like /var/lib/docker/volumes/c297a1c91728cb225a13d6dc1e37621f966067c1503511545d0110025479ea65/_data.
Since I am running tests rather than production code, I'd actually like to avoid this persistence (the mongo data should go away when docker-compose exits) -- is this possible? If so, what's the best way to do it?
After the containers exit (or you stop them with a down command), clean up the old containers and volumes with
docker-compose rm -v
The -v tells it to also remove the volumes (container volumes and named volumes created with docker-compose, but not host volumes).
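On newer Compose versions there is also a one-step variant (my own addition, not part of the answer above), which suits throwaway test runs:

# stop and remove the containers along with the Compose-created volumes in one step
docker-compose down -v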

docker-compose: postgres data not persisting

I have a main service in my docker-compose file that uses the postgres image and, though I seem to be successfully connecting to the database, the data I write to it is not kept beyond the lifetime of the container (what I did is based on this tutorial).
Here's my docker-compose file:
main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database
  environment:
    - DEBUG=true
postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata
  ports:
    - "5432"
  environment:
    - DEBUG=true
postgresdbdata:
  build: utils/sql/
  volumes:
    - /var/lib/postgresql
  command: true
  environment:
    - DEBUG=true
and here's the Dockerfile I'm using for the postgresdb and postgresdbdata services (which essentially creates the database and adds a user):
FROM postgres
ADD make-db.sh /docker-entrypoint-initdb.d/
How can I get the data to stay after the main service has finished running, in order to be able to use it in the future (such as when I call something like python manage.py retrieve_from_database)? Is /var/lib/postgresql even the right directory, and would boot2docker have access to it given that it's apparently limited to /Users/?
Thank you!
The problem is that Compose creates a new version of the postgresdbdata container each time it restarts, so the old container and its data gets lost.
A secondary issue is that your data container shouldn't actually be running; data containers are really just a namespace for a volume that can be imported with --volumes-from, which still works with stopped containers.
For the time being the best solution is to take the postgresdbdata container out of the Compose config. Do something like:
$ docker run --name postgresdbdata postgresdb echo "Postgres data container"
Postgres data container
The echo command will run and the container will exit, but as long as you don't docker rm it, you will still be able to refer to it in --volumes-from, and your Compose application should work fine.
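For what it's worth, a hedged alternative on Compose file format 2 and later is a named volume instead of a data-only container; a minimal sketch (pgdata is an arbitrary name, and note the official postgres image stores its data under /var/lib/postgresql/data):

version: '2'
services:
  postgresdb:
    build: utils/sql/
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432"
volumes:
  pgdata: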