Add volume to MongoDB Percona operator

I'm trying to add a Filebeat container to the rs0 StatefulSet to collect the logs of my MongoDB. I added a filebeat sidecar container to the operator (according to the docs), and now I'm stuck. How can I create an emptyDir volume that would be mounted and accessible on both the mongod container and the filebeat container?

The Percona guys helped me find a solution. It's a bit of a hack, but it worked for me:
You can't create a new volume for the mongod container, but you can map one of the existing volumes into the sidecar container, for example:
sidecars:
- image: ...
  command: ...
  args: ...
  name: rs-sidecar-1
  volumeMounts:
  - mountPath: /mytmp
    name: mongod-data
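For context, a minimal sketch of where this sidecars block sits in the operator's custom resource; the Filebeat image tag is a placeholder, and mongod-data is the data volume the operator already creates for the replica set:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  replsets:
  - name: rs0
    size: 3
    sidecars:
    - name: rs-sidecar-1
      image: docker.elastic.co/beats/filebeat:7.17.9   # placeholder image/tag
      volumeMounts:
      - mountPath: /mytmp        # where the sidecar sees the mongod data volume
        name: mongod-data        # existing volume owned by the operator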
Link to the original post on the Percona community forum.

Related

Migrate data from an existing database to a newly created mongodb pod in Kubernetes

I have created a MongoDB pod in my local Minikube cluster with the following configuration. Now I would like to migrate the data of my existing MongoDB database (running on an AWS EC2 instance) to this database. How can I accomplish this?
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  volumes:
  - name: mongo-vol
    persistentVolumeClaim:
      claimName: mongo-pvc
  containers:
  - image: mongo
    name: container1
    command:
    - "mongod"
    - "--bind_ip"
    - "0.0.0.0"
    ports:
    - containerPort: 27017
    volumeMounts:
    - name: mongo-vol
      mountPath: /data/db
I can't add a comment because I don't have more than 50 reputation, but here is the thing:
You created a Pod using MongoDB, and I see the volumeMount set to /data/db. What is this path? Is this volume a hostPath, NFS, or external storage?
The migration itself could be accomplished in two ways:
By running a mongodump on your EC2 instance:
mongodump --host hostname --port 27017 --out /tmp/mongodb-dump
Then run a mongorestore on your /data/db path.
You can manually log in to the pod:
# copy your dump into the pod (fetch it from the EC2 instance first if needed)
kubectl cp /tmp/mongodb-dump mongodb:/tmp/mongodb-dump
# log in to the pod
kubectl exec -it mongodb -- /bin/bash
# run the restore
mongorestore /tmp/mongodb-dump
or
You can copy all the files in /data/db from your EC2 instance and drop them onto the volume your pod mounts at /data/db.
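A hypothetical sketch of that second approach, assuming the EC2 dbPath is /var/lib/mongodb and the pod is the mongodb pod above (the file names are assumptions; stop mongod first so the files are consistent):

# on the EC2 instance: stop mongod and archive the data directory
sudo systemctl stop mongod
tar czf /tmp/mongo-data.tar.gz -C /var/lib/mongodb .

# after copying the archive to your workstation (e.g. with scp):
kubectl cp /tmp/mongo-data.tar.gz mongodb:/tmp/mongo-data.tar.gz
kubectl exec mongodb -- tar xzf /tmp/mongo-data.tar.gz -C /data/db
# recreate the pod so mongod starts cleanly on the copied files
kubectl delete pod mongodb && kubectl apply -f mongodb-pod.yaml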

Azure Container Instances - CLI and YAML file giving different outcomes

I'm trying to deploy Mongo onto Azure Container Instances as part of a container group. To do this, I use a Storage Account with a file share to persist the Mongo data. It's impossible to mount the volume at the default /data/db location, so I mount it elsewhere and start mongod with the --dbpath flag. This all works fine using the CLI; the full command is below.
However, when I translate all of these commands into my YAML config file, it doesn't work. Mongo crashes with an 'unknown file or directory' error. If I start the container without the --dbpath flag but still mount the volume, I can exec into the running container and see that the volume is there and attached. I can even manually create folders in the share via the Azure Portal and see them appear in the container.
Documentation and examples are a little thin on the ground, especially YAML-based examples. The biggest difference with the container group is having to define a named volume separately from the container, which is then used by the volumeMounts property. Is it just a syntax error? Are the CLI command and the YAML not equivalent in some way?
CLI Command
az container create \
  --resource-group rsenu-hassPilots-group \
  --name mongo \
  --image mongo \
  --azure-file-volume-account-name <account> \
  --azure-file-volume-account-key "<key>" \
  --azure-file-volume-share-name mongodata \
  --azure-file-volume-mount-path "/data/mongoaz" \
  --ports 27017 \
  --cpu 1 \
  --ip-address public \
  --memory 1.5 \
  --os-type Linux \
  --protocol TCP \
  --command-line "mongod --dbpath=/data/mongoaz"
YAML Config
apiVersion: 2018-10-01
location: uksouth
name: trustNewArtGroup
properties:
  containers:
  - name: mongo
    properties:
      image: mongo:4.2.3
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 27017
      volumeMounts:
      - name: database
        mountPath: /data/azstorage
      environmentVariables:
      - name: 'MONGO_INITDB_DATABASE'
        value: 'trust-new-art-db'
      command:
      - "mongod --dbpath=/data/azstorage"
  osType: Linux
  ipAddress:
    type: Public
    dnsNameLabel: trustnewart
    ports:
    - protocol: tcp
      port: '27017'
  volumes:
  - name: database
    azureFile:
      sharename: mongodata
      storageAccountName: <account>
      storageAccountKey: <key>
tags: null
type: Microsoft.ContainerInstance/containerGroups
With a bit of help from this page in the documentation, I've discovered it was a syntax issue: command uses exec form, so each list element becomes one argument, and the original single string was treated as the name of a single executable (hence the 'unknown file or directory' error). The correct way to override the entrypoint in a YAML config file is as follows:
command: ['mongod', '--dbpath', '/data/azstorage']
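Once the YAML is corrected, the container group can be deployed straight from the file; a sketch, assuming it's saved as mongo-group.yaml:

az container create --resource-group rsenu-hassPilots-group --file mongo-group.yaml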

What is the /data/configdb path used for in MongoDB?

I am running a Docker image of MongoDB and noticed a volume created and mounted at /data/configdb. This is in addition to another volume mapped to /data/db, which I know is where my actual data is stored.
I am trying to find out what is stored in /data/configdb and searched Google for it; surprisingly, I didn't find anything explaining what is stored there.
What is stored there (/data/configdb), and can it be discarded every time I restart my MongoDB container?
To summarize from the docs, config servers store the metadata for a sharded cluster, and /data/configdb is the default path where a config server stores its data files. So if you're not dealing with sharded clusters, removing any references to it should be ok.
From the docs:
--configsvr
Declares that this mongod instance serves as the config server of a sharded cluster. When running with this option, clients (i.e. other cluster components) will not be able to write data to any database other than config and admin. The default port for a mongod with this option is 27019 and the default --dbpath directory is /data/configdb, unless specified.
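For illustration only, a config server would be started along these lines (the replica set name is a placeholder; since MongoDB 3.4 config servers must run as a replica set):

mongod --configsvr --replSet cfgReplSet --port 27019 --dbpath /data/configdb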
References:
https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
https://docs.mongodb.com/v3.4/reference/program/mongod/#cmdoption-configsvr
Hope this helps!
I have deployed the MongoDB Docker image (https://hub.docker.com/_/mongo) on OpenShift. It generated, out of the box, a deployment.yaml file which contains a section with two volumeMounts:
- mountPath: /data/configdb
  name: mongo-5-0-1
- mountPath: /data/db
  name: mongo-5-0-2
When I want to use a single, non-sharded database server and store my data in a volume, this is how this part of deployment.yaml should look:
volumes:
- name: mongo-5-0-1
  emptyDir: {}
- name: mongo-5-0-2
  persistentVolumeClaim:
    claimName: my-volume-name
In the image documentation it's mentioned that:
This image also defines a volume for /data/configdb for use with --configsvr (see docs.mongodb.com for more details).
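In practice, for a standalone (non-sharded) server you would typically persist only /data/db and let /data/configdb fall through to a throwaway anonymous volume; a sketch, with mongo-data as a placeholder volume name:

docker run -d --name mongodb \
  -p 27017:27017 \
  -v mongo-data:/data/db \
  mongo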

How to create a Kubernetes pod that contains a Docker image with MongoDB

How do I create a pod in Kubernetes that contains a Docker image with MongoDB?
The easiest way to do that is to use Helm, the package manager for Kubernetes.
How to start using Helm
MongoDB Helm Chart
Ziliani, this question is a little unclear. We do not know if your goal is to put MongoDB into a Pod or to run MongoDB in Kubernetes. In the future, try to be clearer about what you want to achieve and what you have already tried, so we know how to help you.
If you want to deploy MongoDB in Kubernetes easily, you can use Helm charts as Vasily mentions, or check this guide on the MongoDB GitHub. You could also read this article to figure out what you should pay attention to.
This tutorial, on the other hand, explains the full process of running MongoDB in Google Kubernetes Engine using a StatefulSet.
If you just want a running Pod with MongoDB inside, as you wrote below:
Because I would like a YAML file that includes this Docker instruction:
docker run -p 27017:27017 --name mongodb bitnami/mongodb:latest
You can use this Pod YAML as a reference:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    hostPath:
      path: /tmp/mongodb
  containers:
  - image: bitnami/mongodb:latest
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
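A sketch of applying and checking it, assuming the manifest is saved as mongodb-pod.yaml:

kubectl apply -f mongodb-pod.yaml
kubectl get pod mongodb
kubectl logs mongodb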

OpenShift's YAML execution precedence regarding volume mounting and commands

As a beginner in container administration, I can't find a clear description of OpenShift's deployment stages and related YAML statements, specifically when persistent volume mounting and shell command execution are involved. For example, the Red Hat documentation has a lot of examples. A simple one is 16.4. Pod Object Definition:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2
      mountPath: /usr/share/busybox
      readOnly: false
  securityContext:
    supplementalGroups: [100003]
    privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc
Now the question is: does the command sleep (or any other) execute before or after the mount of nfsvol-2 is finished? In other words, is it possible to use the volume's resources in such commands? And if it's not possible in this config, which event handlers should be used instead? I don't see any mention of an event like 'volume mounted'.
does the command sleep (or any other) execute before or after the mount of nfsvol-2 is finished?
To understand this, let's dig into the underlying concepts of OpenShift.
OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. OpenShift is essentially an abstraction layer on top of Docker and Kubernetes, along with additional features.
Regarding volumes and commands, let's consider the following example:
Let's run a Docker container, mounting the host machine's /home directory to the container's /root path (-v is the option to attach a volume):
$ docker run -it -v /home:/root ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
50aff78429b1: Pull complete
f6d82e297bce: Pull complete
275abb2c8a6f: Pull complete
9f15a39356d6: Pull complete
fc0342a94c89: Pull complete
Digest: sha256:f871d0805ee3ce1c52b0608108dbdf1b447a34d22d5c7278a3a9dd78fc12c663
Status: Downloaded newer image for ubuntu:latest
root@1f07f083ba79:/# cd /root/
root@1f07f083ba79:~# ls
lost+found raghavendralokineni raghu user1
root@1f07f083ba79:~# cd raghavendralokineni
root@1f07f083ba79:~/raghavendralokineni# pwd
/root/raghavendralokineni
Now execute the sleep command in the container and exit.
root@1f07f083ba79:~/raghavendralokineni# sleep 10
root@1f07f083ba79:~/raghavendralokineni#
root@1f07f083ba79:~/raghavendralokineni# exit
Check the files available in the /home path which we mounted to the container. The content is the same as that of the /root path in the container.
raghavendralokineni@iconic-glider-186709:/home$ ls
lost+found raghavendralokineni raghu user1
So when a volume is mounted into the container, any changes in the volume are reflected on the host machine as well.
Hence the volume is mounted when the container starts, and the command is executed after the container (including its mounts) is up, so the command can use the volume's resources.
Coming back to your YAML file:
volumeMounts:
- name: nfsvol-2
  mountPath: /usr/share/busybox
This says: mount the volume nfsvol-2 into the container; the information about the volume itself is given under volumes:
volumes:
- name: nfsvol-2
  persistentVolumeClaim:
    claimName: nfs-pvc
So the volume is mounted into the container, and then the command that is specified is executed:
containers:
- name: busybox-nfs-pod
  image: busybox
  command: ["sleep", "60000"]
Hope this helps.