MongoDB docker volume error - DBPathInUse: Unable to lock the lock file - mongodb

I'm trying to run a MongoDB replica set in Docker consisting of three mongo containers.
When I first ran the replica set without any volumes mounted to the containers, it worked well. However, after attempting to mount a volume, the second container won't start.
> docker volume inspect mongo_repl
[
{
"CreatedAt": "2023-02-14T09:37:17Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/mongo_repl/_data",
"Name": "mongo_repl",
"Options": {},
"Scope": "local"
}
]
> docker run -d --rm -p 27017:27017 -v mongo_repl:/data/db --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSet --bind_ip localhost,mongo1
006dc76157d85f6c249168c0582fa3a3cf957ccff2d20fe6de7e5aa46101dac8
> docker run -d --rm -p 27018:27017 --volumes-from mongo1 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSet --bind_ip localhost,mongo2
2b02c21a5e11634ee7c9c219f78ac28c2572fb57f9b0af6a68509353cd52c435
I looked into the container log and found this message.
2023-02-14 18:39:29 {"t":{"$date":"2023-02-14T09:39:29.869+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory"}}
So it has to be a problem with sharing the same volume: when the second container initializes, it tries to create its data files in mongo1's /data/db, but those files have already been created there and the directory is locked.
I can't figure out a way to avoid the issue. Any help please?
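If the issue is that each member needs its own data directory, I assume something like this (a separate named volume per container instead of --volumes-from) would avoid the lock clash, but I'd like to confirm:
> docker volume create mongo_repl_2
> docker run -d --rm -p 27018:27017 -v mongo_repl_2:/data/db --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSet --bind_ip localhost,mongo2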

Related

Connect to MongoDB config server for Sharding on a docker container running on windows10

I am using Windows 10 Operating system.
I have Docker for windows installed on my machine.
I have mongo shell for Windows installed on my machine.
I am creating the config servers using the latest mongo image from docker.
I am trying to create config servers (in a replica set; one primary and two secondaries) in order to set up sharding for MongoDB. I am able to connect to the mongod servers if I create them as a replica set without specifying the --configsvr parameter. But when I specify the --configsvr parameter, it fails with the error below:
connecting to:
mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt
failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused
by :: No connection could be made because the target machine actively
refused it. : connect#src/mongo/shell/mongo.js:374:17 #(connect):2:6
exception: connect failed exiting with code 1
Case 1 - Creating 3 mongod servers as a replica set
Step 1:- Creating 3 mongod containers asia, america and europe.
C:\> docker run -d -p 40001:27017 -v C:/mongodata/data/db --name asia mongo mongod --bind_ip=0.0.0.0 --replSet "rs0"
C:\> docker run -d -p 40002:27017 -v C:/mongodata/data/db --name europe mongo mongod --bind_ip=0.0.0.0 --replSet "rs0"
C:\> docker run -d -p 40003:27017 -v C:/mongodata/data/db --name america mongo mongod --bind_ip=0.0.0.0 --replSet "rs0"
Step 2:- Execute docker ps
Step 3:- Using docker exec to connect to container named asia.
C:\> docker exec -it asia mongo
RESULT:- Successfully connected
Step 4:-Connecting to the container asia from mongoshell:-
Case 2 - Creating 3 mongod servers as config servers as part of a replica set
Step 1:- Creating 3 mongod containers asiaCS, americaCS and europeCS as config servers.
C:/> docker run -d -p 30001:27017 -v C:/mongodata/data/db --name asiaCS mongo mongod --configsvr --bind_ip=0.0.0.0 --replSet "rs1"
C:/> docker run -d -p 30002:27017 -v C:/mongodata/data/db --name europeCS mongo mongod --configsvr --bind_ip=0.0.0.0 --replSet "rs1"
C:/> docker run -d -p 30003:27017 -v C:/mongodata/data/db --name americaCS mongo mongod --configsvr --bind_ip=0.0.0.0 --replSet "rs1"
Step 2:- Execute docker ps
Step 3:- Using docker exec to connect to container named asiaCS.
docker exec -it asiaCS mongo
RESULT:- Failed to connect
Step 4:-Connecting to the container asiaCS from mongoshell:-
The only difference here is the --configsvr parameter, which is required to start a mongod instance as a config server for MongoDB sharding. Has anyone encountered such an issue before?
P.S. - I have kept bind_ip at 0.0.0.0 just to test the connection from mongo shell, but tread with caution when doing the same for production on non-local instances.
It's 27019 for config servers.
When you add --configsvr you need to change port mapping too:
C:/> docker run -d -p 30001:27019 -v C:/mongodata/data/db --name asiaCS mongo mongod --configsvr --bind_ip=0.0.0.0 --replSet "rs1"
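Alternatively (an untested sketch), you could keep the original 30001:27017 mapping and force the config server back onto 27017 with mongod's --port option:
C:/> docker run -d -p 30001:27017 -v C:/mongodata/data/db --name asiaCS mongo mongod --configsvr --port 27017 --bind_ip=0.0.0.0 --replSet "rs1"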

Cannot connect from node to mongo replicaset in docker

I've set up a docker network in which I have set up 3 mongo containers.
Summary of what I've done:
- created a docker network in which I set up 3 mongo docker containers
- opened the mongo shell for the first node and set up the config for the replica set
- tried to connect from the node app, failed
- successfully connected from the mongo shell to the replica set
Below I give a more detailed view of what I've tried.
These are the commands I ran for Docker:
docker network create my-mongo-cluster
docker run -d -p 30001:27017 --name mongo1 --net my-mongo-cluster mongo mongod --replSet my-mongo-set
docker run -d -p 30002:27017 --name mongo2 --net my-mongo-cluster mongo mongod --replSet my-mongo-set
docker run -d -p 30003:27017 --name mongo3 --net my-mongo-cluster mongo mongod --replSet my-mongo-set
docker exec -it mongo1 mongo
config = { "_id": "my-mongo-set", "members": [{"_id": 0, "host": "mongo1:27017"},{"_id": 1,"host": "mongo2:27017"},{"_id": 3,"host": "mongo3:27017" } ]}
rs.initiate(config)
From MongoDB Compass I've connected to the primary node, 192.168.1.3:30001 and created a database test with one collection user.
From node I try the following:
const app = require('express')();
const mongoose = require('mongoose');
//set up mongo connect
const uri = 'mongodb://192.168.1.3:30001,192.168.1.3:30002,192.168.1.3:30003/test'
mongoose.connect(uri, {useNewUrlParser: true, replicaSet: 'my-mongo-set'})
.then(() => console.log("MongoDB Connected"))
.catch(error => console.log(error));
From which I get
Debugger attached.
{ MongoNetworkError: failed to connect to server [mongo3:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongo3 mongo3:27017]
at Pool.<anonymous> (C:\Workspaces\intelij\trial&error\spring-reactive-security\mongo-transactional\node_modules\mongodb-core\lib\topologies\server.js:431:11)
at Pool.emit (events.js:198:13)
at connect (C:\Workspaces\intelij\trial&error\spring-reactive-security\mongo-transactional\node_modules\mongodb-core\lib\connection\pool.js:557:14)
at makeConnection (C:\Workspaces\intelij\trial&error\spring-reactive-security\mongo-transactional\node_modules\mongodb-core\lib\connection\connect.js:39:11)
at callback (C:\Workspaces\intelij\trial&error\spring-reactive-security\mongo-transactional\node_modules\mongodb-core\lib\connection\connect.js:261:5)
at Socket.err (C:\Workspaces\intelij\trial&error\spring-reactive-security\mongo-transactional\node_modules\mongodb-core\lib\connection\connect.js:286:7)
at Object.onceWrapper (events.js:286:20)
at Socket.emit (events.js:198:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
name: 'MongoNetworkError',
errorLabels: [ 'TransientTransactionError' ],
[Symbol(mongoErrorContextSymbol)]: {} }
Waiting for the debugger to disconnect...
Process finished with exit code 0
but if I try from mongo shell, I am able to connect:
mongo "mongodb://192.168.1.3:30001,192.168.1.3:30002,192.168.1.3:30003/test"
MongoDB shell version v4.0.10
connecting to: mongodb://192.168.1.3:30001,192.168.1.3:30002,192.168.1.3:30003/test?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("89994673-11c2-4d6c-8cb5-04041094c147") }
MongoDB server version: 4.0.10
Server has startup warnings:
2019-07-20T06:22:00.476+0000 I STORAGE [initandlisten]
2019-07-20T06:22:00.476+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-07-20T06:22:00.476+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-07-20T06:22:01.185+0000 I CONTROL [initandlisten]
2019-07-20T06:22:01.185+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-07-20T06:22:01.185+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-07-20T06:22:01.185+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
my-mongo-set:PRIMARY>
OK, so I've finally figured out what I was doing wrong. Maybe this will help someone:
you either go into C:\Windows\System32\drivers\etc and add MACHINE_IP mongo1 mongo2 mongo3 to the hosts file, or just replace the config with:
config = { "_id": "my-mongo-set", "members": [{"_id": 0, "host": "<MACHINE_IP>:30001"},{"_id": 1,"host": "<MACHINE_IP>:30002"},{"_id": 2,"host": "<MACHINE_IP>:30003" } ]}
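For the hosts-file route, the entry would look something like this (using the 192.168.1.3 address from the question as MACHINE_IP):
# C:\Windows\System32\drivers\etc\hosts
192.168.1.3 mongo1 mongo2 mongo3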
Also, I had some problems with mongo figuring out which one was primary, so I slightly modified the docker run commands by adding MONGODB_REPLICA_SET_MODE and MONGODB_PRIMARY_HOST:
docker run -d -p 30001:27017 --name mongo1 -e MONGODB_REPLICA_SET_MODE=primary --net my-mongo-cluster mongo mongod --replSet my-mongo-set
docker run -d -p 30002:27017 --name mongo2 -e MONGODB_REPLICA_SET_MODE=secondary -e MONGODB_PRIMARY_HOST=mongo1 --net my-mongo-cluster mongo mongod --replSet my-mongo-set
docker run -d -p 30003:27017 --name mongo3 -e MONGODB_REPLICA_SET_MODE=secondary -e MONGODB_PRIMARY_HOST=mongo1 --net my-mongo-cluster mongo mongod --replSet my-mongo-set

Can't connect to mongodb in the docker container

I've built a docker container running a mongodb instance that should be exposed to the host.
However, when I try to connect from the host to the mongodb container, the connection is denied.
This is my Dockerfile:
FROM mongo:latest
RUN mkdir -p /var/lib/mongodb && \
touch /var/lib/mongodb/.keep && \
chown -R mongodb:mongodb /var/lib/mongodb
ADD mongodb.conf /etc/mongodb.conf
VOLUME [ "/var/lib/mongodb" ]
EXPOSE 27017
USER mongodb
WORKDIR /var/lib/mongodb
ENTRYPOINT ["/usr/bin/mongod", "--config", "/etc/mongodb.conf"]
CMD ["--quiet"]
This is my config file for MongoDB, /etc/mongodb.conf, where I bind the IP 0.0.0.0 explicitly, since I found here on SO that 127.0.0.1 could be the root cause of my issue (but it isn't):
systemLog:
  destination: file
  path: /var/log/mongodb/mongo.log
  logAppend: true
storage:
  dbPath: /var/lib/mongodb
net:
  bindIp: 0.0.0.0
The docker container is running, but a connection from the host is not possible:
host$ docker run -p 27017:27017 -d --name mongodb-test mongodb-image
host$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6ec958034a6f mongodb-image "/usr/bin/mongod --co" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp mongodb-test
Find the IP-Address:
host$ docker inspect 6ec958034a6f |grep IPA
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAMConfig": null,
"IPAddress": "172.17.0.2",
Try to connect:
host$ mongo 172.17.0.2:27017
MongoDB shell version v3.4.0
connecting to: mongodb://172.17.0.2:27017
2016-12-16T15:53:40.318+0100 W NETWORK [main] Failed to connect to 172.17.0.2:27017 after 5000 milliseconds, giving up.
2016-12-16T15:53:40.318+0100 E QUERY [main] Error: couldn't connect to server 172.17.0.2:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:234:13
#(connect):1:6
exception: connect failed
When I ssh into the container, I can connect to mongo and list the test database successfully.
Use host.docker.internal with the exposed port: host.docker.internal:27017
Using localhost instead of the IP allows the connection.
Combine it with the exposed port: localhost:27017
I tested the solution as it was stated in the comments, and it works.
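Putting those two suggestions together, a quick check from the host might look like this (just a sketch, assuming the port published with -p 27017:27017):
# Through the published port on the host:
mongo mongodb://localhost:27017
# host.docker.internal is the name Docker Desktop gives the host machine,
# useful when connecting from inside another container:
mongo mongodb://host.docker.internal:27017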

Mongodb won't start in Docker container when remote-mounting a volume from host

On a Mac, I'm running mongodb from docker with the data directory mounted from a volume on my host like this:
docker run -d -p 27017:27017 -v /Users/me/git/world/db:/var/lib/mongodb gzoller/world
(/var/lib/mongodb is where the mongo config file says its storing data.)
I want to have my data persist on the host even if I kill the container running mongodb.
Using this run command, though, mongo doesn't start in the container. The mongo log in the container has this clip:
Mon Sep 14 18:27:59 [initandlisten] options: { command: [ "run" ], config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", journal: "true", logappend: "true", logpath: "/var/log/mongodb/mongodb.log", unixSocketPrefix: "/var/run/mongodb" }
Mon Sep 14 18:27:59 [initandlisten] Assertion: 13651:Couldn't fsync directory '/var/lib/mongodb': errno:22 Invalid argument
I've chmod a+rwx the db directory on the host I'm mounting to.
What I think I've learned so far is that (at least on a Mac) only root in the container can write to a volume on the host and mongo creates its own user, mongodb, which I presume can't write to my mounted volume.
How can I get mongo writing to a mounted host volume on MacOS?
It looks like that's not a permission issue. If you review the original repository (gzoller/world/go.sh), you should run the command as below:
docker run -d -p 80:80 -p 5672:5672 -p 15672:15672 -p 27017:27017 -p 28017:28017 -p 11211:11211 -v /Users/me/git/world/db:/var/lib/mongodb -e HOST_IP=`docker-machine ip default` gzoller/world
Let me know the result.
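If the goal is simply to persist data across container restarts, another option (not from the original repo, just a sketch) is a named Docker volume instead of a macOS bind mount, since named volumes live inside the Docker VM's filesystem and support the fsync calls mongod makes:
docker volume create world_db
docker run -d -p 27017:27017 -v world_db:/var/lib/mongodb gzoller/world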

Deploy mongodb replicaset servers with Docker on different physical servers

I'm trying to deploy a mongodb replicaset using docker.
I managed to do it on a same server by executing this :
docker run -d --expose 27017 --name mongodbmycompany1 dockerfile/mongodb mongod --replSet rsacommeassure
docker run -d --expose 27017 --name mongodbmycompany2 dockerfile/mongodb mongod --replSet rsacommeassure
docker run -d --expose 27017 --name mongodbmycompany3 dockerfile/mongodb mongod --replSet rsacommeassure
MONGODB1=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mongodbmycompany1)
MONGODB2=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mongodbmycompany2)
MONGODB3=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mongodbmycompany3)
echo $MONGODB1
echo $MONGODB2
echo $MONGODB3
echo "Mongodb Replicaset init"
docker exec mongodbmycompany1 mongo 127.0.0.1:27017/mycompany --eval 'if(!rs.conf()) { rs.initiate(); cfg = rs.conf(); cfg.members[0].host = "'$MONGODB1':27017"; rs.reconfig(cfg); rs.add("'$MONGODB2':27017"); rs.add("'$MONGODB3':27017"); } rs.status();'
It's working as expected. My replica set is initialized and its config contains my 3 servers identified by their internal IP addresses.
It's not perfect, as I'd prefer to use server names, but I didn't manage to do that: Docker only populates each /etc/hosts file with the names of containers passed at launch with the --link parameter, so if I add a new server while the others are running, those servers won't be able to resolve the new one.
Now I have another question. In production, having a lot of MongoDB containers running on the same physical server is possible, but it's not safe:
- if my physical server goes down, I lose all my MongoDB replicas and my service is down
- if my physical server uses internal storage, all my containers use the same disk... and I'm going to have IO problems
So my question is: how can I deploy many MongoDB replicas on multiple physical servers? How can those replicas communicate with each other (primary and secondary servers can change) while they are on different servers or even in different datacenters?
Let's assume:
- you have 3 different docker hosts (servers), with IPs 10.1.1.101, 10.1.1.102, 10.1.1.103
- you want to deploy a single replica set called rsacommeassure
- the Dockerfiles for mongodb expose port 27017
- all servers are in a trusted zone and can talk to each other
First let's start the mongodb containers on each server (10.1.1.101 ~$ is used as the command prompt):
10.1.1.101 ~$ docker run -d -p 27017:27017 --name mongodbmycompany1 dockerfile/mongodb mongod --replSet rsacommeassure
10.1.1.102 ~$ docker run -d -p 27017:27017 --name mongodbmycompany2 dockerfile/mongodb mongod --replSet rsacommeassure
10.1.1.103 ~$ docker run -d -p 27017:27017 --name mongodbmycompany3 dockerfile/mongodb mongod --replSet rsacommeassure
-p 27017:27017 exposes port 27017 on the host IP so mongo is accessible on servers' host IP address.
Then you need to initiate the replica set, so just run this against a mongodb container (I'll pick server1 here):
your_laptop ~$ > mongo --host 10.1.1.101
MongoDB shell version: 2.6.9
connecting to: test
> rs.initiate()
> cfg = rs.conf()
> cfg.members[0].host = "10.1.1.101:27017"
> rs.reconfig(cfg)
> rs.add("10.1.1.102:27017")
> rs.add("10.1.1.103:27017")
> rs.status();
The IPs are local, but it works with global ones as well, as long as the servers can talk to each other (VPN, firewall, DMZ, whatever). By the way, you should consider security carefully.
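For example, a common first hardening step (a hypothetical sketch, assuming a key file is baked into the image or mounted at /etc/mongo-keyfile on every member) is to run each mongod with keyfile authentication:
10.1.1.101 ~$ docker run -d -p 27017:27017 --name mongodbmycompany1 dockerfile/mongodb mongod --replSet rsacommeassure --keyFile /etc/mongo-keyfile --auth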
I've created a replica set on different physical servers using docker-machine with the VirtualBox driver:
$ docker-machine create --driver virtualbox server1
$ docker-machine create --driver virtualbox server2
$ docker-machine create --driver virtualbox server3
Open 3 different terminals, in each
$(Terminal1) eval "$(docker-machine env server1)"
$(Terminal2) eval "$(docker-machine env server2)"
$(Terminal3) eval "$(docker-machine env server3)"
In each terminal:
$(Terminal1) docker run -d -p 27017:27017 --name mongoClient1 mongo mongod --replSet r1
$(Terminal2) docker run -d -p 27017:27017 --name mongoClient2 mongo mongod --replSet r1
$(Terminal3) docker run -d -p 27017:27017 --name mongoClient3 mongo mongod --replSet r1
Go into VirtualBox -> for each environment (server1, server2, server3) -> Settings -> Network -> Adapter 1 -> Port Forwarding. Create a new rule: Protocol TCP, Host Port 27017, Guest Port 27017; leave Host IP and Guest IP empty.
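If you prefer the command line to the VirtualBox UI, the same NAT rule can be added with VBoxManage (a sketch; repeat for server2 and server3, and note the VM may need to be stopped for modifyvm to take effect):
$ VBoxManage modifyvm "server1" --natpf1 "mongo,tcp,,27017,,27017"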
Now restart all the environments, you can do this from the VirtualBox or from the terminal, from terminal just run:
$(Terminal1) docker-machine restart server1
$(Terminal2) docker-machine restart server2
$(Terminal3) docker-machine restart server3
Restart the containers:
$(Terminal1) docker start mongoClient1
$(Terminal2) docker start mongoClient2
$(Terminal3) docker start mongoClient3
Now the containers should be running, you can check them by running
$ docker ps in each terminal
Get into the first container's(or another) Mongo Shell
$(Terminal1) docker exec -it mongoClient1 mongo
// now we are in the Mongo Shell
$(Mongo Shell) rs.initiate()
$(Mongo Shell) cfg = rs.conf()
$(Mongo Shell) cfg.members[0].host = <server1's Ip Address>
// you should get server1's Ip Address by running $ docker-machine ls, mine was 192.168.99.100
$(Mongo Shell) rs.reconfig(cfg)
$(Mongo Shell Primary) rs.add("<server2's Ip Address>:27017")
// now we added a Secondary
$(Mongo Shell Primary) rs.add("<server3's Ip Address>:27017", true)
// now we added an Arbiter
$(Mongo Shell Primary) use planes
// now we create a new database
$(Mongo Shell Primary) db.tranporters.insert({name:'Boeing'})
// create a new collection
$(Mongo Shell Primary) db.tranporters.find()
// we obtain the inserted plane
To connect to a Secondary, you can either:
$(Terminal2) docker exec -it mongoClient2 mongo planes
// or
$(Mongo Shell Primary) db = connect ("<server2's Ip Address>:27017/planes")
Now we are in the Mongo Shell of a Secondary
$(Mongo Shell Secondary) rs.slaveOk()
// to allow readings from the Shell
$(Mongo Shell Secondary)db.tranporters.find()
// should return inserted plane
You could use "Weave - the Docker network" to resolve your problem easily.
Weave creates an overlay network that joins containers on different hosts, even at different cloud providers. Weave also supplies a DNS service that lets you find containers by name within the Weave network.
# Stop already-running instances
docker stop m1 m2 m3
# Remove the old containers
docker rm -f m1 m2 m3
# Start MongoDB services optimised for faster startup
docker run -dP --name m1 mongo mongod --replSet "r1" --noprealloc --smallfiles --nojournal --syncdelay 0
docker run -dP --name m2 mongo mongod --replSet "r1" --noprealloc --smallfiles --nojournal --syncdelay 0
docker run -dP --name m3 mongo mongod --replSet "r1" --noprealloc --smallfiles --nojournal --syncdelay 0
export M1_ADDRESS=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' m1`
export M2_ADDRESS=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' m2`
export M3_ADDRESS=`docker inspect --format '{{ .NetworkSettings.IPAddress }}' m3`
docker exec m1 mongo --eval "rs.initiate();"
docker exec m1 mongo --eval "cfg = rs.conf(); cfg.members[0].host = '$M1_ADDRESS:27017'; rs.reconfig(cfg);"
docker exec m1 mongo --eval "rs.add('$M2_ADDRESS:27017');rs.add('$M3_ADDRESS:27017')"
# Check if everything is fine.
docker exec m1 mongo --eval "rs.status();"
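With weaveDNS in place, the same reconfiguration could in principle use container names instead of the inspected IP addresses; a hypothetical sketch, assuming the containers are attached to the Weave network and registered under the default weave.local domain:
docker exec m1 mongo --eval "cfg = rs.conf(); cfg.members[0].host = 'm1.weave.local:27017'; rs.reconfig(cfg);"
docker exec m1 mongo --eval "rs.add('m2.weave.local:27017'); rs.add('m3.weave.local:27017')"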
I scripted a docker image that sets up a mongodb replicaSet for any number of containers and automates scaling as you add more containers. Check out the GitHub repo or the Docker registry.
Use Docker Compose
Set up your docker-compose.yml:
version: "2"
services:
  <your_service_name>:
    image: rollymaduk/mongo-replica:local
    environment:
      REPLICA_NAME: "<your_replica_name>"
    volumes:
      - /var/config:/var/config
Run in the command line:
docker-compose up
Scale up to more containers:
docker-compose scale <your_service_name>=3
Docker Cloud
To deploy a mongo-db replicaSet using docker-cloud, set up a stack file.
stack.yml: stack file not requiring shared volumes
<service_name>:
  image: rollymaduk/mongo-replica:cloud
  deployment_strategy: high_availability
  target_num_containers: 3
  environment:
    REPLICA_NAME: <your_replica_name>
    DOCKERCLOUD_AUTH: <your_docker_auth_key>
stack.yml: stack file requiring shared volume
<service_name>:
  image: rollymaduk/mongo-replica:local
  deployment_strategy: high_availability
  target_num_containers: 3
  volumes:
    - /var/config:/var/config
  environment:
    REPLICA_NAME: <your_replica_name>
Using the docker-cloud CLI, run in the command line (if the stack file does not exist in the cloud yet):
docker-cloud stack create --name <your_stack_name> -f <your_stack_file>
docker-cloud stack start <your_stack_name>
Using the docker-cloud CLI, run in the command line (if the stack file already exists in the cloud):
docker-cloud stack update -f <your_stack_file> <your_stack_name>
docker-cloud stack redeploy <your_stack_name>
Important: you must specify a shared volume and mount it to the container's config directory (default: /var/config). The example above mounts a host directory /var/config to the container's config volume.
You can use any of the Docker-recommended ways of sharing volumes between containers (i.e. mounting a host directory or a data volume container), as long as you specify the correct path for the config volume (/var/config).
To change the default config directory, use the --CONFIG_DIR environment variable when setting up your container. Make sure to update the host volume path to reflect your new --CONFIG_DIR name.
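For instance, a compose service overriding the default might look roughly like this (hypothetical; the note above calls the variable --CONFIG_DIR, and I'm assuming it is read as CONFIG_DIR and that /opt/mongo-config is just an example path):
version: "2"
services:
  <your_service_name>:
    image: rollymaduk/mongo-replica:local
    environment:
      REPLICA_NAME: "<your_replica_name>"
      CONFIG_DIR: /opt/mongo-config
    volumes:
      - /opt/mongo-config:/opt/mongo-config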