I'm using docker-compose to start 3 MongoDB servers which should be in a replica set.
I first start 3 MongoDB servers, then I configure the replica set.
This is how I would do the replica set config in a bash script:
mongo --host 127.0.0.1:27017 <<EOF
var cfg = {
"_id": "rs",
"version": 1,
"members": [
{
"_id": 0,
"host": "127.0.0.1:27017",
"priority": 1
},
// snip...
]
};
rs.initiate(cfg);
rs.reconfig(cfg)
EOF
Here I'm trying to replicate that replica set configuration using docker-compose.
# docker-compose.yml
mongosetup:
image: mongo:3.0
links:
- mongo1:mongo1
command: echo 'var cfg = { "_id": "rs", "version": 1, "members": [ { "_id": 0, "host": "127.0.0.1:27017", "priority": 1 }, { "_id": 1, "host": "mongo2:27017", "priority": 1 },{ "_id": 2, "host": "mongo2:27017", "priority": 1 } ] }; rs.initiate(cfg);' | mongo mongo1
Unfortunately that creates this error: yaml.scanner.ScannerError: mapping values are not allowed here.
What's the recommended approach?
Is it possible to store the cfg object in a separate file that docker-compose reads?
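One option, for what it's worth, would be to store the config in a JavaScript file that docker-compose mounts into the setup container and passes to the mongo shell. A rough sketch, where the ./scripts folder and the replica-init.js file name are assumptions rather than anything from the original setup:

mongosetup:
  image: mongo:3.0
  links:
    - mongo1:mongo1
  volumes:
    - ./scripts:/scripts
  command: mongo --host mongo1:27017 /scripts/replica-init.js

Here ./scripts/replica-init.js would contain the same var cfg = { ... }; rs.initiate(cfg); block as the bash heredoc above.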
I fixed the problem by putting the config in setup.sh, which I call from the entrypoint.
mongosetup:
image: mongo:3.0
links:
- mongo1:mongo1
- mongo2:mongo2
- mongo3:mongo3
volumes:
- ./scripts:/scripts
entrypoint: [ "/scripts/setup.sh" ]
setup.sh
#!/bin/bash
MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
"_id": "rs",
"version": 1,
"members": [
{
"_id": 0,
"host": "${MONGODB1}:27017",
[cut..]
EOF
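For reference, a fuller sketch of what such a setup.sh could look like, based on the snippet above; the mongo2/mongo3 lookups simply follow the same pattern as the mongo1 line, and the set name "rs" and priorities come from the bash script in the question:

#!/bin/bash
# Resolve each container's IP address via ping (same trick as above)
MONGODB1=$(ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1)
MONGODB2=$(ping -c 1 mongo2 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1)
MONGODB3=$(ping -c 1 mongo3 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1)

# Initiate the replica set from the first member
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs",
  "version": 1,
  "members": [
    { "_id": 0, "host": "${MONGODB1}:27017", "priority": 1 },
    { "_id": 1, "host": "${MONGODB2}:27017", "priority": 1 },
    { "_id": 2, "host": "${MONGODB3}:27017", "priority": 1 }
  ]
};
rs.initiate(cfg);
EOF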
I have created a setup with docker-compose that starts 3 MongoDB instances in a replica set plus Elasticsearch with mongodb-river.
It's available at https://github.com/stabenfeldt/elastic-mongo
Related
I have the following docker-compose file:
services:
pgdatabase:
image: postgres:13
environment:
- POSTGRES_USER=root
- POSTGRES_PASSWORD=root
- POSTGRES_DB=ny_taxi
volumes:
- "./data:/var/lib/postgresql/data:rw"
ports:
- "5432:5432"
pgadmin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=admin@admin.com
- PGADMIN_DEFAULT_PASSWORD=root
volumes:
- "./data_pgadmin:/var/lib/pgadmin"
ports:
- "8080:80"
I'm trying to connect to postgres using pgadmin but I'm getting the following error:
Unable to connect to server: could not translate host name "pgdatabase" to address: Name does not resolve
Running docker network ls I get:
NAME DRIVER SCOPE
bridge bridge local
docker-sql-pg_default bridge local
host host local
none null local
Then running docker network inspect docker-sql-pg_default I get
[
{
"Name": "docker-sql-pg_default",
"Id": "bfee2f08620b5ffc1f8e10d8bed65c4d03a98a470deb8b987c4e52a9de27c3db",
"Created": "2023-01-24T17:57:27.831702189Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.24.0.0/16",
"Gateway": "172.24.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"8f53be84a95c9c0591df6cc6edb72d4ca070243c3c067ab2fb14c2094b23bcee": {
"Name": "docker-sql-pg-pgdatabase-1",
"EndpointID": "7f3ddb29b000bc4cfda9c54a4f13e0aa30f1e3f8e5cc1a8ba91cee840c16cd60",
"MacAddress": "02:42:ac:18:00:02",
"IPv4Address": "172.24.0.2/16",
"IPv6Address": ""
},
"bf2eb29b73fe9e49f4bef668a1f70ac2c7e9196b13350f42c28337a47fcd71f4": {
"Name": "docker-sql-pg-pgadmin-1",
"EndpointID": "b3a9504d75e11aa0f08f6a2b5c9c2f660438e23f0d1dd5d7cf4023a5316961d2",
"MacAddress": "02:42:ac:18:00:03",
"IPv4Address": "172.24.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "docker-sql-pg",
"com.docker.compose.version": "2.13.0"
}
}
]
I tried to connect to the gateway IP 172.24.0.1 and to the Postgres container IP 172.24.0.2, but I got a timeout error. Why isn't my network working?
Basically, I solved my problem with the following steps (a rough command sketch follows the list):
First I entered the pgadmin container and used ping to verify that pgadmin could reach postgres.
Since pgadmin could reach postgres, I ran sudo netstat -tulpn | grep LISTEN on my host machine to check which ports were in use. Surprisingly, I had two instances of pgadmin running on 8080 (one of them bugged).
I used docker-compose down to stop the services and docker system prune to delete all images/containers...
I checked the used ports again and one pgadmin was still running on 8080.
I used pidof to find the PID of the running (bugged) pgadmin.
Then I used kill -9 to kill the process.
Finally, I ran docker-compose up -d and pgadmin could communicate with postgres through the pgadmin interface.
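A rough sketch of that debugging sequence (the container and service names are taken from the network inspect output above; the pgadmin process name on the host is an assumption):

# 1. Check that pgadmin can resolve and reach postgres by service name
docker exec -it docker-sql-pg-pgadmin-1 ping -c 3 pgdatabase

# 2. See which host ports are in use and by which PID/program
sudo netstat -tulpn | grep LISTEN

# 3. Tear everything down and clean up
docker-compose down
docker system prune

# 4. Find and kill the leftover pgadmin process, then bring the stack back up
pidof pgadmin4        # the actual process name may differ
kill -9 <PID>         # replace <PID> with the value reported above
docker-compose up -d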
I'm testing a replica set with Docker Compose on my local machine and am having trouble getting Compass connected to and working with the local set. Everything has deployed fine and I can see the exposed ports listening on the host machine. I just keep getting ECONNREFUSED.
Here is the Docker Compose file.
services:
mongo1:
hostname: mongo1
container_name: mongo1
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30001:27017
volumes:
- md1:/data/db
restart: always
command: mongod --replSet mongo-net
mongo2:
hostname: mongo2
container_name: mongo2
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30002:27017
volumes:
- md2:/data/db
restart: always
command: mongod --replSet mongo-net
mongo3:
hostname: mongo3
container_name: mongo3
image: mongo:latest
networks:
- mongo-network
expose:
- 27017
ports:
- 30003:27017
volumes:
- md3:/data/db
restart: always
command: mongod --replSet mongo-net
mongoinit:
image: mongo
restart: "no"
depends_on:
- mongo1
- mongo2
- mongo3
command: >
sh -c "mongosh --host mongo1:27017 --eval
'
db = (new Mongo("localhost:27017")).getDB("test");
config = {
"_id" : "mongo-net",
"members" : [
{
"_id" : 0,
"host" : "localhost:27017",
"priority": 1
},
{
"_id" : 1,
"host" : "localhost:27017",
"priority": 2
},
{
"_id" : 2,
"host" : "localhost:27017",
"priority": 3
}
]
};
rs.initiate(config);
'"
networks:
mongo-network:
volumes:
md1:
md2:
md3:
The containers and replica set deploy fine and are communicating. I can see the defined ports exposed and listening.
My issue is trying to use Compass to connect to the replica set: I get ECONNREFUSED.
What's odd is that I can actually see the client connecting in the primary's logs, but I get no other information about why the connection was refused/disconnected.
{
"attr": {
"connectionCount": 7,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "listener",
"id": 22943,
"msg": "Connection accepted",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.032+00:00"
}
}
{
"attr": {
"client": "conn129",
"doc": {
"application": {
"name": "MongoDB Compass"
},
"driver": {
"name": "nodejs",
"version": "4.8.1"
},
"os": {
"architecture": "arm64",
"name": "darwin",
"type": "Darwin",
"version": "22.1.0"
},
"platform": "Node.js v16.5.0, LE (unified)|Node.js v16.5.0, LE (unified)"
},
"remote": "192.168.32.1:61508"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 51800,
"msg": "client metadata",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.034+00:00"
}
}
{
"attr": {
"connectionCount": 6,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 22944,
"msg": "Connection ended",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.044+00:00"
}
}
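For what it's worth, one quick way to see which host:port pairs the replica set is actually advertising (Compass has to be able to resolve and reach those addresses from the host machine, not just the mapped 3000x ports) is to ask a member for its status. A minimal sketch using the container names from the compose file above:

docker exec -it mongo1 mongosh --quiet --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'

If the members report addresses the host cannot use (for example localhost:27017 for every member, as in the init script above), Compass gets redirected to them after the initial handshake and the connection is likely to be refused.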
Overview: I would like to insert the data in file.json into the database from within docker-compose, using curl.
My attempt: I tried to mount file.json inside the FROST container and then run the curl command from docker-compose.
curl -X POST -H "Content-Type: application/json" -d file.json http://localhost:8080/FROST-Server/v1.1/Things
The resulting docker-compose.yml is:
version: '2'
services:
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://localhost:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
volumes:
- ./file.json:file.json
depends_on:
- database
command:
- curl
- -X
- POST
- --data-binary
- ./file.json
- -H
- "Content-Type: application/json" # <-- YAML string quoting
- -H
- "Accept: application/json"
- "http://localhost:8080/FROST-Server/v1.1/Things"
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
After running docker-compose up, I got this error:
Creating network "frost_default" with the default driver
Creating frost_database_1 ... done
Creating frost_web_1 ... done
Attaching to frost_database_1, frost_web_1
database_1 |
database_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
database_1 |
database_1 | 2022-06-13 11:24:06.420 UTC [1] LOG: starting PostgreSQL 14.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2022-06-13 11:24:06.458 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2022-06-13 11:24:06.486 UTC [22] LOG: database system was shut down at 2022-06-13 09:58:59 UTC
database_1 | 2022-06-13 11:24:06.498 UTC [1] LOG: database system is ready to accept connections
web_1 | % Total % Received % Xferd Average Speed Time Time Time Current
web_1 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
web_1 | curl: (7) Failed to connect to localhost port 8080: Connection refused
frost_web_1 exited with code 7
Note: docker-compose.yml and file.json are in the same directory.
file.json
{
"name": "Car",
"description": "Smart connected car equipped with sensors.",
"properties": {
"vin": "5YJ3EDEB5KF223462"
},
"Locations": [
{
"name": "Parking lot",
"description": "The parking lot of the fictive company.",
"encodingType": "application/vnd.geo+json",
"location": {
"type": "Point",
"coordinates": [8.419432640075684, 49.01395040719586]
}
}
],
"Datastreams": [
{
"name": "Oil pump",
"description": "Measuring the motor oil pressure.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Bar",
"symbol": "bar",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Bar"
},
"Sensor": {
"name": "OIL_PRES_SENSOR2",
"description": "Oil pressure sensor",
"encodingType": "application/pdf",
"metadata": "..."
},
"ObservedProperty": {
"name": "Pressure",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Pressure",
"description": "The oil pressure."
},
"Observations": [
{
"phenomenonTime": "2022-01-10T10:00:00Z",
"result": 2.1
},
{
"phenomenonTime": "2022-01-10T10:01:10Z",
"result": 2.3
},
{
"phenomenonTime": "2022-01-10T10:02:20Z",
"result": 2.7
},
{
"phenomenonTime": "2022-01-10T10:03:30Z",
"result": 2.9
},
{
"phenomenonTime": "2022-01-10T10:04:40Z",
"result": 4.1
},
{
"phenomenonTime": "2022-01-10T10:05:50Z",
"result": 3.7
}
]
},
{
"name": "Motor Temperature",
"description": "The temperature of the motor.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Centigrade",
"symbol": "C",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#DegreeCentigrade"
},
"Sensor": {
"name": "DHT22/Temperature",
"description": "Temperature sensor of a DHT22",
"encodingType": "application/pdf",
"metadata": "https://www.sparkfun.com/datasheets/Sensors/Temperature/DHT22.pdf"
},
"ObservedProperty": {
"name": "Temperature",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#ThermodynamicTemperature",
"description": "The temperature."
},
"Observations": [
{
"phenomenonTime": "2019-03-14T10:00:00Z",
"result": 21.0
},
{
"phenomenonTime": "2019-03-14T10:01:00Z",
"result": 23.1
},
{
"phenomenonTime": "2019-03-14T10:02:00Z",
"result": 40.5
},
{
"phenomenonTime": "2019-03-14T10:03:00Z",
"result": 47.1
},
{
"phenomenonTime": "2019-03-14T10:04:00Z",
"result": 32.2
},
{
"phenomenonTime": "2019-03-14T10:05:00Z",
"result": 30.3
}
]
},
{
"name": "Access",
"description": "The access state (e.g., open, closed) of the vehicle.",
"observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
"unitOfMeasurement": {
"name": "Centimeter",
"symbol": "cm",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Centimeter"
},
"Sensor": {
"name": "Distance Sensor",
"description": "Measure the distance.",
"encodingType": "application/pdf",
"metadata": "..."
},
"ObservedProperty": {
"name": "Length",
"definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Length",
"description": "The length of the measured distance."
},
"Observations": [
{
"phenomenonTime": "2019-03-14T10:00:00Z",
"result": 0.0
},
{
"phenomenonTime": "2019-03-14T10:01:00Z",
"result": 0.0
},
{
"phenomenonTime": "2019-03-14T10:02:00Z",
"result": 21.4
},
{
"phenomenonTime": "2019-03-14T10:03:00Z",
"result": 21.7
},
{
"phenomenonTime": "2019-03-14T10:04:00Z",
"result": 20.9
},
{
"phenomenonTime": "2019-03-14T10:05:00Z",
"result": 0.0
}
]
}
]
}
Any other solution is welcome too :)
I think you're executing your curl command before your "web" container starts. If I were you, I would run "web" on "0.0.0.0" instead of "localhost", and use a separate container that sleeps for some time and then executes the curl command (don't forget about the container entrypoint and override it if required).
I did run your docker-compose and executed the curl manually, but I got a 400:
curl -vvv -X POST --data-binary ./file.json -H "Content-Type: application/json" -H "Accept: application/json" "http://localhost:8080/FROST-Server/v1.1/Things"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:8080...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /FROST-Server/v1.1/Things HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Content-Type: application/json
> Accept: application/json
> Content-Length: 11
>
* upload completely sent off: 11 out of 11 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 400
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: Location
< Content-Length: 224
< Date: Tue, 14 Jun 2022 12:43:06 GMT
< Connection: close
<
* Closing connection 0
{"code":400,"type":"error","message":"Unexpected character ('.' (code 46)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\n at [Source: (BufferedReader); line: 1, column: 2]"}
Since I'm not familiar with this service, I don't know how to fix that, but you can test the docker-compose below and see if you can fix it:
version: '2'
services:
curl:
image: alpine/curl:latest
entrypoint: /bin/sh
command:
- -c
- sleep 10
- curl
- -vvv
- -X
- POST
- --data-binary
- /file.json
- -H
- "Content-Type: application/json" # <-- YAML string quoting
- -H
- "Accept: application/json"
- "http://web:8080/FROST-Server/v1.1/Things"
volumes:
- ./file.json:/file.json
depends_on:
- web
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://0.0.0.0:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
# volumes:
# - ./file.json:/usr/local/tomcat/file.json
depends_on:
- database
# command:
# - curl
# - -X
# - POST
# - --data-binary
# - ./file.json
# - -H
# - "Content-Type: application/json" # <-- YAML string quoting
# - -H
# - "Accept: application/json"
# - "http://localhost:8080/FROST-Server/v1.1/Things"
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
Edit 1: the volume was missing from the "curl" container.
The end solution to the problem. I would like to thank @RoohAllahGodazgar for helping me out.
Explanation: we need another container based on a curl image that runs the curl request after the FROST server is up. In the curl service, we first mount the dummy data inside the container, then sleep for 20 seconds to make sure the FROST server is running, and then run the curl command. After that the container exits.
version: '2'
services:
curl:
image: alpine/curl:latest
command: sh -c "sleep 20 && curl -X POST -d @/file.json http://web:8080/FROST-Server/v1.1/Things"
volumes:
- ./file.json:/file.json
depends_on:
- web
web:
image: fraunhoferiosb/frost-server:latest
environment:
- serviceRootUrl=http://0.0.0.0:8080/FROST-Server
- http_cors_enable=true
- http_cors_allowed.origins=*
- persistence_db_driver=org.postgresql.Driver
- persistence_db_url=jdbc:postgresql://database:5432/sensorthings
- persistence_db_username=sensorthings
- persistence_db_password=ChangeMe
- persistence_autoUpdateDatabase=true
ports:
- 8080:8080
- 1883:1883
depends_on:
- database
database:
image: postgis/postgis:14-3.2-alpine
environment:
- POSTGRES_DB=sensorthings
- POSTGRES_USER=sensorthings
- POSTGRES_PASSWORD=ChangeMe
volumes:
- postgis_volume:/var/lib/postgresql/data
volumes:
postgis_volume:
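As a quick sanity check once the stack is up, the imported entity can be read back from the published port on the host (same service root as in the compose file above):

curl http://localhost:8080/FROST-Server/v1.1/Things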
So, I am following a MongoDB tutorial on Pluralsight and I've been able to create databases a, b, and c on the same machine. After successfully creating all three, I ran mongo on port 30000, which is the port for my primary database.
>mongo --port 30000
It displayed that it was connecting to the port, and then I typed:
db.getMongo()
It made a connection to the address.
Then I typed in a JavaScript object, as done by the instructor on Pluralsight:
>var democonfig={ _id: "demo", members: [{ _id: 0, host: 'localhost: 30000', priority: 10}, { _id: 1, host: 'localhost: 40000'}, { _id: 2, host: 'localhost: 50000', arbiterOnly: true}] };
After I pressed Enter, I tried to run rs.initiate with the democonfig variable:
rs.initiate(democonfig)
This is the error I get:
{ "ok" : 0, "errmsg" : "Bad digit \" \" while parsing 30000", "code" : 93 }
This is what my replica set .bat file looks like:
cd \Pluralsight\
md \Pluralsight\db1
md \Pluralsight\db2
md \Pluralsight\db3
@REM Primary
start "a" c:\MongoDB\bin\mongod.exe --dbpath ./db1 --port 30000 --replSet "demo"
@REM Secondary
start "b" c:\MongoDB\bin\mongod.exe --dbpath ./db2 --port 40000 --replSet "demo"
@REM Arbiter
start "c" c:\MongoDB\bin\mongod.exe --dbpath ./db3 --port 50000 --replSet "demo"
I ran into the same issue in Pluralsight's "Introduction to MongoDB" tutorial. Below is what I used in the "Configuring a Replica Set" section:
{
"_id": "demo",
"members": [
{
"_id": 0,
"host": "localhost:30000",
"priority": 10
},
{
"_id": 1,
"host": "localhost:40000"
},
{
"_id": 2,
"host": "localhost:50000",
"arbiterOnly": true
}
]
}
Solved it! I just removed all the extra spaces in the JavaScript config and it ran fine.
I just removed the space between localhost: and the port number (localhost:30000), and did the same for the other two hosts. It worked fine.
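Putting those fixes together, the corrected one-liner from the question would look like this (same set name, ports, and priorities as above):

var democonfig = { _id: "demo", members: [ { _id: 0, host: "localhost:30000", priority: 10 }, { _id: 1, host: "localhost:40000" }, { _id: 2, host: "localhost:50000", arbiterOnly: true } ] };
rs.initiate(democonfig)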
I already have MongoDB and have installed Elasticsearch with the MongoDB river plugin. So I set up my river:
$ curl -X PUT localhost:9200/_river/database_test/_meta -d '{
"type": "mongodb",
"mongodb": {
"servers": [
{
"host": "127.0.0.1",
"port": 27017
}
],
"options": {
"secondary_read_preference": true
},
"db": "database_test",
"collection": "event"
},
"index": {
"name": "database_test",
"type": "event"
}
}'
I simply want to get events that have country:Canada so I try:
$ curl -XGET 'http://localhost:9200/database_test/_search?q=country:Canada'
And I get:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
I have been searching the web and read that I should first index my collection with Elasticsearch (I lost the link). Should I index my MongoDB? What should I do to get results from an existing MongoDB collection?
The MongoDB river relies on MongoDB's operations log (oplog) to index documents, so it is a requirement that your Mongo database runs as a replica set. I assume that's what you're missing, so when you create the river, the initial import sees nothing to index. I'm also assuming that you're on Linux and have a handle on the shell CLI tools, so try this:
Follow these steps:
Make sure that the mapper-attachments Elasticsearch plugin is also installed.
Make a backup of your database with mongodump.
Edit mongodb.conf (usually in /etc/mongodb.conf, but this varies depending on how you installed it) and add the line:
replSet = rs0
"rs0" is the name of the replicaset, it can be whatever you like.
restart your mongo and then log in its console. Type:
rs.initiate()
rs.slaveOk()
The prompt will change to rs0:PRIMARY>.
Now create your river just as you did in the question and restore your database with mongorestore. Elasticsearch should then index your documents (a sketch of the full sequence follows below).
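As a rough end-to-end sketch of those steps on a typical Linux install (the dump path, the service name, and the database name are assumptions):

# 1. Back up the existing data
mongodump --db database_test --out /tmp/mongo-backup

# 2. Enable the replica set: add "replSet = rs0" to /etc/mongodb.conf, then restart
sudo service mongodb restart

# 3. Initiate the replica set from the mongo shell
mongo --eval 'rs.initiate()'

# 4. Recreate the river with the same curl PUT as in the question, then restore
#    the data so the river's initial import has something to pick up
mongorestore /tmp/mongo-backup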
I recommend using this plugin: http://mobz.github.io/elasticsearch-head/ to navigate your indexes and rivers and make sure your data got indexed.
If that doesn't work, please post which versions of the mongodb-river plugin, Elasticsearch, and MongoDB you are using.