How to insert data using docker-compose - PostgreSQL

Overview: I would like to insert the data in file.json into the database from within docker-compose, using curl.
My attempt: I mounted file.json inside the FROST container, then ran the following curl command from docker-compose:
curl -X POST -H "Content-Type: application/json" -d file.json http://localhost:8080/FROST-Server/v1.1/Things
The resulting docker-compose.yml looks like this:
version: '2'
services:
  web:
    image: fraunhoferiosb/frost-server:latest
    environment:
      - serviceRootUrl=http://localhost:8080/FROST-Server
      - http_cors_enable=true
      - http_cors_allowed.origins=*
      - persistence_db_driver=org.postgresql.Driver
      - persistence_db_url=jdbc:postgresql://database:5432/sensorthings
      - persistence_db_username=sensorthings
      - persistence_db_password=ChangeMe
      - persistence_autoUpdateDatabase=true
    ports:
      - 8080:8080
      - 1883:1883
    volumes:
      - ./file.json:file.json
    depends_on:
      - database
    command:
      - curl
      - -X
      - POST
      - --data-binary
      - ./file.json
      - -H
      - "Content-Type: application/json" # <-- YAML string quoting
      - -H
      - "Accept: application/json"
      - "http://localhost:8080/FROST-Server/v1.1/Things"
  database:
    image: postgis/postgis:14-3.2-alpine
    environment:
      - POSTGRES_DB=sensorthings
      - POSTGRES_USER=sensorthings
      - POSTGRES_PASSWORD=ChangeMe
    volumes:
      - postgis_volume:/var/lib/postgresql/data
volumes:
  postgis_volume:
After running docker-compose up, I got this error:
Creating network "frost_default" with the default driver
Creating frost_database_1 ... done
Creating frost_web_1 ... done
Attaching to frost_database_1, frost_web_1
database_1 |
database_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
database_1 |
database_1 | 2022-06-13 11:24:06.420 UTC [1] LOG: starting PostgreSQL 14.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2022-06-13 11:24:06.421 UTC [1] LOG: listening on IPv6 address "::", port 5432
database_1 | 2022-06-13 11:24:06.458 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2022-06-13 11:24:06.486 UTC [22] LOG: database system was shut down at 2022-06-13 09:58:59 UTC
database_1 | 2022-06-13 11:24:06.498 UTC [1] LOG: database system is ready to accept connections
web_1 | % Total % Received % Xferd Average Speed Time Time Time Current
web_1 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
web_1 | curl: (7) Failed to connect to localhost port 8080: Connection refused
frost_web_1 exited with code 7
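Note (an observation beyond the answer below): setting command: on the web service replaces the image's default server startup, so Tomcat never runs in that container and nothing ever listens on port 8080 there. What a command: override replaces can be checked with:
# Show the entrypoint and default command the image would normally run.
docker inspect fraunhoferiosb/frost-server:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
Running curl from a separate container, as in the answer below, leaves the server command intact.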
Note: docker-compose.yml and file.json are in the same directory.
file.json:
{
  "name": "Car",
  "description": "Smart connected car equipped with sensors.",
  "properties": {
    "vin": "5YJ3EDEB5KF223462"
  },
  "Locations": [
    {
      "name": "Parking lot",
      "description": "The parking lot of the fictive company.",
      "encodingType": "application/vnd.geo+json",
      "location": {
        "type": "Point",
        "coordinates": [8.419432640075684, 49.01395040719586]
      }
    }
  ],
  "Datastreams": [
    {
      "name": "Oil pump",
      "description": "Measuring the motor oil pressure.",
      "observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
      "unitOfMeasurement": {
        "name": "Bar",
        "symbol": "bar",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Bar"
      },
      "Sensor": {
        "name": "OIL_PRES_SENSOR2",
        "description": "Oil pressure sensor",
        "encodingType": "application/pdf",
        "metadata": "..."
      },
      "ObservedProperty": {
        "name": "Pressure",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Pressure",
        "description": "The oil pressure."
      },
      "Observations": [
        { "phenomenonTime": "2022-01-10T10:00:00Z", "result": 2.1 },
        { "phenomenonTime": "2022-01-10T10:01:10Z", "result": 2.3 },
        { "phenomenonTime": "2022-01-10T10:02:20Z", "result": 2.7 },
        { "phenomenonTime": "2022-01-10T10:03:30Z", "result": 2.9 },
        { "phenomenonTime": "2022-01-10T10:04:40Z", "result": 4.1 },
        { "phenomenonTime": "2022-01-10T10:05:50Z", "result": 3.7 }
      ]
    },
    {
      "name": "Motor Temperature",
      "description": "The temperature of the motor.",
      "observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
      "unitOfMeasurement": {
        "name": "Centigrade",
        "symbol": "C",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#DegreeCentigrade"
      },
      "Sensor": {
        "name": "DHT22/Temperature",
        "description": "Temperature sensor of a DHT22",
        "encodingType": "application/pdf",
        "metadata": "https://www.sparkfun.com/datasheets/Sensors/Temperature/DHT22.pdf"
      },
      "ObservedProperty": {
        "name": "Temperature",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#ThermodynamicTemperature",
        "description": "The temperature."
      },
      "Observations": [
        { "phenomenonTime": "2019-03-14T10:00:00Z", "result": 21.0 },
        { "phenomenonTime": "2019-03-14T10:01:00Z", "result": 23.1 },
        { "phenomenonTime": "2019-03-14T10:02:00Z", "result": 40.5 },
        { "phenomenonTime": "2019-03-14T10:03:00Z", "result": 47.1 },
        { "phenomenonTime": "2019-03-14T10:04:00Z", "result": 32.2 },
        { "phenomenonTime": "2019-03-14T10:05:00Z", "result": 30.3 }
      ]
    },
    {
      "name": "Access",
      "description": "The access state (e.g., open, closed) of the vehicle.",
      "observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Measurement",
      "unitOfMeasurement": {
        "name": "Centimeter",
        "symbol": "cm",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/unit/Instances.html#Centimeter"
      },
      "Sensor": {
        "name": "Distance Sensor",
        "description": "Measure the distance.",
        "encodingType": "application/pdf",
        "metadata": "..."
      },
      "ObservedProperty": {
        "name": "Length",
        "definition": "http://www.qudt.org/qudt/owl/1.0.0/quantity/Instances.html#Length",
        "description": "The length of the measured distance."
      },
      "Observations": [
        { "phenomenonTime": "2019-03-14T10:00:00Z", "result": 0.0 },
        { "phenomenonTime": "2019-03-14T10:01:00Z", "result": 0.0 },
        { "phenomenonTime": "2019-03-14T10:02:00Z", "result": 21.4 },
        { "phenomenonTime": "2019-03-14T10:03:00Z", "result": 21.7 },
        { "phenomenonTime": "2019-03-14T10:04:00Z", "result": 20.9 },
        { "phenomenonTime": "2019-03-14T10:05:00Z", "result": 0.0 }
      ]
    }
  ]
}
Any other solution is welcome too. :)

I think you're executing your curl command before your web container starts. If I were you, I would run the web service on 0.0.0.0 instead of localhost, and use a separate container that sleeps for some time and then executes the curl command (don't forget about the container entrypoint and override it if required).
I ran your docker-compose and executed the curl command manually, but I got a 400:
curl -vvv -X POST --data-binary ./file.json -H "Content-Type: application/json" -H "Accept: application/json" "http://localhost:8080/FROST-Server/v1.1/Things"
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 127.0.0.1:8080...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> POST /FROST-Server/v1.1/Things HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Content-Type: application/json
> Accept: application/json
> Content-Length: 11
>
* upload completely sent off: 11 out of 11 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 400
< Access-Control-Allow-Origin: *
< Access-Control-Expose-Headers: Location
< Content-Length: 224
< Date: Tue, 14 Jun 2022 12:43:06 GMT
< Connection: close
<
* Closing connection 0
{"code":400,"type":"error","message":"Unexpected character ('.' (code 46)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')\n at [Source: (BufferedReader); line: 1, column: 2]"}
Since I'm not familiar with this service, I don't know how to fix this, but you can test the docker-compose below once you've fixed it:
version: '2'
services:
  curl:
    image: alpine/curl:latest
    entrypoint: /bin/sh
    command:
      - -c
      - >-
        sleep 10 &&
        curl -vvv -X POST --data-binary /file.json
        -H "Content-Type: application/json"
        -H "Accept: application/json"
        "http://web:8080/FROST-Server/v1.1/Things"
    volumes:
      - ./file.json:/file.json
    depends_on:
      - web
  web:
    image: fraunhoferiosb/frost-server:latest
    environment:
      - serviceRootUrl=http://0.0.0.0:8080/FROST-Server
      - http_cors_enable=true
      - http_cors_allowed.origins=*
      - persistence_db_driver=org.postgresql.Driver
      - persistence_db_url=jdbc:postgresql://database:5432/sensorthings
      - persistence_db_username=sensorthings
      - persistence_db_password=ChangeMe
      - persistence_autoUpdateDatabase=true
    ports:
      - 8080:8080
      - 1883:1883
    # volumes:
    #   - ./file.json:/usr/local/tomcat/file.json
    depends_on:
      - database
    # command:
    #   - curl
    #   - -X
    #   - POST
    #   - --data-binary
    #   - ./file.json
    #   - -H
    #   - "Content-Type: application/json" # <-- YAML string quoting
    #   - -H
    #   - "Accept: application/json"
    #   - "http://localhost:8080/FROST-Server/v1.1/Things"
  database:
    image: postgis/postgis:14-3.2-alpine
    environment:
      - POSTGRES_DB=sensorthings
      - POSTGRES_USER=sensorthings
      - POSTGRES_PASSWORD=ChangeMe
    volumes:
      - postgis_volume:/var/lib/postgresql/data
volumes:
  postgis_volume:
Edit 1: the volume was missing from the "curl" container.

The final solution to the problem. I would like to thank @RoohAllahGodazgar for helping me out.
Explanation: we need another container, based on a curl image, from which we can run the curl request after the FROST server is up. In the curl service we first mount the dummy data inside the container, then sleep 20 seconds to make sure the FROST server is running, and then run curl. After that, the container exits.
version: '2'
services:
  curl:
    image: alpine/curl:latest
    command: sh -c "sleep 20 && curl -X POST -d @/file.json http://web:8080/FROST-Server/v1.1/Things"
    volumes:
      - ./file.json:/file.json
    depends_on:
      - web
  web:
    image: fraunhoferiosb/frost-server:latest
    environment:
      - serviceRootUrl=http://0.0.0.0:8080/FROST-Server
      - http_cors_enable=true
      - http_cors_allowed.origins=*
      - persistence_db_driver=org.postgresql.Driver
      - persistence_db_url=jdbc:postgresql://database:5432/sensorthings
      - persistence_db_username=sensorthings
      - persistence_db_password=ChangeMe
      - persistence_autoUpdateDatabase=true
    ports:
      - 8080:8080
      - 1883:1883
    depends_on:
      - database
  database:
    image: postgis/postgis:14-3.2-alpine
    environment:
      - POSTGRES_DB=sensorthings
      - POSTGRES_USER=sensorthings
      - POSTGRES_PASSWORD=ChangeMe
    volumes:
      - postgis_volume:/var/lib/postgresql/data
volumes:
  postgis_volume:
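A possible refinement (my sketch, not part of the accepted answer): instead of a fixed sleep 20, the curl service can poll the endpoint until the server answers, so startup time doesn't have to be guessed:
# Hypothetical variant of the curl service above: wait until the server
# actually responds before POSTing, instead of sleeping a fixed 20 seconds.
curl:
  image: alpine/curl:latest
  command: >
    sh -c "until curl -sf http://web:8080/FROST-Server/v1.1/Things > /dev/null; do sleep 2; done
    && curl -X POST -d @/file.json http://web:8080/FROST-Server/v1.1/Things"
  volumes:
    - ./file.json:/file.json
  depends_on:
    - web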

Related

is it possible to run Hyperledger Explorer on ARM64 like Raspberry pi?

I am trying to see the blocks of Fabric using Hyperledger Explorer on a Raspberry Pi. Every time I run docker-compose up, the result is always the same:
Creating explorerdb.mynetwork.com ... done
Creating explorer.mynetwork.com ... done
Attaching to explorerdb.mynetwork.com, explorer.mynetwork.com
explorer.mynetwork.com | exec /usr/local/bin/docker-entrypoint.sh: exec format error
explorerdb.mynetwork.com | 2022-12-13 19:59:02.094 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
explorerdb.mynetwork.com | 2022-12-13 19:59:02.096 UTC [1] LOG: listening on IPv6 address "::", port 5432
explorerdb.mynetwork.com | 2022-12-13 19:59:02.102 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
explorerdb.mynetwork.com | 2022-12-13 19:59:02.152 UTC [20] LOG: database system was shut down at 2022-12-13 19:58:39 UTC
explorerdb.mynetwork.com | 2022-12-13 19:59:02.164 UTC [1] LOG: database system is ready to accept connections
explorer.mynetwork.com exited with code 1
My docker-compose.yaml:
version: '2.1'
volumes:
  pgdata:
  walletstore:
networks:
  mynetwork.com:
    name: el_red
services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:latest
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork.com
  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    environment:
      - DATABASE_HOST=explorerdb.mynetwork.com
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - LOG_LEVEL_DB=debug
      - LOG_LEVEL_CONSOLE=info
      - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=false
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./organizations:/tmp/crypto
      - walletstore:/opt/explorer/wallet
    ports:
      - 8080:8080
    depends_on:
      explorerdb.mynetwork.com:
        condition: service_healthy
    networks:
      - mynetwork.com
My test-network.json:
{
  "name": "my-network",
  "version": "1.0.0",
  "client": {
    "tlsEnable": true,
    "adminCredential": {
      "id": "exploreradmin",
      "password": "exploreradminpw"
    },
    "enableAuthentication": true,
    "organization": "Org1MSP",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "peers": {
        "peer0.org1.example.com": {}
      }
    }
  },
  "organizations": {
    "Org1MSP": {
      "mspid": "Org1MSP",
      "adminPrivateKey": {
        "pem": "-----BEGIN PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgtkP3OchnVeSd6c0n\ns/SXp7E3JLiBQZExi1UVXuCQYcahRANCAAQgJLvV9SaRC550c1hyDVfDao1MaxJU\nlvnDq1Yi51/d2d5sLndQ4q33nuAoybKIR3eQIrvE2Wu4wTGQCL2r3t2F\n-----END PRIVATE KEY-----\n"
      },
      "peers": ["peer0.org1.example.com"],
      "signedCert": {
        "path": "/tmp/crypto//peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/cert.pem"
      }
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "tlsCACerts": {
        "path": "/tmp/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "url": "grpcs://peer0.org1.example.com:7051"
    }
  }
}
I wait for about an hour every time I run docker-compose up, but nothing happens. Please help. Thank you so much in advance.
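A hint on the log above (my note, not from the question): exec format error almost always means the image was built for a different CPU architecture than the host. On a Raspberry Pi that usually means an amd64-only image on an arm64 host, which can be checked like this:
# Compare the image's build architecture with the host's.
docker image inspect hyperledger/explorer:latest --format '{{.Os}}/{{.Architecture}}'
uname -m   # a 64-bit Raspberry Pi OS reports aarch64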

Unable to connect to Mongodb container from flask container

#Docker compose file
version: "3.4" # optional since v1.27.0
services:
  flaskblog:
    build:
      context: flaskblog
      dockerfile: Dockerfile
    image: flaskblog
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: 5000
      MONGODB_DATABASE: flaskdb
      MONGODB_USERNAME: flaskuser
      MONGODB_PASSWORD:
      MONGODB_HOSTNAME: mongodbuser
    volumes:
      - appdata:/var/www
    depends_on:
      - mongodb
    networks:
      - frontend
      - backend
  mongodb:
    image: mongo:4.2
    #container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD:
      MONGO_INITDB_DATABASE: flaskdb
      MONGODB_DATA_DIR: /data/db2
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db2
      #- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
  appdata:
    driver: local
#Flask container file
FROM python:3.8-slim-buster
#python:3.6-stretch
LABEL MAINTAINER="Shekhar Banerjee"
ENV GROUP_ID=1000 \
USER_ID=1000 \
SECRET_KEY="4" \
EMAIL_USER="dankml.com" \
EMAIL_PASS="nzw"
RUN mkdir /home/logix3
WORKDIR /home/logix3
ADD . /home/logix3/
RUN pip install -r requirements.txt
RUN pip install gunicorn
RUN groupadd --gid $GROUP_ID www
RUN useradd --create-home -u $USER_ID --shell /bin/sh --gid www www
USER www
EXPOSE 5000/tcp
#CMD ["python","run.py"]
CMD [ "gunicorn", "-w", "4", "--bind", "127.0.0.1:5000", "run:app"]
These are my docker-compose file and Dockerfile for an application. The MongoDB and Flask containers work fine individually, but when I try to run them together I get no response on localhost, and there are no error messages in the logs of either container.
curl on the host gives this:
$ curl -i http://127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
Can anyone suggest how I can debug this?
Note: only the backend network is functional.
[
  {
    "Name": "mlspace_backend",
    "Id": "e7e37cad0058330c99c55ffea2b98281e0c6526763e34550db24431b30030b77",
    "Created": "2022-05-15T22:52:25.0281001+05:30",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.20.0.0/16",
          "Gateway": "172.20.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": true,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "12c50f5f70d18b4c7ffc076177b59ff063a8ff81c4926c8cae0bf9e74dc2fc83": {
        "Name": "mlspace_mongodb_1",
        "EndpointID": "8278f672d9211aec9b539e14ae4eeea5e8f7aaef95448f44aab7ec1a8c61bb0b",
        "MacAddress": "02:42:ac:14:00:02",
        "IPv4Address": "172.20.0.2/16",
        "IPv6Address": ""
      },
      "75cde7a34d6b3c4517c536a06349579c6c39090a93a017b2c280d987701ed0cf": {
        "Name": "mlspace_flaskblog_1",
        "EndpointID": "20489de8841d937f01768170a89f74c37ed049d241b096c8de8424c51e58704c",
        "MacAddress": "02:42:ac:14:00:03",
        "IPv4Address": "172.20.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {
      "com.docker.compose.network": "backend",
      "com.docker.compose.project": "mlspace",
      "com.docker.compose.version": "1.23.2"
    }
  }
]
I checked the logs of the containers: no errors found, and the Flask engine seems to be running, but the site won't load; even curl gives no output.
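A debugging sketch (my suggestion, with names taken from the output above): two things stand out in the files as posted. gunicorn binds to 127.0.0.1, which is only reachable from inside the container, and the flaskblog service declares no ports: mapping (it also references a frontend network that is never defined), so nothing is published to the host. To confirm the app is at least up inside the container:
# Hypothetical check using the container name from the network inspect output.
docker exec mlspace_flaskblog_1 python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:5000').status)"
# If this prints 200, the app is fine and the fix is on the Docker side:
# bind gunicorn to 0.0.0.0:5000 and add a "5000:5000" entry under ports:.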

Strimzi kafka connect with debezium mongodb-connector not creating using REST (Mongodb as Source)

I installed Strimzi with:
helm repo add strimzi https://strimzi.io/charts/ && helm install strimzi-kafka strimzi/strimzi-kafka-operator
Output:
Name: strimzi-cluster-operator-587cb79468-hrs9q
strimzi-cluster-operator running (quay.io/strimzi/operator:0.28.0)
The Connect image (strimzi-kafka-connect:0.28) was built with the following Dockerfile:
FROM quay.io/strimzi/kafka:0.28.0-kafka-3.1.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium
COPY ./debezium-connector-mysql/ /opt/kafka/plugins/debezium/
COPY ./debezium-connector-mongodb/ /opt/kafka/plugins/debezium/
COPY ./confluentinc-kafka-connect-elasticsearch/ /opt/kafka/plugins/debezium/
COPY ./mongodb-kafka-connect-mongodb-1.7.0/ /opt/kafka/plugins/debezium/
RUN chown -R kafka:root /opt/kafka
USER 1001
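An aside (my assumption, not confirmed by the thread): Kafka Connect treats each subdirectory of the plugin path as one isolated plugin, so copying several connectors into the same /opt/kafka/plugins/debezium/ directory merges their classpaths, a common source of conflicts. One directory per connector is the usual layout, sketched here:
FROM quay.io/strimzi/kafka:0.28.0-kafka-3.1.0
USER root:root
# One subdirectory per connector keeps each plugin's dependencies isolated.
COPY ./debezium-connector-mysql/ /opt/kafka/plugins/debezium-connector-mysql/
COPY ./debezium-connector-mongodb/ /opt/kafka/plugins/debezium-connector-mongodb/
COPY ./confluentinc-kafka-connect-elasticsearch/ /opt/kafka/plugins/kafka-connect-elasticsearch/
COPY ./mongodb-kafka-connect-mongodb-1.7.0/ /opt/kafka/plugins/mongodb-kafka-connect/
RUN chown -R kafka:root /opt/kafka
USER 1001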
The following is the KafkaConnect configuration:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  version: 3.1.0
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  image: strimzi-kafka-connect:0.28.2
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
I installed Kafka using https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.28.0, specifically kafka-persistent.yaml, with the following config:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.1.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.1"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 10Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
The following is the status of the plugins:
kubectl exec my-connect-cluster-connect-59cfff997b-4kv9b -it my-connect-cluster-connect -- curl http://localhost:8083/connector-plugins | jq '.'
[
  {
    "class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "type": "sink",
    "version": "1.7.0"
  },
  {
    "class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "type": "source",
    "version": "1.7.0"
  },
  {
    "class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "type": "sink",
    "version": "11.1.8"
  },
  {
    "class": "io.debezium.connector.mongodb.MongoDbConnector",
    "type": "source",
    "version": "1.8.1.Final"
  },
  {
    "class": "io.debezium.connector.mysql.MySqlConnector",
    "type": "source",
    "version": "1.0.0.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "3.1.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "3.1.0"
  },
  {
    "class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector",
    "type": "source",
    "version": "1"
  },
  {
    "class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector",
    "type": "source",
    "version": "1"
  },
  {
    "class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "type": "source",
    "version": "1"
  }
]
Everything works fine when I create a connector for MySQL.
[kafka@my-connect-cluster-connect-59cfff997b-4kv9b kafka]$ curl -i -X POST -H "Accept:application/json" \
> -H "Content-Type:application/json" http://localhost:8083/connectors/ \
> -d '{
> "name": "inventory-test-mysql",
> "config": {
> "connector.class": "io.debezium.connector.mysql.MySqlConnector",
> "tasks.max": "1",
> "database.hostname": "172.17.0.7",
> "database.port": "3306",
> "database.user": "root",
> "database.password": "debezium",
> "database.server.id": "184054",
> "database.server.name": "dbserver1",
> "database.include.list": "inventory",
> "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
> "database.history.kafka.topic": "schema-changes-for-inventory",
> "include.schema.changes": "true"
> }
> }'
HTTP/1.1 201 Created
Date: Tue, 22 Feb 2022 11:50:14 GMT
Location: http://localhost:8083/connectors/inventory-test-mysql
Content-Type: application/json
Content-Length: 560
Server: Jetty(9.4.43.v20210629)
{"name":"inventory-test-mysql","config":{"connector.class":"io.debezium.connector.mysql.MySqlConnector","tasks.max":"1","database.hostname":"172.17.0.7","database.port":"3306","database.user":"root","database.password":"debezium","database.server.id":"184054","database.server.name":"dbserver1","database.include.list":"inventory","database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes-for-inventory","include.schema.changes":"true","name":"inventory-test-mysql"},"tasks":[],"type":"source"}
The topic and connector were created successfully; here is the output:
➜ ~ kubectl exec --tty -i kafka-client-strimzi -- bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --list
__consumer_offsets
__strimzi-topic-operator-kstreams-topic-store-changelog
__strimzi_store_topic
connect-cluster-configs
connect-cluster-offsets
connect-cluster-status
dbserver1
dbserver1.inventory.addresses
dbserver1.inventory.customers
dbserver1.inventory.geom
dbserver1.inventory.orders
dbserver1.inventory.products
dbserver1.inventory.products_on_hand
schema-changes-customers
schema-changes-for-inventory
Connectors status:
[kafka@my-connect-cluster-connect-59cfff997b-4kv9b kafka]$ curl http://localhost:8083/connectors
["inventory-test-mysql"]
[kafka@my-connect-cluster-connect-59cfff997b-4kv9b kafka]$ curl http://localhost:8083/connectors/inventory-test-mysql/status
{"name":"inventory-test-mysql","connector":{"state":"RUNNING","worker_id":"172.17.0.19:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"172.17.0.19:8083"}],"type":"source"}
And I receive the schema changes whenever I add or update a record in a table of the inventory database.
Issue:
This whole setup does not work with either
{
  "class": "com.mongodb.kafka.connect.MongoSinkConnector",
  "type": "sink",
  "version": "1.7.0"
},
{
  "class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "type": "source",
  "version": "1.7.0"
}
or
{
  "class": "io.debezium.connector.mongodb.MongoDbConnector",
  "type": "source",
  "version": "1.8.1.Final"
}
Here is the output of curl:
[kafka@my-connect-cluster-connect-59cfff997b-4kv9b kafka]$ curl -i -X POST -H "Accept:application/json" \
> -H "Content-Type:application/json" http://localhost:8083/connectors/ \
> -d '{
> "name": "mongodb-connector",
> "config": {
> "tasks.max":1,
> "connector.class":"com.mongodb.kafka.connect.MongoSourceConnector",
> "connection.uri":"mongodb://root:password@172.17.0.20:27017",
> "key.converter":"org.apache.kafka.connect.storage.StringConverter",
> "value.converter":"org.apache.kafka.connect.storage.StringConverter",
> "key.converter.schemas.enable": "false",
> "value.converter.schemas.enable": "false",
> "database":"mydb",
> "collection":"dataSource"
> }
> }'
HTTP/1.1 500 Server Error
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: application/json
Content-Length: 137
Connection: close
Server: Jetty(9.4.43.v20210629)
{
  "servlet":"org.glassfish.jersey.servlet.ServletContainer-1d98daa0",
  "message":"Request failed.",
  "url":"/connectors/",
  "status":"500"
}
or with:
[kafka@my-connect-cluster-connect-59cfff997b-4kv9b kafka]$ curl -i -X POST -H "Accept:application/json" \
> -H "Content-Type:application/json" http://localhost:8083/connectors/ \
> -d '{
> "name": "mongodb-connector",
> "config": {
> "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
> "mongodb.hosts": "rs0/172.17.0.20:27017,rs0/172.17.0.21:27017",
> "mongodb.name": "mydb",
> "mongodb.user": "root",
> "mongodb.password": "password",
> "database.whitelist": "mydb[.]*"
> }
> }'
HTTP/1.1 500 Server Error
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: application/json
Content-Length: 137
Connection: close
Server: Jetty(9.4.43.v20210629)
{
  "servlet":"org.glassfish.jersey.servlet.ServletContainer-1d98daa0",
  "message":"Request failed.",
  "url":"/connectors/",
  "status":"500"
}
I would appreciate it if you could let me know what I am missing.
If REST is not possible with Strimzi and MongoDB, what is the alternative?
Is it related to the version of Strimzi Kafka?
Is it related to the version of the MongoDB plugins?
Thanks
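A generic debugging step (my suggestion rather than the thread's): the bare 500 from the REST API hides the underlying exception, but the Connect worker log normally contains the full stack trace, which should show whether the connector class failed to load or its configuration was rejected:
# Pod name as in the session above; look for the stack trace logged
# alongside the 500 response.
kubectl logs my-connect-cluster-connect-59cfff997b-4kv9b | grep -B 2 -A 20 ERROR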

Orion Reports Error While Provisioning a Device

I am following the FIWARE IoT Agent LWM2M tutorial available here, trying to pre-provision an LWM2M device. I cloned the GitHub repo, installed the dependencies, and created a docker-compose.yml file. All containers (including lightweightm2m-iotagent) started successfully.
However, when I tried provisioning the device using:
(curl localhost:4041/iot/devices -s -S --header 'Content-Type: application/json' \
--header 'Accept: application/json' --header 'fiware-service: factory' --header 'fiware-servicepath: /robots' \
-d @- | python -mjson.tool) <<EOF
{
  "devices": [
    {
      "device_id": "robot1",
      "entity_type": "Robot",
      "attributes": [
        {
          "name": "Battery",
          "type": "number"
        }
      ],
      "lazy": [
        {
          "name": "Message",
          "type": "string"
        }
      ],
      "commands": [
        {
          "name": "Position",
          "type": "location"
        }
      ],
      "internal_attributes": {
        "lwm2mResourceMapping": {
          "Battery": {
            "objectType": 7392,
            "objectInstance": 0,
            "objectResource": 1
          },
          "Message": {
            "objectType": 7392,
            "objectInstance": 0,
            "objectResource": 2
          },
          "Position": {
            "objectType": 7392,
            "objectInstance": 0,
            "objectResource": 3
          }
        }
      }
    }
  ]
}
EOF
I get the following error:
{
  "message": "Request error connecting to the Context Broker: {\"code\":\"400\",\"reasonPhrase\":\"Bad Request\",\"details\":\"JSON Parse Error: unknown field: /contextRegistrations/contextRegistration/attributes/attribute/isDomain\"}",
  "name": "BAD_REQUEST"
}
I am not sure how to debug this. Any idea how to fix it?
Question edited: below is the docker-compose file I'm using.
version: "3.1"
services:
  mongo:
    image: mongo:3.6
    command: --nojournal
    ports:
      - "27017:27017"
    expose:
      - "27017"
  orion:
    image: fiware/orion
    links:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo -logLevel DEBUG
    depends_on:
      - mongo
    expose:
      - "1026"
  lightweightm2m-iotagent:
    image: telefonicaiot/lightweightm2m-iotagent
    hostname: idas
    links:
      - orion
    expose:
      - "4041"
      - "5684"
    ports:
      - "4041:4041"
      - "5684:5684/udp"
  mosquitto:
    image: ansi/mosquitto
    ports:
      - "1883:1883"
    expose:
      - "1883"
It is a known issue, already fixed in the master branch.
It was fixed recently, on November 7th, 2018. On the other hand, telefonicaiot/lightweightm2m-iotagent:latest (and telefonicaiot/lightweightm2m-iotagent should default to latest) was last updated, at the time of writing, on November 13th, 2018, so it should include the fix.
Your telefonicaiot/lightweightm2m-iotagent image is probably out of date. Pulling it again from Docker Hub should solve the problem.
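A concrete way to do that (my sketch; the service name matches the compose file above):
# Refresh the cached image, then recreate the container from it.
docker pull telefonicaiot/lightweightm2m-iotagent:latest
docker-compose up -d --force-recreate lightweightm2m-iotagent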

Configuring MongoDB replica set from docker-compose

I'm using docker-compose to start 3 MongoDB servers which should be in a replica set.
I first start 3 MongoDB servers, then I configure the replica set.
This is how I would do the replica set config in a bash script:
mongo --host 127.0.0.1:27017 <<EOF
var cfg = {
    "_id": "rs",
    "version": 1,
    "members": [
        {
            "_id": 0,
            "host": "127.0.0.1:27017",
            "priority": 1
        },
        // snip...
    ]
};
rs.initiate(cfg);
rs.reconfig(cfg)
EOF
Here I'm trying to replicate the replica set configuration using docker-compose.
# docker-compose.yml
mongosetup:
  image: mongo:3.0
  links:
    - mongo1:mongo1
  command: echo 'var cfg = { "_id": "rs", "version": 1, "members": [ { "_id": 0, "host": "127.0.0.1:27017", "priority": 1 }, { "_id": 1, "host": "mongo2:27017", "priority": 1 },{ "_id": 2, "host": "mongo2:27017", "priority": 1 } ] }; rs.initiate(cfg);' | mongo mongo1
Unfortunately that creates this error: yaml.scanner.ScannerError: mapping values are not allowed here.
What's the recommended approach?
Is it possible to store the cfg object in a separate file that docker-compose reads?
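An aside on the error itself (my explanation): the YAML scanner fails because the unquoted command value contains colons and braces, which it tries to parse as new mappings. Quoting the scalar, for instance with a folded block scalar plus an explicit shell, keeps the one-liner valid; a sketch:
# Hypothetical fix for the ScannerError only: the ">" block turns the whole
# command into one string, so its colons are not parsed as YAML.
mongosetup:
  image: mongo:3.0
  links:
    - mongo1:mongo1
  command: >
    sh -c "echo 'var cfg = { \"_id\": \"rs\", \"version\": 1, \"members\": [ { \"_id\": 0, \"host\": \"mongo1:27017\", \"priority\": 1 } ] }; rs.initiate(cfg);' | mongo mongo1"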
I fixed the problem by putting the config in setup.sh, which I call from the entrypoint.
mongosetup:
  image: mongo:3.0
  links:
    - mongo1:mongo1
    - mongo2:mongo2
    - mongo3:mongo3
  volumes:
    - ./scripts:/scripts
  entrypoint: [ "/scripts/setup.sh" ]
setup.sh:
#!/bin/bash
MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
    "_id": "rs",
    "version": 1,
    "members": [
        {
            "_id": 0,
            "host": "${MONGODB1}:27017",
[cut..]
EOF
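Once the setup container has run, the set can be verified (my sketch; the container name depends on your compose project name):
# Ask a member for the replica set status; a healthy set reports one PRIMARY.
docker exec -it <project>_mongo1_1 mongo --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'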
I have created a setup with docker-compose that starts 3 MongoDBs in a replica set and ElasticSearch with mongodb-river.
It's available at https://github.com/stabenfeldt/elastic-mongo