Cygnus-ngsi installed with Docker-compose does not save data in MongoDB - fiware-orion

I have Orion, MongoDB and Cygnus-ngsi installed with docker-compose on an Ubuntu 18.04 machine. The images I used were: fiware/orion:latest, fiware/cygnus-ngsi:latest and mongo:3.6.
They were installed with the command: docker-compose -f reflexwaterDocker.yaml up
I had no installation problems: all containers are up, Orion is saving to MongoDB, and the Cygnus API is responding.
I created a subscription to an Orion entity and everything looked fine. But when I update the entity, Orion writes the update to MongoDB while Cygnus does not persist the historical data.
The agent.conf file in /opt/apache-flume/conf is configured correctly; I am using the default setup from the docker-compose installation.
I followed the process described in: https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/installation_and_administration_guide/install_with_docker.md
All ports are correct and have been tested.
I have used Cygnus with Orion and MongoDB installed manually and had no problem, but in that case I also set up an agent.conf and a cygnus_instance.conf. With Docker, following the step-by-step documentation, it is not necessary to configure cygnus_instance.conf; it doesn't even exist inside the /opt/apache-flume/conf folder.
Does anyone have an idea why Cygnus is not persisting the data? Or has anyone run into this and managed to solve it?
My docker-compose looks like this for Cygnus-ngsi:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
networks:
  - default
depends_on:
  - mongo
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_DEFAULT_SERVICE=def_serv
  - CYGNUS_DEFAULT_SERVICE_PATH=reflexWater
  - CYGNUS_MONGO_HOSTS=localhost:27017
  - CYGNUS_MONGO_USER=""
  - CYGNUS_MONGO_PASS=""
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=db_
  - CYGNUS_MONGO_COLLECTION_PREFIX=col_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
I ran the command: docker logs cygnus
Cygnus tries to persist the data as per the log:
time=2020-05-03T01:18:27.683Z | lvl=INFO | corr=fda0c4fc-8cdb-11ea-ad11-0242ac120003 | trans=f1674faf-aa5a-4ca8-a62a-2136378f6d08 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=persistAggregation | msg=com.telefonica.iot.cygnus.sinks.NGSIMongoSink[235] : [mongo-sink] Persisting data at NGSIMongoSink. Database: db_default, Collection: col_/_Room1_Room, Data: [Document{{temperature=279, recvTime=Sun May 03 01:18:27 UTC 2020}}]
But I also noticed the following error:
time=2020-05-03T01:18:28.553Z | lvl=WARN | corr=fda0c4fc-8cdb-11ea-ad11-0242ac120003 | trans=f1674faf-aa5a-4ca8-a62a-2136378f6d08 | srv=default | subsrv=/ | comp=cygnus-ngsi | op=createCollection | msg=com.telefonica.iot.cygnus.backends.mongo.MongoBackendImpl[192] : Error in collection col_/_Room1_Room creating index ex=Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='""', source='db_default', password=<hidden>, mechanismProperties=<hidden>}
As I understand it, it doesn't persist because of the MongoDB credentials. But I am not using a username and password for the MongoDB database.
So, what can it be?
I don't understand the error and couldn't find a plausible explanation for it.
I'm hoping someone who has already used Cygnus on Docker can give me some guidance on how to solve this.
Thank you

Taking into account this:
"I have used Cygnus with Orion and MongoDB installed manually and had no problem"
I'd say that the problem is somehow related to the Docker deployment. Moreover, looking at the trace:
Exception authenticating MongoCredential
maybe the CYGNUS_MONGO_USER="" and CYGNUS_MONGO_PASS="" env vars are not being processed correctly (by entrypoint.sh or a similar script, I guess), and this is causing the problem.
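In fact the trace shows userName='""', i.e. the two quote characters are being passed through as a literal username. With the list form of environment:, YAML does not strip quotes from the value, so one thing worth trying (a guess, not a verified fix) is to leave the values truly empty:

environment:
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=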
In order to debug this, I'd suggest comparing the Cygnus agent.conf/cygnus_instance.conf files in the working case (manual installation) and in the failing case (Docker). The differences might provide some insight into the problem. In the Docker case the files may not be at /opt/apache-flume/conf, but they should be somewhere (either inside the container or mounted as a volume on the host system).
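For instance, a quick way to get the Docker side of that comparison (assuming the image keeps its Flume config under /opt/apache-flume/conf; adjust the manual-install path to yours):

# dump the agent.conf that the entrypoint generated inside the container
docker exec cygnus cat /opt/apache-flume/conf/agent.conf > docker-agent.conf
# diff it against the file from the manual installation
diff docker-agent.conf /path/to/manual/agent.conf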
Another related question: is the Cygnus version you got working manually exactly the same one you are using in the Docker case? Try to ensure you are using exactly the same software.

Related

Deploying Keycloak in production: Cannot set quarkus.http.redirect-insecure-requests without enabling SSL

For a few hours now I have been struggling to get Keycloak working in production mode. When I try to run Keycloak in production, I get the following error:
keycloak | 2022-05-25 16:32:43,094 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
keycloak | 2022-05-25 16:32:43,164 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (production) mode
keycloak | 2022-05-25 16:32:43,165 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Cannot set quarkus.http.redirect-insecure-requests without enabling SSL.
keycloak | 2022-05-25 16:32:43,165 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
My docker-compose file:
keycloak:
  # depends_on:
  #   - postgres_data
  container_name: keycloak
  environment:
    DB_VENDOR: postgres
    DB_ADDR: postgres
    DB_DATABASE: ${POSTGRESQL_DB}
    DB_USER: ${POSTGRESQL_USER}
    DB_PASSWORD: ${POSTGRESQL_PASS}
    KEYCLOAK_ADMIN: ${KEYCLOAK_USER}
    KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_PASSWORD}
    VIRTUAL_PORT: "8080"
    PROXY_ADDRESS_FORWARDING: "true"
  image: quay.io/keycloak/keycloak:${KEYCLOAK_VERSION}
  volumes:
    - ./theme/:/opt/keycloak/themes/metronic-theme/
    - ./keys/:/opt/keycloak/conf/keys/
  ports:
    - "8082:8080"
  restart: unless-stopped
  command:
    - start --proxy=passthrough --hostname="myhostname" --hostname-strict-backchannel=true --https-certificate-file=/opt/keycloak/conf/keys/server.crt.pem --https-certificate-file=/opt/keycloak/conf/keys/server.key.pem
I am trying to deploy this on version 18.0.0.
There's a problem in the commands you add to the command: section of your docker-compose: you define https-certificate-file twice; the one for the key should be https-certificate-key-file. See the new TLS guide for reference.
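For example, the command section could look like this (a sketch keeping the question's paths and hostname, with the flags split into separate list items, which is the usual compose form):

command:
  - start
  - --proxy=passthrough
  - --hostname=myhostname
  - --hostname-strict-backchannel=true
  - --https-certificate-file=/opt/keycloak/conf/keys/server.crt.pem
  - --https-certificate-key-file=/opt/keycloak/conf/keys/server.key.pem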
That said, you are also mixing "old" WildFly environment variables with the new ones from the Quarkus-based distribution. See e.g. the database guide and the reverse proxy guide for the equivalent parameters in the new distribution; for example, PROXY_ADDRESS_FORWARDING is now KC_PROXY=edge/passthrough/...
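For instance, the Quarkus-style equivalents of the database variables above would look roughly like this (a sketch based on those guides; verify the exact names against your Keycloak version):

environment:
  KC_DB: postgres                       # replaces DB_VENDOR
  KC_DB_URL_HOST: postgres              # replaces DB_ADDR
  KC_DB_URL_DATABASE: ${POSTGRESQL_DB}  # replaces DB_DATABASE
  KC_DB_USERNAME: ${POSTGRESQL_USER}    # replaces DB_USER
  KC_DB_PASSWORD: ${POSTGRESQL_PASS}    # replaces DB_PASSWORD
  KC_PROXY: passthrough                 # replaces PROXY_ADDRESS_FORWARDING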
In general, you should look at the new guides; every guide has the corresponding parameters at the bottom, and when you open up a key you see the different formats (CLI, env) for it.
Sidenote: you can now also configure Keycloak using only env variables or only the CLI, not a mix of both.
Please look here: http://www.mastertheboss.com/keycloak/getting-started-with-keycloak-powered-by-quarkus/
To start the instance with the cert I am using the parameter --https-key-store-password=YourCoolKeyStorePassword, and the full command looks like:
./kc.sh start --db-url='jdbc:postgresql://localhost:5432/' --db-username=postgres --db-password=YourCoolPSQLPassword --hostname mykeycloak.test.com:10810 --https-key-store-password=YourCoolKeyStorePassword

Running command during docker compose or docker build failed

I am trying to build Mongo inside Docker, and I want to create a database, a collection, and a document inside that collection. I tried with docker build; below is my Dockerfile:
FROM mongo
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
RUN mongosh mongodb://127.0.0.1:27017/demeter --eval 'var document = {"_id": "61912ebb4b6d7dcc7e689914","name": "Test Account","email":"test@test.net", "role": "admin", "company_domain": "test.net","type": "regular","status": "active","createdBy": "61901a01097cb16e554f5a19","twoFactorAuth": false, "password": "$2a$10$MPDjDZIboLlD8xpc/RfOouAAAmBLwEEp2ESykk/2rLcqcDJJEbEVS"}; db.Users.insert(document);'
EXPOSE 27017
and using Docker Compose
version: '3.9'
services:
  web:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
  demeter_db:
    image: "mongo"
    volumes:
      - ./mongodata:/data/db
    ports:
      - "27017:27017"
    command: mongosh mongodb://127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
  demeter_redis:
    image: "redis"
I want to add these records because the web server uses them in the backend. If there is a better way of doing it I would be thankful.
What I get is the error below:
demeter_db_1 | Current Mongosh Log ID: 61dc697509ee790cc89fc7aa
demeter_db_1 | Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
demeter_db_1 | MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
However, when I connect to an interactive shell inside the mongo container and add them manually, things work fine:
root@8b20d117586d:/# mongosh 127.0.0.1:27017/demeter --eval 'db.createCollection("Users")'
Current Mongosh Log ID: 61dc64ee8a2945352c13c177
Connecting to: mongodb://127.0.0.1:27017/demeter?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB: 5.0.5
Using Mongosh: 1.1.7
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
------
The server generated these startup warnings when booting:
2022-01-10T16:52:14.717+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2022-01-10T16:52:15.514+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
------
{ ok: 1 }
root@8b20d117586d:/# exit
exit
Cheers
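For what it's worth, a common alternative to seeding at build time, not mentioned in this thread: the official mongo image executes any *.js or *.sh files found in /docker-entrypoint-initdb.d once, when the data directory is first initialized, so the seeding can be done there instead. A minimal sketch reusing the question's names (the ./init folder is hypothetical):

demeter_db:
  image: "mongo"
  environment:
    MONGO_INITDB_DATABASE: demeter    # init scripts run against this database
  volumes:
    - ./mongodata:/data/db            # note: init scripts only run while this is empty
    - ./init/:/docker-entrypoint-initdb.d/:ro
  ports:
    - "27017:27017"

with ./init/seed.js containing:

// seed.js -- executed once by the mongo entrypoint on first initialization
db.createCollection("Users");
db.Users.insertOne({
  _id: "61912ebb4b6d7dcc7e689914",
  name: "Test Account",
  email: "test@test.net"
  // ...remaining fields from the Dockerfile above
});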

Can't access PostgreSQL on Google Cloud SQL from a NestJS project on Google App Engine

This is my first question on Stack Overflow, so please excuse me if my information is lacking.
Issue
I am struggling to connect to PostgreSQL on Cloud SQL from NestJS on Google App Engine.
The application works in my local environment, but in production on Google App Engine it does not.
After struggling for days, I decided to ask the awesome community here.
My Environment
Node.js: v10.19.0
NestJS: 6.10.5
TypeORM
PostgreSQL: 11.5.1
My app.yaml
runtime: nodejs10
env: standard
default_expiration: "4d 5h"
env_variables:
  DATABASE_HOST: < public IP for Cloud SQL instance >
  DATABASE_USERNAME: username
  DATABASE_PASSWORD: password
  DATABASE_NAME: databasename
  INSTANCE_CONNECTION_NAME: "PROJECT_ID:REGION:INSTANCE_ID:DATABASE_NAME"
handlers:
  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Error
[Nest] 18 - 02/27/2020, 8:25:46 AM [TypeOrmModule] Unable to connect to the database. Retrying (3)... +34816ms
2020-02-27 08:25:46 default[20200227t163916] Error: connect ETIMEDOUT 34.84.188.209:5432 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:
Other Operations
GAE Service Account
I added the Cloud SQL Client role to my GAE service account (something like service-PROJECT_ID@gae-api-prod.google.com.iam.gserviceaccount.com).
I also added the following to package.json:
"engines": {
  "node": "10.x.x"
},
In the TypeORM options, I added an extra socketPath:
extra: {
  socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>/`,
},
I am not sure whether this option should be set or not (I have tried both):
socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>/.s.PGSQL.5432`
or
socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>`
According to the example provided on GitHub,
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/master/appengine/cloudsql_postgresql/app.flexible.yaml
the INSTANCE_CONNECTION_NAME environment variable does not include the DATABASE_NAME as a component,
e.g. my-awesome-project:us-central1:my-cloud-sql-instance
This may be causing the name of the instance not to be resolved by the proxy.
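So a corrected env_variables block would look like this, reusing the question's placeholders:

env_variables:
  DATABASE_USERNAME: username
  DATABASE_PASSWORD: password
  DATABASE_NAME: databasename
  INSTANCE_CONNECTION_NAME: "PROJECT_ID:REGION:INSTANCE_ID"

and the same shortened string (without the database name) would then go into the socketPath value, e.g. /cloudsql/PROJECT_ID:REGION:INSTANCE_ID.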

concourse git resource error: getting the final child's pid from pipe caused "EOF"

When trying to pull a git resource, we are getting an error:
runc run: exit status 1: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\""
We are using Oracle Linux release 7.6 and Docker version 18.03.1-ce.
We have followed the instructions at https://github.com/concourse/concourse-docker. We have tried with older versions of Concourse (4.2.0 & 4.2.3). We can see the workers are up using fly.
We found https://github.com/concourse/concourse/issues/4021 on GitHub, which describes a similar issue, but we couldn't find the related Stack Overflow question that the answerer mentioned.
Our docker-compose file:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass
  web:
    image: concourse/concourse
    command: web
    links: [db]
    depends_on: [db]
    ports: ["61111:8080"]
    volumes: ["<path to repo folder>/keys/web:/concourse-keys"]
    environment:
      CONCOURSE_EXTERNAL_URL: <our url>
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    depends_on: [web]
    volumes: ["<path to repo folder>/keys/worker:/concourse-keys"]
    links: [web]
    stop_signal: SIGUSR2
    environment:
      CONCOURSE_TSA_HOST: web:2222
We expected the resource to pull, as connectivity to the repo is in place and verified.
Not sure about your second issue with volumes, but I solved the original problem by setting the user.max_user_namespaces parameter to 15000:
sysctl -w user.max_user_namespaces=15000
The solution was found here: https://github.com/docker/docker.github.io/issues/7962
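For reference, sysctl -w only applies until the next reboot; to persist the setting, standard sysctl handling (not specific to Concourse) can be used:

# apply immediately
sysctl -w user.max_user_namespaces=15000
# persist across reboots
echo 'user.max_user_namespaces=15000' > /etc/sysctl.d/99-userns.conf
sysctl --system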
This issue was fixed by updating the kernel from 3.1.x to 4.1.x. We have a new issue: "failed to create volume" on all our pipelines. I will update if I find a solution to this too.

ERROR: yaml.parser.ParserError: while parsing a block mapping

I'm building Iroha, for which I'm running an environment-setup script that internally calls docker-compose.yml, where I'm getting the error:
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "/home/cdac/iroha/docker/docker-compose.yml", line 3, column 5
expected <block end>, but found '<scalar>'
in "/home/cdac/iroha/docker/docker-compose.yml", line 13, column 6
The docker-compose.yml file is shown below.
services:
  node:
    image: hyperledger/iroha:develop-build
    ports:
      - "${IROHA_PORT}:50051"
      - "${DEBUGGER_PORT}:20000"
    environment:
      - IROHA_POSTGRES_HOST=${COMPOSE_PROJECT_NAME}_postgres_1
      - IROHA_POSTGRES_PORT=5432
      - IROHA_POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
      - CCACHE_DIR=/tmp/ccache
    export G_ID=$(id -g $(whoami))
    export U_ID=$(id -g $(whoami))
    user: ${U_ID:-0}:${G_ID:-0}
    depends_on:
      - postgres
    tty: true
    volumes:
      - ../:/opt/iroha
      - ccache-data:/tmp/ccache
    working_dir: /opt/iroha
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
  postgres:
    image: postgres:9.5
    environment:
      - POSTGRES_USER=iroha
      - IROHA_POSTGRES_PASSWORD=helloworld
    command: -c 'max_prepared_transactions=100'
volumes:
  ccache-data:
Any help will be appreciated; thanks in advance.
These lines do not belong to the docker-compose syntax:
export G_ID=$(id -g $(whoami))
export U_ID=$(id -g $(whoami))
Also, this line won't work as expected:
user: ${U_ID:-0}:${G_ID:-0}
You should write your own shell script and use it as an entrypoint for the Docker container (this should be done in the Dockerfile), then run a container directly from the image you have created, without needing to assign a user or export anything within docker-compose, as the script will be executed once your container is running.
Check the following URL which contains more explanation about the allowed keywords in docker-compose: Compose File: Service Configuration Reference
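As an aside (my own note, not from the reference above): docker-compose does substitute ${U_ID} and ${G_ID} from the shell that invokes it, so if you do want to keep the user: mapping, you can export the variables in that shell instead of inside the YAML; note that the original exported id -g for both, whereas U_ID presumably wants id -u:

# run in the invoking shell, not inside docker-compose.yml
export U_ID=$(id -u)
export G_ID=$(id -g)
docker-compose up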
@MostafaHussein I removed the above 3 lines, then executed the run-iroha-dev.sh script, and it started to work. It attached me to /opt/iroha in the Docker container, downloaded the hyperledger/iroha:develop-build and iroha images, and launched two containers.
Is this the same as what you are suggesting?