Cygnus-NGSI won't save data in PostgreSQL

I'm trying to save some data in a PostgreSQL database using cygnus-ngsi, but nothing happens. I'm running all services in Docker containers using docker-compose.
docker-compose.yml:
...
cygnus:
  image: fiware/cygnus-ngsi:latest
  hostname: cygnus
  container_name: cygnus_fiware
  volumes:
    - ./config/cygnus/cygnus.conf:/opt/apache-flume/conf/agent.conf
    - ./config/cygnus/grouping_rules.conf:/opt/apache-flume/conf/grouping_rules.conf
  links:
    - orion
    - postgres
  expose:
    - "5050"
  ports:
    - "5050:5050"
postgres:
  restart: always
  image: postgres:latest
  container_name: postgres_fiware
  volumes:
    - ./data/db/postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"
  expose:
    - "5432"
  environment:
    - POSTGRES_USER=teste
    - POSTGRES_DB=newdb
    - POSTGRES_PASSWORD=123456789
agent.conf:
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnus-ngsi.sources.http-source.channels = postgresql-channel
# source class, must not be changed
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnus-ngsi.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
# URL target
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
# default service (service semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service = default
# default service path (service path semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service_path = /
# source interceptors, do not change
cygnus-ngsi.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# see the doc/design/interceptors document for more details
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /opt/apache-flume/conf/grouping_rules.conf
# ============================================
# NGSIPostgreSQLSink configuration
# channel name from where to read notification events
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
# true applies the new encoding, false applies the old encoding.
cygnus-ngsi.sinks.postgresql-sink.enable_encoding = false
# true if name mappings are enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_name_mappings = false
# true if the grouping feature is enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_grouping = false
# true if lower case is wanted to forced in all the element names, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_lowercase = false
# the FQDN/IP address where the PostgreSQL server runs
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = postgres
# the port where the PostgreSQL server listens for incomming connections
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
# the name of the postgresql database
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = newdb
# a valid user in the PostgreSQL server
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = teste
# password for the user above
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = 123456789
# how the attributes are stored, either per row either per column (row, column)
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
# select the data_model: dm-by-service-path or dm-by-entity
cygnus-ngsi.sinks.postgresql-sink.data_model = dm-by-entity
# number of notifications to be included within a processing batch
cygnus-ngsi.sinks.postgresql-sink.batch_size = 100
# timeout for batch accumulation
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
# number of retries upon persistence error
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 10
# =============================================
# postgresql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.postgresql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
Messages from docker-compose:
cygnus_fiware | time=2017-06-19T15:11:15.132Z | lvl=WARN | corr=4af15a0c-5501-11e7-aa0a-0242ac130004 | trans=508576db-1443-4c64-bfc9-629d1a0b250e | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[257] : [NGSIRestHandler] Unnecessary header
cygnus_fiware | time=2017-06-19T15:11:15.133Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[282] : [NGSIRestHandler] Starting internal transaction (3ae0dc99-de51-49a3-937f-42d887b7e7d7)
cygnus_fiware | time=2017-06-19T15:11:15.133Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[299] : [NGSIRestHandler] Received data ({ "subscriptionId" : "5947d328e143997a02b11008", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "EstacaoMeteo", "isPattern" : "false", "id" : "Estacao3", "attributes" : [ { "name" : "Humidity", "type" : "float", "value" : "35.3", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2017-06-24T13:03:00" } ] }, { "name" : "Temperature", "type" : "float", "value" : "15.2", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2017-06-24T13:03:00" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
cygnus_fiware | time=2017-06-19T15:11:36.141Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=persistAggregation | msg=com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink[479] : [postgresql-sink] Persisting data at NGSIPostgreSQLSink. Schema (espometeo), Table (environment_estacao3_estacaometeo), Fields ((recvTimeTs,recvTime,fiwareServicePath,entityId,entityType,attrName,attrType,attrValue,attrMd)), Values (('1497885075139','2017-06-19T15:11:15.139Z','/environment','Estacao3','EstacaoMeteo','Humidity','float','35.3','[{"name":"TimeInstant","type":"ISO8601","value":"2017-06-24T13:03:00"}]'),('1497885075139','2017-06-19T15:11:15.139Z','/environment','Estacao3','EstacaoMeteo','Temperature','float','15.2','[{"name":"TimeInstant","type":"ISO8601","value":"2017-06-24T13:03:00"}]'))
cygnus_fiware | time=2017-06-19T15:11:36.142Z | lvl=WARN | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[541] :
It seems like Cygnus is getting all the data from Orion and building the insert correctly, but when I go to the PostgreSQL database there is nothing. Has anyone already had this problem?
Persistence ERROR message:
cygnus_fiware | time=2017-06-22T09:45:06.092Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[398] : CygnusPersistenceError. -, null. Stack trace: [com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink.persistAggregation(NGSIPostgreSQLSink.java:504), com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink.persistBatch(NGSIPostgreSQLSink.java:231), com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:390), com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:372), org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68), org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147), java.lang.Thread.run(Thread.java:748)]
Cygnus start ERRORS:
cygnus_fiware | + exec /usr/lib/jvm/java-6-sun/bin/java -Xms512m -Xmx1g -Dcom.sun.management.jmxremote -Dflume.root.logger=INFO,console -Duser.timezone=UTC -Dfile.encoding=UTF-8 -cp '/opt/apache-flume/conf:/opt/apache-flume/lib/*:/opt/apache-flume/plugins.d/cygnus/lib/*:/opt/apache-flume/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -f /opt/apache-flume/conf/agent.conf -n cygnus-ngsi -p 8081
cygnus_fiware | /opt/apache-flume/bin/cygnus-flume-ng: line 232: /usr/lib/jvm/java-6-sun/bin/java: No such file or directory
cygnus_fiware | /opt/apache-flume/bin/cygnus-flume-ng: line 232: exec: /usr/lib/jvm/java-6-sun/bin/java: cannot execute: No such file or directory

There is a problem with the PostgreSQL sink when the cache is not enabled. It is described in this issue.

I solved the issue by setting the following property to true in the agent.conf file: cygnus-ngsi.sinks.postgresql-sink.backend.enable_cache = true.
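For reference, the sink section of my agent.conf ends up looking roughly like this (only the backend.enable_cache line is new with respect to the configuration posted in the question):
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = postgres
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = newdb
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = teste
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = 123456789
# workaround for the persistence error described in the issue above
cygnus-ngsi.sinks.postgresql-sink.backend.enable_cache = true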

Related

Openstack API not providing precise data

I am using OpenStack Stein on CentOS 7.9.
I was using Python to collect data about OpenStack Nova, such as server names and IDs in the OpenStack project. I have 3 instances (servers) created and I can see all three instances in the OpenStack CLI, but when I connect to the API mentioned in the OpenStack documentation, it provides no data or less data than expected.
I referred to the OpenStack documentation here.
[root@centos-vm1 kavin(keystone_admin)]# openstack server list
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| 08cf6226-0303-4b4c-ba53-10af79b81dae | test_instance_3 | ACTIVE | test_networ_3=10.150.0.8 | | m1.tiny |
| 9986f205-82b3-4cbb-bcdc-fb32eab97c83 | test_instance_1 | ACTIVE | test_networ_2=10.100.0.5, x.x.x.x | | m1.small |
| d1c0f520-8540-432c-8fe1-554390fd79bf | test_instance_2 | ACTIVE | test_networ_1=10.50.0.8 | | m1.small |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
My python code:
import requests, json
from six.moves.urllib.parse import urljoin

identity = {
    "methods": ["password"],
    "password": {
        "user": {
            "name": "admin",
            "domain": {"id": "default"},
            "password": "xxxxxxxxxxxxxxx"
        }
    }
}

OS_AUTH_URL = 'http://x.x.x.x:5000/v3'
data = {'auth': {'identity': identity}}
HEADERS = {'Content-Type': 'application/json', 'scope': 'unscoped'}

r = requests.post(
    OS_AUTH_URL + '/auth/tokens',
    headers=HEADERS,
    json=data,
    verify=False
)
auth_token = r.headers['X-Subject-Token']  # i got auth token

# server list
NOVA_URL = "http://x.x.x.x:8774/v2.1"
HEADERS = {"X-Auth-Token": str(auth_token)}
r = requests.get(
    NOVA_URL + '/servers',
    headers=HEADERS,
)
r.raise_for_status()
print(r.json())
Output :
{'servers': []}
Please help me collect accurate data using API calls, thanks.
According to the api-ref List Servers doc, maybe you should add the project scope to the request.
By default the servers are filtered using the project ID associated with the authenticated request.
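For example, sticking with the raw requests approach from the question, you could scope the token to a project by extending the auth body; the project name below is only an assumption, use the project your instances belong to:
data = {
    'auth': {
        'identity': identity,
        # scope the token to a project so GET /servers returns that project's instances
        'scope': {
            'project': {
                'name': 'admin',  # assumption: replace with your project name
                'domain': {'id': 'default'}
            }
        }
    }
}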
Alternatively, you could use the openstacksdk to execute the operation, simply with the Connection object and the list_servers method:
import openstack

conn = openstack.connect(
    region_name='example-region',
    auth_url='http://x.x.x.x:5000/v3/',
    username='amazing-user',
    password='super-secret-password',
    project_id='33...b5',
    domain_id='05...03'
)
servers = conn.list_servers()
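You can then iterate over the result and, for instance, print each server's ID and name:
for server in servers:
    print(server.id, server.name)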

How to enable authentication via openLDAP on JupyterHub without SSL certificate?

I am trying to create a JupyterHub that uses LDAP to authenticate users.
The JupyterHub is working, but when I try to log in, the web page shows an Error 500.
I must clarify that when I tried it with the PAM method it worked without problems, but when I configured it to use LDAP it started to fail.
I am using docker-compose. I currently have three containers:
1. JupyterHub:
jupyter:
  build:
    context: ./jupyterhub
  ports:
    - "8380:8000"
  environment:
    VIRTUAL_HOST: jupyter.probandofran.eu
    LETSENCRYPT_HOST: jupyter.probandofran.eu
    VIRTUAL_PORT: 8000
  restart: on-failure
  depends_on:
    - jupyterlab
    - revproxy-letsencrypt
  volumes:
    - ${VOLUMES_BASE_PATH}/jupyter:/home
    - /var/run/docker.sock:/var/run/docker.sock:ro
  healthcheck:
    test: curl --fail -s http://jupyter:8000/ || exit 1
    interval: 10s # time between tests
    timeout: 5s # time waiting for a response
    start_period: 30s # time from launch when the failed tests are ignored
    retries: 5 # countdown of failed tests
2. Jupyterlab:
jupyterlab:
  build:
    context: ./jupyterlab
  image: ngd-jupyterlab
  command: echo
  healthcheck:
    test: curl --fail -s http://jupyterlab:8080/ || exit 1
    interval: 10s # time between tests
    timeout: 5s # time waiting for a response
    start_period: 30s # time from launch when the failed tests are ignored
    retries: 5 # countdown of failed tests
3. OpenLDAP
openldap:
  image: docker.io/bitnami/openldap:2.6
  ports:
    - '1389:1389'
    - '1636:1636'
  environment:
    - LDAP_ENABLE_TLS=no
    - LDAP_ADMIN_USERNAME=admin
    - LDAP_ADMIN_PASSWORD=adminpassword
    - LDAP_USERS=user1,user02
    - LDAP_PASSWORDS=user1,password2
  volumes:
    - 'openldap_data:/bitnami/openldap'
I think the problem is in the JupyterHub configuration file
jupyterhub_config.py
from jupyter_client.localinterfaces import public_ips
ip = public_ips()[0]
c.Spawner.default_url = '/lab'
c.Authenticator.admin_users = {'fran'}
c.JupyterHub.admin_access = False
in_docker_compose = True # "False" for standalone testing (for instance, to check changes to Jupyterlab image)
c.JupyterHub.hub_ip = ip if not in_docker_compose else '0.0.0.0'
# 'jupyter' is the name of Jupyterhub service in "docker-compose" file
c.JupyterHub.hub_connect_ip = '' if not in_docker_compose else 'jupyter'
if in_docker_compose:
    # Should match the network name used in the docker compose file
    c.DockerSpawner.network_name = 'webproxy'
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'ngd-jupyterlab' # 'jupyter/datascience-notebook:r-4.0.3'
notebook_dir = "/home/jovyan/"
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = {'jupyterhub-user-{username}': dict(bind=notebook_dir, mode="rw")}
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.remove_containers = True
c.DockerSpawner.remove = True
# c.DockerSpawner.extra_host_config = { 'network_mode': network_name }
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.lookup_dn = False
c.LDAPAuthenticator.bind_dn_template = [
    "uid={username},ou=people,dc=wikimedia,dc=org",
    "cn={username},ou=users,dc=example,dc=org"
]
# c.JupyterHub.allow_named_servers = False
# c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'
c.JupyterHub.cleanup_proxy = True
c.JupyterHub.cleanup_servers = True
c.LDAPAuthenticator.server_use_ssl = False
c.LDAPAuthenticator.use_ssl = False
c.LDAPAuthenticator.server_address = 'openldap'
c.LDAPAuthenticator.server_port = 1389
c.LDAPAuthenticator.user_search_base = 'dc=example,dc=org'
# c.JupyterHub.reset_db = True
When I run docker-compose up, it shows:
jupyter_1 | [E 2022-04-20 11:57:07.496 JupyterHub web:1789] Uncaught exception POST /hub/login?next=%2Fhub%2F (192.168.112.1)
jupyter_1 | HTTPServerRequest(protocol='http', host='jupyter.probandofran.eu', method='POST', uri='/hub/login?next=%2Fhub%2F', version='HTTP/1.1', remote_ip='192.168.112.1')
jupyter_1 | Traceback (most recent call last):
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1704, in _execute
jupyter_1 | result = await result
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/handlers/login.py", line 151, in post
jupyter_1 | user = await self.login_user(data)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/handlers/base.py", line 804, in login_user
jupyter_1 | authenticated = await self.authenticate(data)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/auth.py", line 473, in get_authenticated_user
jupyter_1 | authenticated = await maybe_future(self.authenticate(handler, data))
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldapauthenticator/ldapauthenticator.py", line 382, in authenticate
jupyter_1 | conn = self.get_connection(userdn, password)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldapauthenticator/ldapauthenticator.py", line 314, in get_connection
jupyter_1 | conn = ldap3.Connection(
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 355, in __init__
jupyter_1 | self.do_auto_bind()
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 374, in do_auto_bind
jupyter_1 | self.start_tls(read_server_info=False)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 1264, in start_tls
jupyter_1 | if self.server.tls.start_tls(self) and self.strategy.sync: # for asynchronous connections _start_tls is run by the strategy
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/tls.py", line 277, in start_tls
jupyter_1 | raise LDAPStartTLSError(connection.last_error)
jupyter_1 | ldap3.core.exceptions.LDAPStartTLSError: startTLS failed - protocolError
jupyter_1 |
It's my first time posting on Stack Overflow; I'm sorry if I made any mistakes in posting.
I think it is a bug in ldapauthenticator; we can hot-fix it by changing the logic of this library from the jupyterhub_config.py file, as jnishii suggested in https://github.com/jupyterhub/ldapauthenticator/issues/211.
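A minimal sketch of that kind of hot fix, placed near the top of jupyterhub_config.py, is to override get_connection so the ldap3 connection binds without issuing StartTLS (this follows the idea from the linked issue and uses the attribute and method names visible in the traceback; treat it as a workaround rather than an official fix):
import ldap3
from ldapauthenticator import LDAPAuthenticator

def get_connection_no_tls(self, userdn, password):
    # Same idea as the original get_connection, but bind without StartTLS,
    # since the bitnami openldap container runs with LDAP_ENABLE_TLS=no
    server = ldap3.Server(
        self.server_address,
        port=self.server_port,
        use_ssl=self.use_ssl,
    )
    return ldap3.Connection(
        server,
        user=userdn,
        password=password,
        auto_bind=ldap3.AUTO_BIND_NO_TLS,
    )

# Monkey-patch the class that c.JupyterHub.authenticator_class resolves to
LDAPAuthenticator.get_connection = get_connection_no_tls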

Problem when querying Raw Data with STH-Comet - Returns empty

I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in its own Docker container. I implemented the infrastructure with docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
  - /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
  - mongo
networks:
  - default
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_MONITORING_TYPE=http
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_MONGO_HOSTS=mongo:27017
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=sth_
  - CYGNUS_MONGO_COLLECTION_PREFIX=sth_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_MONGO_MAX_DOCUMENTS=0
  - CYGNUS_MONGO_BATCH_SIZE=1
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
  - CYGNUS_STH_ENABLE_ENCODING=false
  - CYGNUS_STH_ENABLE_GROUPING=false
  - CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_STH_DB_PREFIX=sth_
  - CYGNUS_STH_COLLECTION_PREFIX=sth_
  - CYGNUS_STH_DATA_MODEL=dm-by-entity
  - CYGNUS_STH_ENABLE_LOWERCASE=false
  - CYGNUS_STH_BATCH_TIMEOUT=30
  - CYGNUS_STH_BATCH_TTL=10
  - CYGNUS_STH_DATA_EXPIRATION=0
  - CYGNUS_STH_BATCH_SIZE=1
Note: in the multisink_agent.conf file I changed the service and the service path:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
  - cygnus
  - mongo
networks:
  - default
expose:
  - "8666"
ports:
  - "8666:8666"
environment:
  - STH_HOST=0.0.0.0
  - STH_PORT=8666
  - DB_URI=mongo:27017
  - DB_USERNAME=
  - DB_PASSWORD=
  - LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};

// STH server configuration
//--------------------------
config.server = {
    host: 'localhost',
    port: '8666',
    // Default value: "testservice".
    defaultService: 'tese',
    // Default value: "/testservicepath".
    defaultServicePath: '/iot',
    filterOutEmpty: 'true',
    aggregationBy: ['day', 'hour', 'minute'],
    temporalDir: 'temp',
    maxPageSize: '100'
};

// Cors Configuration
config.cors = {
    // The enabled is use to set CORS policy
    enabled: 'true',
    options: {
        origin: ['*'],
        headers: [
            'Access-Control-Allow-Origin',
            'Access-Control-Allow-Headers',
            'Access-Control-Request-Headers',
            'Origin, Referer, User-Agent'
        ],
        additionalHeaders: ['fiware-servicepath', 'fiware-service'],
        credentials: 'true'
    }
};

// Database configuration
//------------------------
config.database = {
    dataModel: 'collection-per-entity',
    user: '',
    password: '',
    authSource: '',
    URI: 'localhost:27017',
    replicaSet: '',
    prefix: 'sth_',
    collectionPrefix: 'sth_',
    poolSize: '5',
    writeConcern: '1',
    shouldStore: 'both',
    truncation: {
        expireAfterSeconds: '0',
        size: '0',
        max: '0'
    },
    ignoreBlankSpaces: 'true',
    nameMapping: {
        enabled: 'false',
        configFile: './name-mapping.json'
    },
    nameEncoding: 'false'
};

// Logging configuration
//------------------------
config.logging = {
    level: 'debug',
    format: 'pipe',
    proofOfLifeInterval: '60',
    processedRequestLogStatisticsInterval: '60'
};

module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
The Cygnus subscription in Orion looks like this:
"description": "A subscription All Entities",
"subject": {
"entities": [
{
"idPattern": ".*"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"http": {
"url": "http://cygnus:5050/notify"
},
"attrs": [],
"attrsFormat":"legacy"
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
The entity data is stored in the orion-tese database. I have the collection entities:
{
    "_id" : {
        "id" : "Tank1",
        "type" : "Tank",
        "servicePath" : "/iot"
    },
    "attrNames" : [
        "temperature"
    ],
    "attrs" : {
        "temperature" : {
            "value" : 0.333,
            "type" : "Float",
            "mdNames" : [ ],
            "creDate" : 1594334464,
            "modDate" : 1594337770
        }
    },
    "creDate" : 1594334464,
    "modDate" : 1594337771,
    "lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese.
I have the collections:
sth_/iot_Tank1_Tank.aggr
and
sth_/iot_Tank1_Tank
The sth_/iot_Tank1_Tank raw data is in mongoDB:
{
    "_id" : ObjectId("5f079d0369591c06b0fc981a"),
    "temperature" : 279,
    "recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
    "_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
    "temperature" : 0.333,
    "recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
When I run: http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I get the result: "sum": 279 and "sum": 0.333. I can recover ALL the aggregated data: max, min, sum, sum2.
The difficulty is with STH-Comet when I try to retrieve the raw data: the return code is 200, but the value returned is empty.
I've tried with APIs v1 and v2, to no avail.
request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return
{
    "type": "StructuredValue",
    "value": []
}
request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return
{
    "contextResponses": [{
        "contextElement": {
            "attributes": [{
                "name": "temperature",
                "values": []
            }],
            "id": "Tank1",
            "isPattern": false,
            "type": "Tank"
        },
        "statusCode": {
            "code": "200",
            "reasonPhrase": "OK"
        }
    }]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, it establishes the connection to recover the raw data: msg=Getting access to the raw data collection for retrieval.... Confirms that the raw data exists: msg=The raw data collection for retrieval exists. But, it cannot recover this data and generates the message that the raw data is not available and does not return any points:msg=No raw data available for the request and msg=Responding with no points.
I already read the configuration part in the documentation. I've reinstalled everything, several times. I combed all settings and I can't find anything to justify this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!
Sometimes the way in which STH tries to recover information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by the DATA_MODEL env var (although it would be wise to check that by looking at the env vars actually injected into the Docker container running STH, I guess with docker inspect), the only way I think we can continue debugging is to inspect which actual query STH runs on MongoDB to end up in No raw data available for the request.
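For the docker inspect check mentioned above, something like the following (assuming the container is named sth, as in the compose file) prints the environment actually injected into the container:
docker inspect --format '{{json .Config.Env}}' sth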
MongoDB has a profiler that allows recording every query done in the DB. Thus the procedure would be as follows:
1. Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler
2. Start the profiler in "all queries" mode (i.e. profiling level 2)
3. Do the query at the STH API
4. Stop the profiler
5. Check the queries recorded by the profiler as a consequence of the request done in step 3
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't know it already.
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!
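Not an official recipe, but a minimal sketch of steps 2-5 using pymongo, assuming MongoDB is reachable from the host on localhost:27017 and that STH is writing to the sth_tese database seen in its logs:
from pymongo import MongoClient

# Assumption: the mongo container's port 27017 is published on the host
client = MongoClient("mongodb://localhost:27017")
db = client["sth_tese"]

db.command("profile", 2)   # step 2: record every operation in this database

# step 3: issue the STH request now, e.g.
#   http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
input("Issue the STH request, then press Enter to continue...")

db.command("profile", 0)   # step 4: stop the profiler

# step 5: inspect what STH actually asked MongoDB for
for op in db["system.profile"].find():
    print(op.get("ns"), op.get("command") or op.get("query"))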

Using a Gatling session variable in a triple-quoted string

How do I use a session variable in a StringBody in Gatling?
I have defined my exec like this:
val migrateAsset = exec(_.set("assetId", AssetIdGenerator.generateRandomAssetId()))
.exec(http("Migrate Asset")
.post(s"$url/asset/metadata")
.header("Content-Type", "application/json")
.header("Authorization", s"Bearer ${authToken}")
.body(StringBody(
s"""
|{
| "objectType" : "DocumentType",
| "fileName" : "main.xml",
| "locations" : [
| {
| "region" : "eu-west-1",
| "url" : "https://s3-eu-west-1.amazonaws.com/${bucketName}/${assetId}"
| },
| {
| "region" : "us-east-1",
| "url" : s"https://s3.amazonaws.com/${bucketName}/${assetId}"
| }
| ],
| "format" : "MAIN",
| "mimeType" : "text/plain"
|}
""".stripMargin
))
.check(status.is(200)))
In the body, I want the same assetId to be passed for both the eu-west and us-east regions. Since assetId is randomly generated, I have stored it in a session variable to make sure I use the same assetId for both locations.
But I can't pass assetId into the StringBody. It keeps giving me an error like:
AssetsMigrationLoadSimulation.scala:31: not found: value
assetId
| "url" : "https://s3-eu-west-1.amazonaws.com/${bucketName}/${assetId}"
As Hans-Peter mentioned, your IDE has seen the ${...} that you're using to reference a Gatling session parameter and decided that you are trying to do regular Scala string interpolation, so it has put the 's' in front of the string.
Remove the 's' and this should work.
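In other words, the body would look roughly like this (a sketch of just the StringBody, keeping the rest of the request from the question unchanged; note that without the s interpolator ${bucketName} also becomes a Gatling EL reference, so bucketName must be a session attribute too, or be inlined/concatenated instead):
.body(StringBody(
  """
    |{
    |  "objectType" : "DocumentType",
    |  "fileName" : "main.xml",
    |  "locations" : [
    |    {
    |      "region" : "eu-west-1",
    |      "url" : "https://s3-eu-west-1.amazonaws.com/${bucketName}/${assetId}"
    |    },
    |    {
    |      "region" : "us-east-1",
    |      "url" : "https://s3.amazonaws.com/${bucketName}/${assetId}"
    |    }
    |  ],
    |  "format" : "MAIN",
    |  "mimeType" : "text/plain"
    |}
  """.stripMargin
))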

Fiware cygnus: no data have been persisted in mongo DB

I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received in Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix pro the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true is collection names are based on a hash, false for human redable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule:
{
    "grouping_rules": [
        {
            "id": 1,
            "fields": [
                "button"
            ],
            "regex": ".*",
            "destination": "kura",
            "fiware_service_path": "/kuraspath"
        }
    ]
}
Any idea what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink when you configure lines such as cygnusagent.sinks.mongo-sink.type).
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and in a first stage I would play with the default behaviour. Thus, my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)
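For instance, the top of the agent configuration would declare the sink under the same name used by the rest of the mongo-sink properties:
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel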