Why is VCAP_SERVICES null? - mongodb

I want to read VCAP_SERVICES during my app's startup to connect to my MongoDB service, but it is null. Why?
barry-alexanders-MacBook-Pro:~ barryalexander$ vmc create-service mongodb mongodb-relcal RelCal
Creating Service: OK
Binding Service [mongodb-relcal]: OK
Stopping Application 'RelCal': OK
Staging Application 'RelCal': OK
Starting Application 'RelCal': OK
barry-alexanders-MacBook-Pro:~ barryalexander$ vmc apps
+-------------+----+---------+-------------------------+----------------+
| Application | #  | Health  | URLS                    | Services       |
+-------------+----+---------+-------------------------+----------------+
| RelCal      | 1  | RUNNING | relcal.cloudfoundry.com | mongodb-relcal |
| barry       | 1  | STOPPED | barry.cloudfoundry.com  |                |
+-------------+----+---------+-------------------------+----------------+
barry-alexanders-MacBook-Pro:~ barryalexander$ vmc env RelCal
No Environment Variables

VCAP_SERVICES is not revealed by the vmc env command. However, we can see it by pushing this simple Node app:
var http = require('http');
var util = require('util');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write(util.inspect(process.env.VCAP_SERVICES));
  res.write("\n\n************\n\n");
  res.end(util.inspect(req.headers));
}).listen(3000);
The output shows the VCAP_SERVICES environment variable followed by the headers of the request; it looks like this:
'{"mongodb-2.0":[{"name":"mongo-test","label":"mongodb-2.0","plan":"free","tags":["mongodb","mongodb-1.8","nosql","document"],"credentials":{"hostname":"172.30.48.70","host":"172.30.48.70","port":25137,"username":"7ad80054-bb70-49fa-9aae-6ff5c1b458fc","password":"491bcfe9-e441-4caf-8422-00a81dbf727b","name":"4b354e7e-c39d-4053-89e1-7195b1360fd9","db":"db","url":"mongodb://7ad80054-bb70-49fa-9aae-6ff5c1b458fc:491bcfe9-e441-4caf-8422-00a81dbf727b#172.30.48.70:25137/db"}}]}'
************
{ host: 'node-headers.cloudfoundry.com',
'x-forwarded-for': '80.175.199.28, 172.30.8.253',
connection: 'close',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.82 Safari/537.1',
accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'accept-language': 'en-US,en;q=0.8',
'accept-charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'x-cluster-client-ip': '80.175.199.28',
cookie: '__qca=P0-351832036-1339515989739; s_nr=1344955391423; __utma=207604417.1698837494.1342027762.1345020276.1345215879.7; __utmc=207604417; __utmz=207604417.1342027762.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); s_cc=true; s_sq=%5B%5BB%5D%5D',
'accept-encoding': 'gzip,deflate,sdch' }
You can see this application running at http://node-headers.cloudfoundry.com if you wish to use it for reference.
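Once the variable is visible, connecting is just a matter of parsing that JSON: the credentials object already carries a ready-made connection URL. A minimal sketch in Python (the demo app above is Node, but the idea is the same in any runtime; the mongodb-2.0 label and the url/db fields are taken from the output above, and pymongo is assumed to be available):

import json
import os

from pymongo import MongoClient

# VCAP_SERVICES is a JSON document keyed by service label; each bound
# service's "credentials" object includes a complete MongoDB connection URL.
services = json.loads(os.environ['VCAP_SERVICES'])
credentials = services['mongodb-2.0'][0]['credentials']

client = MongoClient(credentials['url'])  # mongodb://user:password@host:port/db
db = client[credentials['db']]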

Related

Openstack API not providing precise data

I am using OpenStack Stein on CentOS 7.9.
I am using Python to collect data about Nova performance, such as the server names and IDs in my OpenStack project. I have 3 instances (servers) created and I can see all three in the OpenStack CLI, but when I call the API documented by OpenStack, it returns no data or less data than expected.
I referred to the OpenStack documentation here.
[root@centos-vm1 kavin(keystone_admin)]# openstack server list
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| ID                                   | Name            | Status | Networks                               | Image | Flavor   |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| 08cf6226-0303-4b4c-ba53-10af79b81dae | test_instance_3 | ACTIVE | test_networ_3=10.150.0.8               |       | m1.tiny  |
| 9986f205-82b3-4cbb-bcdc-fb32eab97c83 | test_instance_1 | ACTIVE | test_networ_2=10.100.0.5, x.x.x.x      |       | m1.small |
| d1c0f520-8540-432c-8fe1-554390fd79bf | test_instance_2 | ACTIVE | test_networ_1=10.50.0.8                |       | m1.small |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
My Python code:
import requests, json
from six.moves.urllib.parse import urljoin

identity = {
    "methods": ["password"],
    "password": {
        "user": {
            "name": "admin",
            "domain": {"id": "default"},
            "password": "xxxxxxxxxxxxxxx"
        }
    }
}

OS_AUTH_URL = 'http://x.x.x.x:5000/v3'
data = {'auth': {'identity': identity}}
HEADERS = {'Content-Type': 'application/json', 'scope': 'unscoped'}

r = requests.post(
    OS_AUTH_URL + '/auth/tokens',
    headers=HEADERS,
    json=data,
    verify=False
)
auth_token = r.headers['X-Subject-Token']  # I got the auth token

# server list
NOVA_URL = "http://x.x.x.x:8774/v2.1"
HEADERS = {"X-Auth-Token": str(auth_token)}
r = requests.get(
    NOVA_URL + '/servers',
    headers=HEADERS,
)
r.raise_for_status()
print(r.json())
Output:
{'servers': []}
Help me collect accurate data using API calls, thanks.
According to the api-ref List Servers doc, you probably need to add the project scope to the request:
"By default the servers are filtered using the project ID associated with the authenticated request."
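For instance, a sketch of a project-scoped token request, continuing the question's script (identity, OS_AUTH_URL and requests as defined above; the project name admin and domain default are assumptions, substitute your own):

# Request a project-scoped token instead of an unscoped one, so that Nova
# can associate the request with your project.
data = {
    'auth': {
        'identity': identity,
        'scope': {
            'project': {
                'name': 'admin',
                'domain': {'id': 'default'}
            }
        }
    }
}
r = requests.post(OS_AUTH_URL + '/auth/tokens', json=data, verify=False)
auth_token = r.headers['X-Subject-Token']  # now a project-scoped token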
In my opinion, you could simply use openstacksdk to perform the operation, with the Connection object and its list_servers method:
import openstack

conn = openstack.connect(
    region_name='example-region',
    auth_url='http://x.x.x.x:5000/v3/',
    username='amazing-user',
    password='super-secret-password',
    project_id='33...b5',
    domain_id='05...03'
)
servers = conn.list_servers()

How to enable authentication via openLDAP on JupyterHub without SSL certificate?

I am trying to create a JupyterHub that uses LDAP to authenticate users.
JupyterHub itself is working, but when I try to log in, the web page shows an Error 500.
I should clarify that with the PAM method it worked without problems; it only started to fail when I configured it to use LDAP.
I am using docker-compose. I currently have three containers:
1. JupyterHub:
jupyter:
  build:
    context: ./jupyterhub
  ports:
    - "8380:8000"
  environment:
    VIRTUAL_HOST: jupyter.probandofran.eu
    LETSENCRYPT_HOST: jupyter.probandofran.eu
    VIRTUAL_PORT: 8000
  restart: on-failure
  depends_on:
    - jupyterlab
    - revproxy-letsencrypt
  volumes:
    - ${VOLUMES_BASE_PATH}/jupyter:/home
    - /var/run/docker.sock:/var/run/docker.sock:ro
  healthcheck:
    test: curl --fail -s http://jupyter:8000/ || exit 1
    interval: 10s      # time between tests
    timeout: 5s        # time waiting for a response
    start_period: 30s  # time from launch during which failed tests are ignored
    retries: 5         # failed tests before the container is marked unhealthy
2. JupyterLab:
jupyterlab:
  build:
    context: ./jupyterlab
  image: ngd-jupyterlab
  command: echo
  healthcheck:
    test: curl --fail -s http://jupyterlab:8080/ || exit 1
    interval: 10s      # time between tests
    timeout: 5s        # time waiting for a response
    start_period: 30s  # time from launch during which failed tests are ignored
    retries: 5         # failed tests before the container is marked unhealthy
3. OpenLDAP
openldap:
  image: docker.io/bitnami/openldap:2.6
  ports:
    - '1389:1389'
    - '1636:1636'
  environment:
    - LDAP_ENABLE_TLS=no
    - LDAP_ADMIN_USERNAME=admin
    - LDAP_ADMIN_PASSWORD=adminpassword
    - LDAP_USERS=user1,user02
    - LDAP_PASSWORDS=user1,password2
  volumes:
    - 'openldap_data:/bitnami/openldap'
I think the problem is in the JupyterHub configuration file
jupyterhub_config.py
from jupyter_client.localinterfaces import public_ips

ip = public_ips()[0]

c.Spawner.default_url = '/lab'
c.Authenticator.admin_users = {'fran'}
c.JupyterHub.admin_access = False

in_docker_compose = True  # "False" for standalone testing (for instance, to check changes to the Jupyterlab image)

c.JupyterHub.hub_ip = ip if not in_docker_compose else '0.0.0.0'
# 'jupyter' is the name of the Jupyterhub service in the "docker-compose" file
c.JupyterHub.hub_connect_ip = '' if not in_docker_compose else 'jupyter'

if in_docker_compose:
    c.DockerSpawner.network_name = 'webproxy'  # Should match the network name used in the docker compose file

c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'ngd-jupyterlab'  # 'jupyter/datascience-notebook:r-4.0.3'
notebook_dir = "/home/jovyan/"
c.DockerSpawner.notebook_dir = notebook_dir
c.DockerSpawner.volumes = {'jupyterhub-user-{username}': dict(bind=notebook_dir, mode="rw")}
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.remove_containers = True
c.DockerSpawner.remove = True
# c.DockerSpawner.extra_host_config = { 'network_mode': network_name }

c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.lookup_dn = False
c.LDAPAuthenticator.bind_dn_template = [
    "uid={username},ou=people,dc=wikimedia,dc=org",
    "cn={username},ou=users,dc=example,dc=org"
]
# c.JupyterHub.allow_named_servers = False
# c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'
c.JupyterHub.cleanup_proxy = True
c.JupyterHub.cleanup_servers = True
c.LDAPAuthenticator.server_use_ssl = False
c.LDAPAuthenticator.use_ssl = False
c.LDAPAuthenticator.server_address = 'openldap'
c.LDAPAuthenticator.server_port = 1389
c.LDAPAuthenticator.user_search_base = 'dc=example,dc=org'
# c.JupyterHub.reset_db = True
When I run docker-compose up, it shows:
jupyter_1 | [E 2022-04-20 11:57:07.496 JupyterHub web:1789] Uncaught exception POST /hub/login?next=%2Fhub%2F (192.168.112.1)
jupyter_1 | HTTPServerRequest(protocol='http', host='jupyter.probandofran.eu', method='POST', uri='/hub/login?next=%2Fhub%2F', version='HTTP/1.1', remote_ip='192.168.112.1')
jupyter_1 | Traceback (most recent call last):
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/tornado/web.py", line 1704, in _execute
jupyter_1 | result = await result
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/handlers/login.py", line 151, in post
jupyter_1 | user = await self.login_user(data)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/handlers/base.py", line 804, in login_user
jupyter_1 | authenticated = await self.authenticate(data)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/jupyterhub/auth.py", line 473, in get_authenticated_user
jupyter_1 | authenticated = await maybe_future(self.authenticate(handler, data))
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldapauthenticator/ldapauthenticator.py", line 382, in authenticate
jupyter_1 | conn = self.get_connection(userdn, password)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldapauthenticator/ldapauthenticator.py", line 314, in get_connection
jupyter_1 | conn = ldap3.Connection(
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 355, in __init__
jupyter_1 | self.do_auto_bind()
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 374, in do_auto_bind
jupyter_1 | self.start_tls(read_server_info=False)
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/connection.py", line 1264, in start_tls
jupyter_1 | if self.server.tls.start_tls(self) and self.strategy.sync: # for asynchronous connections _start_tls is run by the strategy
jupyter_1 | File "/usr/local/lib/python3.8/dist-packages/ldap3/core/tls.py", line 277, in start_tls
jupyter_1 | raise LDAPStartTLSError(connection.last_error)
jupyter_1 | ldap3.core.exceptions.LDAPStartTLSError: startTLS failed - protocolError
jupyter_1 |
This is my first time posting on Stack Overflow; I'm sorry if I have made any mistakes in my post.
I think it is a bug in ldapauthenticator; we can hot-fix it by overriding the library's logic from jupyterhub_config.py, as @jnishii suggests in https://github.com/jupyterhub/ldapauthenticator/issues/211.
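I haven't verified the exact patch discussed there, but the gist is to stop ldap3 from forcing STARTTLS during the automatic bind when the server has TLS disabled. A sketch of such an override, appended to jupyterhub_config.py (a hot fix that assumes the STARTTLS auto-bind is the culprit, as the LDAPStartTLSError in the traceback suggests):

# Hot-fix sketch: override LDAPAuthenticator.get_connection so that the bind
# does not attempt STARTTLS against a plain (non-TLS) OpenLDAP listener.
import ldap3
from ldapauthenticator import LDAPAuthenticator

def get_connection(self, userdn, password):
    server = ldap3.Server(self.server_address, port=self.server_port, use_ssl=False)
    # AUTO_BIND_NO_TLS binds without the STARTTLS negotiation that raises
    # "startTLS failed - protocolError" in the traceback above.
    return ldap3.Connection(server, user=userdn, password=password,
                            auto_bind=ldap3.AUTO_BIND_NO_TLS)

LDAPAuthenticator.get_connection = get_connection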

The value supplied for parameter 'instanceProfileName' is not valid

Running cdk deploy, I receive the following error message:
CREATE_FAILED | AWS::ImageBuilder::InfrastructureConfiguration | TestInfrastructureConfiguration The value supplied for parameter 'instanceProfileName' is not valid. The provided instance profile does not exist. Please specify a different instance profile and try again. (Service: Imagebuilder, Status Code: 400, Request ID: 41f431d7-8544-48e9-9faf-a870b83b0100, Extended Request ID: null)
The C# code looks like this:
var instanceProfile = new CfnInstanceProfile(this, "TestInstanceProfile", new CfnInstanceProfileProps {
    InstanceProfileName = "test-instance-profile",
    Roles = new string[] { "TestServiceRoleForImageBuilder" }
});

var infrastructureConfiguration = new CfnInfrastructureConfiguration(this, "TestInfrastructureConfiguration", new CfnInfrastructureConfigurationProps {
    Name = "test-infrastructure-configuration",
    InstanceProfileName = instanceProfile.InstanceProfileName,
    InstanceTypes = new string[] { "t2.medium" },
    Logging = new CfnInfrastructureConfiguration.LoggingProperty {
        S3Logs = new CfnInfrastructureConfiguration.S3LogsProperty {
            S3BucketName = "s3-test-assets",
            S3KeyPrefix = "ImageBuilder/Logs"
        }
    },
    SubnetId = "subnet-12f3456f",
    SecurityGroupIds = new string[] { "sg-12b3e4e5b67f8900f" }
});
The TestServiceRoleForImageBuilder role exists and was working previously; the same code ran successfully about a month ago. Any suggestions?
If I remove the CfnInfrastructureConfiguration creation part, the deployment runs successfully, but takes at least 2 minutes to complete:
AwsImageBuilderStack: deploying...
AwsImageBuilderStack: creating CloudFormation changeset...
0/3 | 14:24:37 | REVIEW_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:43 | CREATE_IN_PROGRESS | AWS::CloudFormation::Stack | AwsImageBuilderStack User Initiated
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile
0/3 | 14:24:47 | CREATE_IN_PROGRESS | AWS::IAM::InstanceProfile | TestInstanceProfile Resource creation Initiated
1/3 | 14:24:48 | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata) Resource creation Initiated
1/3 | 14:24:48 | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata/Default (CDKMetadata)
1/3 Currently in progress: AwsImageBuilderStack, TestInstanceProfile
3/3 | 14:26:48 | CREATE_COMPLETE | AWS::IAM::InstanceProfile | TestInstanceProfile
3/3 | 14:26:49 | CREATE_COMPLETE | AWS::CloudFormation::Stack | AwsImageBuilderStack
Is this perhaps some race condition? Should I use multiple stacks to achieve my goal?
Would it be possible to use a wait condition (AWS::CloudFormation::WaitCondition) to bridge the 2 minutes of creation time, in case that delay is intended (AWS::IAM::InstanceProfile resources always take exactly 2 minutes to create)?
Environment
CDK CLI Version: 1.73.0
Node.js Version: 14.13.0
OS: Windows 10
Language (Version): C# (.NET Core 3.1)
Update
Since the cause seems to be AWS-internal, I used a pre-created instance profile as a workaround. The profile can be created either through the IAM Management Console or the CLI. However, it would be nice to have a proper solution.
You have to create an explicit dependency between the two constructs. CDK does not infer it when you pass the instance profile via the optional name parameter, as opposed to referencing the construct's logical id (which doesn't seem to work in this situation):
infrastructureConfiguration.node.addDependency(instanceProfile)
In C#, that is infrastructureConfiguration.Node.AddDependency(instanceProfile);
Here are the relevant docs: https://docs.aws.amazon.com/cdk/api/latest/docs/core-readme.html#construct-dependencies

Problem when querying Raw Data with STH-Comet - Returns empty

I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in a Docker container. I implemented the infrastructure with docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
  - /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
  - mongo
networks:
  - default
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_MONITORING_TYPE=http
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_MONGO_HOSTS=mongo:27017
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=sth_
  - CYGNUS_MONGO_COLLECTION_PREFIX=sth_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_MONGO_MAX_DOCUMENTS=0
  - CYGNUS_MONGO_BATCH_SIZE=1
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
  - CYGNUS_STH_ENABLE_ENCODING=false
  - CYGNUS_STH_ENABLE_GROUPING=false
  - CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_STH_DB_PREFIX=sth_
  - CYGNUS_STH_COLLECTION_PREFIX=sth_
  - CYGNUS_STH_DATA_MODEL=dm-by-entity
  - CYGNUS_STH_ENABLE_LOWERCASE=false
  - CYGNUS_STH_BATCH_TIMEOUT=30
  - CYGNUS_STH_BATCH_TTL=10
  - CYGNUS_STH_DATA_EXPIRATION=0
  - CYGNUS_STH_BATCH_SIZE=1
Note: in the multisink_agent.conf file I changed the service and the service path:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
  - cygnus
  - mongo
networks:
  - default
expose:
  - "8666"
ports:
  - "8666:8666"
environment:
  - STH_HOST=0.0.0.0
  - STH_PORT=8666
  - DB_URI=mongo:27017
  - DB_USERNAME=
  - DB_PASSWORD=
  - LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};

// STH server configuration
//--------------------------
config.server = {
  host: 'localhost',
  port: '8666',
  // Default value: "testservice".
  defaultService: 'tese',
  // Default value: "/testservicepath".
  defaultServicePath: '/iot',
  filterOutEmpty: 'true',
  aggregationBy: ['day', 'hour', 'minute'],
  temporalDir: 'temp',
  maxPageSize: '100'
};

// CORS configuration
//------------------------
config.cors = {
  // "enabled" sets the CORS policy
  enabled: 'true',
  options: {
    origin: ['*'],
    headers: [
      'Access-Control-Allow-Origin',
      'Access-Control-Allow-Headers',
      'Access-Control-Request-Headers',
      'Origin, Referer, User-Agent'
    ],
    additionalHeaders: ['fiware-servicepath', 'fiware-service'],
    credentials: 'true'
  }
};

// Database configuration
//------------------------
config.database = {
  dataModel: 'collection-per-entity',
  user: '',
  password: '',
  authSource: '',
  URI: 'localhost:27017',
  replicaSet: '',
  prefix: 'sth_',
  collectionPrefix: 'sth_',
  poolSize: '5',
  writeConcern: '1',
  shouldStore: 'both',
  truncation: {
    expireAfterSeconds: '0',
    size: '0',
    max: '0'
  },
  ignoreBlankSpaces: 'true',
  nameMapping: {
    enabled: 'false',
    configFile: './name-mapping.json'
  },
  nameEncoding: 'false'
};

// Logging configuration
//------------------------
config.logging = {
  level: 'debug',
  format: 'pipe',
  proofOfLifeInterval: '60',
  processedRequestLogStatisticsInterval: '60'
};

module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
Cygnus' subscription in Orion looks like this:
{
  "description": "A subscription All Entities",
  "subject": {
    "entities": [
      {
        "idPattern": ".*"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "http": {
      "url": "http://cygnus:5050/notify"
    },
    "attrs": [],
    "attrsFormat": "legacy"
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
The entity data is stored in the orion-tese database. I have the collection entities:
{
  "_id" : {
    "id" : "Tank1",
    "type" : "Tank",
    "servicePath" : "/iot"
  },
  "attrNames" : [
    "temperature"
  ],
  "attrs" : {
    "temperature" : {
      "value" : 0.333,
      "type" : "Float",
      "mdNames" : [ ],
      "creDate" : 1594334464,
      "modDate" : 1594337770
    }
  },
  "creDate" : 1594334464,
  "modDate" : 1594337771,
  "lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese, in the collections sth_/iot_Tank1_Tank.aggr and sth_/iot_Tank1_Tank.
The raw data in sth_/iot_Tank1_Tank looks like this in MongoDB:
{
  "_id" : ObjectId("5f079d0369591c06b0fc981a"),
  "temperature" : 279,
  "recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
  "_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
  "temperature" : 0.333,
  "recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
When I run:
http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I get the results "sum": 279 and "sum": 0.333. I can recover ALL the aggregated data: max, min, sum, sum2.
The difficulty is with the raw data: when I try to retrieve it through STH-Comet, the return code is 200 but the value comes back empty.
I've tried with APIs v1 and v2, to no avail.
Request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return:
{
  "type": "StructuredValue",
  "value": []
}
Request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return:
{
  "contextResponses": [{
    "contextElement": {
      "attributes": [{
        "name": "temperature",
        "values": []
      }],
      "id": "Tank1",
      "isPattern": false,
      "type": "Tank"
    },
    "statusCode": {
      "code": "200",
      "reasonPhrase": "OK"
    }
  }]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, it establishes the connection to recover the raw data (msg=Getting access to the raw data collection for retrieval...) and confirms that the raw data collection exists (msg=The raw data collection for retrieval exists). But it cannot recover the data: it reports that no raw data is available and returns no points (msg=No raw data available for the request and msg=Responding with no points).
I have already read the configuration part of the documentation. I've reinstalled everything several times. I have combed through all the settings and I can't find anything to justify this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!
Sometimes the way in which STH tries to recover information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by the DATA_MODEL env var (although it would be wise to check that by looking at the env vars actually injected into the Docker container running STH, I guess with docker inspect), the only way I think we can continue debugging is to inspect which actual query STH runs on MongoDB that ends in No raw data available for the request.
MongoDB has a profiler that allows recording every query done in the DB. Thus, the procedure would be as follows:
1. Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler.
2. Start the profiler in "all queries" mode (i.e. profiling level 2).
3. Do the query at the STH API.
4. Stop the profiler.
5. Check the queries recorded by the profiler as a consequence of the request done in step 3.
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't know it already; a small sketch of the procedure follows below.
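A rough pymongo sketch of steps 2-5 (connection details taken from the compose files above, i.e. an unauthenticated MongoDB reachable at mongo:27017 and the sth_tese database; adjust as needed):

from pymongo import MongoClient

client = MongoClient('mongodb://mongo:27017')  # no auth, as in the compose file
db = client['sth_tese']

db.command('profile', 2)  # step 2: level 2 records every operation
input('Now issue the STH request (step 3), then press Enter...')
db.command('profile', 0)  # step 4: stop the profiler

# Step 5: dump the operations STH actually ran, with their query documents
for op in db['system.profile'].find():
    print(op.get('ns'), op.get('command') or op.get('query'))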
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!

SailsJS Mongodb timeout

I have this controller code in my sails app:
let userId = req.body.userId;

User.findOne({ id: userId })
  .then((user) => {
    console.log('User found:', user);
    return res.ok('It worked!');
  })
  .catch((err) => {
    sails.log.error('indexes - error', err);
    return res.badRequest(err);
  });
When I start my server it works, but after some time (~5 min) it stops working and I end up with the following error message:
web_1 | Sending 400 ("Bad Request") response:
web_1 | Error (E_UNKNOWN) :: Encountered an unexpected error
web_1 | MongoError: server 13.81.244.244:27017 received an error {"name":"MongoError","message":"read ETIMEDOUT"}
web_1 | at null.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:213:40)
web_1 | at g (events.js:260:16)
web_1 | at emitTwo (events.js:87:13)
web_1 | at emit (events.js:172:7)
web_1 | at null.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:119:12)
web_1 | at g (events.js:260:16)
web_1 | at emitTwo (events.js:87:13)
web_1 | at emit (events.js:172:7)
web_1 | at Socket.<anonymous> (/myapp/node_modules/sails-mongo/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:154:93)
web_1 | at Socket.g (events.js:260:16)
web_1 | at emitOne (events.js:77:13)
web_1 | at Socket.emit (events.js:169:7)
web_1 | at emitErrorNT (net.js:1269:8)
web_1 | at nextTickCallbackWith2Args (node.js:511:9)
web_1 | at process._tickDomainCallback (node.js:466:17)
web_1 |
web_1 | Details: MongoError: server 13.81.244.244:27017 received an error {"name":"MongoError","message":"read ETIMEDOUT"}
The DB is still up at this point, and looking at the logs, everything seems fine on this side:
2017-03-23T16:45:51.664+0000 I NETWORK [thread1] connection accepted from 13.81.243.59:51558 #7811 (39 connections now open)
2017-03-23T16:45:51.664+0000 I NETWORK [conn7811] received client metadata from 13.81.253.59:51558 conn7811: { driver: { name: "nodejs", version: "2.2.25" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-62-generic" }, platform: "Node.js v4.7.3, LE, mongodb-core: 2.1.9" }
2017-03-23T16:45:51.723+0000 I ACCESS [conn7811] Successfully authenticated as principal username on dbname
My connections.js hook looks like this:
module.exports.connections = {
  sailsmongo: {
    adapter  : 'sails-mongo',
    host     : process.env.MONGODB_HOST,
    port     : 27017,
    user     : process.env.MONGODB_USERNAME,
    password : process.env.MONGODB_PASSWORD,
    database : process.env.MONGODB_DBNAME
  },
};
and in package.json:
"sails": "~0.12.4",
"sails-mongo": "^0.12.1",
Notes:
Among the unconfirmed possible sources of misbehavior, I see:
the app is dockerized
I have a query that takes quite some time (~1/2 min) and calls several child processes, so I'd suspect some memory leak out there, though this is still unconfirmed!
Any idea on this?
EDIT:
After some digging, and looking at the DB logs, I have the impression that Sails/Waterline opens a new connection on each query, while it should connect once and keep the connection alive. This would be the cause of the issue.
Based on this, I decided to try Mongoose, and bingo: with Mongoose it works like a charm.
I'm guessing this is a Sails/Waterline bug, though I'm not clear on how to reproduce it correctly.
Anyway, I'm now moving my app from Waterline to Mongoose.