Problem when querying Raw Data with STH-Comet - Returns empty - fiware-orion

I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in its own Docker container, and I set up the infrastructure with a docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
  - /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
  - mongo
networks:
  - default
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_MONITORING_TYPE=http
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_MONGO_HOSTS=mongo:27017
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=sth_
  - CYGNUS_MONGO_COLLECTION_PREFIX=sth_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_MONGO_MAX_DOCUMENTS=0
  - CYGNUS_MONGO_BATCH_SIZE=1
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
  - CYGNUS_STH_ENABLE_ENCODING=false
  - CYGNUS_STH_ENABLE_GROUPING=false
  - CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_STH_DB_PREFIX=sth_
  - CYGNUS_STH_COLLECTION_PREFIX=sth_
  - CYGNUS_STH_DATA_MODEL=dm-by-entity
  - CYGNUS_STH_ENABLE_LOWERCASE=false
  - CYGNUS_STH_BATCH_TIMEOUT=30
  - CYGNUS_STH_BATCH_TTL=10
  - CYGNUS_STH_DATA_EXPIRATION=0
  - CYGNUS_STH_BATCH_SIZE=1
Note: in the multisink_agent.conf file I changed the default service and the default service path:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
  - cygnus
  - mongo
networks:
  - default
expose:
  - "8666"
ports:
  - "8666:8666"
environment:
  - STH_HOST=0.0.0.0
  - STH_PORT=8666
  - DB_URI=mongo:27017
  - DB_USERNAME=
  - DB_PASSWORD=
  - LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};
// STH server configuration
//--------------------------
config.server = {
host: 'localhost',
port: '8666',
// Default value: "testservice".
defaultService: 'tese',
// Default value: "/testservicepath".
defaultServicePath: '/iot',
filterOutEmpty: 'true',
aggregationBy: ['day', 'hour', 'minute'],
temporalDir: 'temp',
maxPageSize: '100'
};
// Cors Configuration
config.cors = {
// 'enabled' is used to set the CORS policy
enabled: 'true',
options: {
origin: ['*'],
headers: [
'Access-Control-Allow-Origin',
'Access-Control-Allow-Headers',
'Access-Control-Request-Headers',
'Origin, Referer, User-Agent'
],
additionalHeaders: ['fiware-servicepath', 'fiware-service'],
credentials: 'true'
}
};
// Database configuration
//------------------------
config.database = {
dataModel: 'collection-per-entity',
user: '',
password: '',
authSource: '',
URI: 'localhost:27017',
replicaSet: '',
prefix: 'sth_',
collectionPrefix: 'sth_',
poolSize: '5',
writeConcern: '1',
shouldStore: 'both',
truncation: {
expireAfterSeconds: '0',
size: '0',
max: '0'
},
ignoreBlankSpaces: 'true',
nameMapping: {
enabled: 'false',
configFile: './name-mapping.json'
},
nameEncoding: 'false'
};
// Logging configuration
//------------------------
config.logging = {
level: 'debug',
format: 'pipe',
proofOfLifeInterval: '60',
processedRequestLogStatisticsInterval: '60'
};
module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
The subscription created in Orion for Cygnus looks like this:
{
"description": "A subscription All Entities",
"subject": {
"entities": [
{
"idPattern": ".*"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"http": {
"url": "http://cygnus:5050/notify"
},
"attrs": [],
"attrsFormat":"legacy"
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
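For reference, this is roughly how such a query can be issued with those headers attached (a minimal sketch using Python requests, not part of my setup; it assumes STH-Comet is reachable on localhost:8666 as in the URLs further down, and that omitting the headers would make STH fall back to the defaultService/defaultServicePath configured above):
import requests

headers = {
    "Fiware-service": "tese",
    "Fiware-servicepath": "/iot",
}
url = "http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature"
# Raw data query (lastN); the aggregated queries below use the same headers
resp = requests.get(url, headers=headers, params={"type": "Tank", "lastN": 10})
print(resp.status_code, resp.json())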
The entity data is stored in orion-tese, in the entities collection:
{
"_id" : {
"id" : "Tank1",
"type" : "Tank",
"servicePath" : "/iot"
},
"attrNames" : [
"temperature"
],
"attrs" : {
"temperature" : {
"value" : 0.333,
"type" : "Float",
"mdNames" : [ ],
"creDate" : 1594334464,
"modDate" : 1594337770
}
},
"creDate" : 1594334464,
"modDate" : 1594337771,
"lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese.
I have the collections:
sth_/iot_Tank1_Tank.aggr
and
sth_/iot_Tank1_Tank
The raw data in sth_/iot_Tank1_Tank looks like this in MongoDB:
{
"_id" : ObjectId("5f079d0369591c06b0fc981a"),
"temperature" : 279,
"recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
"_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
"temperature" : 0.333,
"recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
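These documents can also be read back directly with a quick pymongo check (a sketch, not part of the deployed stack; it assumes the MongoDB port 27017 is reachable from wherever it runs), which is roughly the lastN retrieval STH should perform:
from pymongo import DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["sth_tese"]["sth_/iot_Tank1_Tank"]
# Newest 10 raw samples, i.e. the equivalent of lastN=10
for doc in collection.find().sort("recvTime", DESCENDING).limit(10):
    print(doc["recvTime"], doc.get("temperature"))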
When I run: http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I get "sum": 279 and "sum": 0.333 as results. I can retrieve ALL the aggregated data: max, min, sum and sum2.
The difficulty with STH-Comet is when I try to retrieve the raw data: the return code is 200 but the value comes back empty.
I've tried both the v1 and v2 APIs, to no avail.
request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return
{
"type": "StructuredValue",
"value": []
}
request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return
{
"contextResponses": [{
"contextElement": {
"attributes": [{
"name": "temperature",
"values": []
}],
"id": "Tank1",
"isPattern": false,
"type": "Tank"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://@mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://@mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, STH-Comet accesses the raw data collection (msg=Getting access to the raw data collection for retrieval...) and confirms that the collection exists (msg=The raw data collection for retrieval exists). But it cannot retrieve the data: it reports that no raw data is available and returns no points (msg=No raw data available for the request and msg=Responding with no points).
I have already read the configuration part of the documentation, reinstalled everything several times and combed through all the settings, but I can't find anything that explains this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!

Sometimes the way in which STH tries to retrieve information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by the DATA_MODEL env var (although it would be wise to check that by looking at the env vars actually injected into the Docker container running STH, e.g. with docker inspect or the sketch below), the only way I can think of to continue debugging is to inspect the actual query STH runs on MongoDB that ends in No raw data available for the request.
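For instance, a quick way to dump those env vars (a sketch using the Docker SDK for Python; it assumes that SDK is installed and that the container is named sth, as in the compose file above):
import docker

client = docker.from_env()
container = client.containers.get("sth")
# Print every env var injected into the STH container, e.g. to spot DATA_MODEL
for var in container.attrs["Config"]["Env"]:
    print(var)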
MongoDB has a profiler that allows recording every query done in the DB. Thus the procedure would be as follows:
Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler
Start the profiler in "all queries" mode (i.e. profiling level 2)
Do the query at STH API
Stop the profiler
Check the queries recorded by the profiler as a consequence of the request done in step 3
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't know it already.
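Just to make the procedure concrete, here is a rough sketch of those steps with pymongo (an illustration only; it assumes direct access to the MongoDB instance and the sth_tese database):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["sth_tese"]
db.command("profile", 2)   # start the profiler in "all queries" mode
# ...now issue the STH API request from another terminal...
db.command("profile", 0)   # stop the profiler
# Inspect what was recorded against the raw data collection
for op in db["system.profile"].find({"ns": "sth_tese.sth_/iot_Tank1_Tank"}):
    print(op.get("op"), op.get("command") or op.get("query"))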
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!

Related

Openstack API not providing precise data

I am using OpenStack Stein on CentOS 7.9.
I am using Python to collect data about OpenStack Nova, such as the server names and IDs in the OpenStack project. I have 3 instances (servers) created and I can see all three in the OpenStack CLI, but when I connect to the API mentioned in the OpenStack docs, it provides no data or less data than expected.
I referred to the OpenStack documentation here
[root@centos-vm1 kavin(keystone_admin)]# openstack server list
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
| 08cf6226-0303-4b4c-ba53-10af79b81dae | test_instance_3 | ACTIVE | test_networ_3=10.150.0.8 | | m1.tiny |
| 9986f205-82b3-4cbb-bcdc-fb32eab97c83 | test_instance_1 | ACTIVE | test_networ_2=10.100.0.5, x.x.x.x | | m1.small |
| d1c0f520-8540-432c-8fe1-554390fd79bf | test_instance_2 | ACTIVE | test_networ_1=10.50.0.8 | | m1.small |
+--------------------------------------+-----------------+--------+----------------------------------------+-------+----------+
My python code:
import requests,json
from six.moves.urllib.parse import urljoin
identity = {
"methods": ["password"],
"password": {
"user": {
"name": "admin",
"domain": { "id": "default" },
"password": "xxxxxxxxxxxxxxx"
}
}
}
OS_AUTH_URL = 'http://x.x.x.x:5000/v3'
data = {'auth': {'identity': identity}}
HEADERS = {'Content-Type': 'application/json', 'scope': 'unscoped'}
r = requests.post(
OS_AUTH_URL+'/auth/tokens',
headers = HEADERS,
json = data,
verify = False
)
auth_token = r.headers['X-Subject-Token'] # i got auth token
# server list
NOVA_URL="http://x.x.x.x:8774/v2.1"
HEADERS = {"X-Auth-Token" : str(auth_token)}
r = requests.get(
NOVA_URL+'/servers',
headers = HEADERS,
)
r.raise_for_status()
print(r.json())
Output :
{'servers': []}
Help me collect accurate data using the API calls, thanks.
According to the api-ref List Servers doc, maybe you should add the project scope to the request.
By default the servers are filtered using the project ID associated with the authenticated request.
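So one thing to try is requesting a project-scoped token. A rough sketch, reusing the identity block and OS_AUTH_URL from the question (the project name and domain here are placeholders):
import requests

scoped_data = {
    "auth": {
        "identity": identity,  # same identity dict as in the question
        "scope": {
            "project": {
                "name": "admin",              # placeholder project name
                "domain": {"id": "default"},
            }
        },
    }
}
r = requests.post(
    OS_AUTH_URL + "/auth/tokens",
    headers={"Content-Type": "application/json"},
    json=scoped_data,
    verify=False,
)
auth_token = r.headers["X-Subject-Token"]  # token is now scoped to the project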
In my opinion, you could use openstacksdk to execute the operation, simply with the Connection object and list_servers method.
import openstack
conn = openstack.connect(
region_name='example-region',
auth_url='http://x.x.x.x:5000/v3/',
username='amazing-user',
password='super-secret-password',
project_id='33...b5',
domain_id='05...03'
)
servers = conn.list_servers()

Prisma migration to MySQL on localhost Fails

I am new to Prisma & MySQL. I am trying to migrate some sample tables to MySQL. Please, I need your help...
File - schema.prisma
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
}
...
File - .env
DATABASE_URL=mysql://admin:admin@localhost:3306/nss
...
MySQL Shell
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select user();
+-----------------+
| user() |
+-----------------+
| admin@localhost |
+-----------------+
1 row in set (0.00 sec)
mysql> SHOW GLOBAL VARIABLES LIKE 'PORT';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port | 3306 |
+---------------+-------+
1 row in set (0.25 sec)
mysql>
package.json
{
"name": "node-graphql",
"version": "1.0.0",
"license": "MIT",
"scripts": {
"dev": "nodemon src/server.js"
},
"prettier": {
"semi": false,
"singleQuote": true,
"trailingComma": "all"
},
"dependencies": {
"apollo-server": "3.6.3",
"graphql": "15.8.0",
"graphql-scalars": "1.14.1",
"nexus": "1.2.0"
},
"devDependencies": {
"#prisma/client": "^3.12.0",
"nodemon": "2.0.15",
"prisma": "^3.12.0"
},
"prisma": {
"seed": "node prisma/seed.js"
}
}
In VS-Code Terminal, I issued the command
npx prisma migrate dev
I got the message: Error: Migration engine error:

AWS CDK Pipeline Error - No stack found matching "xxxxx"

I am having a hard time with the last CDK pipeline I have deployed. I have followed the steps here: https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html and the overall experience has been quite painful.
First of all I had to manually update the S3 bucket policy to let the pipeline read/write from the bucket, as I was otherwise getting Access Denied 403 errors.
That part seems resolved, but now, in the "UpdatePipeline" stage, I am getting failures with this error message: Error: No stack found matching 'PTPipelineStack'. Use "list" to print manifest, even though the stack clearly exists in CloudFormation and, if I run the cdk list command from the CLI, I do see PTPipelineStack. I have destroyed the pipeline and redeployed it a few times "just in case", but that didn't really help.
Any suggestion as to what can be done to help with this?
bin/file.ts:
#!/usr/bin/env node
import * as cdk from '@aws-cdk/core'
import 'source-map-support/register'
import { MyPipelineStack } from '../lib/build-pipeline'
const app = new cdk.App()
const pipelineStack = new MyPipelineStack(app, 'PTPipelineStack', {
env: {
account: 'xxxxxxxxxxxx',
region: 'eu-west-1',
},
})
app.synth()
lib/build-pipeline.ts:
import * as codepipeline from '@aws-cdk/aws-codepipeline'
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions'
import { Construct, Stack, StackProps, Stage, StageProps } from '@aws-cdk/core'
import { CdkPipeline, SimpleSynthAction } from '@aws-cdk/pipelines'
import { PasstimeStack } from './passtime-stack'
export class MyApplication extends Stage {
constructor(scope: Construct, id: string, props?: StageProps) {
super(scope, id, props)
new PasstimeStack(this, 'Cognito')
}
}
export class MyPipelineStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props)
const sourceArtifact = new codepipeline.Artifact()
const cloudAssemblyArtifact = new codepipeline.Artifact()
const pipeline = new CdkPipeline(this, 'Pipeline', {
pipelineName: 'PassTimeAppPipeline',
cloudAssemblyArtifact,
sourceAction: new codepipeline_actions.BitBucketSourceAction({
actionName: 'Github',
connectionArn:
'arn:aws:codestar-connections:eu-west-1:xxxxxxxxxxxxxxx',
owner: 'owner',
repo: 'repo',
branch: 'dev',
output: sourceArtifact,
}),
synthAction: SimpleSynthAction.standardNpmSynth({
sourceArtifact,
cloudAssemblyArtifact,
installCommand: 'npm ci',
environment: {
privileged: true,
},
}),
})
pipeline.addApplicationStage(
new MyApplication(this, 'Dev', {
env: {
account: 'xxxxxxxx',
region: 'eu-west-1',
},
})
)
}
}
deps on my package.json:
"devDependencies": {
"#aws-cdk/assert": "^1.94.1",
"#types/jest": "^26.0.21",
"#types/node": "14.14.35",
"aws-cdk": "^1.94.1",
"jest": "^26.4.2",
"ts-jest": "^26.5.4",
"ts-node": "^9.0.0",
"typescript": "4.2.3"
},
"dependencies": {
"#aws-cdk/aws-appsync": "^1.94.1",
"#aws-cdk/aws-cloudfront": "^1.94.1",
"#aws-cdk/aws-cloudfront-origins": "^1.94.1",
"#aws-cdk/aws-codebuild": "^1.94.1",
"#aws-cdk/aws-codepipeline": "^1.94.1",
"#aws-cdk/aws-codepipeline-actions": "^1.94.1",
"#aws-cdk/aws-cognito": "^1.94.1",
"#aws-cdk/aws-dynamodb": "^1.94.1",
"#aws-cdk/aws-iam": "^1.94.1",
"#aws-cdk/aws-kms": "^1.94.1",
"#aws-cdk/aws-lambda": "^1.94.1",
"#aws-cdk/aws-lambda-nodejs": "^1.94.1",
"#aws-cdk/aws-pinpoint": "^1.94.1",
"#aws-cdk/aws-s3": "^1.94.1",
"#aws-cdk/aws-s3-deployment": "^1.94.1",
"#aws-cdk/core": "^1.94.1",
"#aws-cdk/custom-resources": "^1.94.1",
"#aws-cdk/pipelines": "^1.94.1",
"#aws-sdk/s3-request-presigner": "^3.9.0",
"source-map-support": "^0.5.16"
}
Code Build Logs:
[Container] 2021/03/19 17:43:59 Entering phase INSTALL
--
16 | [Container] 2021/03/19 17:43:59 Running command npm install -g aws-cdk
17 | /usr/local/bin/cdk -> /usr/local/lib/node_modules/aws-cdk/bin/cdk
18 | + aws-cdk@1.94.1
19 | added 193 packages from 186 contributors in 6.404s
20 |
21 | [Container] 2021/03/19 17:44:09 Phase complete: INSTALL State: SUCCEEDED
22 | [Container] 2021/03/19 17:44:09 Phase context status code: Message:
23 | [Container] 2021/03/19 17:44:09 Entering phase PRE_BUILD
24 | [Container] 2021/03/19 17:44:10 Phase complete: PRE_BUILD State: SUCCEEDED
25 | [Container] 2021/03/19 17:44:10 Phase context status code: Message:
26 | [Container] 2021/03/19 17:44:10 Entering phase BUILD
27 | [Container] 2021/03/19 17:44:10 Running command cdk -a . deploy PTPipelineStack --require-approval=never --verbose
28 | CDK toolkit version: 1.94.1 (build 60d8f91)
29 | Command line arguments: {
30 | _: [ 'deploy' ],
31 | a: '.',
32 | app: '.',
33 | 'require-approval': 'never',
34 | requireApproval: 'never',
35 | verbose: 1,
36 | v: 1,
37 | lookups: true,
38 | 'ignore-errors': false,
39 | ignoreErrors: false,
40 | json: false,
41 | j: false,
42 | debug: false,
43 | ec2creds: undefined,
44 | i: undefined,
45 | 'version-reporting': undefined,
46 | versionReporting: undefined,
47 | 'path-metadata': true,
48 | pathMetadata: true,
49 | 'asset-metadata': true,
50 | assetMetadata: true,
51 | 'role-arn': undefined,
52 | r: undefined,
53 | roleArn: undefined,
54 | staging: true,
55 | 'no-color': false,
56 | noColor: false,
57 | fail: false,
58 | all: false,
59 | 'build-exclude': [],
60 | E: [],
61 | buildExclude: [],
62 | ci: false,
63 | execute: true,
64 | force: false,
65 | f: false,
66 | parameters: [ {} ],
67 | 'previous-parameters': true,
68 | previousParameters: true,
69 | '$0': '/usr/local/bin/cdk',
70 | STACKS: [ 'PTPipelineStack' ],
71 | 'S-t-a-c-k-s': [ 'PTPipelineStack' ]
72 | }
73 | merged settings: {
74 | versionReporting: true,
75 | pathMetadata: true,
76 | output: 'cdk.out',
77 | app: '.',
78 | context: {},
79 | debug: false,
80 | assetMetadata: true,
81 | requireApproval: 'never',
82 | toolkitBucket: {},
83 | staging: true,
84 | bundlingStacks: [ '*' ],
85 | lookups: true
86 | }
87 | Toolkit stack: CDKToolkit
88 | Setting "CDK_DEFAULT_REGION" environment variable to eu-west-1
89 | Resolving default credentials
90 | Looking up default account ID from STS
91 | Default account ID: xxxxxx
92 | Setting "CDK_DEFAULT_ACCOUNT" environment variable to xxxxxxxxx
93 | context: {
94 | 'aws:cdk:enable-path-metadata': true,
95 | 'aws:cdk:enable-asset-metadata': true,
96 | 'aws:cdk:version-reporting': true,
97 | 'aws:cdk:bundling-stacks': [ '*' ]
98 | }
99 | --app points to a cloud assembly, so we bypass synth
100 | No stack found matching 'PTPipelineStack'. Use "list" to print manifest
101 | Error: No stack found matching 'PTPipelineStack'. Use "list" to print manifest
102 | at CloudAssembly.selectStacks (/usr/local/lib/node_modules/aws-cdk/lib/api/cxapp/cloud-assembly.ts:115:15)
103 | at CdkToolkit.selectStacksForDeploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:385:35)
104 | at CdkToolkit.deploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:111:20)
105 | at initCommandLine (/usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:208:9)
106 |
107 | [Container] 2021/03/19 17:44:10 Command did not exit successfully cdk -a . deploy PTPipelineStack --require-approval=never --verbose exit status 1
108 | [Container] 2021/03/19 17:44:10 Phase complete: BUILD State: FAILED
109 | [Container] 2021/03/19 17:44:10 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: cdk -a . deploy PTPipelineStack --require-approval=never --verbose. Reason: exit status 1
110 | [Container] 2021/03/19 17:44:10 Entering phase POST_BUILD
111 | [Container] 2021/03/19 17:44:10 Phase complete: POST_BUILD State: SUCCEEDED
112 | [Container] 2021/03/19 17:44:10 Phase context status code: Message:
I ran into the same issue and I'm not sure exactly how I fixed it, but here are some things to try:
Make sure you have your dev branch pushed to GitHub and not just committed locally, because that's what your pipeline is pointing to (this was my problem).
I was using 1.94.1 but swapped to 1.94.0 - not sure if this helped.
I pin all my CDK versions (removing the ^) so they don't conflict with different versions at some point.
I finally had a breakthrough yesterday. The issue I outlined above was a consequence of a problem that started earlier in the pipeline, which was in fact lacking permissions to access the artifacts S3 bucket. The original error message appeared at the Source stage:
Upload to S3 failed with the following error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxx; S3 Extended Request ID: xxxx; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null)
I had unblocked the pipeline by creating a bucket policy on the artifact bucket but as stated previously that actually only pushed the issue further down the line. But focusing on the original issue I actually realised that the CDK was not granting sufficient permissions to one of the roles it created.
As of today, in order to use a Github repo with an organisation one needs to use the "Github v2" integration, that relies on CodeStar. (v1 = access tokens = private repos).
Currently the only way to set this up with the CDK is to use the BitBucketSourceAction as seen in my code above.
Interestingly, when deploying a new pipeline stack, the CDK creates the dedicated IAM role and grants the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "codestar-connections:UseConnection",
"Resource": "arn:aws:codestar-connections:eu-west-1:xxxxx:connection/xxxx",xx
"Effect": "Allow"
},
{
"Action": [
"s3:GetObject*",
"s3:GetBucket*",
"s3:List*",
"s3:DeleteObject*",
"s3:PutObject",
"s3:Abort*"
],
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/*"
],
"Effect": "Allow"
},
{
"Action": [
"kms:Decrypt",
"kms:DescribeKey",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*"
],
"Resource": "arn:aws:kms:eu-west-1:xxxxxxx:key/xxxxx",
"Effect": "Allow"
}
]
}
This looks OK at first but turns out not to be sufficient for the pipeline to access the bucket and go through the stages. I suspect that it is missing PutBucketPolicy permissions. I have currently fixed it by replacing the specific actions with s3:*, but that should be fine-tuned.
In the end I am using the latest and greatest 1.94.1, it is not a deps issue but a CDK one. I will raise an issue with the aws-cdk gang. 👍

AWS CLI : How to get the API Gateway ID

Is there a way to get the API Gateway ID by name, or to iterate the list and return it by name, from the AWS CLI? I tried the following and it doesn't return anything:
aws apigateway get-rest-apis --query 'items[?name==`TestAPI`].value' --output text --region us-east-1
Thanks in advance.
Update: the list output is
"items": [
{
"id": "5aa9gcij77",
"name": "JavaLamdba",
"description": "JavaLamdba",
"createdDate": 1608225655,
"apiKeySource": "HEADER",
"endpointConfiguration": {
"types": [
"REGIONAL"
]
}
},
aws apigateway get-rest-apis --query 'items[?name==`JavaLamdba`].id' --output text --region us-east-1
This should give you the expected result
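If you prefer doing it from Python, the same lookup can be sketched with boto3 (assumes default AWS credentials and region are configured; only the first page of results is checked):
import boto3

client = boto3.client("apigateway", region_name="us-east-1")
apis = client.get_rest_apis()
# Keep the IDs of the APIs whose name matches exactly
ids = [item["id"] for item in apis["items"] if item["name"] == "JavaLamdba"]
print(ids)  # e.g. ['5aa9gcij77']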
> select name, description, created_date from aws.aws_api_gateway_rest_api where name = 'lambda-test';
+-------------+-------------+---------------------+
| name | description | created_date |
+-------------+-------------+---------------------+
| lambda-test | lambda-test | 2019-07-25 09:05:16 |
+-------------+-------------+---------------------+
https://steampipe.io/

Cygnus-NGSI won't save data in PostgreSQL

I'm trying to save some data in a PostgreSQL database using cygnus-ngsi, but nothing happens. I'm running all services in Docker containers using docker-compose.
docker-compose.yml:
...
cygnus:
  image: fiware/cygnus-ngsi:latest
  hostname: cygnus
  container_name: cygnus_fiware
  volumes:
    - ./config/cygnus/cygnus.conf:/opt/apache-flume/conf/agent.conf
    - ./config/cygnus/grouping_rules.conf:/opt/apache-flume/conf/grouping_rules.conf
  links:
    - orion
    - postgres
  expose:
    - "5050"
  ports:
    - "5050:5050"
postgres:
  restart: always
  image: postgres:latest
  container_name: postgres_fiware
  volumes:
    - ./data/db/postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"
  expose:
    - "5432"
  environment:
    - POSTGRES_USER=teste
    - POSTGRES_DB=newdb
    - POSTGRES_PASSWORD=123456789
agent.conf
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = postgresql-sink
cygnus-ngsi.channels = postgresql-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnus-ngsi.sources.http-source.channels = postgresql-channel
# source class, must not be changed
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnus-ngsi.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
# URL target
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
# default service (service semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service = default
# default service path (service path semantic depends on the persistence sink)
cygnus-ngsi.sources.http-source.handler.default_service_path = /
# source interceptors, do not change
cygnus-ngsi.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnus-ngsi.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.NGSIGroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# see the doc/design/interceptors document for more details
cygnus-ngsi.sources.http-source.interceptors.gi.grouping_rules_conf_file = /opt/apache-flume/conf/grouping_rules.conf
# ============================================
# NGSIPostgreSQLSink configuration
# channel name from where to read notification events
cygnus-ngsi.sinks.postgresql-sink.channel = postgresql-channel
# sink class, must not be changed
cygnus-ngsi.sinks.postgresql-sink.type = com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink
# true applies the new encoding, false applies the old encoding.
cygnus-ngsi.sinks.postgresql-sink.enable_encoding = false
# true if name mappings are enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_name_mappings = false
# true if the grouping feature is enabled for this sink, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_grouping = false
# true if lower case is wanted to forced in all the element names, false otherwise
cygnus-ngsi.sinks.postgresql-sink.enable_lowercase = false
# the FQDN/IP address where the PostgreSQL server runs
cygnus-ngsi.sinks.postgresql-sink.postgresql_host = postgres
# the port where the PostgreSQL server listens for incomming connections
cygnus-ngsi.sinks.postgresql-sink.postgresql_port = 5432
# the name of the postgresql database
cygnus-ngsi.sinks.postgresql-sink.postgresql_database = newdb
# a valid user in the PostgreSQL server
cygnus-ngsi.sinks.postgresql-sink.postgresql_username = teste
# password for the user above
cygnus-ngsi.sinks.postgresql-sink.postgresql_password = 123456789
# how the attributes are stored, either per row either per column (row, column)
cygnus-ngsi.sinks.postgresql-sink.attr_persistence = row
# select the data_model: dm-by-service-path or dm-by-entity
cygnus-ngsi.sinks.postgresql-sink.data_model = dm-by-entity
# number of notifications to be included within a processing batch
cygnus-ngsi.sinks.postgresql-sink.batch_size = 100
# timeout for batch accumulation
cygnus-ngsi.sinks.postgresql-sink.batch_timeout = 30
# number of retries upon persistence error
cygnus-ngsi.sinks.postgresql-sink.batch_ttl = 10
# =============================================
# postgresql-channel configuration
# channel type (must not be changed)
cygnus-ngsi.channels.postgresql-channel.type = memory
# capacity of the channel
cygnus-ngsi.channels.postgresql-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnus-ngsi.channels.postgresql-channel.transactionCapacity = 100
Messages from docker-compose:
cygnus_fiware | time=2017-06-19T15:11:15.132Z | lvl=WARN | corr=4af15a0c-5501-11e7-aa0a-0242ac130004 | trans=508576db-1443-4c64-bfc9-629d1a0b250e | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[257] : [NGSIRestHandler] Unnecessary header
cygnus_fiware | time=2017-06-19T15:11:15.133Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[282] : [NGSIRestHandler] Starting internal transaction (3ae0dc99-de51-49a3-937f-42d887b7e7d7)
cygnus_fiware | time=2017-06-19T15:11:15.133Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=getEvents | msg=com.telefonica.iot.cygnus.handlers.NGSIRestHandler[299] : [NGSIRestHandler] Received data ({ "subscriptionId" : "5947d328e143997a02b11008", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "EstacaoMeteo", "isPattern" : "false", "id" : "Estacao3", "attributes" : [ { "name" : "Humidity", "type" : "float", "value" : "35.3", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2017-06-24T13:03:00" } ] }, { "name" : "Temperature", "type" : "float", "value" : "15.2", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2017-06-24T13:03:00" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
cygnus_fiware | time=2017-06-19T15:11:36.141Z | lvl=INFO | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=persistAggregation | msg=com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink[479] : [postgresql-sink] Persisting data at NGSIPostgreSQLSink. Schema (espometeo), Table (environment_estacao3_estacaometeo), Fields ((recvTimeTs,recvTime,fiwareServicePath,entityId,entityType,attrName,attrType,attrValue,attrMd)), Values (('1497885075139','2017-06-19T15:11:15.139Z','/environment','Estacao3','EstacaoMeteo','Humidity','float','35.3','[{"name":"TimeInstant","type":"ISO8601","value":"2017-06-24T13:03:00"}]'),('1497885075139','2017-06-19T15:11:15.139Z','/environment','Estacao3','EstacaoMeteo','Temperature','float','15.2','[{"name":"TimeInstant","type":"ISO8601","value":"2017-06-24T13:03:00"}]'))
cygnus_fiware | time=2017-06-19T15:11:36.142Z | lvl=WARN | corr=89ac4c66-5501-11e7-850f-0242ac130004 | trans=3ae0dc99-de51-49a3-937f-42d887b7e7d7 | srv=espometeo | subsrv=/environment | comp=cygnus-ngsi | op=processNewBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[541] :
It seems like Cygnus is getting all the data from Orion and handling it correctly, but when I go to the PostgreSQL DB, there is nothing there. Has anyone had this problem before?
Persistence ERROR message:
cygnus_fiware | time=2017-06-22T09:45:06.092Z | lvl=ERROR | corr=N/A | trans=N/A | srv=N/A | subsrv=N/A | comp=cygnus-ngsi | op=processRollbackedBatches | msg=com.telefonica.iot.cygnus.sinks.NGSISink[398] : CygnusPersistenceError. -, null. Stack trace: [com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink.persistAggregation(NGSIPostgreSQLSink.java:504), com.telefonica.iot.cygnus.sinks.NGSIPostgreSQLSink.persistBatch(NGSIPostgreSQLSink.java:231), com.telefonica.iot.cygnus.sinks.NGSISink.processRollbackedBatches(NGSISink.java:390), com.telefonica.iot.cygnus.sinks.NGSISink.process(NGSISink.java:372), org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68), org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147), java.lang.Thread.run(Thread.java:748)]
Cygnus start ERRORS:
cygnus_fiware | + exec /usr/lib/jvm/java-6-sun/bin/java -Xms512m -Xmx1g -Dcom.sun.management.jmxremote -Dflume.root.logger=INFO,console -Duser.timezone=UTC -Dfile.encoding=UTF-8 -cp '/opt/apache-flume/conf:/opt/apache-flume/lib/*:/opt/apache-flume/plugins.d/cygnus/lib/*:/opt/apache-flume/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -f /opt/apache-flume/conf/agent.conf -n cygnus-ngsi -p 8081
cygnus_fiware | /opt/apache-flume/bin/cygnus-flume-ng: line 232: /usr/lib/jvm/java-6-sun/bin/java: No such file or directory
cygnus_fiware | /opt/apache-flume/bin/cygnus-flume-ng: line 232: exec: /usr/lib/jvm/java-6-sun/bin/java: cannot execute: No such file or directory
There is a problem with the PostgreSQL sink when the cache is not enabled. It is described in this issue.
I solved the issue by changing false to true for the following property in the agent.conf file: cygnus-ngsi.sinks.postgresql-sink.backend.enable_cache = true
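To double-check whether rows are actually reaching PostgreSQL, a quick query against the schema and table named in the Cygnus log above can help (a sketch with psycopg2, using the credentials from the docker-compose file; it assumes port 5432 is reachable from where it runs):
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432, dbname="newdb", user="teste", password="123456789"
)
with conn, conn.cursor() as cur:
    # Schema/table taken from the persistAggregation log line above
    cur.execute("SELECT * FROM espometeo.environment_estacao3_estacaometeo LIMIT 10")
    for row in cur.fetchall():
        print(row)
conn.close()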