I am using the Groovy RESTClient for some API automation.
The issue I am having is that when I do a GET call, the results I get back are missing all the quotes and formatting I would expect, which makes the results hard to parse.
Example:
What I would expect (and what I am getting back from my API) ...
{
    "results": [{
        "licenseType": "mobileAppLicensesWithDevice",
        "name": "Lead Retrieval - Device Rental & App license",
        "owner": {
            "id": "a705c768-ee33-491d-a993-4dd7bc61228b",
            "entityType": "exhibitor"
        },
        "termId": "630493a4-4a70-4f4f-afaf-31610c14c181",
        "id": "c215affe-ed0f-4014-8ba8-53f9df97942a",
        "readableId": "77umkh20kq3",
        "accessCode": "0w7zh6t",
        "updatedAt": "2016-09-22T17:02:06.911Z",
        "createdAt": "2016-09-22T17:02:06.911Z"
    }, {
        "licenseType": "mobileAppLicensesWithDevice",
        "name": "Lead Retrieval - Device Rental & App license",
        "owner": {
            "id": "a705c768-ee33-491d-a993-4dd7bc61228b",
            "entityType": "exhibitor"
        },
        "termId": "630493a4-4a70-4f4f-afaf-31610c14c181",
        "id": "4249aedb-934f-4db1-89d6-6f7f10152bf5",
        "readableId": "fyzv5ay7tfs",
        "accessCode": "ray0pwb",
        "updatedAt": "2016-09-22T17:02:06.911Z",
        "createdAt": "2016-09-22T17:02:06.912Z"
    }, {
        "licenseType": "mobileAppLicensesWithDevice",
        "name": "Lead Retrieval - Device Rental & App license",
        "owner": {
            "id": "a705c768-ee33-491d-a993-4dd7bc61228b",
            "entityType": "exhibitor"
        },
        "termId": "630493a4-4a70-4f4f-afaf-31610c14c181",
        "id": "ea6de933-0043-435c-ad4e-32b81018ec05",
        "readableId": "ytby08d586x",
        "accessCode": "3sv1lj6",
        "updatedAt": "2016-09-22T17:02:06.912Z",
        "createdAt": "2016-09-22T17:02:06.912Z"
    }],
    "etags": [{}, {}, {}],
    "total": 3
}
And here is what I am getting back when I use the RESTClient ...
{
etags = [{}, {}, {}], results = [{
accessCode = wvnc16i,
createdAt = 2016 - 09 - 23 T15: 08: 20.673 Z,
id = 06 f7bb76 - cc0c - 450 d - a4af - fcf3392fbd1b,
licenseType = mobileAppLicensesWithDevice,
name = Lead Retrieval - Device Rental & App license,
owner = {
entityType = exhibitor,
id = adc7a8e5 - b137 - 40 c6 - 8765 - deb3ce0b8b3f
},
readableId = r52jvlr0ok7,
termId = 630493 a4 - 4 a70 - 4 f4f - afaf - 31610 c14c181,
updatedAt = 2016 - 09 - 23 T15: 08: 20.673 Z
}, {
accessCode = nxf2dzw,
createdAt = 2016 - 09 - 23 T15: 08: 20.673 Z,
id = bda7ec58 - 5 c64 - 4082 - 9 da1 - 48534 dd6afcc,
licenseType = mobileAppLicensesWithDevice,
name = Lead Retrieval - Device Rental & App license,
owner = {
entityType = exhibitor,
id = adc7a8e5 - b137 - 40 c6 - 8765 - deb3ce0b8b3f
},
readableId = b11yew6bqra,
termId = 630493 a4 - 4 a70 - 4 f4f - afaf - 31610 c14c181,
updatedAt = 2016 - 09 - 23 T15: 08: 20.673 Z
}, {
accessCode = 3e7 f1ip,
createdAt = 2016 - 09 - 23 T15: 08: 20.674 Z,
id = 2e12657 a - 8 a62 - 4 d2c - b5d2 - 2 a3fe4c27f9e,
licenseType = mobileAppLicensesWithDevice,
name = Lead Retrieval - Device Rental & App license,
owner = {
entityType = exhibitor,
id = adc7a8e5 - b137 - 40 c6 - 8765 - deb3ce0b8b3f
},
readableId = ye410zlhqe6,
termId = 630493 a4 - 4 a70 - 4 f4f - afaf - 31610 c14c181,
updatedAt = 2016 - 09 - 23 T15: 08: 20.674 Z
}], total = 3
}
* EDIT *
Example of the code that's implementing this ...
def getLicenseId(String eventId, String exhibitorId) {
    String licenseId
    def apiGetLicenseId = webClient.get(
        path: "event_api/events/${eventId}/exhibitors/${exhibitorId}/licenses",
        headers: ['Authorization': "api_key $apiKey"],
        requestContentType: JSON
    )
    assert apiGetLicenseId.status == 200 : "API get license ID failed!\nAPI Response: ${apiGetLicenseId.data}"
    String index = apiGetLicenseId.data
    def slurper = new JsonSlurper().parseText(index)
    slurper.each {
        println(it)
    }
    licenseId = apiGetLicenseId.data.results[1].id
    println("Index = " + index)
    println("Index Length = " + slurper)
    println("License Id = " + licenseId)
    return licenseId
}
Any thoughts on what the issue is here and how it can be resolved?
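For context on the symptom: with RESTClient, the JSON response body is parsed automatically, so apiGetLicenseId.data appears to already be a map rather than a JSON string; stringifying a parsed map produces key = value notation without quotes, and feeding that string back into JsonSlurper cannot work. The same effect can be sketched in Python (illustration only, not the Groovy library; the sample payload is abbreviated):

```python
import json

raw = '{"results": [{"id": "c215affe", "accessCode": "0w7zh6t"}], "total": 1}'

parsed = json.loads(raw)   # the REST client has already done this step for you
print(str(parsed))         # quotes/formatting of the original JSON are gone
print(json.dumps(parsed))  # re-serializing yields valid JSON text again

# Passing the stringified structure to a JSON parser fails, just like
# passing the stringified map to JsonSlurper does:
try:
    json.loads(str(parsed))
except json.JSONDecodeError:
    print("not valid JSON")
```

If this is what is happening, re-serializing the parsed data (in Groovy, something like JsonOutput.toJson(apiGetLicenseId.data)) would recover quoted JSON text; that is an assumption about the client's behavior, not something confirmed in the question.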
This is my configuration file. I'm using the new profile from https://thingsboard.io/docs/iot-gateway/config/modbus/ [new_modbus.json]. In the new configuration file it seems to be possible to configure several different devices under { "master": { "slaves": [] } }, but when I do, I can't get the right results.
{
"master":{
"slaves":[
{
"unitId":1,
"deviceName":"test1",
"attributesPollPeriod":5000,
"timeseriesPollPeriod":5000,
"sendDataOnlyOnChange":false,
"attributes":[
{
"byteOrder":"BIG",
"tag":"temperature",
"type":"bytes",
"functionCode":3,
"registerCount":1,
"address":1
}
],
"timeseries":[
{
"tag":"distance",
"type":"bytes",
"functionCode":3,
"objectsCount":1,
"address":2
}
],
"attributeUpdates":[
{
"tag":"shared_value_1",
"type":"32uint",
"functionCode":6,
"objectsCount":2,
"address":3
},
{
"tag":"shared_value_2",
"type":"32uint",
"functionCode":6,
"objectsCount":2,
"address":4
}
],
"rpc":[
{
"tag":"bearing_bpfo",
"type":"32uint",
"functionCode":6,
"objectsCount":2,
"address":5
}
],
"host":null,
"port":"/dev/ttyUSB0",
"type":"serial",
"method":"rtu",
"timeout":35,
"byteOrder":"BIG",
"wordOrder":"BIG",
"retries":null,
"retryOnEmpty":null,
"retryOnInvalid":null,
"baudrate":9600,
"pollPeriod":5000,
"connectAttemptCount":1
},
{
"unitId":2,
"deviceName":"Test2",
"attributesPollPeriod":5000,
"timeseriesPollPeriod":5000,
"sendDataOnlyOnChange":false,
"attributes":[
{
"byteOrder":"BIG",
"tag":"temperature",
"type":"bytes",
"functionCode":3,
"registerCount":1,
"address":10
}
],
"timeseries":[
{
"tag":"distance",
"type":"bytes",
"functionCode":3,
"objectsCount":1,
"address":11
}
],
"attributeUpdates":[
{
"tag":"shared_value_1",
"type":"32uint",
"functionCode":6,
"objectsCount":2,
"address":12
}
],
"host":null,
"port":"/dev/ttyUSB0",
"type":"serial",
"method":"rtu",
"timeout":35,
"byteOrder":"BIG",
"wordOrder":"BIG",
"retries":null,
"retryOnEmpty":null,
"retryOnInvalid":null,
"baudrate":9600,
"pollPeriod":5000,
"connectAttemptCount":5
}
]
},
"slave":null
}
The Connector name I am using is the Modbus Connector, and the version information for my deployment is as follows:
OS: Raspberry Pi
Thingsboard IoT Gateway version : 3.0.1
Python version : 3.9.2
Error traceback:
"2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 87 - datatype: telemetry key: distance value: None
"2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 92 - {'deviceName': 'testUpdate', 'deviceType': 'default', 'telemetry': [], 'attributes': []}
"2022-05-11 15:28:10" - |ERROR| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 83 - Modbus Error: [Input/Output] device reports readiness to read but returned no data (device disconnected or multiple access on port?)
NoneType: None
"2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 87 - datatype: telemetry key: distance value: None
"2022-05-11 15:28:10" - |DEBUG| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 92 - {'deviceName': 'RpcTest', 'deviceType': 'default', 'telemetry': [], 'attributes': []}
"2022-05-11 15:28:10" - |ERROR| - [bytes_modbus_uplink_converter.py] - bytes_modbus_uplink_converter - convert - 83 - Modbus Error: [Input/Output] device reports readiness to read but returned no data (device disconnected or multiple access on port?)
NoneType: None
I am trying to implement data deduplication using Kafka streams. Basically, I'd like to drop any duplicates after the first encountered message in a session window with a 1-second size and an 8-hour grace period for late arrivals.
A more concrete example:
Input:
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:22.555 }
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.876 }
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.574 }
Key: B2; Value: { sensor: B2, current: 42, timestamp: Fri Apr 15 2022 21:59:24.732 }
Desired output:
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:22.555 }
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.876 }
Key: B2; Value: { sensor: B2, current: 42, timestamp: Fri Apr 15 2022 21:59:24.732 }
so the third message
{ sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.574 }
from the input stream should be dropped, since its sensor and current fields match what we already have within the defined 1-second window.
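The intended behaviour can be sketched with a toy in-memory model, plain Python rather than Kafka Streams (the dedupe helper is hypothetical and ignores the 8-hour grace period and session merging): a record is dropped when one with the same (sensor, current) key has already been kept within one second of it.

```python
from datetime import datetime, timedelta

def dedupe(records, gap=timedelta(seconds=1)):
    """Keep a record only if no record with the same (sensor, current)
    key was already kept within `gap` of it."""
    last_kept = {}  # (sensor, current) -> latest kept timestamp
    kept = []
    for rec in records:
        key = (rec["sensor"], rec["current"])
        prev = last_kept.get(key)
        # a late arrival (timestamp < prev) also counts as a duplicate
        if prev is None or rec["timestamp"] - prev > gap:
            kept.append(rec)
        last_kept[key] = rec["timestamp"] if prev is None else max(prev, rec["timestamp"])
    return kept

records = [
    {"sensor": "A1", "current": 42, "timestamp": datetime(2022, 4, 15, 21, 59, 22, 555000)},
    {"sensor": "A1", "current": 42, "timestamp": datetime(2022, 4, 15, 21, 59, 23, 876000)},
    {"sensor": "A1", "current": 42, "timestamp": datetime(2022, 4, 15, 21, 59, 23, 574000)},  # late duplicate
    {"sensor": "B2", "current": 42, "timestamp": datetime(2022, 4, 15, 21, 59, 24, 732000)},
]
print([r["timestamp"].second for r in dedupe(records)])  # [22, 23, 24]
```

Unlike this sketch, Kafka's session windows also merge overlapping sessions and emit an updated aggregate for each merge, which is what the question is about.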
Here's sample code:
streamsBuilder
.stream(
"input-topic",
Consumed.with(Serdes.String(), telemetrySerde)
// set AVRO record as ingestion timestamp
.withTimestampExtractor(ValueTimestampExtractor())
)
.groupBy(
{ _, value ->
TelemetryDuplicate(
sensor = value.sensor,
current = value.current
)
},
Grouped.with(telemetryDuplicateSerde, telemetrySerde)
)
.windowedBy(
SessionWindows.ofInactivityGapAndGrace(
/* inactivityGap = */ Duration.ofSeconds(1),
/* afterWindowEnd = */ Duration.ofHours(8)
)
)
// always keep only the first record of the group
.reduce { value, _ -> value }
.toStream()
.selectKey { k, _ -> k.key().sensor }
.to("output-topic", Produced.with(Serdes.String(), telemetrySerde))
It is actually working and it does the job. HOWEVER, despite the fact that I rekey the resulting stream from the windowed key to just the sensor id, I have the following messages in the output-topic:
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:22.555 }
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.876 }
Key: A1; Value: NULL
Key: A1; Value: { sensor: A1, current: 42, timestamp: Fri Apr 15 2022 21:59:23.876 }
Key: B2; Value: { sensor: B2, current: 42, timestamp: Fri Apr 15 2022 21:59:24.732 }
That means that the stream is actually de-duplicated, but in a quite awkward way: due to the change in the session window it produces a tombstone that cancels the previous aggregation, even though neither the selected key nor the value has changed (see how reduce is defined).
The question: is it possible to somehow produce only the first encountered record in the window and not produce any tombstones and "updated" aggregations?
Cheers.
You can add a filter to remove the null values from being produced to your result topic:
...
.selectKey { k, _ -> k.key().sensor }
.filter { _, value -> value != null }
.to("output-topic", Produced.with(Serdes.String(), telemetrySerde))
Please see the example of SessionWindows deduplication here.
I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in its own Docker container. I implemented the infrastructure with docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
- /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
- mongo
networks:
- default
expose:
- "5050"
- "5080"
ports:
- "5050:5050"
- "5080:5080"
environment:
- CYGNUS_SERVICE_PORT=5050
- CYGNUS_MONITORING_TYPE=http
- CYGNUS_AGENT_NAME=cygnus-ngsi
- CYGNUS_MONGO_SERVICE_PORT=5050
- CYGNUS_MONGO_HOSTS=mongo:27017
- CYGNUS_MONGO_USER=
- CYGNUS_MONGO_PASS=
- CYGNUS_MONGO_ENABLE_ENCODING=false
- CYGNUS_MONGO_ENABLE_GROUPING=false
- CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
- CYGNUS_MONGO_DATA_MODEL=dm-by-entity
- CYGNUS_MONGO_ATTR_PERSISTENCE=column
- CYGNUS_MONGO_DB_PREFIX=sth_
- CYGNUS_MONGO_COLLECTION_PREFIX=sth_
- CYGNUS_MONGO_ENABLE_LOWERCASE=false
- CYGNUS_MONGO_BATCH_TIMEOUT=30
- CYGNUS_MONGO_BATCH_TTL=10
- CYGNUS_MONGO_DATA_EXPIRATION=0
- CYGNUS_MONGO_COLLECTIONS_SIZE=0
- CYGNUS_MONGO_MAX_DOCUMENTS=0
- CYGNUS_MONGO_BATCH_SIZE=1
- CYGNUS_LOG_LEVEL=DEBUG
- CYGNUS_SKIP_CONF_GENERATION=false
- CYGNUS_STH_ENABLE_ENCODING=false
- CYGNUS_STH_ENABLE_GROUPING=false
- CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
- CYGNUS_STH_DB_PREFIX=sth_
- CYGNUS_STH_COLLECTION_PREFIX=sth_
- CYGNUS_STH_DATA_MODEL=dm-by-entity
- CYGNUS_STH_ENABLE_LOWERCASE=false
- CYGNUS_STH_BATCH_TIMEOUT=30
- CYGNUS_STH_BATCH_TTL=10
- CYGNUS_STH_DATA_EXPIRATION=0
- CYGNUS_STH_BATCH_SIZE=1
Note: in the multisink_agent.conf file I changed the service and the service path:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
- cygnus
- mongo
networks:
- default
expose:
- "8666"
ports:
- "8666:8666"
environment:
- STH_HOST=0.0.0.0
- STH_PORT=8666
- DB_URI=mongo:27017
- DB_USERNAME=
- DB_PASSWORD=
- LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};
// STH server configuration
//--------------------------
config.server = {
host: 'localhost',
port: '8666',
// Default value: "testservice".
defaultService: 'tese',
// Default value: "/testservicepath".
defaultServicePath: '/iot',
filterOutEmpty: 'true',
aggregationBy: ['day', 'hour', 'minute'],
temporalDir: 'temp',
maxPageSize: '100'
};
// Cors Configuration
config.cors = {
// The enabled is use to set CORS policy
enabled: 'true',
options: {
origin: ['*'],
headers: [
'Access-Control-Allow-Origin',
'Access-Control-Allow-Headers',
'Access-Control-Request-Headers',
'Origin, Referer, User-Agent'
],
additionalHeaders: ['fiware-servicepath', 'fiware-service'],
credentials: 'true'
}
};
// Database configuration
//------------------------
config.database = {
dataModel: 'collection-per-entity',
user: '',
password: '',
authSource: '',
URI: 'localhost:27017',
replicaSet: '',
prefix: 'sth_',
collectionPrefix: 'sth_',
poolSize: '5',
writeConcern: '1',
shouldStore: 'both',
truncation: {
expireAfterSeconds: '0',
size: '0',
max: '0'
},
ignoreBlankSpaces: 'true',
nameMapping: {
enabled: 'false',
configFile: './name-mapping.json'
},
nameEncoding: 'false'
};
// Logging configuration
//------------------------
config.logging = {
level: 'debug',
format: 'pipe',
proofOfLifeInterval: '60',
processedRequestLogStatisticsInterval: '60'
};
module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
The Cygnus subscription in Orion looks like this:
{
"description": "A subscription All Entities",
"subject": {
"entities": [
{
"idPattern": ".*"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"http": {
"url": "http://cygnus:5050/notify"
},
"attrs": [],
"attrsFormat":"legacy"
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
The entity data is stored in orion-tese. I have the collection entities:
{
"_id" : {
"id" : "Tank1",
"type" : "Tank",
"servicePath" : "/iot"
},
"attrNames" : [
"temperature"
],
"attrs" : {
"temperature" : {
"value" : 0.333,
"type" : "Float",
"mdNames" : [ ],
"creDate" : 1594334464,
"modDate" : 1594337770
}
},
"creDate" : 1594334464,
"modDate" : 1594337771,
"lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese.
I have the collections:
sth_/iot_Tank1_Tank.aggr
and
sth_/iot_Tank1_Tank
The sth_/iot_Tank1_Tank raw data in MongoDB:
{
"_id" : ObjectId("5f079d0369591c06b0fc981a"),
"temperature" : 279,
"recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
"_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
"temperature" : 0.333,
"recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
When I run: http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I get the results "sum": 279 and "sum": 0.333. I can recover ALL the aggregated data: max, min, sum, sum2.
The difficulty is with STH-Comet when I try to retrieve the raw data: the return code is 200 but the value comes back empty.
I've tried with APIs v1 and v2, to no avail.
request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return
{
"type": "StructuredValue",
"value": []
}
request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return
{
"contextResponses": [{
"contextElement": {
"attributes": [{
"name": "temperature",
"values": []
}],
"id": "Tank1",
"isPattern": false,
"type": "Tank"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://#mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, it gets access to the raw data collection (msg=Getting access to the raw data collection for retrieval...) and confirms that the collection exists (msg=The raw data collection for retrieval exists). But it cannot recover the data, reporting that no raw data is available and returning no points: msg=No raw data available for the request and msg=Responding with no points.
I already read the configuration part of the documentation. I've reinstalled everything several times. I combed through all the settings and I can't find anything to justify this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!
Sometimes the way in which STH tries to recover information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by the DATA_MODEL env var (although it would be wise to check that, looking at the env vars actually injected into the docker container running STH, I guess with docker inspect), the only way I think we can continue debugging is to inspect which actual query STH runs on MongoDB that ends in No raw data available for the request.
MongoDB has a profiler that allows you to record every query done in the DB. Thus the procedure would be as follows:
1. Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler
2. Start the profiler in "all queries" mode (i.e. profiling level 2)
3. Do the query at the STH API
4. Stop the profiler
5. Check the queries recorded by the profiler as a consequence of the request done in step 3
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't know it already.
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!
Reading the FIWARE-NGSI v2 Specification (http://telefonicaid.github.io/fiware-orion/api/v2/latest/), in the section Simplified Entity Representation,
I couldn't test values mode as recommended. My test fails:
values mode. This mode represents the entity as an array of attribute
values. Information about id and type is left out. See example below.
The order of the attributes in the array is specified by the attrs URI
param (e.g. attrs=branch,colour,engine). If attrs is not used, the
order is arbitrary.
[ 'Ford', 'black', 78.3 ]
Where and how do I reference an entity ID?
POST /v2/entities/Room1?options=values&attrs=branch,colour,engine
payload:
[ 'Ford', 'black', 78.3 ]
Answer:
{
"error": "MethodNotAllowed",
"description": "method not allowed"
}
POST /v2/entities?options=values
payload:
[ 'Ford', 'black', 78.3 ]
Answer:
{
"error": "ParseError",
"description": "Errors found in incoming JSON buffer"
}
Version:
GET /version
{
"orion": {
"version": "1.10.0-next",
"uptime": "0 d, 0 h, 1 m, 34 s",
"git_hash": "0f92803495a8b6c145547e19f35e8f633dec92e0",
"compile_time": "Fri Feb 2 09:45:41 UTC 2018",
"compiled_by": "root",
"compiled_in": "77ff7f334a88",
"release_date": "Fri Feb 2 09:45:41 UTC 2018",
"doc": "https://fiware-orion.readthedocs.org/en/master/"
}
}
"options=values" is a representation format for querying data, not for posting new entity data, for obvious reasons: when you are creating new entities you have to specify the entity id and the entity type, and with the values representation format you can't ...
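To make the direction of the representation concrete, here is a sketch in Python of what options=values does on a query (the entity below is hypothetical and simplified; real Orion attributes also carry type and metadata):

```python
# Hypothetical, simplified entity as returned by GET /v2/entities/<id>
entity = {
    "id": "Room1",
    "type": "Room",
    "branch": {"value": "Ford"},
    "colour": {"value": "black"},
    "engine": {"value": 78.3},
}

def values_representation(entity, attrs):
    # options=values projects just the attribute values,
    # ordered by the attrs URI parameter
    return [entity[name]["value"] for name in attrs.split(",")]

print(values_representation(entity, "branch,colour,engine"))
# ['Ford', 'black', 78.3]

# The inverse is impossible: from ['Ford', 'black', 78.3] alone there is
# no way to recover the entity id or type, which is why POSTing this
# representation cannot create an entity.
```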
How can we use this particular code in the execution process:
now = new Date();
current_date = new Date(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), now.getUTCHours(), now.getUTCMinutes(), now.getUTCSeconds());
end_date = obj.end_date; // e.g. "Mon, 02 Apr 2012 20:16:35 GMT"
var millisDiff = end_date.getTime() - current_date.getTime();
console.log(millisDiff / 1000);
I have already inserted my records into a collection. Each record consists of many fields. The first record says the status is started and the last record says that the process has ended. The records also carry timestamp values for when the process started and when it ended.
How can I get the difference in time between the two? How can I use the above code?
db.trials.insert( { _id: ObjectId("51d2750d16257024e046c0d7"), Application: "xxx", ProcessContext: "SEARCH", InteractionID: "I001", opID: "BBB", buID: "Default", HostName: "xxx-ttt-tgd-002", InstanceName: "Instance1", EventTime: ISODate("2011-11-03T14:23:00Z"), HOPID: "", HOPName: "", HOPStatus: "", INTStatus: "Started" } );
db.trials.insert( { _id: ObjectId("51d2750d16257024e046c0d7"), Application: "xxx", ProcessContext: "SEARCH", InteractionID: "I001", opID: "BBB", buID: "Default", HostName: "xxx-ttt-tgd-002", InstanceName: "Instance1", EventTime: ISODate("2011-11-03T14:23:58Z"), HOPID: "", HOPName: "", HOPStatus: "", INTStatus: "Started" } );
These are the 2 records for which I wanted to know the time stamp difference using the above logic. How can I use the logic such that I can get the difference?
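Applied to the two records above, the computation is just a subtraction of the two EventTime values, which is what the getTime() arithmetic in the JavaScript does. A sketch in Python (the records are hard-coded here; fetching them from MongoDB is omitted):

```python
from datetime import datetime, timezone

# EventTime values copied from the two inserted records
start = {"InteractionID": "I001", "EventTime": datetime(2011, 11, 3, 14, 23, 0, tzinfo=timezone.utc)}
end   = {"InteractionID": "I001", "EventTime": datetime(2011, 11, 3, 14, 23, 58, tzinfo=timezone.utc)}

# difference between the first and last record of the process
diff_seconds = (end["EventTime"] - start["EventTime"]).total_seconds()
print(diff_seconds)  # 58.0
```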