Upserting multiple entities to context broker - fiware-orion

Is there a way to upsert multiple entities to the Context Broker (NGSIv2) in a single HTTP request, e.g. by submitting an array in the request body?
I have something like this in mind:
[POST] /v2/entities/?options=upsert
[
  {
    id: 'urn:ngsi-ld:xyz:123',
    type: 'xyz',
    ...
  },
  {
    id: 'urn:ngsi-ld:xyz:456',
    type: 'xyz',
    ...
  },
  ...
]

Yes, using POST /v2/op/update with the append action type. For instance (example taken from the NGSIv2 API walkthrough):
POST /v2/op/update
{
  "actionType": "append",
  "entities": [
    {
      "type": "Room",
      "id": "Room3",
      "temperature": {
        "value": 21.2,
        "type": "Float"
      },
      "pressure": {
        "value": 722,
        "type": "Integer"
      }
    },
    {
      "type": "Room",
      "id": "Room4",
      "temperature": {
        "value": 31.8,
        "type": "Float"
      },
      "pressure": {
        "value": 712,
        "type": "Integer"
      }
    }
  ]
}
That will update the Room3 and Room4 entities if they already exist, or create them if they don't.
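If it helps, here is a minimal sketch of sending that batch operation from Python (the entity IDs and the temperature attribute are just illustrative; it assumes an Orion broker listening on localhost:1026):
import requests

# Batch upsert via the NGSIv2 batch operation endpoint.
payload = {
    "actionType": "append",  # append = upsert semantics (create or update)
    "entities": [
        {
            "id": "urn:ngsi-ld:xyz:123",
            "type": "xyz",
            "temperature": {"value": 21.2, "type": "Float"},
        },
        {
            "id": "urn:ngsi-ld:xyz:456",
            "type": "xyz",
            "temperature": {"value": 31.8, "type": "Float"},
        },
    ],
}

resp = requests.post("http://localhost:1026/v2/op/update", json=payload)
resp.raise_for_status()  # Orion replies 204 No Content on success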


Avro schema question: TypeError: unhashable type: 'dict'

I need to write an Avro schema for the following data. The exposure field is an array of arrays, each with 3 numbers.
{
  "Response": {
    "status": "",
    "responseDetail": {
      "request_id": "Z618978.R",
      "exposure": [
        [
          372,
          20000000.0,
          31567227140.238808
        ]
        [
          373,
          480000000.0,
          96567227140.238808
        ]
        [
          374,
          23300000.0,
          251567627149.238808
        ]
      ],
      "product": "ABC",
    }
  }
}
So I came up with a schema like the following:
{
  "name": "Response",
  "type": {
    "name": "algoResponseType",
    "type": "record",
    "fields": [
      {"name": "status", "type": ["null", "string"]},
      {
        "name": "responseDetail",
        "type": {
          "name": "responseDetailType",
          "type": "record",
          "fields": [
            {"name": "request_id", "type": "string"},
            {
              "name": "exposure",
              "type": {
                "type": "array",
                "items": {
                  "name": "single_exposure",
                  "type": {
                    "type": "array",
                    "items": "string"
                  }
                }
              }
            },
            {"name": "product", "type": ["null", "string"]}
          ]
        }
      }
    ]
  }
}
When I tried to register the schema, I got the following error: TypeError: unhashable type: 'dict', which means a dict was used somewhere a hashable value (such as a dictionary key) is required.
Traceback (most recent call last):
  File "sa_publisher_main4test.py", line 28, in <module>
    schema_registry_client)
  File "/usr/local/lib64/python3.6/site-packages/confluent_kafka/schema_registry/avro.py", line 175, in __init__
    parsed_schema = parse_schema(schema_dict)
  File "fastavro/_schema.pyx", line 71, in fastavro._schema.parse_schema
  File "fastavro/_schema.pyx", line 204, in fastavro._schema._parse_schema
TypeError: unhashable type: 'dict'
Can anyone help point out what is causing the error?
There are a few issues.
First, at the very top level of your schema, you have the following:
{
  "name": "Response",
  "type": {...}
}
But this isn't right. The top level should be a record type with a field called Response. So it should look like this:
{
  "name": "Response",
  "type": "record",
  "fields": [
    {
      "name": "Response",
      "type": {...}
    }
  ]
}
The second problem is that for the array of arrays, you currently have the following:
{
  "name": "exposure",
  "type": {
    "type": "array",
    "items": {
      "name": "single_exposure",
      "type": {
        "type": "array",
        "items": "string"
      }
    }
  }
}
But instead it should look like this:
{
  "name": "exposure",
  "type": {
    "type": "array",
    "items": {
      "type": "array",
      "items": "string"
    }
  }
}
After fixing those, the schema can be parsed, but your data contains an array of arrays of floats while your schema declares an array of arrays of strings. So either the schema needs to be changed to a float type, or the data values need to be strings.
For reference, here's an example script that works after fixing those issues:
import fastavro

# Corrected schema: the top level is a record with a "Response" field,
# and the inner array has no extra "name"/"type" wrapper.
s = {
    "name": "Response",
    "type": "record",
    "fields": [
        {
            "name": "Response",
            "type": {
                "name": "algoResponseType",
                "type": "record",
                "fields": [
                    {"name": "status", "type": ["null", "string"]},
                    {
                        "name": "responseDetail",
                        "type": {
                            "name": "responseDetailType",
                            "type": "record",
                            "fields": [
                                {"name": "request_id", "type": "string"},
                                {
                                    "name": "exposure",
                                    "type": {
                                        "type": "array",
                                        "items": {
                                            "type": "array",
                                            "items": "string"
                                        }
                                    }
                                },
                                {"name": "product", "type": ["null", "string"]}
                            ]
                        }
                    }
                ]
            }
        }
    ]
}

# Matching data: the exposure values are strings here to match the schema.
data = {
    "Response": {
        "status": "",
        "responseDetail": {
            "request_id": "Z618978.R",
            "exposure": [
                ["372", "20000000.0", "31567227140.238808"],
                ["373", "480000000.0", "96567227140.238808"],
                ["374", "23300000.0", "251567627149.238808"]
            ],
            "product": "ABC"
        }
    }
}

parsed_schema = fastavro.parse_schema(s)
fastavro.validate(data, parsed_schema)
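To double-check the serialization end to end, here is a minimal round-trip sketch (reusing parsed_schema and data from the script above):
import io
import fastavro

# Write one record to an in-memory Avro container file and read it back.
buf = io.BytesIO()
fastavro.writer(buf, parsed_schema, [data])
buf.seek(0)
for record in fastavro.reader(buf):
    print(record["Response"]["responseDetail"]["request_id"])  # Z618978.R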
The error you get is because the Schema Registry doesn't accept your schema: the top-level element has to be a record with a "Response" field.
This schema should work. I changed the array item type to float, since in your message you have floats and not strings.
{
  "type": "record",
  "name": "yourMessage",
  "fields": [
    {
      "name": "Response",
      "type": {
        "name": "AlgoResponseType",
        "type": "record",
        "fields": [
          {
            "name": "status",
            "type": ["null", "string"]
          },
          {
            "name": "responseDetail",
            "type": {
              "name": "ResponseDetailType",
              "type": "record",
              "fields": [
                {
                  "name": "request_id",
                  "type": "string"
                },
                {
                  "name": "exposure",
                  "type": {
                    "type": "array",
                    "items": {
                      "type": "array",
                      "items": "float"
                    }
                  }
                },
                {
                  "name": "product",
                  "type": ["null", "string"]
                }
              ]
            }
          }
        ]
      }
    }
  ]
}
Your message is also not valid JSON, as the array elements need commas between them (and there must be no trailing comma after "product"):
{
  "Response": {
    "status": "",
    "responseDetail": {
      "request_id": "Z618978.R",
      "exposure": [
        [
          372,
          20000000.0,
          31567227140.238808
        ],
        [
          373,
          480000000.0,
          96567227140.238808
        ],
        [
          374,
          23300000.0,
          251567627149.238808
        ]
      ],
      "product": "ABC"
    }
  }
}
As you are using fastavro, I recommend running this code to check that your message matches the schema.
from fastavro.validation import validate
import json

# Load the Avro schema (saved as schema.avsc) and validate the message against it.
with open('schema.avsc', 'r') as schema_file:
    schema = json.loads(schema_file.read())

message = {
    "Response": {
        "status": "",
        "responseDetail": {
            "request_id": "Z618978.R",
            "exposure": [
                [372, 20000000.0, 31567227140.238808],
                [373, 480000000.0, 96567227140.238808],
                [374, 23300000.0, 251567627149.238808]
            ],
            "product": "ABC"
        }
    }
}

try:
    validate(message, schema)
    print('Message matches the schema')
except Exception as ex:
    print(ex)

Loopback 3 get relation from embedded model

I'm using LoopBack 3 to build a backend with MongoDB.
I have 3 models: Object, Attachment and AwsS3.
Object has an embedsMany relation to Attachment.
Attachment has a belongsTo relation to AwsS3.
Objects look like this in MongoDB:
[
  {
    "fieldA": "valueA1",
    "attachments": [
      {
        "id": 1,
        "awsS3Id": "1234"
      },
      {
        "id": 2,
        "awsS3Id": "1235"
      }
    ]
  },
  {
    "fieldA": "valueA2",
    "attachments": [
      {
        "id": 4,
        "awsS3Id": "1236"
      },
      {
        "id": 5,
        "awsS3Id": "1237"
      }
    ]
  }
]
AwsS3 looks like this in MongoDB:
[
  {
    "id": "1",
    "url": "abc.com/1"
  },
  {
    "id": "2",
    "url": "abc.com/2"
  }
]
The question is: how can I get the Objects with their Attachments and the AwsS3.url included, over the REST API?
I have tried the include and scope filters, but it didn't work. It looks like this function is not implemented in LoopBack 3, right? Here is what I tried over the GET request:
{
  "filter": {
    "include": {
      "relation": "Attachment",
      "scope": {
        "include": {
          "relation": "awsS3"
        }
      }
    }
  }
}
With this request I only got the Objects with Attachments, without anything from AwsS3.
UPDATE: the relation definitions
The relation from Object to Attachment:
"Attachment": {
"type": "embedsMany",
"model": "Attachment",
"property": "attachments",
"options": {
"validate": true,
"forceId": false
}
},
The relation from Attachment to AwsS3
in attachment.json
"relations": {
"awsS3": {
"type": "belongsTo",
"model": "AwsS3",
"foreignKey": ""
}
}
in AwsS3.json
"relations": {
"attachments": {
"type": "hasMany",
"model": "Attachment",
"foreignKey": ""
}
}
Try this filter:
{ "filter": { "include": ["awsS3", "attachments"]}}}}

Perseo rule can't be created: Event type not found

I'm emulating a dummy scenario to play around with Perseo and Orion. I'm using 4 Docker containers: Mongo, Orion, Perseo FE, and Perseo Core, all of them running healthy.
The steps that I'm doing are:
First, I create the entity with a POST to Orion (localhost:1026/v2/entities). This entity looks like this:
{
  "id": "DummyEvent1",
  "type": "DummyEvent",
  "identification": {
    "value": "default",
    "type": "String"
  }
}
Second, I create a subscription with a POST to Orion (localhost:1026/v2/subscriptions) in order to push this DummyEvent from Orion to Perseo:
{
  "description": "A subscription to get info about DummyEvent1",
  "subject": {
    "entities": [
      {
        "id": "DummyEvent1",
        "type": "DummyEvent"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "http": {
      "url": "http://perseo-fe:9090/notices"
    },
    "attrs": [
      "identification"
    ]
  }
}
Third, if I GET all the subscriptions in Orion (localhost:1026/v2/subscriptions), I can see that Orion is forwarding the DummyEvent correctly to Perseo:
{
  "id": "5ca5c18ab07f5ae96aa12152",
  "description": "A subscription to get info about DummyEvent1",
  "status": "active",
  "subject": {
    "entities": [
      {
        "id": "DummyEvent1",
        "type": "DummyEvent"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "timesSent": 1,
    "lastNotification": "2019-04-04T08:34:18.00Z",
    "attrs": [
      "identification"
    ],
    "attrsFormat": "normalized",
    "http": {
      "url": "http://perseo-fe:9090/notices"
    },
    "lastSuccess": "2019-04-04T08:34:18.00Z",
    "lastSuccessCode": 200
  }
}
Fourth, the problem appears when I try to POST a rule to Perseo (localhost:8080/perseo-core/rules) using this DummyEvent:
{
  "name": "dummy_rule",
  "text": "select * from DummyEvent",
  "action": {
    "type": "update",
    "parameters": {
      "name": "identification",
      "value": "updatedValue",
      "type": "string"
    }
  }
}
Perseo tells me this:
{
  "error": "Failed to resolve event type: Event type or class named 'DummyEvent' was not found [select * from DummyEvent]"
}
What am I doing wrong?
Thanks!

Elasticsearch - query dates without a specified timezone

I have an index with the following mappings (standard format for a date). In the 2nd record below, the time specified is actually a local time, but ES treats it as UTC.
ES internally converts all parsed datetimes to UTC, but it must obviously store the original string as well.
My question is whether (and how) it might be possible to query all records for which the scheduleDT value doesn't have the timezone explicitly specified.
{
  "curator_v3": {
    "mappings": {
      "published": {
        "analyzer": "classic",
        "numeric_detection": true,
        "properties": {
          "Id": {
            "type": "string",
            "index": "not_analyzed",
            "include_in_all": false
          },
          "createDT": {
            "type": "date",
            "format": "dateOptionalTime",
            "include_in_all": false
          },
          "scheduleDT": {
            "type": "date",
            "format": "dateOptionalTime",
            "include_in_all": false
          },
          "title": {
            "type": "string",
            "fields": {
              "english": {
                "type": "string",
                "analyzer": "english"
              },
              "raw": {
                "type": "string",
                "index": "not_analyzed"
              },
              "shingle": {
                "type": "string",
                "analyzer": "shingle"
              },
              "spanish": {
                "type": "string",
                "analyzer": "spanish"
              }
            },
            "include_in_all": false
          }
        }
      }
    }
  }
}
We use .NET as our client to ElasticSearch and haven't been consistent in specifying a timezone for the scheduleDT field.
{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 12,
    "successful": 12,
    "failed": 0
  },
  "hits": {
    "total": 32,
    "max_score": null,
    "hits": [
      {
        "_index": "curator_v3",
        "_type": "published",
        "_id": "29651227",
        "_score": null,
        "fields": {
          "Id": [
            "29651227"
          ],
          "scheduleDT": [
            "2015-11-21T22:17:51.0946798-06:00"
          ],
          "title": [
            "97 Year-Old Woman Cries Tears Of Joy After Finally Getting Her High School Diploma"
          ],
          "createDT": [
            "2015-11-21T22:13:32.3597142-06:00"
          ]
        },
        "sort": [
          1448165871094
        ]
      },
      {
        "_index": "curator_v3",
        "_type": "published",
        "_id": "210466413",
        "_score": null,
        "fields": {
          "Id": [
            "210466413"
          ],
          "scheduleDT": [
            "2015-11-22T12:00:00"
          ],
          "title": [
            "6 KC treats to bring to Thanksgiving"
          ],
          "createDT": [
            "2015-11-20T15:08:25.4282-06:00"
          ]
        },
        "sort": [
          1448193600000
        ]
      }
    ]
  },
  "aggregations": {
    "ScheduleDT": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 27,
      "buckets": [
        {
          "key": 1448165871094,
          "key_as_string": "2015-11-22T04:17:51.094Z",
          "doc_count": 1
        },
        {
          "key": 1448193600000,
          "key_as_string": "2015-11-22T12:00:00.000Z",
          "doc_count": 4
        }
      ]
    }
  }
}
You can do this by querying for documents whose scheduleDT field value is less than 20 characters long (e.g. 2015-11-22T12:00:00). All the date values with an explicitly specified time zone would be longer.
Something like this should do:
{
  "query": {
    "filtered": {
      "filter": {
        "script": {
          "script": "doc.scheduleDT.value.size() < 20"
        }
      }
    }
  }
}
Note, however, that in order to make your queries easier to create, you should always try to convert all your timestamps to UTC before indexing your documents.
Finally, also make sure that you have dynamic scripting enabled in order to run the above query.
UPDATE
Actually, if you use the _source directly in the script it will work because it will return the real value from the source as it was when the document was indexed:
{
  "query": {
    "filtered": {
      "filter": {
        "script": {
          "script": "_source.scheduleDT.size() < 20"
        }
      }
    }
  }
}
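For reference, a sketch of running the same filtered query from Python with the elasticsearch client (assumptions: an ES 1.x/2.x cluster on localhost:9200, a client version from that era, and dynamic/inline scripting enabled in elasticsearch.yml):
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])
body = {
    "query": {
        "filtered": {
            "filter": {
                "script": {
                    "script": "_source.scheduleDT.size() < 20"
                }
            }
        }
    }
}
# doc_type applies to the pre-7.x clusters this mapping comes from.
res = es.search(index="curator_v3", doc_type="published", body=body)
for hit in res["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["scheduleDT"])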

How can I use CloudKit web services to query based on a reference field?

I've got two CloudKit data objects that look somewhat like this:
Parent Object:
{
  "records": [
    {
      "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
      "recordType": "ParentObject",
      "fields": {
        "fsYear": {
          "value": "2015",
          "type": "STRING"
        },
        "displayOrder": {
          "value": 2015221153856287200,
          "type": "INT64"
        },
        "fjpFSGuidForReference": {
          "value": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "type": "STRING"
        },
        "fsDateSearch": {
          "value": "2015221153856287158",
          "type": "STRING"
        }
      },
      "recordChangeTag": "id4w7ivn",
      "created": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total":
}
Child Object:
{
  "records": [
    {
      "recordName": "2015221153856287168",
      "recordType": "ChildObject",
      "fields": {
        "District": {
          "value": "002",
          "type": "STRING"
        },
        "ZipCode": {
          "value": "12345",
          "type": "STRING"
        },
        "InspecReference": {
          "value": {
            "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
            "action": "NONE",
            "zoneID": {
              "zoneName": "_defaultZone"
            }
          },
          "type": "REFERENCE"
        }
      },
      "recordChangeTag": "id4w7lew",
      "created": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total": 1
}
I'm trying to write a query that accesses the CloudKit web service directly and returns the Child Object based on the reference to the parent object.
My test JSON looks something like this:
{"query":{"recordType":"ChildObject","filterBy":{"fieldName":"InspecReference","fieldValue":{ "value" : "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57", "type" : "string" },"comparator":"EQUALS"}},"zoneID":{"zoneName":"_defaultZone"}}
However, I'm getting the following error from CloudKit:
{"uuid":"33db91f3-b768-4a68-9056-216ecc033e9e","serverErrorCode":"BAD_REQUEST","reason":"BadRequestException:
Unexpected input"}
I'm guessing I have the Record Field Dictionary in the query wrong. However, the documentation isn't clear on what this should look like for a reference field.
You have to re-create the actual reference object. In this particular case, the JSON looks like this:
{
  "query": {
    "recordType": "ChildObject",
    "filterBy": {
      "fieldName": "InspecReference",
      "fieldValue": {
        "value": {
          "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "action": "NONE"
        },
        "type": "REFERENCE"
      },
      "comparator": "EQUALS"
    }
  },
  "zoneID": {
    "zoneName": "_defaultZone"
  }
}
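For completeness, here is a sketch of POSTing that query to the CloudKit web service from Python. The container ID, environment, and API token below are placeholders you would substitute with your own; web requests authenticate with a ckAPIToken query parameter.
import requests

# Placeholders: substitute your own container ID, environment, and API token.
CONTAINER = "iCloud.com.example.MyApp"
API_TOKEN = "YOUR_CKAPI_TOKEN"
URL = (
    "https://api.apple-cloudkit.com/database/1/"
    f"{CONTAINER}/development/public/records/query?ckAPIToken={API_TOKEN}"
)

query = {
    "query": {
        "recordType": "ChildObject",
        "filterBy": {
            "fieldName": "InspecReference",
            "fieldValue": {
                "value": {
                    "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
                    "action": "NONE"
                },
                "type": "REFERENCE"
            },
            "comparator": "EQUALS"
        }
    },
    "zoneID": {"zoneName": "_defaultZone"}
}

resp = requests.post(URL, json=query)
print(resp.json())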