Select a specific field inside a list from a MongoDB collection

I have a collection with records in this format:
{
  "data": [
    {
      "type": "UNKNOWN",
      "ID": "UNKNOWN",
      "payload": {
        "value": -56,
        "unit": "dBm"
      }
    },
    {
      "type": "UNKNOWN",
      "ID": "UNKNOWN",
      "payload": {
        "value": -10,
        "unit": "dBm"
      }
    }
  ]
}
Now is it possible to get only data[1].payload.value instead of returning the entire record?
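One way to do this, as a hedged sketch with PyMongo (the database and collection names here are assumptions), is to project the element out with an aggregation pipeline:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["mycoll"]  # hypothetical names

# "$data.payload.value" resolves to the list of all values ([-56, -10]);
# $arrayElemAt then picks index 1, so only that scalar comes back.
pipeline = [
    {"$project": {"_id": 0, "value": {"$arrayElemAt": ["$data.payload.value", 1]}}}
]
for doc in coll.aggregate(pipeline):
    print(doc)  # {'value': -10}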


Mongoose - Joining based on Object name

Currently, I have 3 schemas that I want to join. The first schema is the user schema.
{
  "_id": {
    "$oid": "6147f87ac51f060e8c1bc8c7"
  },
  "userID": "344431410360090625",
  "currency": 72590,
  "level": 10,
  "exp": 78.5,
  "sp": 338,
  "location": {
    "area": 2,
    "floor": 3
  },
  "inv": {
    "Rag Hood#363": {
      "emote": "",
      "description": "",
      "rarity": "Common",
      "type": "equipment",
      "image": "",
      "equipmentType": "helmet",
      "level": 20,
      "ascension": 0,
      "exp": 0,
      "quantity": 0,
      "expToLevelUp": 0,
      "equipped": false
    },
    "Jericho Jehammad": {
      "emote": "<:Jericho:823551572029603840>",
      "description": "Enhance your weapons with this mysterious item",
      "rarity": "Common",
      "type": "special",
      "image": "",
      "quantity": 7147,
      "listed": 6964
    }
  },
  "__v": 0
}
I want to be able to join on the object names "Rag Hood#363" and "Jericho Jehammad". First, the equipment schema is shown below.
{
  "_id": {
    "$oid": "61474cb047a1b66f2cb1b6d8"
  },
  "itemName": "Rag Hood",
  "stats": {
    "defense": {
      "flat": 2,
      "multi": 0
    }
  },
  "equipmentType": "helmet",
  "ascensionRequirements": [],
  "statsUpPerLvl": {
    "defense": 0.5
  }
}
Next, the items schema is used to join "Jericho Jehammad". Equipment in our database is named with a # followed by the item number, while other items are identified by just their itemName.
{
  "_id": {
    "$oid": "60c5c5e6d2d78c794d33b7ae"
  },
  "itemName": "Jericho Jehammad",
  "emote": "<:Jericho:823551572029603840>",
  "description": "",
  "rarity": "Common",
  "type": "special",
  "image": ""
}
I want to return an object whose values are overridden by those in the equipment and items schemas when they are present there; if a value is not present, I will fall back to the value in the user schema. A sketch of that rule follows.
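A minimal, schema-agnostic sketch of that override rule (plain Python rather than Mongoose; the function name and the "empty string means absent" convention are assumptions):

# Hypothetical merge: values from the equipment/items document win;
# otherwise the user's value is kept.
def merge_item(user_item: dict, catalog_item: dict) -> dict:
    merged = dict(user_item)
    for key, value in catalog_item.items():
        if value not in ("", None):  # assumption: "" / None mean "not present"
            merged[key] = value
    return merged

user_rag_hood = {"rarity": "Common", "level": 20, "image": ""}
catalog_rag_hood = {"itemName": "Rag Hood", "equipmentType": "helmet"}
print(merge_item(user_rag_hood, catalog_rag_hood))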

Kafka connector: apply Transform.filter.Value

I have created a connector to Azure Event Hubs, and it works fine pulling the data into the topic. My use case is to filter the messages I'm pulling based on the message type. Example:
{
  "messageType": "Transformed",
  "timeStamp": 1652113146105,
  "payload": {
    "externalId": "24323",
    "equipmentType": "TemperatureSensor",
    "measureType": "Temperature",
    "normalizedData": {
      "equipmentData": [
        {
          "key": "ReadingValue",
          "value": 23,
          "valueType": "number",
          "measurementUnit": "celsius",
          "measurementDateTime": "2022-05-09T16:18:34.0000000Z"
        }
      ]
    },
    "dataProviderName": "LineMetrics"
  }
},
{
  "messageType": "IntegratorGenericDataStream",
  "timeStamp": 1652113146103,
  "payload": {
    "dataSource": {
      "type": "sensor"
    },
    "dataPoints": [
      {
        "type": "Motion",
        "value": 0,
        "valueType": "number",
        "dateTime": "2022-05-09T16:18:37.0000000Z",
        "unit": "count"
      }
    ],
    "dataProvider": {
      "id": "ba84cbdb-cbf8-4d4f-9a55-93b43f671b5a",
      "name": "LineMetrics",
      "displayName": "Line Metrics"
    }
  }
}
I wanted to apply a filter on that value, as shown in the screenshot of my connector configuration, but the connector returns an error (the configuration and the error were screenshots, not reproduced here).
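For reference, a hedged sketch of one way to express such a filter with Confluent's Filter (Value) SMT, applied through the Kafka Connect REST API. The Connect URL and connector name are assumptions, and the SMT properties must be merged into the connector's full existing configuration:

import json
import requests

# Keep only messages whose messageType is "Transformed". The property names
# follow Confluent's io.confluent.connect.transforms.Filter$Value transform.
smt_config = {
    "transforms": "filterByType",
    "transforms.filterByType.type": "io.confluent.connect.transforms.Filter$Value",
    "transforms.filterByType.filter.condition": "$[?(@.messageType == 'Transformed')]",
    "transforms.filterByType.filter.type": "include",
    "transforms.filterByType.missing.or.null.behavior": "exclude",
}

connect_url = "http://localhost:8083"  # assumption
connector = "my-eventhub-connector"    # hypothetical connector name

# PUT /connectors/{name}/config replaces the whole configuration,
# so fetch the current one and merge the SMT properties into it.
current = requests.get(f"{connect_url}/connectors/{connector}/config").json()
current.update(smt_config)
requests.put(
    f"{connect_url}/connectors/{connector}/config",
    headers={"Content-Type": "application/json"},
    data=json.dumps(current),
)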

Avro schema question: TypeError: unhashable type: 'dict'

I need to write an Avro schema for the following data. The exposure field is an array of arrays, each with 3 numbers.
{
  "Response": {
    "status": "",
    "responseDetail": {
      "request_id": "Z618978.R",
      "exposure": [
        [
          372,
          20000000.0,
          31567227140.238808
        ]
        [
          373,
          480000000.0,
          96567227140.238808
        ]
        [
          374,
          23300000.0,
          251567627149.238808
        ]
      ],
      "product": "ABC",
    }
  }
}
So I came up with a schema like the following:
{
  "name": "Response",
  "type": {
    "name": "algoResponseType",
    "type": "record",
    "fields": [
      {"name": "status", "type": ["null", "string"]},
      {
        "name": "responseDetail",
        "type": {
          "name": "responseDetailType",
          "type": "record",
          "fields": [
            {"name": "request_id", "type": "string"},
            {
              "name": "exposure",
              "type": {
                "type": "array",
                "items": {
                  "name": "single_exposure",
                  "type": {
                    "type": "array",
                    "items": "string"
                  }
                }
              }
            },
            {"name": "product", "type": ["null", "string"]}
          ]
        }
      }
    ]
  }
}
When I tried to register the schema, I got the following error: TypeError: unhashable type: 'dict', which means a dict was used as a dictionary key somewhere.
Traceback (most recent call last):
File "sa_publisher_main4test.py", line 28, in <module>
schema_registry_client)
File "/usr/local/lib64/python3.6/site-packages/confluent_kafka/schema_registry/avro.py", line 175, in __init__
parsed_schema = parse_schema(schema_dict)
File "fastavro/_schema.pyx", line 71, in fastavro._schema.parse_schema
File "fastavro/_schema.pyx", line 204, in fastavro._schema._parse_schema
TypeError: unhashable type: 'dict'
Can anyone help point out what is causing the error?
There are a few issues.
First, at the very top level of your schema, you have the following:
{
  "name": "Response",
  "type": {...}
}
But this isn't right. The top level should be a record type with a field called Response. So it should look like this:
{
  "name": "Response",
  "type": "record",
  "fields": [
    {
      "name": "Response",
      "type": {...}
    }
  ]
}
The second problem is that for the array of arrays, you currently have the following:
{
  "name": "exposure",
  "type": {
    "type": "array",
    "items": {
      "name": "single_exposure",
      "type": {
        "type": "array",
        "items": "string"
      }
    }
  }
}
But instead it should look like this:
{
  "name": "exposure",
  "type": {
    "type": "array",
    "items": {
      "type": "array",
      "items": "string"
    }
  }
}
After fixing those, the schema will parse, but your data contains an array of arrays of floats while your schema says it should be an array of arrays of strings. Therefore either the schema needs to be changed to float, or the data needs to be strings.
For reference, here's an example script that works after fixing those issues:
import fastavro

s = {
    "name": "Response",
    "type": "record",
    "fields": [
        {
            "name": "Response",
            "type": {
                "name": "algoResponseType",
                "type": "record",
                "fields": [
                    {"name": "status", "type": ["null", "string"]},
                    {
                        "name": "responseDetail",
                        "type": {
                            "name": "responseDetailType",
                            "type": "record",
                            "fields": [
                                {"name": "request_id", "type": "string"},
                                {
                                    "name": "exposure",
                                    "type": {
                                        "type": "array",
                                        "items": {
                                            "type": "array",
                                            "items": "string",
                                        },
                                    },
                                },
                                {"name": "product", "type": ["null", "string"]},
                            ],
                        },
                    },
                ],
            },
        },
    ],
}

data = {
    "Response": {
        "status": "",
        "responseDetail": {
            "request_id": "Z618978.R",
            "exposure": [
                ["372", "20000000.0", "31567227140.238808"],
                ["373", "480000000.0", "96567227140.238808"],
                ["374", "23300000.0", "251567627149.238808"],
            ],
            "product": "ABC",
        },
    },
}

parsed_schema = fastavro.parse_schema(s)
fastavro.validate(data, parsed_schema)
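If the goal is then to register the fixed schema, here is a hedged sketch using the same confluent_kafka Schema Registry client that appears in the traceback (the registry URL and subject name are assumptions):

import json
from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

# Register the corrected schema `s` from the script above under a subject.
client = SchemaRegistryClient({"url": "http://localhost:8081"})  # assumption
schema_id = client.register_schema("my-topic-value", Schema(json.dumps(s), "AVRO"))
print(f"Registered schema with id {schema_id}")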
The error you get is because Schema Registry doesn't accept your schema: your top-level element has to be a record with a "Response" field. The schema below should work; I changed the array item type, since your message contains floats and not strings.
{
  "type": "record",
  "name": "yourMessage",
  "fields": [
    {
      "name": "Response",
      "type": {
        "name": "AlgoResponseType",
        "type": "record",
        "fields": [
          {"name": "status", "type": ["null", "string"]},
          {
            "name": "responseDetail",
            "type": {
              "name": "ResponseDetailType",
              "type": "record",
              "fields": [
                {"name": "request_id", "type": "string"},
                {
                  "name": "exposure",
                  "type": {
                    "type": "array",
                    "items": {
                      "type": "array",
                      "items": "float"
                    }
                  }
                },
                {"name": "product", "type": ["null", "string"]}
              ]
            }
          }
        ]
      }
    }
  ]
}
Your message is also not correct: the array elements need commas between them (and the trailing comma after "product" must go).
{
  "Response": {
    "status": "",
    "responseDetail": {
      "request_id": "Z618978.R",
      "exposure": [
        [372, 20000000.0, 31567227140.238808],
        [373, 480000000.0, 96567227140.238808],
        [374, 23300000.0, 251567627149.238808]
      ],
      "product": "ABC"
    }
  }
}
As you are using fastavro, I recommend running this code to check that your message is a valid instance of the schema:
import json

from fastavro.validation import validate

with open('schema.avsc', 'r') as schema_file:
    schema = json.loads(schema_file.read())

message = {
    "Response": {
        "status": "",
        "responseDetail": {
            "request_id": "Z618978.R",
            "exposure": [
                [372, 20000000.0, 31567227140.238808],
                [373, 480000000.0, 96567227140.238808],
                [374, 23300000.0, 251567627149.238808],
            ],
            "product": "ABC",
        },
    },
}

try:
    validate(message, schema)
    print('Message matches the schema')
except Exception as ex:
    print(ex)

Upserting multiple entities to the Context Broker

Is there a way to upsert multiple entities to the Context Broker v2 in a single HTTP request, e.g. by submitting an array in the request body?
I have something like this in mind:
[POST] /v2/entities/?options=upsert
[
  {
    id: 'urn:ngsi-ld:xyz:123',
    type: 'xyz',
    ...
  },
  {
    id: 'urn:ngsi-ld:xyz:456',
    type: 'xyz',
    ...
  },
  ...
]
Yes, using POST /v2/op/update with the append action type. For instance (example taken from the NGSIv2 API walkthrough):
POST /v2/op/update
{
  "actionType": "append",
  "entities": [
    {
      "type": "Room",
      "id": "Room3",
      "temperature": {
        "value": 21.2,
        "type": "Float"
      },
      "pressure": {
        "value": 722,
        "type": "Integer"
      }
    },
    {
      "type": "Room",
      "id": "Room4",
      "temperature": {
        "value": 31.8,
        "type": "Float"
      },
      "pressure": {
        "value": 712,
        "type": "Integer"
      }
    }
  ]
}
That will update the Room3 and Room4 entities if they already exist, or create them if they don't.
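A minimal sketch sending that batch append from Python with requests (the broker URL is an assumption; 1026 is Orion's default port):

import requests

payload = {
    "actionType": "append",
    "entities": [
        {
            "type": "Room",
            "id": "Room3",
            "temperature": {"value": 21.2, "type": "Float"},
            "pressure": {"value": 722, "type": "Integer"},
        },
        {
            "type": "Room",
            "id": "Room4",
            "temperature": {"value": 31.8, "type": "Float"},
            "pressure": {"value": 712, "type": "Integer"},
        },
    ],
}

resp = requests.post("http://localhost:1026/v2/op/update", json=payload)
resp.raise_for_status()  # Orion replies 204 No Content on success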

How can I use CloudKit web services to query based on a reference field?

I've got two CloudKit data objects that look somewhat like this:
Parent Object:
{
  "records": [
    {
      "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
      "recordType": "ParentObject",
      "fields": {
        "fsYear": {
          "value": "2015",
          "type": "STRING"
        },
        "displayOrder": {
          "value": 2015221153856287200,
          "type": "INT64"
        },
        "fjpFSGuidForReference": {
          "value": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "type": "STRING"
        },
        "fsDateSearch": {
          "value": "2015221153856287158",
          "type": "STRING"
        }
      },
      "recordChangeTag": "id4w7ivn",
      "created": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149087571,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total":
}
Child Object:
{
  "records": [
    {
      "recordName": "2015221153856287168",
      "recordType": "ChildObject",
      "fields": {
        "District": {
          "value": "002",
          "type": "STRING"
        },
        "ZipCode": {
          "value": "12345",
          "type": "STRING"
        },
        "InspecReference": {
          "value": {
            "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
            "action": "NONE",
            "zoneID": {
              "zoneName": "_defaultZone"
            }
          },
          "type": "REFERENCE"
        }
      },
      "recordChangeTag": "id4w7lew",
      "created": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      },
      "modified": {
        "timestamp": 1439149090856,
        "userRecordName": "_0d26968032e31bbc72c213037b6cb35d",
        "deviceID": "A19CD995FDA3093781096AF5D818033A241D65C1BFC3D32EC6C5D6B3B4A9AA6B"
      }
    }
  ],
  "total": 1
}
I'm trying to write a query to directly access the CloudKit web service and return the Child Object based on the reference of the parent object.
My test JSON looks something like this:
{"query":{"recordType":"ChildObject","filterBy":{"fieldName":"InspecReference","fieldValue":{ "value" : "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57", "type" : "string" },"comparator":"EQUALS"}},"zoneID":{"zoneName":"_defaultZone"}}
However, I'm getting the following error from CloudKit:
{"uuid":"33db91f3-b768-4a68-9056-216ecc033e9e","serverErrorCode":"BAD_REQUEST","reason":"BadRequestException:
Unexpected input"}
I'm guessing I have the Record Field Dictionary in the query wrong. However, the documentation isn't clear on what this should look like on a reference object.
You have to recreate the actual reference object in the filter. In this particular case, the JSON looks like this:
{
  "query": {
    "recordType": "ChildObject",
    "filterBy": {
      "fieldName": "InspecReference",
      "fieldValue": {
        "value": {
          "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
          "action": "NONE"
        },
        "type": "REFERENCE"
      },
      "comparator": "EQUALS"
    }
  },
  "zoneID": {
    "zoneName": "_defaultZone"
  }
}
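A hedged sketch of posting that query to CloudKit Web Services from Python; the container, environment, and API token below are placeholders for your own values:

import requests

query = {
    "query": {
        "recordType": "ChildObject",
        "filterBy": {
            "fieldName": "InspecReference",
            "fieldValue": {
                "value": {
                    "recordName": "14102C0A-60F2-4457-AC1C-601BC628BF47-184-000000012D225C57",
                    "action": "NONE",
                },
                "type": "REFERENCE",
            },
            "comparator": "EQUALS",
        },
    },
    "zoneID": {"zoneName": "_defaultZone"},
}

url = ("https://api.apple-cloudkit.com/database/1/"
       "iCloud.com.example.MyApp/development/public/records/query")
resp = requests.post(url, params={"ckAPIToken": "YOUR_API_TOKEN"}, json=query)
print(resp.json())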