Telemetry data (in array format) gets segregated into multiple individual messages by ThingsBoard's rule engine? - rule-engine

I have a BLE gateway which scans nearby BLE beacons and sends the data to ThingsBoard via MQTT every n seconds.
That is, the payload contains an array with the data for each BLE beacon.
[{
  id: 1001,
  rssi: -50
}, {
  id: 1002,
  rssi: -60
}]
The problem is that, upon receiving this payload, ThingsBoard does not pass the whole array to the rule engine's "INPUT"; instead it splits the payload into two individual elements, so it appears that ThingsBoard has received two separate messages:
{
  id: 1001,
  rssi: -50
}
and
{
  id: 1002,
  rssi: -60
}
What I expect is that ThingsBoard does not separate the payload into individual elements.
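If a single message is required, one common workaround (assuming you can change the gateway code, and that your ThingsBoard version accepts JSON values for telemetry keys) is to wrap the array under a single key before publishing, so the broker receives one JSON object instead of a top-level array. The key name `beacons` is illustrative, and the MQTT publish call is only sketched in a comment:

```javascript
// Wrap the beacon array under one key so ThingsBoard receives a single
// telemetry message whose value is the whole array, instead of a
// top-level array that gets split per element.
function wrapScan(beacons) {
  return { beacons: beacons };
}

const scan = [
  { id: 1001, rssi: -50 },
  { id: 1002, rssi: -60 },
];

const payload = JSON.stringify(wrapScan(scan));
// client.publish("v1/devices/me/telemetry", payload);
```

The rule chain then sees one message with a `beacons` key, and a script node can iterate over the array itself if per-beacon processing is still needed.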

Related

Advice on structuring data for scalability with large number of nested objects

Looking for advice on how best to structure data in MongoDB, particularly for scalability; I'm worried about having an array of potentially thousands of objects within each user object.
I am building a language learning app with a built in flashcard system. I want users to 'unlock' new vocabulary for each level, which automatically gets added to their flashcards, so when you unlock level 4, all the vocabulary attached to level 4 gets added to your flashcards.
For the flashcards themselves, I want a changeable 'due date', so that you get prompted to do certain cards on a certain date - if you're familiar with spaced repetition, that's the plan. So when you get a card, you can say how well you know it; for example, if you know it well you won't see it for another week, but if you get it wrong you'll get it again the next day.
I'm using MongoDB for the backend, but am a little unsure about how best to structure my data. Currently, I have two objects: one for the cards, and one for the users.
The cards object looks like this: there's a nested object for each flashcard, with a unique ID, the level the word appears in, and the word in both languages.
const CardsList = [
  {
    id: 1,
    level: 1,
    gd: "sgìth",
    en: "tired",
  },
  {
    id: 2,
    level: 2,
    gd: "ceist",
    en: "question",
  },
];
Then each user has an object like the below, with various user data, and a nested array of objects for the cards - with the id of every card they've unlocked, and the date at which that card is next due.
{
  id: 1,
  name: "gordon",
  level: 2,
  cards: [
    { id: 1, date: "07/12/2021" },
    { id: 2, date: "09/12/2021" },
  ],
},
{
  id: 2,
  name: "mike",
  level: 1,
  cards: [
    { id: 1, date: "08/12/2021" },
    { id: 2, date: "07/12/2021" },
  ],
},
This works fine, but I'm a bit concerned about the scalability of it.
The plan is to have about two or three thousand words in total, so if I had, say, fifty users complete the app, that would mean fifty user objects, each with as many as three thousand objects in that nested cards array.
Is that going to be a problem? Would it be a problem if I had a thousand (or more) users, instead of 50? Is there a more sensible way of structuring the data that I'm not spotting?
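One commonly suggested alternative (a sketch, not a recommendation for this exact workload) is to move the per-user card state out of the user document into its own collection, so each unlocked card is a small standalone document that can be queried and updated independently. The collection and field names below are invented for illustration:

```javascript
// Hypothetical alternative schema: one document per (user, card) pair in
// a separate "userCards" collection, instead of a growing embedded array.
// This keeps user documents small and lets due-date updates touch a
// single tiny document.
const userCards = [
  { userId: 1, cardId: 1, due: "2021-12-07" },
  { userId: 1, cardId: 2, due: "2021-12-09" },
];

// With an index on { userId, due }, "cards due today" becomes a cheap
// query, e.g. db.userCards.find({ userId: 1, due: { $lte: today } }).
// The filter below mimics that query in plain JavaScript (ISO-style
// date strings compare correctly as strings).
function dueCards(cards, userId, today) {
  return cards.filter((c) => c.userId === userId && c.due <= today);
}
```

The trade-off is an extra query (or `$lookup`) when you need a user together with all their cards, in exchange for bounded document sizes however many users or words you add.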

Agora cloud recording not starting

I am following the Agora Cloud Recording RESTful APIs.
The problem is:
The acquire API works fine.
The start API works fine.
Now the query API is returning:
{
  "resourceId": "rid",
  "sid": "sid",
  "serverResponse": {
    "status": 4,
    "fileList": "",
    "fileListMode": "string",
    "sliceStartTime": 0
  }
}
and the stop API gives me
{
  "resourceId": "rid",
  "sid": "sid",
  "code": 435
}
which means no one is present in the channel.
But there are 2 users in my ongoing channel.
My start request was
{
  "cname": "80f350442cb2a26ccacb5cfb058c6e82",
  "uid": "936239554", // userid who i want to record...is this correct????
  "clientRequest": {
    "token": "temp_token_generated_from_agora_console",
    "recordingConfig": {
      "channelType": 0,
      "streamTypes": 2,
      "audioProfile": 1,
      "videoStreamType": 0,
      "maxIdleTime": 120,
      "transcodingConfig": {
        "width": 360,
        "height": 640,
        "fps": 30,
        "bitrate": 600,
        "maxResolutionUid": "1",
        "mixedVideoLayout": 1
      }
    },
    "subscribeVideoUids": ["936239554"], // is this correct??
    "subscribeAudioUids": ["936239554"], // is this correct??
    "storageConfig": {
      "vendor": 1,
      "region": 14,
      "bucket": "my_bucket_name",
      "accessKey": "xxxx",
      "secretKey": "xxxx"
    }
  }
}
When using Agora's Cloud Recording service, the Recorder instance needs to have its own unique ID that it uses to join the channel and record the other users that are defined in the "subscribeVideoUids": portion of the request.
In the code snippet below, the first UID is meant to be a unique ID for the recorder to use to join the channel. It is not meant to be the UID of the user you wish to record.
"cname":"80f350442cb2a26ccacb5cfb058c6e82", "uid":"936239554", // userid who i want to record...is this correct????
If the user's UID is "936239554", then the recorder should have a different, unique value; even just appending a digit to the end, e.g. "9362395541", is enough.
In "subscribeVideoUids" and "subscribeAudioUids" you'll want to include all the UIDs of the users in the channel that you want to record. So if there are two users in the channel, include each UID as an element of the array.
"subscribeVideoUids": ["936239554"],"subscribeAudioUids": ["936239554"],

REST API design for data synchronization service

What is the best practice for a data synchronization operation between client and server?
We have 2 (or more) resources:
cars -> year, model, engine
toys -> color, brand, weight
And we need to get updated resources from the server whenever they change. For example: someone made changes to the same data from another client and we need to transfer those updates to our client application.
Request:
http://api.example.com/sync?data=cars,toys (verb?)
http://api.example.com/synchronizations?data=cars,toys (virtual resource "synchronizations")
Response with mixed data:
status code: 200
{
  message: "ok",
  data: {
    cars: [
      {
        year: 2015,
        model: "Fiat 500",
        engine: 0.9
      },
      {
        year: 2004,
        model: "Nissan Sunny",
        engine: 1.3
      }
    ],
    toys: [
      {
        color: "yellow",
        brand: "Bruder",
        weight: 2
      }
    ]
  }
}
or a response with status code 204 if no updates are available. In my opinion, making separate HTTP calls is not a good solution. What if we have 100 resources (= 100 HTTP calls)?
I am not an expert, but one method I have used in the past is to ask for a "signature" of the data, as opposed to always fetching the data itself. The signature can be a hash of the data you are looking for. So the flow would be something like:
Get signature hash of the data
http://api.example.com/sync/signature/cars
Which returns the signature hash
Check if the signature is different from the last time you retrieved the data
If the signature is different, go and get the data
http://api.example.com/sync/cars
Have the REST response also include the new signature with the data
{
  message: "ok",
  data: {
    cars: [
      {
        year: 2015,
        model: "Fiat 500",
        engine: 0.9
      },
      {
        year: 2004,
        model: "Nissan Sunny",
        engine: 1.3
      }
    ],
    signature: "570a90bfbf8c7eab5dc5d4e26832d5b1"
  }
}

Atomic consistency

My lecturer in the database course I'm taking said an advantage of NoSQL databases is that they "support atomic consistency of a single aggregate". I have no idea what this means; can someone please explain it to me?
It means that by using aggregates you can prevent your database from saving inconsistent data when a transaction fails.
In Domain-Driven Design, an aggregate is a collection of related objects that are treated as a unit.
For example, let's say you have a restaurant and you want to save the orders of each customer.
You could save your data with two aggregates like below:
var customerIdGenerated = newGuid();
var customer = { id: customerIdGenerated, name: 'Mateus Forgiarini' };
var orders = {
  id: 1,
  customerId: customerIdGenerated,
  orderedFoods: [{
    name: 'Sushi',
    price: 50
  },
  {
    name: 'Tacos',
    price: 12
  }]
};
Or you could treat orders and customers as a single aggregate:
var customerIdGenerated = newGuid();
var customerAndOrders = {
  customerId: customerIdGenerated,
  name: 'Mateus Forgiarini',
  orderId: 1,
  orderedFoods: [{
    name: 'Sushi',
    price: 50
  },
  {
    name: 'Tacos',
    price: 12
  }]
};
By modeling your orders and customer as a single aggregate you avoid this kind of transaction error. In the NoSQL world a transaction error can occur when you have to write related data to many nodes (a node is where you store your data; NoSQL databases that run on clusters can have many nodes).
So if you are treating orders and customers as two aggregates, an error can occur while you are saving the customer even though your orders were saved successfully, leaving you with inconsistent data: orders with no customer.
However, by using a single aggregate, you avoid that: if an error occurs, you won't end up with inconsistent data, since you are saving your related data together.

Is there a design pattern for that: send a blank form and receive it back completed

Is there a design pattern for this?
A REST microservice receives a JSON data structure with no values (but maybe an id) and returns the same data with the blanks filled. Example:
Input:
{
  id: 1546,
  name: ,
  height: ,
  color:
}
Output:
{
  id: 1546,
  name: "Bob",
  height: 40,
  color: "yellow"
}
Yes, in the asynchronous world it's called the Document Message pattern. This can be easily adapted to synchronous communication.
In your case the document is passed from one service to another and then sent back completed. More details here.
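A minimal sketch of the completion step, assuming an in-memory lookup keyed by id; the store contents and field names are invented for illustration:

```javascript
// Hypothetical lookup the service would consult (in practice this would
// be a database or another service).
const store = {
  1546: { name: "Bob", height: 40, color: "yellow" },
};

// Fill only the blank (null/undefined) fields of the incoming document,
// preserving anything the caller already set.
function completeDocument(doc) {
  const found = store[doc.id] || {};
  const result = { ...doc };
  for (const [key, value] of Object.entries(found)) {
    if (result[key] == null) result[key] = value;
  }
  return result;
}
```

In a synchronous REST setting this would be the handler body: parse the incoming JSON, complete it, and return the completed document in the response.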