PubNub to server data transfer - publish-subscribe

I am building an IoT application. I am using PubNub to communicate between the hardware and the user.
Now I need to store all the messages and data coming from the hardware and from the user on a central server, because we want to do a bit of machine learning.
Is there a way to do this other than having the server subscribe to all the output channels (there will be a LOT of them)?
I was hoping for some kind of once-a-day data dump using the Storage and Playback module in PubNub.
Thanks in advance

PubNub to Server Data Transfer
Yes, you can perform a once-a-day data dump using the Storage and Playback feature.
But first, check this out: you can subscribe to Wildcard Channels like a.* and a.b.* to receive all messages in the hierarchy below them. That way you can receive messages on all channels if you prefix each channel with a root channel, e.g. root.chan_1 and root.chan_2. Now you can subscribe to root.* and receive every message published under root.
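For example, here is a minimal sketch of a wildcard subscription, assuming the PubNub JavaScript v3 SDK (the same style of API as the history code below) with placeholder demo keys; Wildcard Subscribe must be enabled on your keyset.
var pubnub = PUBNUB.init({
    publish_key   : 'demo',   // placeholder keys
    subscribe_key : 'demo'
});
pubnub.subscribe({
    channel : 'root.*',       // matches root.chan_1, root.chan_2, ...
    message : function(message, env, channel) {
        // channel tells you which concrete channel the message arrived on
        console.log(channel, message);
    },
    error   : function(err) {
        console.log('Subscribe error:', err);
    }
});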
To enable once-a-day data dumps using Storage and Playback, first enable Storage and Playback on your account. PubNub will store all your messages on disk across multiple data centers for reliability and a read-latency performance boost. Then you can use the History API on your server to fetch all stored data, as far back as forever, as long as you know which channels to fetch.
Here is a JavaScript function that will fetch all messages from a channel.
Get All Messages Usage
get_all_history({
    limit    : 1000,
    channel  : "my_channel_here",
    error    : function(e) { },
    callback : function(messages) {
        console.log(messages);
    }
});
Get All Messages Code
function get_all_history(args) {
    var channel  = args['channel']
      , callback = args['callback']
      , error    = args['error'] || function() {}
      , limit    = +args['limit'] || 5000
      , start    = 0
      , count    = 100
      , history  = []
      , params   = {
            channel  : channel,
            count    : count,
            callback : function(messages) {
                // messages = [ page_of_messages, start_timetoken, end_timetoken ]
                var msgs = messages[0];
                start = messages[1];
                // Page backwards from the oldest timetoken seen so far
                params.start = start;
                PUBNUB.each( msgs.reverse(), function(m) { history.push(m) } );
                callback(history);
                // Stop when the requested limit is reached or the last page was not full
                if (history.length >= limit) return;
                if (msgs.length < count) return;
                add_messages();
            },
            error : function(e) {
                error(e);
            }
        };
    // Fetch the first page
    add_messages();
    function add_messages() { pubnub.history(params) }
}

Related

How to get total number of records to be synced in Flutter Amplify Datastore

Is there a good way to find out what the total number of records to be synced will be before the records are actually synced via the DataStore? This refers to the very first sync, when I am syncing the DataStore with what's in the cloud (the downstream sync). I want to create an actual progress indicator for the user (since it takes about a minute for ~1500 records to sync) and don't want to just put up a CircleProgressIndicator().
All I'm currently able to do is:
hubSubscription = Amplify.Hub.listen([HubChannel.DataStore], (msg) {
  if (msg.eventName == "ready") {
    getAllDevicesInDataStore().then((value) => stopListeningToHub());
  }
  if (kDebugMode) {
    if (msg.eventName == "modelSynced") {
      final syncedModelPayload = msg.payload as ModelSyncedEvent;
      print(
          'Model: ${syncedModelPayload.modelName}, Delta? ${syncedModelPayload.isDeltaSync}');
      print(
          '${syncedModelPayload.added}, ${syncedModelPayload.updated}, ${syncedModelPayload.deleted}');
    }
  }
});
I can implement a CircleProgressIndicator() while this is happening, but I want something more definitive.

Running Mirth Channel with API Requests to external server very slow to process

In this question, Mirth HTTP POST request with Parameters using Javascript, I used a variation of the first answer. The code is shown below.
I'm running this code for a file that has nearly 46,000 rows, which equates to about 46,000 requests hitting our external server. I'm seeing that Mirth makes requests to our API endpoint about 1.6 times per second. This is unusually slow, and I would like some help understanding whether this is related to Mirth or to the code below. Can repeated imports in a for loop cause slowdowns? Or is there a specific Mirth setting that limits the number of requests sent?
Version of Mirth is 3.12.0
I started the process at 2:27 PM and it's expected to finish at almost 8:41 PM tonight, which is ridiculously slow.
// Skip the first header row
for (i = 1; i < msg['row'].length(); i++) {
    col1 = msg['row'][i]['column1'].toString();
    col2...
    ...
    // Insert into results if the file and sample aren't already present
    InsertIntoDatabase()
}
function InsertIntoDatabase() {
    with (JavaImporter(
        org.apache.commons.io.IOUtils,
        org.apache.http.client.methods.HttpPost,
        org.apache.http.client.entity.UrlEncodedFormEntity,
        org.apache.http.impl.client.HttpClients,
        org.apache.http.message.BasicNameValuePair,
        com.google.common.io.Closer)) {
        var closer = Closer.create();
        try {
            var httpclient = closer.register(HttpClients.createDefault());
            var httpPost = new HttpPost('http://<server_name>/InsertNewCorrection');
            var postParameters = [
                new BasicNameValuePair("col1", col1),
                new BasicNameValuePair(...
                ...
            ];
            httpPost.setEntity(new UrlEncodedFormEntity(postParameters, "UTF-8"));
            httpPost.setHeader('Content-Type', 'application/x-www-form-urlencoded');
            var response = closer.register(httpclient.execute(httpPost));
            var is = closer.register(response.entity.content);
            result = IOUtils.toString(is, 'UTF-8');
        } finally {
            closer.close();
        }
    }
    return result;
}

UaSerializationException: request exceeds remote max message size: 2434140 > 2097152

I am a rookie. I tried to use the following code to create a bulk subscription, but something went wrong. How can I solve this problem?
OpcUaSubscriptionManager subscriptionManager = opcUaClient.getSubscriptionManager();
UaSubscription subscription = subscriptionManager.createSubscription(publishInterval).get();

List<MonitoredItemCreateRequest> itemsToCreate = new ArrayList<>();
for (Tag tag : tagList) {
    NodeId nodeId = new NodeId(nameSpace, tag.getPath());
    ReadValueId readValueId = new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);
    MonitoringParameters parameters = new MonitoringParameters(
            subscription.nextClientHandle(),
            publishInterval,
            null,                        // filter, null means use default
            UInteger.valueOf(queueSize), // queue size
            true                         // discard oldest
    );
    MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(readValueId,
            MonitoringMode.Reporting, parameters);
    itemsToCreate.add(request);
}

BiConsumer<UaMonitoredItem, Integer> consumer = (item, id) ->
        item.setValueConsumer(this::onSubscriptionValue);

List<UaMonitoredItem> items = subscription.createMonitoredItems(
        TimestampsToReturn.Both,
        itemsToCreate,
        consumer
).get();

for (UaMonitoredItem item : items) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})",
                item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}
How many items are you trying to create?
It seems that the resulting message exceeds the limits set by the server you are connecting to. You may need to break your list up and create the items in smaller chunks.
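As a rough sketch of what that could look like with the same API calls used in the question (reusing itemsToCreate, subscription and consumer from above; the batch size of 500 is an arbitrary assumption you would tune to the server's negotiated limits):
int batchSize = 500; // assumption: pick a size that keeps each request under the server's max message size
for (int i = 0; i < itemsToCreate.size(); i += batchSize) {
    List<MonitoredItemCreateRequest> batch =
            itemsToCreate.subList(i, Math.min(i + batchSize, itemsToCreate.size()));

    // Create this batch of monitored items in its own, smaller request
    List<UaMonitoredItem> created = subscription.createMonitoredItems(
            TimestampsToReturn.Both,
            batch,
            consumer
    ).get();

    for (UaMonitoredItem item : created) {
        if (!item.getStatusCode().isGood()) {
            log.error("failed to create item for nodeId={} (status={})",
                    item.getReadValueId().getNodeId(), item.getStatusCode());
        }
    }
}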
I do not know the library that you use, but one of the earlier steps when an OPC UA client connects to a server is to negotiate the maximum buffer sizes, the maximum total message size, and the maximum number of chunks a message can be split into; the OPC UA documentation calls this process the "Handshake".
If your request is too long, it has to be split and sent in several chunks according to the limits previously negotiated with the server.
The server will probably also reply in several chunks; all of that has to be handled in the implementation of an OPC UA client.

Async sockets in D

Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure if I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D it seems to be at a much lower level. I've researched a lot and looked at plenty of code and documentation for both D and C/C++ to get an understanding, but I'm still not sure I've got the concept right, so I'd appreciate it if any of you have some examples. I tried looking at splat, but it's very outdated, and vibe seems too complex for just a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of the socket, right?
So basically how I'd go about handling the sockets is polling to get the read status of the socket, and if that succeeds (value > 0) I can call receive(), which returns 0 on disconnection and otherwise the number of bytes received, and I'd have to keep doing this until the expected number of bytes has been received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've made up so far.
void HANDLE_READ()
{
    while (true)
    {
        synchronized
        {
            auto events = cast(AsyncObject[int])ASYNC_EVENTS_READ;
            foreach (asyncObject; events)
            {
                int poll = pollRecv(asyncObject.socket.m_socket);
                switch (poll)
                {
                    case 0:
                    {
                        throw new SocketException("The socket had a time out!");
                        continue;
                    }
                    default:
                    {
                        if (poll <= -1)
                        {
                            throw new SocketException("The socket was interrupted!");
                            continue;
                        }
                        int recvGetSize = (asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize);
                        ubyte[] recvBuffer = new ubyte[recvGetSize];
                        int recv = asyncObject.socket.m_socket.receive(recvBuffer);
                        if (recv == 0)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.socket.disconnect();
                            continue;
                        }
                        asyncObject.socket.m_readBuffer ~= recvBuffer;
                        asyncObject.socket.readSize += recv;
                        if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.event(asyncObject.socket);
                        }
                        break;
                    }
                }
            }
        }
    }
}
"So basically how I'd go about handling the sockets is polling to get the read status of the socket"
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events that need to be handled. With polling, you'd have to check for new events continuously or on a timer, which leads to wasted CPU cycles, and events getting handled a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.

Perform long-polling from nodejs (possible memory leak)

I wrote a piece of code that is going to perform a request to Facebook.
Now I wrapped this code in an infinite loop which sends those requests every 10 seconds using timeouts.
Code:
var poll = function(socket, userProvider) {
    var lastCallTime = new Date();
    var polling = true;

    // The stream itself, non blocking
    function performPoll() {
        var results = feed(function (err, data) {
            lastCallTime = new Date();
            // PROCESS DATA
            // Check new posts
            if (polling) {
                setTimeout(performPoll, 1000 * 10);
            }
        });
    };

    // Start infinite loop
    performPoll();
};
feed(cb) just performs a request to Facebook asking for data. This works 100% and does what I want it to do; the only problem I'm having now is that this piece of code keeps increasing my memory usage. After a few minutes it has already increased by 50 MB (from 50 to 100).
Is there anybody that can help me identify the cause of this?
V8 does not collect memory immediately. If it stabilizes at 100 MB, then that is to be expected. For more information, check out nodejs setTimeout memory leak?
If you really, really want to clear the memory, use global.gc(). Read this blog post about how to call the garbage collector manually.
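For reference, a minimal sketch of a manual collection pass (global.gc() only exists when Node is started with the --expose-gc flag; the 10-second interval here just mirrors the polling interval above):
// Run with: node --expose-gc app.js
setInterval(function () {
    if (global.gc) {
        global.gc(); // force a collection pass
        console.log('Heap used after GC:', process.memoryUsage().heapUsed);
    } else {
        console.log('Start node with --expose-gc to make global.gc available');
    }
}, 1000 * 10);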