I am sending data from my SensorTag 2650 to the Node-RED platform through my smartphone. I am sending at a rate of 10 Hz, but the node is outputting data at only 1 Hz. How can I increase the data rate of the ibmiot node?
The ibmiot node plays no role in setting the message rate. That is determined by the node that sends the events to the ibmiot node, so you need to change that part of the flow.
I want to set up a system of 100-200 sensors that send their data (at a frequency of about once every 30 minutes) to an MQTT broker running on a Raspberry Pi. The sensor data is collected on an ESP8266, which transmits it via WiFi to the MQTT broker (at a distance of about 2 meters).
I wanted to know whether a broker of these characteristics can handle that many simultaneous connections.
Thank you so much!
Diego
A single broker can handle many thousands of clients.
The limiting factor is likely to be the size and frequency of the messages, but assuming the messages are not tens of megabytes each, 200 messages spread over 30 minutes will be trivial.
Even if they all arrive at roughly the same time (allowing for clock drift), small messages will again not be a problem.
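If you want to convince yourself before building the hardware, it is easy to simulate the load from a laptop. The sketch below uses the Eclipse Paho Go client to run 200 fake sensors, each publishing a small (~30-byte) message every 30 minutes; the broker address, topic names, and payload are made up:

```go
// Rough load-test sketch: 200 simulated sensors, each publishing a small
// payload every 30 minutes to an MQTT broker on a Raspberry Pi.
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	const numSensors = 200
	const interval = 30 * time.Minute

	for i := 0; i < numSensors; i++ {
		go func(id int) {
			opts := mqtt.NewClientOptions().
				AddBroker("tcp://raspberrypi.local:1883"). // hypothetical broker address
				SetClientID(fmt.Sprintf("sensor-%03d", id))
			client := mqtt.NewClient(opts)
			if token := client.Connect(); token.Wait() && token.Error() != nil {
				fmt.Println("connect failed:", token.Error())
				return
			}
			for {
				payload := fmt.Sprintf(`{"sensor":%d,"temp":21.5}`, id) // ~30 bytes
				token := client.Publish(fmt.Sprintf("sensors/%d/data", id), 1, false, payload)
				token.Wait()
				time.Sleep(interval)
			}
		}(i)
	}
	select {} // keep the simulated sensors running
}
```

With the numbers from the question this works out to roughly 200 messages per 1800 seconds, i.e. about 0.1 messages per second on average, far below what even a small broker on a Raspberry Pi can sustain.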
In my simulation, there are several nodes trying to send messages to a fixed RSU. I implemented V2V communication, where a node forwards the message only to nodes that are closer to the RSU. Sometimes there are no nodes available for the sender to forward the message to, which is acceptable in low-density simulations. But when I increase the number of nodes, the message delivery rate decreases instead of increasing. Has anyone run into this kind of problem?
I tried to adjust my algorithm to route fewer messages, because the problem might be due to collisions.
My question is:
Is there a simple and proper way to create arrivals for multiple delivery locations with the same Poisson rate without using a Source block for every delivery location?
An example to make it a bit clearer:
I have 30 delivery location nodes (node1, node2, node3, node4, etc.). Something should be transported to each of those 30 delivery locations with the same Poisson arrival rate (for simplicity, say 1), and they all have different intermediate points they have to pass through (so for delivery location node1, an agent first needs to go to intermediate point2 and then to node1; see the figure for example database values).
Of course I could create 30 Source blocks with this arrival rate and set the intermediate points as parameters of the agents created in each source, but that is time-intensive, so is there a quick way to model this?
Since it happens randomly, arrivals according to a database cannot be used: there is no specified arrival time, arrivals just occur randomly based on a Poisson rate.
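One property worth knowing here (not AnyLogic-specific) is superposition: 30 independent Poisson streams of rate 1 are statistically identical to a single Poisson stream of rate 30 in which each arrival picks its delivery location uniformly at random, so one source with the combined rate can stand in for 30 separate ones. A plain-Go sketch of that equivalence, with illustrative rates and node names:

```go
// Sketch of Poisson superposition: instead of 30 independent sources with
// rate 1, draw arrivals from a single source with rate 30 and assign each
// arrival a delivery location uniformly at random. The two models are
// statistically equivalent.
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	const numLocations = 30
	const ratePerLocation = 1.0 // arrivals per time unit, per location
	totalRate := ratePerLocation * numLocations

	t := 0.0
	for i := 0; i < 10; i++ { // generate the first 10 arrivals
		// Exponential inter-arrival time with rate = totalRate.
		t += -math.Log(1-rand.Float64()) / totalRate
		location := rand.Intn(numLocations) + 1 // uniform choice of destination
		fmt.Printf("arrival at t=%.3f -> node%d\n", t, location)
	}
}
```

In AnyLogic terms this roughly corresponds to a single Source block with the combined rate whose exit action assigns the delivery location and looks up the matching intermediate points from the database, though the exact wiring depends on your model.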
Currently we have a data streaming pipeline: API call -> Google Pub/Sub -> BigQuery. The number of API calls depends on the traffic on the website.
We created a Kubernetes deployment (in GKE) for ingesting data from Pub/Sub into BigQuery. This deployment has a horizontal pod autoscaler (HPA) with metricName: pubsub.googleapis.com|subscription|num_undelivered_messages and targetValue: "5000". This setup is able to autoscale when traffic suddenly increases; however, it causes spiky scaling.
What I mean by spiky is as follows:
1. The number of unacked messages goes up past the target value.
2. The autoscaler increases the number of pods.
3. The number of unacked messages decreases slowly, but since it is still above the target value the autoscaler keeps adding pods --> this continues until we hit the autoscaler's maximum number of pods.
4. The number of unacked messages decreases until it goes below the target, and it stays very low.
5. The autoscaler reduces the number of pods to the minimum.
6. The number of unacked messages increases again, we are back in the same situation as (1), and it turns into a loop/cycle of spikes.
Here is the chart of when it goes spiky (the traffic is going up, but it is steady and non-spiky):
The spiky number of unacknowledged messages in Pub/Sub
We set an alarm in Stackdriver that fires if the number of unacknowledged messages is more than 20k, and in this situation it is triggered frequently.
Is there a way to make the HPA more stable (non-spiky) in this case?
Any comment, suggestion, or answer is well appreciated.
Thanks!
I've been dealing with the same behavior. What I ended up doing is smoothing num_undelivered_messages with a moving average: I set up a k8s cron that publishes the average of the last 20 minutes of time series data to a custom metric every minute, then configured the HPA to respond to the custom metric.
This worked pretty well but not perfectly. I observed that as soon as the average converges on the actual value, the HPA scales the service down too low. So I ended up adding a constant, so the custom metric is just average + constant. I found that for my specific case a value of 25,000 worked well.
With this, and after dialing in the targetAverageValue, the autoscaling has been very stable.
I'm not sure if this is due to a defect or just the nature of the num_undelivered_messages metric at very high loads.
Edit:
I used the stackdriver/monitoring golang packages. There is a straightforward way to aggregate the time series data; see here under 'Aggregating data' https://cloud.google.com/monitoring/custom-metrics/reading-metrics
https://cloud.google.com/monitoring/custom-metrics/creating-metrics
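For reference, a stripped-down sketch of what the body of such a cron job could look like with those packages; the project ID, subscription name, custom metric type, and the +25,000 offset below are placeholders, not the exact values used:

```go
// Sketch of the smoothing cron body: read the 20-minute mean of
// num_undelivered_messages, add a constant, and publish it as a custom
// metric for the HPA to scale on. Run once per invocation (e.g. from a
// Kubernetes CronJob every minute).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3"
	"github.com/golang/protobuf/ptypes/duration"
	"github.com/golang/protobuf/ptypes/timestamp"
	metricpb "google.golang.org/genproto/googleapis/api/metric"
	monitoredrespb "google.golang.org/genproto/googleapis/api/monitoredres"
	monitoringpb "google.golang.org/genproto/googleapis/monitoring/v3"
)

const (
	projectID    = "my-project"      // placeholder
	subscription = "my-subscription" // placeholder
	customMetric = "custom.googleapis.com/smoothed_undelivered_messages"
	offset       = 25000.0 // constant added so the HPA does not scale down too far
)

func main() {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	now := time.Now()
	req := &monitoringpb.ListTimeSeriesRequest{
		Name: "projects/" + projectID,
		Filter: fmt.Sprintf(`metric.type="pubsub.googleapis.com/subscription/num_undelivered_messages" `+
			`AND resource.label.subscription_id="%s"`, subscription),
		Interval: &monitoringpb.TimeInterval{
			StartTime: &timestamp.Timestamp{Seconds: now.Add(-20 * time.Minute).Unix()},
			EndTime:   &timestamp.Timestamp{Seconds: now.Unix()},
		},
		// Let the API compute the 20-minute mean (see "Aggregating data" in the linked docs).
		Aggregation: &monitoringpb.Aggregation{
			AlignmentPeriod:  &duration.Duration{Seconds: 1200},
			PerSeriesAligner: monitoringpb.Aggregation_ALIGN_MEAN,
		},
	}
	it := client.ListTimeSeries(ctx, req)
	ts, err := it.Next()
	if err != nil {
		log.Fatal(err)
	}
	smoothed := ts.Points[0].Value.GetDoubleValue() + offset

	// Publish the smoothed value as a custom metric.
	writeReq := &monitoringpb.CreateTimeSeriesRequest{
		Name: "projects/" + projectID,
		TimeSeries: []*monitoringpb.TimeSeries{{
			Metric:   &metricpb.Metric{Type: customMetric},
			Resource: &monitoredrespb.MonitoredResource{Type: "global", Labels: map[string]string{"project_id": projectID}},
			Points: []*monitoringpb.Point{{
				Interval: &monitoringpb.TimeInterval{EndTime: &timestamp.Timestamp{Seconds: now.Unix()}},
				Value:    &monitoringpb.TypedValue{Value: &monitoringpb.TypedValue_DoubleValue{DoubleValue: smoothed}},
			}},
		}},
	}
	if err := client.CreateTimeSeries(ctx, writeReq); err != nil {
		log.Fatal(err)
	}
	log.Printf("published smoothed value %.0f", smoothed)
}
```

Run every minute, this keeps the custom metric one smoothed step behind the raw one, and the HPA's targetAverageValue is then tuned against the smoothed metric rather than the raw backlog.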
I have a small network with a client and a server, and I'm testing the frame rate while changing the packet size. In particular, I have an image; by changing a threshold I extract keypoints and descriptors and then send a fixed number of packets (different sizes for different thresholds). Problems appear when the UDP packets are below the MTU size: the reception rate decreases and the frame rate tends to be constant. I verified with Wireshark that my reception times are correct, so it isn't a server-side code problem.
This is the graph for the same image sent 30 times per threshold, with the threshold stepping by 10 from 40 to 170.
I can't post the image, so here is the link.
Thanks for the responses.
I don't think anyone will be interested in this answer, but we came to the conclusion that the problem lies in the WiFi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data per packet the time stays constant and the data rate decreases.
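A quick way to check for this effect from the sender side is to time a fixed burst of UDP packets at each payload size and compare packets per second with throughput. A sketch of such a check, where the server address and the list of payload sizes are placeholders:

```go
// Send a fixed burst of UDP packets at each payload size and report
// packets/s and throughput, to see whether per-packet time has a floor.
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("udp", "192.168.1.10:9000") // placeholder server address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	const packetsPerSize = 1000
	for _, size := range []int{200, 400, 800, 1200, 1400} { // payload sizes in bytes
		buf := make([]byte, size)
		start := time.Now()
		for i := 0; i < packetsPerSize; i++ {
			if _, err := conn.Write(buf); err != nil {
				log.Fatal(err)
			}
		}
		elapsed := time.Since(start).Seconds()
		fmt.Printf("size=%4d B  %.0f pkt/s  %.2f Mbit/s\n",
			size, packetsPerSize/elapsed, float64(packetsPerSize*size*8)/elapsed/1e6)
	}
}
```

If the driver enforces a minimum per-packet transmission time, the pkt/s column stays roughly flat across sizes while the Mbit/s column falls almost linearly with payload size, which matches the behaviour described above.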